id
stringlengths 10
10
| title
stringlengths 7
231
| abstract
stringlengths 3
2.43k
| authors
stringlengths 5
21.5k
| published_date
stringlengths 20
20
| link
stringlengths 33
34
| markdown
stringlengths 133
1.92M
|
---|---|---|---|---|---|---|
2303.02406 | Boosting the STM's spatial and energy resolution with
double-functionalized probe tips | Scattering of superconducting pairs by magnetic impurities on a
superconducting surface leads to pairs of sharp in-gap resonances, known as
Yu-Shiba-Rusinov (YSR) bound states. Similarly to the interference of itinerant
electrons scattered by defects in normal metals, these resonances reveal a
periodic texture around the magnetic impurity. However, the wavelength of these
resonances is often too short to be resolved even by methods capable of atomic
resolution, like scanning tunneling microscopy (STM). Here, we combine a CO
molecule with a superconducting cluster pre-attached to an STM tip to maximize
both spatial and energy resolution. The superior properties of such a
double-functionalized probe are demonstrated by imaging the spatial
distribution of YSR states around magnetic Fe atoms on a Nb(110) surface. Our
approach reveals rich interference patterns of the hybridized YSR state,
previously inaccessible with conventional STM probes. This advancement extends
the capabilities of STM techniques, providing insights into superconducting
phenomena at the atomic scale. | Artem Odobesko, Raffael L. Klees, Felix Friedrich, Ewelina M. Hankiewicz, Matthias Bode | 2023-03-04T12:48:29Z | http://arxiv.org/abs/2303.02406v2 | # Resolving the interference of Yu-Shiba-Rusinov states with multi-functionalized STM probe
###### Abstract
Scattering of superconducting pairs by magnetic impurities on a superconducting surface leads to pairs of sharp in-gap resonances, known as Yu-Shiba-Rusinov bound states. Similarly to the interference of itinerant electrons scattered on defects in normal metals, these resonances reveal a periodic texture around the magnetic impurity. However, the wavelength of these resonances was often too short to be resolved even by methods capable of atomic resolution, like STM. Here, we combine a CO molecule with a superconducting cluster pre-attached to an STM tip to maximize both spatial and energy resolution. The superior properties of such a double-functionalized probe are demonstrated by imaging the interference of YSR states around magnetic Fe dimers on a Nb(110) surface. Our novel approach reveals rich interference patterns of both the even and odd combinations of the hybridized YSR states, which are inaccessible with conventional STM probes.
The invention of the scanning tunneling microscope has revolutionized our understanding of materials and their properties [1]. This progress was made possible by the capability of correlating topographic data of the sample structure obtained by constant-current or constant-height scanning tunneling microscopy (STM) [2] with the data obtained by scanning tunneling spectroscopy (STS) or spin-polarized (SP)-STM. While the former is sensitive to the local density of states [3], the latter grants access to the atomic scale spin structure [4].
However, when performed with normal metal tips, all these methods have their specific limitations, which can be overcome by purposive functionalization. The spatial resolution of topographic STM measurements can be enhanced by attaching a CO molecule to the apex of the STM tip [5; 6], see red circle in Fig. 1. A superconducting probe boosts the energy resolution in STS beyond the thermal broadening limit [7] (green circle) and a magnetic atom at the probe apex acts as a spin sensor in SP-STM measurements [8] (blue). Hereby, the advantages of the probe functionalization not only lie in the improved STM performance, but also in the fact that the probe can sequentially be prepared in a single experimental run by dressing the apex _in-situ_ for specific needs [9].
Interestingly, further improvements can be achieved by combining different probe functionalization methods. As sketched in the intersection areas A-C in Fig. 1, this double-functionalization may enable the simultaneous utilization of advantages of several of the above-mentioned methods. For example, as shown in Ref. [10], a combination of a magnetic atom and a superconducting probe can yield a significantly increased spin contrast at the atomic level compared to bulk magnetic tips, which corresponds to the intersection area B in Fig. 1. Similarly, region C covers combinations with magnetic moment-bearing molecules, such as the functionalization with a nickelocene molecule [11; 12], which provides a spin polarization of almost 100% and also leads to an improved spatial resolution. We are not aware, however, of any successful double-functionalization that would cover intersection area A. To address this research gap, we
Figure 1: **Diagram displaying the three basic methods of STM, STS, and SP-STM.** Their sensitivity can be enhanced by functionalization with a CO molecule, a superconducting cluster, or a magnetic atom, resulting in higher spatial resolution in topography, improved energy resolution in spectroscopy, and increased magnetic sensitivity in spin-polarized measurements, respectively. The intersection between these areas represents double-functionalization approaches.
use the combination of a superconducting probe and CO molecule, thus creating double-functionalized CO-SC-probe. The improved spectroscopic and spatial resolution allows us to obtain previously inaccessible details in the local density of states (LDOS) of bound states arising around a pair of magnetic Fe atoms on a superconducting Nb(110) surface.
A single magnetic impurity results in a pair of particle-hole-symmetric sub-gap resonances, known as Yu-Shiba-Rusinov (YSR) bound states. The YSR wave function (i) reflects the shape of atomic orbital responsible for the magnetic scattering channel, (ii) oscillates with the Fermi wave vector \(k_{\rm F}\), and (iii) decays with increasing distance \(r\) from the impurity [13; 14; 15]. This decay involves two length scales: an exponential term \(\propto e^{-r/\xi_{0}}\), determined by the superconducting coherence length \(\xi_{0}\), and an algebraic term \(\propto(k_{\rm F}\cdot r)^{(1-D)/2}\) that depends on the dimension \(D\) of the system. Since \(\xi_{0}\gg k_{\rm F}^{-1}\) for most elemental superconductors, the algebraic term usually determines the fast attenuation of YSR wave function in three-dimensional materials and precludes the direct detection of its oscillatory behavior.
A successful observation of oscillating YSR states was only possible for systems with a reduced dimensionality of the band structure, such as in quasi-2D superconductors [16; 17], or systems with a strong Fermi surface nesting where parallel flat segments, of the constant-energy contour provide multiple scattering vectors, resulting in the the so-called "focusing effect" with a longer propagation of the YSR along specific directions [18; 19; 20]. Even for a pairs of magnetic impurities, where this effect should be pronounced due to the interference of YSR wave functions from individual atoms [21; 22], the lack of simultaneous high spectroscopic and high spatial resolution limited the experimental access to long-range oscillatory interference patterns. Numerous STS experiments have been performed for various magnetic dimers on different superconducting substrates [23; 24; 25; 26; 27; 28; 29; 30; 31], but usually only show the first maximum of odd and even combinations of the monomer YSR wave function.
By utilizing novel double-functionalized CO-SC-probe tips, we are able to detect these interference patterns in spatially resolved differential conductance maps that correspond to hybridized YSR states of the Fe dimer with even and odd spatial symmetry. We demonstrate the simultaneous enhancement of the spatial and the energy resolution by comparing data obtained on the Fe dimer using the same superconducting probe tip before and after additional functionalization with a CO molecule. Only when the double-functionalized CO-SC probe is used, do the characteristic features in the interference maps appear, which carry information about the Fermi surface of the Nb(110) substrate. A simplified model with an anisotropic Fermi contour allows us to reproduce the focused propagation direction visible in the experimental data. It explains the resulting interference patterns by introducing a Fermi wave vector \(k_{\rm F}=(9.4\pm 1.5)\,\mathrm{nm}^{-1}\).
## Results
Figure 2(a) shows a constant-current STM image of Fe atoms deposited on a clean Nb(110) surface, taken with the double-functionalized CO-SC probe. The Nb(110) surface with its lattice constant of \(a_{\rm Nb}=3.3\,\mathrm{\AA}\) is atomically resolved. Dark areas in Fig. 2(a) are contaminated with hydrogen and oxygen. The Fe adatoms, visible as bright protrusions, adsorb in four-fold hollow sites of the Nb(110) lattice [32; 33]. Some Fe atoms spontaneously form dimers. In the following, we will focus on those Fe dimers with the shortest interatomic distance, which are roughly oriented along the \([1\bar{1}1]\) (dimer A) and the \([1\bar{1}\bar{1}]\) direction (dimer B) [examples marked by ellipses in Fig. 2(a)]. As shown in the schematic drawing of Fig. 2(b), A- and B-dimers are equivalent due to their surface mirror symmetry. It has been shown that these Fe dimers exhibit energy-split YSR states [9].
The tunneling conductance spectrum obtained on the clean Nb surface, see Fig. 2(c), shows well-pronounced coherence peaks shifted by \(\Delta_{\rm tip}\approx 1.2\,\mathrm{meV}\). A pair of weak resonances at \(U=\pm 5.3\,\mathrm{mV}\) can be recognized, which are absent when CO is released from the apex. They correspond to the first vibrational mode of the CO molecule, which--for resonant tunneling in SIS junctions--appears as peaks rather than steps in \(\mathrm{d}I/\mathrm{d}U\) signal, as is the case for normal-metallic tips [34].
A closer inspection of the zoomed image of the A-type dimer in Fig. 3(a) shows that the dimer axis is slightly rotated clockwise away from the \([1\bar{1}1]\) direction (dashed line) [35]. This rotation is most likely caused by the epitaxial strain the Fe dimer experiences if deposited onto a Nb(110) surface. Since the nearest-neighbor distance of bcc-Fe (\(2.48\,\mathrm{\AA}\)) is smaller than the nearest-neighbor distance along fourfold hollow adsorption sites in the Nb \([1\bar{1}1]\) direction (\(2.85\,\mathrm{\AA}\)), the Fe atoms will not assume fourfold hollow adsorption sites, but relax inward and shift toward threefold-coordinated sites of the Nb(110) substrate [33]. As a consequence, the dimer axis is slightly tilted clockwise for the A-type and counter-clockwise for the B-type dimer, as confirmed by an inspection of a large number of dimers [9].
In Fig. 3(b), the \(\mathrm{d}I/\mathrm{d}U\) spectrum measured at the center of the dimer with a CO-SC probe clearly shows two pairs of split YSR resonances, resulting from the hybridization of the YSR bound states of individual Fe atoms. As shown in Ref. [9], the even and odd symmetry of these resonances can already be recognized in \(\mathrm{d}I/\mathrm{d}U\) maps measured with single-functionalized SC probes. In order to visualize the substantial advantage of double-functionalized, we compare spectra measured along the dimer axis with two types of functionalized probes: a bare SC probe without CO and the same SC probe with
a CO molecule attached.
Figure 3(c) shows a waterfall representation of the \(\mathrm{d}I/\mathrm{d}U\) spectra measured with the bare SC probe along the dashed line in panel (a). In agreement with Ref. [9], the high-energy YSR state at \(U=\pm 2\,\mathrm{mV}\) exhibits an even symmetry with the strongest signal occurring around the dimer center. In contrast, the low-energy YSR state at \(U=\pm 1.2\,\mathrm{mV}\) shows a much lower intensity and an odd symmetry, with two lobes offset relative to the dimer center, as indicated by two white ellipses. A closer look at the signal profile of the low-energy state at positive range reveals two rather broad maxima that strongly overlap in the center, resulting in a non-zero intensity between the two atoms, see Fig. 3(d).
A completely different picture emerges if a CO molecule is attached to the probe. While the high-energy state preserves its even symmetry with wide maxima in the center, an oscillatory set of peaks becomes visible at the position of the low-energy YSR state, see Fig. 3(e), The clearly improved spatial resolution allows to distinguish up to seven narrow maxima which rapidly decay away from the dimer, as displayed in the line profile in Fig. 3(f). The picture resembles the one taken without CO molecule at large broadening, with the exception of the central peak (marked with a green arrow), where a zero signal is expected for a YSR state with odd-symmetry. The potential origin of this central peak will be discussed below.
To evaluate the spatial distribution of YSR states, Fig. 4 shows high-resolution \(\mathrm{d}I/\mathrm{d}U\) maps measured with a double-functionalized CO-SC probe on two different Fe dimers on Nb(110). The data presented in the left and right columns of Fig. 4 correspond to the A-type and B-type dimer shown in the topography maps in Fig. 4(a) and (d), respectively. The energy position of the hybridized YSR states slightly differs for various dimers, which may be caused by the on-site Coulomb potential. At the same time, the relative energy difference remains approximately the same for all measured dimers (\(\Delta E\approx 0.8\,\mathrm{meV}\approx 0.53\,\Delta_{\mathrm{Nb}}\)). Even a superficial in
Figure 3: **Comparison of single- and double functionalized probe.** (**a**) STM image of an Fe dimer on a clean Nb(110) surface. (**b**) Single \(\mathrm{d}I/\mathrm{d}U\) spectrum measured in the center of Fe dimer in panel (**a**). (**c**) \(\mathrm{d}I/\mathrm{d}U\) line profile acquired along the dimer axis [dashed arrow in panel (**a**)], measured with a superconducting probe without a CO molecule. (**d**) Line profile of the \(\mathrm{d}I/\mathrm{d}U\) signal measured at the low-energy YSR peak position at positive tunneling bias \(U=1.2\,\mathrm{mV}\), see arrow in (**c**). (**e**, **f**) Same as panels (**c**, **d**), respectively, but measured with the CO molecule attached to the superconducting probe. Setpoint parameters: \(U_{\mathrm{set}}=10\,\mathrm{mV}\), \(I_{\mathrm{set}}=1\,\mathrm{nA}\), \(U_{\mathrm{mod}}=0.1\,\mathrm{mV}\).
Figure 2: **Fe atoms on Nb surface.** (**a**) STM topography of Fe atoms (bright protrusions) deposited on Nb(110) taken with a CO-SC probe tip. (**b**) Atomic lattice of the Nb(110) surface (gray spheres) with the Fe atoms (red spheres) on top, bound in two mirror-symmetric A- and B-type dimers along the \([1\bar{1}1]\) and the \([1\bar{1}\bar{1}]\) direction, respectively. (**c**) Differential tunneling conductance spectrum taken with the CO-SC probe on a clean Nb area (dark solid line). The red dashed line shows the fit calculated as a convolution of the DOS of the sample and the tip with BCS gaps \(\Delta_{\mathrm{Nb}}=1.5\,\mathrm{meV}\) and \(\Delta_{\mathrm{tip}}=1.2\,\mathrm{meV}\), respectively. Setpoint parameters: (**a**) \(U_{\mathrm{set}}=10\,\mathrm{mV}\), \(I_{\mathrm{set}}=2\,\mathrm{nA}\); (**b**) \(U_{\mathrm{set}}=7\,\mathrm{mV}\), \(I_{\mathrm{set}}=0.4\,\mathrm{nA}\).
spection of these experimental d\(I\)/d\(U\) maps measured at the high-energy [Figs. 4(b,e)] and low-energy YSR states [Figs. 4(c,f)] reveals interference patterns which carry a much higher degree of details than comparable data obtained with a SC probe without CO [9].
The d\(I\)/d\(U\) maps of the high-energy YSR state shown in Figs. 4(b) and (e) exhibit four distinguishable maxima arranged along the nodal plane of the dimer. The maxima have an elongated shape along the dimer axis and the two central ones partially overlap. Such a spatial arrangement is consistent with even-symmetry and explicitly rejects the case with odd-symmetry, since the latter should lead to a vanishing charge density in the entire nodal plane.
A very different interference pattern is observed for the low-energy YSR state, Fig. 4(c) and (f), revealing a sequence of arc-shaped maxima arranged along the dimer axis. Up to three maxima are visible on each side, the intensity of which rapidly decreases with the distance from the dimer. Their arrangement would unambiguously signal an odd symmetry, if it were not for the maximum in the center.
The unexpected position of the maxima on the d\(I\)/d\(U\) maps in Figs. 4 is probably caused by the fact that the tunneling conductance depends not only on the convolution of the LDOS of both tip and sample, but also on the tunneling matrix elements between different orbitals. Therefore, the role of the tip tunneling orbital is of fundamental importance. Only if measured with an STM tip governed by an \(s\)- or \(p_{z}\)-like frontier orbital, the d\(I\)/d\(U\) maps reflect the symmetry of the sample wave function.
However, it is known that CO-terminated tips also cause tunneling through the \(p_{x}\) and \(p_{y}\) orbitals [36]. As already discussed in the context of high-resolution imaging of molecular orbitals [37], these frontier orbitals result in maps which represent "the modulus squared of the lateral gradient of the wave functions", resulting in a contrast inversion at the dimer center. Moreover, it has been pointed out that the "relative contributions from \(p_{z}\)- and \(p_{x,y}\)-orbitals not only depend on the tip but also on the lateral tip position" [37], thereby potentially giving rise to an unexpected offset and asymmetric distribution of the d\(I\)/d\(U\) signal around the Fe dimers in Fig. 4.
Closer inspection of the data presented in Fig. 4(c, f) also indicates that the intensity along the arcs is not uniform, but more concentrated along the direction marked by the arrows, where it experiences a weaker attenuation with distance. Comparison with the crystallographic axes indicated in panels (b,e) reveals that--regardless of dimer orientation--this direction is aligned with the Nb \(\langle 1\bar{1}0\rangle\) direction. This observation strongly suggests the presence of the aforementioned "focusing effect" in [1\(\bar{1}\)0] direction.
### Theory
To rationalize our results, we model the experimental data with a simplified 2D continuous model of a bare superconducting surface with an anisotropic stadium-shaped Fermi surface, as shown in Fig. 5(a). A lattice model with a similar Fermi surface was already used in Ref. [38]. We introduce a parameter \(a\in[0,k_{\rm F}]\) that defines the length of the flat segments of the Fermi contour, where \(a=0\) corresponds to a circular shape with Fermi quasi-momentum \(k_{\rm F}\). Since the Fermi velocity points always perpendicular to the Fermi surface, a finite \(a>0\) generates a focusing of the hybridized YSR-state propagation along the [1\(\bar{1}\)0]-direction in real space, whose strength increases with increasing \(a\).
In the following, we focus our analysis on the LDOS of the impurity-dressed substrate, keeping in mind that the experimental data obtained from STS is actually a convolution of the LDOS of the substrate and the tip. Following the Green's function approach described in Ref. [20], we model the Fe dimer by adding two magnetic impuri
Figure 4: **Spatial maps of differential tunneling conductance.** (**a, d**) STM topography images of two equivalent Fe dimers on a clean Nb(110) surface oriented in different directions. (**b, e**) d\(I\)/d\(U\) maps at the tunneling bias corresponding to the high-energy YSR state with even symmetry. (**c, f**) d\(I\)/d\(U\) maps at the tunneling bias corresponding to the low-energy YSR state with odd-symmetry. Setpoint parameters: \(U_{\rm set}=10\,\)mV, \(I_{\rm set}=0.4\,\)nA.
ties at an inter-impurity distance \(d\) on the bare superconducting substrate. We fix \(d=(2.66\pm 0.18)\,\mathrm{\SIUnitSymbolAngstrom}\) to the average between the two extremes Fe and Nb with nearest-neighbor distances of \(2.48\,\mathrm{\SIUnitSymbolAngstrom}\) and \(2.85\,\mathrm{\SIUnitSymbolAngstrom}\)), as discussed earlier. We also account for the epitaxial strain between the Fe dimer and the Nb(110) substrate, where axis of the Fe dimer is rotated clockwise by an angle \(4^{\circ}\) with respect to the \([1\bar{1}1]\) direction [35]. For simplicity, these impurities are assumed to be identical and described by a semiclassical Shiba model [14] with an onsite energy \(U\) and an exchange coupling \(J\).
In Fig. 5(b), we show the resulting LDOS at one of the impurity sites, which shows two pairs of hybridized YSR states at the energies \(E_{\mathrm{S,1}}\approx\pm 0.15\,\Delta_{\mathrm{Nb}}\) and \(E_{\mathrm{S,2}}\approx\pm 0.69\,\Delta_{\mathrm{Nb}}\). The parameters \(U\) are \(J\) are chosen such that the energy difference of the positive and negative pair is \(\Delta E=|E_{\mathrm{S,2}}|-|E_{\mathrm{S,1}}|\approx 0.54\,\Delta_{\mathrm{Nb}}\).
In Figs. 5(c) and (d), we show the spatially resolved LDOS of the even- and odd-symmetry states at the occupied (hole) part of the high- and low-energy YSR resonances for A-type dimer, respectively. The calculated LDOS maps are in good agreement with the tunneling conductance maps observed experimentally, in particular, they qualitatively reproduce the high (low) LDOS in the nodal plane at for the even- (odd)-symmetric YSR state at \(U=-2.0\,\mathrm{mV}\) [Fig. 4(b)] (\(U=-1.2\,\mathrm{mV}\) [Fig. 4(c)]). Furthermore, the model reproduces the "focusing effect" in \([1\bar{1}0]\) direction, marked with arrows in Fig. 5(d) similar as in Fig. 4(c).
Since the theoretical model is two-dimensional and the attenuation of YSR wave function is significantly reduced, an additional set of maxima in the other direction is also observed. They arise from the remaining circular segment of the Fermi contour and their direction rotates with the orientation of the dimer, whereas the "focusing" direction is independent of the dimer orientation and always directed along \([1\bar{1}0]\). As expected, the calculated data for the odd state in Fig. 5(c) lack the finite signal at the dimer center, which only appears in our experimental data due to the contribution of the CO-related \(p\)-orbitals.
For the data of Fig. 5, the \(k_{\mathrm{F}}\) parameter is chosen to match the oscillation pattern in the experimental data. We find the good match between our theoretical model with a simplified shape of the Fermi contour and the experimental data for \(k_{\mathrm{F}}=2.5/d\approx(9.4\pm 1.5)\,\mathrm{nm}^{-1}\), see Fig. S5 in [35]. Surprisingly, this value for \(k_{\mathrm{F}}\) estimated for Fe impurity on Nb(110) is almost two times larger than the one obtained for Mn on Nb(110) [39]. It's important to note, that for Fe atoms the \(k_{\mathrm{F}}\) is estimated for YSR states which correspond to scattering channels related to the \(d_{z^{2}}\)-orbital, whereas for Mn \(k_{\mathrm{F}}\) is obtained for the \(d_{yz}\)-orbital [39]. We speculate that the relatively large difference in the effective Fermi wave length \(k_{\mathrm{F}}\) for screening magnetic impurities is caused by the fact that the \(d_{yz}\)-states of Mn and the \(d_{z^{2}}\)-states of Fe hybridize with very different bands of the Nb Fermi surface. Further _ab initio_ calculations would be desirable to clarify this issue.
## Discussion
In summary, our results clearly demonstrate the advantages of double-functionalized STM tips which consist of a superconducting cluster and an additional CO molecule attached to it, as they simultaneously enable high energy and extra spatial resolution. The \(p\)-orbital nature of the CO frontier orbital at the apex of the STM tip results in a higher contrast. Since \(p_{x,y}\)-like orbital result in a signal which is proportional to the square of the wave function gradient, the image resolution is greatly improved as compared to conventional bulk superconducting or metal probes. Yet, depending on the actual mixture of \(p_{z}\)-, and \(p_{x,y}\)-like orbitals at the STM tip changes of the symmetry as well as lateral offsets of the features may occur. With an appropriate consideration of these effects, double-functionalized STM tips provide access to the wave functions of YSR bound states on superconductors, as well as, to the character of the
Figure 5: **Fermi surface model and resulting DOS.** (**a**) Sketch of the stadium-shaped Fermi surface at the Fermi energy \(E_{\mathrm{F}}\). \(a\in[0,k_{\mathrm{F}}]\) and the angle \(\beta=\arccos(a/k_{\mathrm{F}})\) define the regions with flat segments, where \(a=0\) corresponds to a circular Fermi surface with radius \(k_{\mathrm{F}}>0\) and \(\beta=\pi/2\). (**b**) LDOS at the impurity position \(\mathbf{r}=\mathbf{r}_{1}\) normalized to its large-energy value at \(E=10^{3}\Delta_{\mathrm{Nb}}\) for a pair of magnetic impurities located at \(\mathbf{r}_{1,2}\). There are two pairs of YSR states at the energies \(E_{\mathrm{S,1}}\approx\pm 0.15\,\Delta_{\mathrm{Nb}}\) and \(E_{\mathrm{S,2}}\approx\pm 0.69\,\Delta_{\mathrm{Nb}}\). Surface LDOS normalized to the maximum value at the YSR state energies (**c**) \(E_{\mathrm{S,2}}\) and (**d**) \(E_{\mathrm{S,1}}\). Black dots represent the locations of the individual magnetic impurities. Data were rotated to fit the orientation in Fig. 4**(b, c)**. Parameters for (**b**)-(**d**): \(a=0.6\,k_{\mathrm{F}}\), \(\xi_{0}=100/k_{\mathrm{F}}\), \(k_{\mathrm{F}}=2.5/d\), \(J=-0.82/N_{0}\), \(U=0.70/N_{0}\), where \(N_{0}\) is the normal-state density of states at the Fermi energy.
Fermi surface of the hosting substrate, which is highly important for studying, for example, unconventional superconductors or systems with topological edge states.
## Methods
The experiments are performed in a home-built low-temperature STM at a base temperature of 1.4 K. The Nb(110) surface is cleaned by a series of high-temperature flashes [40]. Fe atoms are deposited _in-situ_ onto the Nb substrate at a temperature of 4.2 K. To get a superconducting probe, an electro-chemically etched W tip was brought in contact with the Nb crystal, thus creating a Nb cluster on the tip apex. CO molecules were picked up from a clean Cu(001) surface using the procedure described in Ref. [9]. The resulting double-functionalized tips exhibit a superconducting gap \(\Delta_{\text{tip}}\) which corresponds to about 70-90% of the bulk Nb value [41]. The presence of a superconducting gap in the LDOS of the tip causes a corresponding shift of the sample's LDOS features in the conductance spectra by \(\Delta_{\text{tip}}\). The experimental data are measured in the tunneling regime at relatively large tip-sample distances, corresponding to a tunnel resistances \(R_{\text{tun}}>10^{6}\,\Omega\), such that the current is dominated by single-electron tunneling rather than Andreev reflections [42; 43; 44; 45]. All spectroscopic measurements are performed with a modulation voltage of 0.1 mV at a frequency of 890 Hz.
|
2306.16332 | The 2-point correlation function covariance with fewer mocks | We present an approach for accurate estimation of the covariance of 2-point
correlation functions that requires fewer mocks than the standard mock-based
covariance. This can be achieved by dividing a set of mocks into jackknife
regions and fitting the correction term first introduced in Mohammad & Percival
(2022), such that the mean of the jackknife covariances corresponds to the one
from the mocks. This extends the model beyond the shot-noise limited regime,
allowing it to be used for denser samples of galaxies. We test the performance
of our fitted jackknife approach, both in terms of accuracy and precision,
using lognormal mocks with varying densities and approximate EZmocks mimicking
the DESI LRG and ELG samples in the redshift range of z = [0.8, 1.2].
We find that the Mohammad-Percival correction produces a bias in the 2-point
correlation function covariance matrix that grows with number density and that
our fitted jackknife approach does not. We also study the effect of the
covariance on the uncertainty of cosmological parameters by performing a
full-shape analysis. We find that our fitted jackknife approach based on 25
mocks is able to recover unbiased and as precise cosmological parameters as the
ones obtained from a covariance matrix based on 1000 or 1500 mocks, while the
Mohammad-Percival correction produces uncertainties that are twice as large.
The number of mocks required to obtain an accurate estimation of the covariance
for 2-point correlation function is therefore reduced by a factor of 40-60. | Svyatoslav Trusov, Pauline Zarrouk, Shaun Cole, Peder Norberg, Cheng Zhao, Jessica Nicole Aguilar, Steven Ahlen, David Brooks, Axel de la Macorra, Peder Doel, Andreu Font-Ribera, Klaus Honscheid, Theodore Kisner, Martin Landriau, Christophe Magneville, Ramon Miquel, Jundan Nie, Claire Poppett, Michael Schubnell, Gregory Tarlé, Zhimin Zhou | 2023-06-28T16:08:52Z | http://arxiv.org/abs/2306.16332v3 | # 2-point statistics covariance with fewer mocks
###### Abstract
We present an approach for accurate estimation of the covariance of 2-point correlation functions that requires fewer mocks than the standard mock-based covariance. This can be achieved by dividing a set of mocks into jackknife regions and fitting the correction term first introduced in Mohammad & Percival (2022), such that the mean of the jackknife covariances corresponds to the one from the mocks. This extends the model beyond the shot-noise limited regime, allowing it to be used for denser samples of galaxies. We test the performance of our fitted jackknife approach, both in terms of accuracy and precision, using lognormal mocks with varying densities and approximate EZmocks mimicking the DESI LRG and ELG samples in the redshift range of \(z=[0.8,1.1]\).
We find that the Mohammad-Percival correction produces a bias in the 2-point correlation function covariance matrix that grows with number density and that our fitted jackknife approach does not. We also study the effect of the covariance on the uncertainty of cosmological parameters by performing a full-shape analysis.
We find that our fitted jackknife approach based on 25 mocks is able to recover unbiased and as precise cosmological parameters as the ones obtained from a covariance matrix based on 1000 or 1500 mocks, while the Mohammad-Percival correction produces uncertainties that are twice as large. The number of mocks required to obtain an accurate estimation of the covariance for 2-point correlation function is therefore reduced by a factor of 40-60.
The FitCov code that accompanies this paper is available at this GitHub repository.
keywords: dark energy - large-scale structure of Universe - dark matter - miscellaneous
## 1 Introduction
A new generation of cosmological surveys such as Dark Energy Spectroscopic Instrument (DESI Collaboration et al., 2016, DESI Collaboration et al., 2022) have started taking data and even more will in the coming years with e.g. the start of operations of Euclid
(Laureijs et al., 2011) and the Vera Rubin Observatory (Ivezic et al., 2019). Therefore, it is becoming vital to develop methods for deriving covariance matrices in order to estimate the uncertainties on the cosmological parameters of interest.
Existing methods of evaluating the covariance matrix that quantifies the errors on the galaxy 2-point correlation function of galaxy redshift surveys can be separated into three different categories: mock-based, analytic and internal, each best suited to different scenarios. Mock-based covariance matrices are built from a large suite of numerical simulations, "mock" catalogues, that mimic the properties of the cosmological surveys with high fidelity. These mocks need to be i) accurate in the sense that they have to reproduce the two- and higher-point statistics with limited biases and ii) numerous in order to avoid sample variance, which introduces noise in the covariance matrices that could bias the inferred parameter uncertainties (e.g. Dawson et al., 2013; Percival et al., 2014).
Analytic approaches provide expectation values of the large-scale structure statistics directly and are much less computationally expensive. However, that requires a description of the non-Gaussian terms that enter the four-point correlation function, which are needed to compute the covariance of the two-point correlation function. Accurate modelling of the non-linear gravitational evolution, galaxy bias, redshift-space distortions and shot noise is thus a challenge to compute analytic covariance matrices. The modelling usually relies on Perturbation Theory (PT) which limits the domain of accuracy to the quasi-linear regime when the density perturbations remain small compared to unity. Moreover, one also needs to account for survey geometry and window function effects. Recent progress in this direction has been made to develop codes for the power spectrum (CovaPT, Wadekar et al., 2020). Additionally, we can mention semi-analytic approaches, which use the data to calibrate themselves, for example RascalC code (Philcox et al., 2019; O'Connell et al., 2016).
Finally, data-based or internal methods, such as jackknife and bootstrap, are often used especially when large sets of mocks are not available. They consist in resampling the survey data by slicing the original data into sub-samples and weighting these sub-samples following specific prescriptions. In the standard jackknife approach, for a given jackknife realisation \(i\), the sub-samples have unit weight except the sub-sample indexed \(i\) that is weighted 0, hence this approach is also called 'delete-one' jackknife resampling. Internal resampling methods do not rely on any assumption about the underlying gravity model and are thus less sensitive to unknown physics. However, they can lack precision and suffer from biases, as discussed in Norberg et al. (2009); Friedrich et al. (2016) and Farole et al. (2021). Recently, a correction to the standard jackknife resampling method was proposed in Mohammad and Percival (2022) which consists in introducing a different weighting scheme for the cross-pairs than for the auto-pairs, where the auto-pairs are made up of objects that lie in the same sub-sample and cross-pairs of two objects that reside in two distinct sub-samples. Indeed, the choice of assigning weights to pairs of objects is arbitrary and Mohammad and Percival (2022) tested different prescriptions. They found that by adjusting the weighting of the pairs that compose the estimates of the two-point correlation function, they were able to provide more accurate estimates of the variance than the standard jackknife.
In this work, we follow a similar methodology but propose to go beyond that work by i) considering some cross-pairs that were neglected in both the standard jackknife and the jackknife method with Mohammad-Percival correction, ii) fit the appropriate weighting scheme to a mock-based covariance built from a smaller number of mocks than for traditional mock-based approach. The outline of the paper is as follows: in Section 2 we review the formalism associated with the standard jackknife resampling method and the correction proposed in Mohammad and Percival (2022). We introduce there the formalism of our proposed hybrid approach, which performance on mocks is presented in Section 3 and compared with the original correction for jackknife and with mock-based method for estimating the covariance matrix. We conclude and discuss further prospects in Section 4.
## 2 Covariance Estimators
In the present paper we work in configuration space. We use the Landy-Szalay estimator with double-counting assumed, (Landy and Szalay, 1993), which can be written as:
\[\xi(s,\mu)=\frac{DD(s,\mu)-2DR(s,\mu)+RR(s,\mu)}{RR(s,\mu)} \tag{1}\]
where \(s\) is the redshift space separation of a pair of galaxies, \(\mu\) is the cosine of the angle between the separation vector and the line of sight, \(\xi(s,\mu)\) is the 2-point correlation function in redshift space, \(DD(s,\mu)\) are the binned auto-pair counts of the data catalogue, \(RR(s,\mu)\) are the binned pair counts computed from a matching random catalogue, and \(DR(s,\mu)\) are the binned cross-pair counts between the random and the data catalogue. All pair counts are assumed to be suitably normalised in Eq. 1.
The 2-point correlation function can be decomposed into Legendre multipoles defined as:
\[\xi_{\ell}(s)=(2\ell+1)\int_{0}^{1}\xi^{s}(s,\mu)L_{\ell}(\mu)\,d\mu, \tag{2}\]
where \(\ell\) is the order of the multipole, and \(L_{\ell}(\mu)\) are the Legendre polynomials.
### Covariance from data or data-like mocks
Cosmological simulations can be divided into two categories: i) precise and expensive computationally N-body simulations, which are known to treat properly non-linear gravitational evolution; ii) less accurate approximate mock methods, such as BAM (Balaguera-Antolinez et al., 2018), COLA (Tassev et al., 2013), EZmock (Chiang et al., 2014), (Zhao et al., 2021), FastPM (Feng et al., 2016), GLAM (Klypin and Prada, 2018), lognormal, PATCHY (Kitaura et al., 2016) etc. They can provide a good covariance for scales \(>10\)\(h^{-1}\)Mpc, but small-scale clustering is not properly resolved.
Assuming a survey with \(N_{\rm m}\) mocks, the covariance matrix of the 2-point correlation function is defined as:
\[C_{ij}=\frac{1}{N_{\rm m}-1}\sum_{k=1}^{N_{\rm m}}\left[\xi_{i}^{[k]}-\langle \xi_{i}\rangle\right]\,\left[\xi_{j}^{[k]}-\langle\xi_{j}\rangle\right]\,, \tag{3}\]
where \(\xi_{i}^{[k]}\) is the \(i^{\rm th}\) bin of correlation function of the \(k^{\rm th}\) mock, and \(\langle\xi_{i}\rangle\) is the mean over the \(N_{\rm m}\) mocks of the \(i^{\rm th}\) bin of the correlation function.
However, for some subsets of modern surveys, like the DESI Bright Galaxy Survey (BGS), the number of galaxies and their number density sometimes becomes so large, even these approximate methods become expensive computationally, posing a problem.
### Jackknife covariance
Jackknife is a data resampling approach that involves creating multiple sub-samples of the same dataset by systematically excluding regions of the data. When applied to the cosmological surveys, the footprint is divided into regions of similar area and it is these that are systematically excluded to make the multiple sub-samples.
This approach has the advantage of making no assumptions regarding non-linear evolution and non-standard physics, and at the same time is extremely cheap from the computational perspective, as it does not require expensive production of thousands of mocks. Assuming we have cut our dataset into \(N_{\rm jk}\) pieces, the covariance matrix is:
\[C_{ij}=\frac{N_{\rm jk}-1}{N_{\rm jk}}\sum_{k=1}^{N_{\rm jk}}\left[\xi_{i}^{[k ]}-\langle\xi_{i}\rangle\right]\left[\xi_{j}^{[k]}-\langle\xi_{j}\rangle\right]\,, \tag{4}\]
where \(\xi_{i}^{[k]}\) is the \(i^{\rm th}\) bin of the correlation function of the \(k^{\rm th}\) jackknife region, and \(\langle\xi_{i}\rangle\) is its mean over all the \(N_{\rm jk}\) jackknife regions. The coefficient on the right-hand side is larger than the corresponding factor in Eq. 3 as it compensates for the reduction in the covariance due to the overlap between the subsamples.
In practice, we consider the galaxy 2-point correlation function and the \(DD\), \(DR\) and \(RR\) pair counts mentioned in the Landy-Szalay estimator defined in Eq. (1).
#### 2.2.1 Standard approach
We will assume the number of sub-samples is \(N_{\rm jk}\) and work in terms of pair counts rather than correlation functions. For simplicity, we will denote as \(AA_{k}\) the auto-counts that are contributed by pairs of galaxies that both reside in the \(k^{\rm th}\) area of the survey (the areas that are systematically excluded to form the jackknife sub-samples), and \(CC_{k}\) the cross-counts between galaxies in this \(k^{\rm th}\) area and those in the jackknife sub-sample that is made by excluding this area. The counts in the jackknife sub-sample \(TT_{k}\) are related to the overall number of counts in the full survey \(TT_{\rm tot}\) and the above quantities by
\[TT_{k}=TT_{\rm tot}-AA_{k}-CC_{k}. \tag{5}\]
where in defining each of these pair counts we count each unique pair only once. The total number of auto- and cross-pairs can be related to their means over the jackknife samples by
\[AA^{\rm tot}=N_{\rm jk}\overline{AA} \tag{6}\]
and, as we account for double counting with the cross-pairs only while looking at the full sample, we need to divide the obtained estimate by 2 to be consistent with the auto-pairs:
\[CC^{\rm tot}=\frac{N_{\rm jk}}{2}\overline{CC}. \tag{7}\]
Following (Mohammad Percival, 2022), we choose to define an estimator of the normalised auto-pairs \(\theta_{\rm a,k}\) in a specific realisation, such that \(\overline{\theta}_{\rm a}=\overline{AA}\) by
\[\theta_{\rm a,k}=\frac{1}{N_{\rm jk}-1}\left(N_{\rm jk}\overline{AA}-AA_{k}\right) \tag{8}\]
and the estimator of the normalised cross-pairs \(\theta_{c,k}\) such that \(\overline{\theta}_{\rm c}=\overline{CC}\) by
\[\theta_{c,k}=\frac{2}{N_{\rm jk}-2}\left(\frac{N_{\rm jk}}{2}\overline{CC}- CC_{k}\right), \tag{9}\]
where it was taken into account that the cross-pairs contribute to the total estimate twice, while the auto-pairs only once.
We can then further compute for each jackknife realization the deviation from the mean value of the auto paircounts
\[\theta_{a,k}-\overline{\theta_{a}}=\frac{1}{N_{\rm jk}-1}\left(\overline{AA} -AA_{k}\right) \tag{10}\]
and cross paircounts
\[\theta_{c,k}-\overline{\theta_{c}}=\frac{2}{N_{\rm jk}-2}\left(\overline{CC} -CC_{k}\right). \tag{11}\]
We can now express how the covariance of each type of pair count can be represented in terms of the estimators above, if we assume the following definition for the covariance, where \(DD_{t}\) are just some pair counts of type \(t\):
\[\begin{split}\mathrm{cov}(DD_{1},DD_{2})=\sqrt{\frac{\overline{ DD}_{1}\overline{DD}_{2}}{DD_{1}^{\rm tot}DD_{2}^{\rm tot}}}\frac{1}{N_{\rm jk}-1} \times\\ \times\sum_{k=1}^{N_{\rm jk}}(DD_{1k}-\overline{DD}_{1})(DD_{2k}- \overline{DD}_{2})\end{split} \tag{12}\]
By replacing \((DD_{1},DD_{2})\) by \((AA,AA)\) or \((CC,CC)\) or \((CC,AA)\) in Eq. 12 and using Eqs. 10 and 11, one obtains:
\[\mathrm{cov}(AA,AA)=\frac{N_{\rm jk}-1}{N_{\rm jk}}\sum_{k=1}^{N_{\rm jk}} \left(\theta_{\rm a,k}-\bar{\theta}_{\rm a}\right)^{2} \tag{13}\]
\[\mathrm{cov}(CC,CC)=\frac{(N_{\rm jk}-2)^{2}}{2N_{\rm jk}(N_{\rm jk}-1)}\sum_{k =1}^{N_{\rm jk}}\left(\theta_{\rm c,k}-\bar{\theta}_{\rm c}\right)^{2} \tag{14}\]
\[\mathrm{cov}(CC,AA)=\frac{(N_{\rm jk}-2)}{\sqrt{2}N_{\rm jk}}\sum_{k=1}^{N_{ \rm jk}}\left(\theta_{\rm c,k}-\bar{\theta}_{\rm c}\right)\left(\theta_{\rm a, k}-\bar{\theta}_{\rm a}\right) \tag{15}\]
This gives all the components needed to compute the covariance of \(TT\), using its definition in Eq. 5:
\[\begin{split}\mathrm{cov}(TT,TT)=\mathrm{cov}(AA,AA)+\mathrm{cov} (CC,CC)+2\mathrm{cov}(AA,CC)\\ =\frac{N_{\rm jk}-1}{N_{\rm jk}}\sum_{k=1}^{N_{\rm jk}}\left( \theta_{\rm a,k}-\bar{\theta}_{\rm a}\right)^{2}+\frac{(N_{\rm jk}-2)^{2}}{2N_{ \rm jk}(N_{\rm jk}-1)}\sum_{k=1}^{N_{\rm jk}}\left(\theta_{\rm c,k}-\bar{ \theta}_{\rm c}\right)^{2}+\\ +\frac{\sqrt{2}(N_{\rm jk}-2)}{N_{\rm jk}}\sum_{k=1}^{N_{\rm jk}} \left(\theta_{\rm c,k}-\bar{\theta}_{\rm c}\right)\left(\theta_{\rm a,k}-\bar{ \theta}_{\rm a}\right)\end{split} \tag{16}\]
Note how the terms scale differently with the number of the jackknife regions. Mohammad and Percival (2022) argue that this inconsistent scaling is the source of the problems that arise with the standard jackknife approach.
#### 2.2.2 Mohammad-Percival correction
Mohammad and Percival (2022) proposed to weight the cross-pairs \(CC\) in order to fix the mismatch in the scaling, as seen in Eq. 16. With this weight \(\alpha\) multiplying all the \(CC\) pair counts, the expression for \(TT_{k}\) becomes
\[TT_{k}=TT_{\rm tot}-AA_{k}-\alpha CC_{k}. \tag{17}\]
Following the steps from equations (9), (11) and (14), the modified expression for the covariance of the \(CC\) paircounts is
\[\mathrm{cov}(CC,CC)=\frac{(N_{\mathrm{jk}}-2\alpha)^{2}}{2\alpha^{2}N_{\mathrm{ jk}}(N_{\mathrm{jk}}-1)}\sum_{k=1}^{N_{\mathrm{jk}}}\left(\theta_{c,k}-\bar{ \theta}_{c}\right)^{2} \tag{18}\]
We see that for \(\alpha=1\) we recover the ordinary jackknife, as it will remove the cross-pairs in the same way as it removes the auto-pairs. Alternatively, by choosing \(\alpha=N_{\mathrm{jk}}/\left[2+\sqrt{2}(N_{\mathrm{jk}}-1)\right]\) we can achieve equal scaling for the first two terms. Therefore, under the assumption of \(\mathrm{cov}(CC,AA)=0\) we indeed have all the terms scaling with \(N_{\mathrm{jk}}\) in same manner, which can be seen by rewriting the expression for \(\mathrm{cov}(TT(\alpha),TT(\alpha))\) as
\[\mathrm{cov}(TT(\alpha),TT(\alpha))=\mathrm{cov}(AA,AA)+\alpha^{2 }\mathrm{cov}(CC,CC) \tag{19}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{(N_ {\mathrm{jk}}-2\alpha)^{2}}{2N_{\mathrm{jk}}(N_{\mathrm{jk}}-1)}\sum_{k=1}^{N_ {\mathrm{jk}}}\left(\theta_{c,k}-\bar{\theta}_{c}\right)^{2}\]
In order to illustrate the effect of introducing the \(\alpha\) weighting of Mohammad and Percival (2022), we create 1000 Poisson random catalogues in a box with a size of 1 Gpc/h, divide them into 125 cubic regions and then compute the covariance matrices of the real-space correlation function. We do this for both the standard jackknife and jackknife with the Mohammad-Percival correction. The results are presented in Fig. 1. We show the ratio of the mean of the diagonal elements, \(\sigma^{2}=C_{ii}\), of the covariance matrix between jackknife-based \(\sigma_{\mathrm{jk}}\) and mock-based \(\sigma\) (estimated directly using Eq. 3), where the blue curve uses the standard jackknife and the orange one includes the Mohammad-Percival correction. The standard jackknife is over-estimating the covariance with respect to that from the mocks, while introducing the \(\alpha\) weighting of Mohammad and Percival (2022) for the cross-pairs removes this bias.
### Hybrid approach
The real galaxy density has physical correlations and so galaxy distributions are not Poisson distributions. Therefore, the assumption of \(\mathrm{cov}(CC,AA)=0\) is not valid. With the \(\alpha\) weighting of the cross-pairs that was introduced in Section 2.2.2, Eq. 15 becomes
\[\mathrm{cov}(CC,AA)=\frac{(N_{jk}-2\alpha)^{N_{jk}}}{\sqrt{2}\alpha N_{jk}} \sum_{k=1}^{N_{\mathrm{jk}}}\left(\theta_{c,k}-\bar{\theta}_{c}\right)\left( \theta_{a,k}-\bar{\theta}_{a}\right). \tag{20}\]
We can see that adopting any general fixed value of \(\alpha\) unfortunately leaves the scaling of \(\mathrm{cov}(CC,AA)\) different from those of \(\mathrm{cov}(AA,AA)\) and \(\mathrm{cov}(CC,CC)\), so, in order to try to recover the benefits of the Mohammad-Percival approach, we are treating \(\alpha\) as a free parameter. We propose therefore to augment the jackknife method with \(\alpha\) weighting where the value of \(\alpha\) is tuned by fitting the covariance estimate from a limited number of mocks. A scheme that represents the approach is shown in Fig. 2. First, let us assume we have a set of \(N_{m}\) mocks \(S=\{S_{1}...S_{N_{m}}\}\). Then, \(S/S_{k}\) denotes the set of mocks with the \(k^{\mathrm{th}}\) mock removed. Then, we refer to the mock covariance from such a set \(S/S_{k}\) as \(C_{ij}[S/S_{k}]\). We also introduce the \(\alpha\)-dependent jackknife covariance obtained from a mock \(S_{k}\) with a chosen \(\alpha\) weighting as \(C_{ij}[S_{k}](\alpha)\), from correlation functions constructed with counts following eq. (17).
Having that in our possession, we are able to estimate the uncertainty on the diagonal elements of the covariance \(\Xi_{ij}(\mathrm{diag}(C))\). First, we resemble the given set of mocks and produce \(N_{m}\) covariances \(C_{ij}[S/S_{k}]\). Then we compute the covariance matrix of the diagonals \(\Xi_{ij}(\mathrm{diag}(C))\), where we limit ourselves to the diagonal elements for computational reasons1:
Footnote 1: There are not enough degrees of freedom to build a proper Wishart distribution, hence the reason to consider only the diagonal elements
\[\Xi_{ij}(\mathrm{diag}(C))=\mathrm{cov}(C_{ii},C_{jj})=\\ =\frac{N_{\mathrm{m}}-1}{N_{\mathrm{m}}}\sum_{k=1}^{N_{\mathrm{ m}}}(C_{ii}[S/S_{k}]-C_{ii}[S])(C_{jj}[S/S_{k}]-C_{jj}[S])\]
The next step consists in finding which specific \(\alpha\) is needed to obtain a realisation of the covariance matrix to describe \(C_{ij}[S]\). First, we can write the \(\alpha\) dependent estimator of the covariance \(C_{ij}(\alpha)\) based on the mean of \(N_{\mathrm{m}}\)\(\alpha\) dependent jackknife covariances:
\[C_{ij}(\alpha)=\frac{1}{N_{\mathrm{m}}}\sum_{k=1}^{N_{\mathrm{m}}}C_{ij}[S_{k} ](\alpha) \tag{21}\]
Then, the \(\chi^{2}\) of the \(C_{ii}(\alpha)\) describing the \(C_{ii}[S]\) can be written as:
\[\chi^{2}_{C}(\alpha)=\sum_{ij}(C_{ii}(\alpha)-C_{ii}[S])\left(\Xi^{-1}\right) _{ij}(C_{jj}(\alpha)-C_{jj}[S]) \tag{22}\]
Following that, we minimise \(\chi^{2}_{C}\) by varying \(\alpha\), such that we obtain \(\chi^{2}_{C}(\alpha_{\mathrm{min}})=\min(\chi^{2}_{C}(\alpha))\). To justify using the Gaussian likelihood in this procedure, we first notice that we are using only the diagonals of the covariance matrix. That allows us, with sufficiently large \(N_{\mathrm{m}}\), to approximate the distribution of the separate bins of the diagonals \(C_{ii}\) with a Gaussian.
Therefore, our proposed estimator of the \(\alpha\) dependent covariance matrix \(C^{\mathrm{(fit)}}_{ij}\) can be defined as:
Figure 1: Comparison of the accuracy in the estimate of the diagonal elements of the covariance matrix for the real-space correlation functions as a function of scale obtained from 1000 cubic box independent mock catalogues. The ratio is the mean of the diagonal elements obtained using different jackknife approaches to those obtained directly from the ensemble of mocks. The noticeable scale-dependent bias that is visible for the standard jackknife estimate is absent when the Mohammad-Percival correction is employed.
\[C_{ij}^{\rm(fit)}=C_{ij}(\alpha_{\rm min})=\frac{1}{N_{\rm m}}\sum_{k=1}^{N_{\rm m} }C_{ij}[S_{k}](\alpha_{\rm min}) \tag{23}\]
While only the diagonal of \(C_{ij}^{\rm(fit)}\) are used when fitting for \(\alpha\), all the elements of \(C_{ij}^{\rm(fit)}\) are consistently adjusted with the value of \(\alpha\) that is found. In the original Mohammad-Percival approach, the contribution of the cross-pairs to the covariance is adjusted to match that of the auto-pairs. Our hybrid approach allows us to adjust the cross-pair contribution on the \(\alpha\) weighted covariance so that the covariance matches the one obtained from the limited set of mocks. We will show in the next section that by doing so, we can greatly reduce the bias that can appear for dense samples when using the fixed \(\alpha\) weighting of Mohammad and Percival (2022). However, the hybrid approach does require more than a single mock to create a covariance estimate, but in the next section we will also show that the number of mocks needed is significantly reduced compared to a purely mock-based approach.
Figure 2: Schematic describing the procedure to obtain the fitted covariance \(C_{\rm fit}^{ij}\) as defined in Eq. (23) and discussed in Section 2.3.
## 3 Tests on Mocks
We test the performance of the fitted jackknife method with respect to other covariance matrix estimation methods on different sets of mocks that include RSD and some geometrical effects that we will describe in subsequent sections. For each specific set of mocks we also generate a set of matching random synthetic catalogues.
In section 3.1 we present the methodology of the tests that we perform on our mocks. In section 3.2, a set of tests is performed on log-normal mocks produced by the \(\tt{MockFactory}\) code2 with three number densities to explore shot noise-dominated and sample variance-dominated regimes, but also to mimic the DESI LRG and ELG samples. In section 3.3 approximate EZmocks mimicking the DESI LRG and ELG samples are used to provide a mock-based covariance matrix which has the level of statistical precision of expected from the DESI Year-5 data. The corresponding number densities can be seen in Fig. 3 for LRG EZmocks in red, ELG EZmocks in purple and the different lognormal mocks at \(\bar{n}=(2,\ 5,\ 15)\times 10^{-4}\,[\rm{Mpc}/h]^{-3}\) in blue, orange and green respectively. We use 1500 lognormal mocks for each space density, and 1000 ELG and LRG EZ mocks respectively.
Footnote 2: [https://github.com/cosmodesi/mockfactory](https://github.com/cosmodesi/mockfactory)
### Methodology
Both the random and data samples are divided into \(N_{\rm jk}=196\) jackknife regions (the results, shown in Sec. 3.2, are not sensitive to \(N_{\rm jk}\)) and FKP weights \(w_{i}\) for each point \(i\) in the dataset are assigned as follows:
\[w_{i}=\frac{1}{1+\bar{n}_{i}P_{0}} \tag{24}\]
where \(P_{0}=10^{4}h^{3}\rm{Mpc}^{-3}\) is the power spectrum estimate at the given redshift. The FKP weights (Feldman et al., 1994) minimize the variance of the power spectrum estimate for samples that have a number density that varies with redshift. Then, the correlation functions are computed using \(\tt{pycorr}\)3 for both the samples and the jackknife realisations, which allows us to obtain \(C_{ij}\), \(C_{ij}(\alpha)\) and \(C_{ij}^{\rm(fit)}\), defined in eq. (3), eq. (21) and eq. (23).
Footnote 3: [https://github.com/cosmodesi/pycorr](https://github.com/cosmodesi/pycorr)
In order to test the robustness and precision of different covariance estimators,
We first create 30 fitted covariances. Then we randomly select 50 mocks. These mocks are fitted with produced covariances in all possible combinations, allowing us to have 1500 different estimations of the cosmological parameters from different mocks obtained with different covariances. To have the same test for the jackknife covariances, we are randomly selecting 30 jackknife covariances and repeat the same procedure, obtaining the same total number of fits. Unfortunately for the mock covariance, only 1 covariance matrix is available, so we fit all of the 1500 mocks, in order to obtain the same number of fits as for previous estimation techniques. The procedure is the same for the approximate mocks, with the only difference being that we only have 1000 mocks, so for the tests of fitted and jackknife covariance estimators we are using only 20 different covariances.
### Lognormal mocks
In order to quickly test our approach with different parameters, such as number density, we produce a set of lognormal mocks which are often used as a simple approximation to the non-linear density field that evolves from Gaussian initial conditions. The lognormal distributed density contrast \(\delta(\vec{x})\) is related to a Gaussian field \(G(\vec{x})=\ln[1+\delta(\vec{x})]-\left\langle\ln[1+\delta(\vec{x})]\right\rangle\) as:
\[\delta(\vec{x})=e^{-(G^{2})+G(\vec{x})}-1 \tag{25}\]
The two-point correlation function \(\xi(r)\) is related to the correlation function of the Gaussian field \(\xi_{G}(r)\) as:
\[\xi_{G}(r)=\ln[1+\xi(r)] \tag{26}\]
So, a fiducial power spectrum \(P(k)\) can be transformed into the correlation function \(\xi(r)\), which is then converted to the correlation function of the Gaussian field using eq. (26). We Fourier transform
Figure 4: The average of the quantity defined in Eq. (30) representing the bias of the specific covariance estimation approach plotted as a function of separation, \(s\), for various number densities.
Figure 3: Number density dependence on redshift for different datasets used. The lognormal mock samples were chosen to have a constant density selection function, to simplify the matters, while LRG and ELG mock samples follow the expected values from the corresponding DESI survey subsets.
it to the power spectrum \(P_{G}(k)\) and eventually generate the Fourier space Gaussian field \(G(k)\) as:
\[G(k)=\sqrt{\frac{P_{G}(k)V}{2}}(\theta_{r}+i\theta_{i}) \tag{27}\]
where \(\theta_{r},\theta_{i}\) are Gaussian random variables with unit variance and zero mean, and \(V\) is the volume of the simulation. After simulating the Fourier Gaussian field \(G(k)\) on the grid, we then use Fast Fourier Transform (FFT) to transform it and obtain the regular configuration space Gaussian field \(G(x)\). This is then transformed into the over-density field using eq. (25). The expectation value for the number of galaxies in a particular cell is computed given a fixed mean number density \(\bar{n}\), and galaxies are then drawn using the Poisson distribution and placed randomly the cell. Velocities are then assigned using the linearised continuity equation:
\[a(t)\frac{\partial\delta(\overline{x})}{\partial t}+\overline{\psi}\cdot \overline{v}(\overline{x})=0 \tag{28}\]
where \(a(t)\) is a scale factor, and which is solved using Zeldovich approximation (Zel'dovich, 1970).
Eventually, the RSD effect is modelled at a chosen redshift using the velocity information by affecting the coordinates of the galaxy \(x^{l}\) as:
\[x^{l}_{\rm{rsd}}=x^{i}+f(\overline{n}.\overline{v})n^{i} \tag{29}\]
where \(x^{i}_{\rm{rsd}}\) are the redshift-distorted coordinates, \(f\) is the linear growth rate of structure, \(\overline{v}\) is the velocity of the galaxy, and \(n^{i}\) is the line of sight.
#### 3.2.1 Dependence on number density
We create 3 sets of lognormal mocks, each set containing 1500 realisations, for number densities \(\bar{n}=2\times 10^{-4}\), \(5\times 10^{-4}\) and \(15\times 10^{-4}h^{3}{\rm{Mpc}}^{-3}\) at \(z=1\). Each of the realisations is made from a cubic box with a volume of \((2{\rm{Gpc}}/\bar{h})^{3}\) with grid of size \(384^{3}\) and fiducial cosmology with \(h=0.674\), \(\sigma_{8}=0.816\) and \(\Omega_{m}^{(0)}=0.31\). The CLASS code (Blas et al., 2011) is used to generate the initial power spectrum. Redshift space distortions are then added, and each box is cut to have a footprint that covers 15% of the full sky. Each mock is then analysed in the redshift range from 0.8 to 1.2, and the corresponding randoms are generated, which are about 4 times denser than the data mocks. The procedure to obtain the fitted jackknife covariance is summarised in Fig. 2 and explained in the previous section. Here, we use \(N_{\rm{m}}=50\) mocks. We measure correlation functions from the mocks in bins of \(5h^{-1}\)Mpc. Fig. 5 presents the \(\alpha\) parameter value distribution, obtained from the fits of the covariances.
Fig. 4 shows a measure of the relative bias \(\Delta\sigma^{2}(\xi_{\ell})/\sigma(\sigma_{\rm{Mock}}^{2})\) between a jackknife-based covariance matrix and the mock-based covariance as a function of pair separation \(s\). For simplicity we only consider the diagonal elements of each covariance matrix estimate. This relative bias is defined as
\[\frac{\Delta\sigma^{2}(\xi_{\ell})}{\sigma(\sigma_{\rm{Mock}}^{2})}=\frac{ \sigma^{2}(\xi_{\ell})-\sigma_{\rm{Mock}}^{2}(\xi_{\ell})}{\sigma(\sigma_{\rm {Mock}}^{2}(\xi_{\ell}))}, \tag{30}\]
where \(\sigma(\xi_{\ell})\) is the variance on a given multipole \(l\) obtained from the jackknife method, \(\sigma_{\rm{Mock}}(\xi_{\ell})\) is the variance on the same multipole obtained from the 1500 lognormal mocks and \(\sigma(\sigma_{\rm{Mock}}^{2})\) is the uncertainty on the mock-based error bar, determined by applying the classical jackknife delete-one mock estimator to the set of mocks from which the covariance is estimated.
The left panel of Fig. 4 shows this relative bias of the jackknife method with the Mohammad-Percival correction while the right panel shows the result for our fitted jackknife method. In both cases, the monopole, \(\xi_{0}\), is displayed in the top panel, the quadrupole, \(\xi_{2}\), in the middle and the hexadecapole, \(\xi_{4}\), in the bottom. The coloured lines show different number densities and the solid lines are the baseline configuration of 196 jackknife regions while the dashed lines show the test of using 100 jackknife regions instead. As expected, the underestimation slightly worsens with the increase in the number of jackknife regions, as predicted by eq. (15).
However, as the number density \(\bar{n}\) increases, the underestimation of the jackknife method with the Mohammad-Percival correction becomes more and more significant, especially for \(\bar{n}=15\times 10^{-4}\)\(h^{3}{\rm{Mpc}}^{-3}\). This underestimation is not visible on the jackknife covariance matrix estimates produced from the random catalogues as shown in Fig. 1. As explained in the previous section, the clustering of the data leads to higher covariance due to additional covariance coming from cross-correlations between \(CC\) and \(AA\) pair counts.
Additionally, there is no strong dependence on the number density for the fitted jackknife method which makes it more robust whatever the density regime of the galaxy sample of interest.
#### 3.2.2 Effect on the cosmological parameters
To test the performance of different covariance estimation techniques we infer \(f\sigma_{8}\), \(a_{\parallel}\) and \(\alpha_{\perp}\) by fitting the theoretical predictions for the multipoles to the ones from the mocks using covariances from estimators reviewed earlier. The fit is performed using a 5-parameter model, which is based on Lagrangian Perturbation Theory and includes the linear growth rate \(f\sigma_{8}\), Alcock-Paczynski parameters (Alcock and Paczynski, 1979)\(\alpha_{\parallel}\) and \(\alpha_{\perp}\), first- and second-order biases \(b_{1}\), \(b_{2}\) and the effective Fingers Of God parameter (FOG) \(\sigma_{\rm{FOG}}\). The theoretical power spectrum \(P_{\rm{FOG}}\) is obtained using the MomentumExpansion module of thevelocileptors package (for more details, see Chen et al., 2020, 2021). The Fingers-Of-God effect is modelled following Taruya et al. (2010), as
\[P_{\rm{FOG}}(\mathbf{k})=\frac{1}{1+(\mathbf{k}\cdot\mathbf{\hat{n}}\ \sigma_{\rm{FOG}})^{2}/2}P(\mathbf{k}), \tag{31}\]
where \(P(\mathbf{k})\) is the power spectrum without the FOG effect obtained with velocileptors, \(\sigma_{\rm FOG}\) is the one-dimensional velocity dispersion and \(\mathbf{\hat{u}}\) is the LOS direction unit vector. The power spectrum \(P_{\rm FOG}(\mathbf{k})\) is then transformed into the 2-point correlation function \(\xi^{\rm th}(s,\mu)\) using a Fast-Fourier-Transform and from that we compute the theoretical correlation function multipoles \(\xi^{\ell th}_{\ell}(s,\mu)\).
Once we have the correlation function multipoles \(\xi_{\ell}(s_{i})\), and covariance matrix \(\Sigma^{\ell,\ell_{i}\xi_{j}}_{ij}=C_{ij}\), \(C^{\rm t}_{ij}(\alpha)\) or \(C^{\rm(fit)}_{ij}\), we can obtain the likelihood \(L(p_{1},...,p_{n})\):
\[\begin{split}\log(L(p_{1},-p_{n}))&=\sum_{\ell_{1},\ell_{2}}\sum_{i,j}\left(\xi_{\ell_{1}}(s_{i})-\xi^{\rm(th)}_{\ell_{1}}(s_{i} )\right)\times\\ &\times\left(\Sigma^{-1}\right)^{\ell_{1}\ell_{2}}_{ij}\left(\xi _{\ell_{2}}(s_{j})-\xi^{\rm(th)}_{\ell_{2}}(s_{j})\right)\end{split} \tag{32}\]
where \(\xi^{(th)}_{\ell}(s)\) is the theoretical prediction of the multipole for the set of \(k\) parameters \(\{a_{1},...,a_{k}\}\) and we apply the Hartlap correction (Hartlap, J. et al., 2007) on the inverse of the covariance matrix such that the original uncorrected covariance matrix denoted as \(\mathbf{C}^{\rm(orig)}_{ij}\) and the corrected inverse covariance matrix \(\Sigma^{-1}_{ij}\) are related by:
\[\Sigma^{-1}_{ij}=\frac{n-p-2}{n-1}\left(C^{-1}\right)^{\rm(orig)}_{ij} \tag{33}\]
where \(n\) is the number of discrete samples, and \(p\) is the number of entries in the data vector (number of bins used). We use a likelihood maximisation method to find the \(\chi^{2}\) minima using iminuit(Dembinski and others, 2020). Errors are estimated from the region of \(\Delta\chi^{2}=1\) of the marginalized \(\chi^{2}\) distribution, and they are allowed to be asymmetric. The choice of a frequentist method of analysis is motivated by its low computational cost.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(\hat{n}(z)(h^{3}{\rm Mpc}^{-3})\) & Mock & Mohammad-Percival & Fit \\ \hline \hline \(2\times 10^{-4}\) & 1.03 & 1.40 & 1.05 \\ \hline \(5\times 10^{-4}\) & 0.99 & 1.42 & 1.05 \\ \hline \(15\times 10^{-4}\) & 1.00 & 1.56 & 1.08 \\ \hline \end{tabular}
\end{table}
Table 1: For each of the estimation methods we tabulate the standard deviation \(\sigma\) of \((f\sigma_{Ni}-\overline{f\sigma_{N}})/\sigma_{i}(f\sigma_{N})\), over independent fits, \(i\). For the mock covariance method \(\sigma\approx 1\) (as expected when all the fits are performed consistently with the same covariance), for the fitted covariance method it is also quite close to unity, but for the jackknife method \(\sigma>1.4\), which shows a much higher degree of deviation from the truth.
Figure 6: The summary of the results from the cosmological fits from the lognormal mocks with varying density (one for each column and with density in (Mpc/h)\({}^{3}\) indicated at the top) for the three covariance matrix estimation methods: jackknife covariance with Mohammad-Percival correction in green, fitted jackknife covariance in blue and mock covariance in red. The top panels shows the histograms of the reduced \(\chi^{2}\), while the three bottom ones show the marginalised 2D-distributions of parameters and their uncertainties for \(f\sigma_{8}\), \(\alpha_{\parallel}\) and \(\alpha_{\perp}\), obtained from the set of fits described in the Section 3.1
Figure 7: The summary of the cosmological fits from the lognormal mocks with a varying density. Similar to Fig. 6 but for a slightly different set of covariance matrix methods: jackknife covariance with the Mohammad-Percival correction in green, mock covariance in red (the same contours as on Fig. 6) and standard jackknife covariance in blue.
In Fig. 6, the first row shows the distributions of reduced \(\chi^{2}\) for different choices of \(\bar{n}\), and the other rows show the marginalised 2D-distributions of parameters and their uncertainties for respectively \(f\sigma_{8},\alpha_{\parallel}\) and \(\alpha_{\perp}\). We can see how, for all the parameters, the spread from the Mohammad-Percival jackknife in green is in general much wider than the one from the mock covariance in red both in terms of uncertainty and parameter values. While in case of the fitted jackknife covariance, the blue contours are very similar to the mock covariance ones. Presumably, this improvement comes from using 50 realisations rather than one. In Fig. 7 we also show in the same form the performance from the standard jackknife in comparison with the Mohammad-Percival corrected jackknife and mock-based covariance. As expected, the standard jackknife produces slightly larger contours, which are noticeably shifted with respect to the mock covariance, especially for \(f\sigma_{8}\). On the plots of the distribution of the \(\chi^{2}/\)df we can also notice that \(\chi^{2}/\)df from fits with fitted jackknife covariance is slightly smaller than in other cases, and closer to 1. The difference grows with the \(\bar{n}(z)\), but does not become significantly large, and the distributions are still compatible.
To additionally test the validity of our inference approaches, we will define the quantity
\[x=\frac{\eta-\bar{\eta}}{\sigma(\eta)}, \tag{34}\]
where \(\eta\) is an inferred parameter from a specific fit, \(\bar{\eta}\) is the mean from all the fits, and \(\sigma(\eta)\) is the error estimation from a specific fit. The distribution of quantity \(x\) is called a pull distribution. If \(\eta\) follows a Gaussian distribution, the distribution of \(x\) will form a normal distribution with \(\bar{x}=0\) and \(\sigma(x)=1\).
For the mock covariance, we fit the 1500 available samples, while for the Mohammad-Percival jackknife and for the fitted jackknife 50 random mocks are fitted using 30 realisations of the covariance, under the assumption that all of the covariance estimators are probing the same underlying likelihood.
clustering in the quasi-linear regime, although they are less accurate than N-body simulations. They provide a better representation of the real survey and better reproduce the non-Gaussian effects, which are not present in the lognormal mocks. The EZmocks used here are built using a 4-parameter model that is calibrated to match the clustering of N-body simulations, the 25 AbacusSummit simulations designed to meet the DESI requirements (Maksimova et al., 2021). The 4 model parameters are: (1) \(\rho_{\rm c}\) - critical density required to overcome the background expansion; (2) \(\rho_{\rm exp}\) - responsible for the exponential cut-off of the halo bias relation; (3) \(b\) - argument in the power law probability distribution function \(P(n)=const\times b^{n}\) of having \(n\) galaxies in the limited volume; (4) \(v\) is the standard deviation for the distribution modelling peculiar velocities.
In this work, we use a set of 1000 EZmocks generated from N-body simulations with 6 Gpc/h box size. The fiducial cosmology employed is Planck 2018 (Aghanim et al., 2020), and the boxes are generated at \(z=0.8\) for the LRGs and \(z=1.1\) for the ELGs. We use the redshift range of \(z=[0.8,1.1]\) and the mocks are cut to a footprint that reproduces that planned for the 5-year DESI data in order to match the expected final precision of the mock-based covariance matrix. The comparison of the difference with the mock covariance for the single realisation of the jackknife covariance and the fitted covariance is presented in Fig. 10.
On Fig. 11 the relative bias of the diagonals of jackknife-based vs mock-based covariances as defined by eq. 30 are shown for the LRG sample on the left and for the ELG sample on the right, in a similar way to Fig. 4. First, The same trend is seen for the Mohammad-Percival jackknife as we found with the lognormal mocks: the bias of the jackknife method with the Mohammad-Percival correction tends to increase with number density, so from LRG to ELG, and the fitted
\begin{table}
\begin{tabular}{c c c c} \hline Survey & Mock & Mohammad-Percival & Fit \\ \hline \hline LRG & 1.04 & 1.49 & 1.07 \\ \hline ELG & 1.08 & 1.80 & 1.07 \\ \hline \end{tabular}
\end{table}
Table 2: Standard deviation \(\sigma\) of \((f\sigma_{8,i}-\overline{f\sigma_{8}})/\sigma_{i}(f\sigma_{8})\), where \(i\) is a separate fit for each of the methods. We can see, that for the mock covariance, it is close to 1 (as it is supposed to be when all of the fits share the same covariance.), for fitted covariance it is closing on it, and for jackknife usually takes values \(>1.4\), which shows a much higher degree of deviation from what we assumed to be the truth.
Figure 11: The quantity defined in Eq. (30) representing the bias of the specific covariance estimation approach plotted for three multipoles of LRG and ELG EZmocks (left and right panels respectively). Solid lines are with Mohammad-Percival correction and dashed lines for the fitted jackknife.
Figure 12: The summary of the cosmological fits for the EZ mocks for LRGs and ELGs (left and right column respectively), similar to Fig. 6 layout.
Figure 10: Comparison of the deviation of jackknife and fit covariances from the mock covariance multiplied by a square of separation for multipoles \(\ell=0,2,4\) for the EZ LRG mocks.
jackknife is still able to mitigate it. However, we can also notice that the differences are less pronounced in the case of the EZmocks which is due to a bigger volume being probed by the same number density. In Appendix A, we test the impact of the size of the footprint on the diagonal elements of the covariance matrices by considering the North Galactic Cap, South Galactic Cap and full footprint separately.
As in the previous section, we also infer the values of the cosmological parameters \(f\sigma_{8}\), \(\alpha_{\parallel}\) and \(\alpha_{\perp}\), using the same methodology as for the lognormal mocks. The results of the fits are shown in Fig. 12 where the first row shows the \(\chi^{2}/\)dof distribution and the other rows show the marginalised 2D contours for best-fit values and uncertainties on the cosmological parameters. We confirm the findings with the lognormal mocks that the fitted jackknife method provides results which are in much better agreement with the mock-based method while the jackknife method with the Mohammad-Percival correction over-estimates clearly the uncertainties on all the cosmological parameters. The effect is also stronger as the number density of the galaxy sample increases. Moreover, as we have fewer mocks than for the tests with the lognormal mocks, we can notice that the fitted covariance based on 50 mocks actually produces smaller contours overall than the mock covariance which uses 1000 EZmocks.
In Fig. 13, we show the pull distribution as defined by eq. 34 for the cosmological parameters and the standard deviations of the \(f\sigma_{8}\) distribution, which is taken as an example, are presented on Table 2. The results are also similar to the ones obtained with the lognormal mocks: both the fitted jackknife and mock covariances produce a Gaussian shape with \(\sigma=1\), while the standard deviation of the pull distribution obtained using the Mohammad-Percival correction for the jackknife method is larger (\(\sigma\)=1.5, 1.8 for LRG and ELG respectively). This quantitative test thus demonstrates that the fitted jackknife method performs better in estimating an unbiased and accurate covariance matrix for the two-point correlation function.
Overall, throughout all of the tests for varying number densities, different types of mocks and number of fitted mocks, the fitted jackknife approach shows a considerable improvement over the correction for standard jackknife proposed by Mohammad and Percival (2022). The fitted jackknife approach can achieve an unbiased estimate of the covariance matrix with similar precision to a mock-based covariance but with the major advantage of requiring a much smaller number of mocks.
## 4 Conclusions
Obtaining an accurate covariance matrix is a key ingredient for any cosmological analysis, and raises a significant challenge due to the limitations in computing power for mock-based methods or in the assumptions used in the analytical approaches. Additionally, as was shown in a series of reviews comparing different approximate methods, they still have problems reproducing exactly the results of more computationally intensive codes, especially in the non-linear regime (Lippich et al., 2018; Blot et al., 2019; Colavincenzo et al., 2018). Some works also focused on decreasing the number of simulations needed to obtain a precise covariance matrix (Chartier et al., 2021), for example combining the results from N-body and approximate simulations.
In this work we have attempted to tackle this challenge with the use of internal resampling methods. In Section 2, we review the basics of the jackknife formalism for two-point correlation function covariance estimation and perform a test on a toy model which confirms the improvement brought by a correction to the standard jackknife approach proposed by Mohammad and Percival (2022). Instead of using an analytically fixed correction to some terms that enter the jackknife covariance matrix, we propose to fit the correction to a mock-based covariance obtained from a small number of mocks. Moreover, we also noticed an unconstrained term in the different pairs that comprise the jackknife estimate of the covariance matrix, which we propose to account for by the same fitted jackknife procedure. In Section 3, we have tested this fitted jackknife covariance method and compared its performance with respect to the jackknife method with Mohammad-Percival correction and to a mock-based approach using lognormal mocks and approximate EZ mocks. We showed that the underestimation of the covariance obtained when using the Mohammad-Percival correction increases with galaxy number density while the fitted jackknife covariance remains unbiased. Performing the cosmological inference showed that the fitted jackknife covariance based on 50 mocks performs with the same accuracy as the covariance created from 1000-1500 mocks, both in terms of precision (unbiased constraints) and accuracy (similar uncertainties). There is also a significant decrease in computational power needed and we also stress that the method is simple to implement on top of the standard jackknife covariance computation. We provide a Python package that contains the implementation of the fitted jackknife method: [https://github.com/theonefromnowhere/FitCov](https://github.com/theonefromnowhere/FitCov)
Future work may include further tests of such a fitted jackknife covariance estimation technique when applied to scales smaller than \(\sim 20h^{-1}\)Mpc. We plan to investigate the small scales in another work that aims at fitting the clustering of DESI Early Data with this method and mock-based covariances in order to estimate the galaxy-halo connection for different galaxy samples. A similar technique could also be developed in Fourier space, however, it would require a proper treatment of the window function effects when splitting the footprint
Figure 13: Pull distributions for different covariance estimation techniques with results from fits on LRG and ELG mocks with line colors as in Fig. 8
into subsamples, together with a significant computational effort. We leave for future work the application of such techniques to other statistics, such as 3-point correlation function. Such a fitted jackknife covariance method can also be beneficial for multi-tracer analysis where it could accommodate all the degrees of freedom needed without requiring too many additional mocks. We plan to continue this work and apply the multi-tracer technique on the upcoming DESI Bright Galaxy Survey (Zarrouk et al., 2021; Hahn et al., 2022) whose high-density sampling make it a challenging test of the performance of the fitted jackknife covariance method.
## Acknowledgements
The authors acknowledge and are highly grateful for the fruitful discussions with Will Percival, Arnaud de Mattia and Michael Rashkovetskyi. ST and PZ acknowledge the Fondation CFM pour la Recherche for their financial support. PN and SC acknowledge STFC funding ST/T000244/1 and ST/X001075/1.
This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
This research is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; additional support for DESI is provided by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF's National Optical-Infrared Astronomy Research Laboratory; the Science and Technologies Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: [https://www.desi.lbl.gov/collaborating-institutions](https://www.desi.lbl.gov/collaborating-institutions).
The authors are located to be permitted to conduct scientific research on loklan Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
## Data Availability
The lognormal mocks can be easily reproduced using the public MockFactory code at [https://github.com/cosmodesi/mockfactory](https://github.com/cosmodesi/mockfactory). The approximate EZmocks will be made public with upcoming DESI data releases. Data enabling the reproduction of the plots in this paper are published at [https://zenodo.org/record/7635683](https://zenodo.org/record/7635683).
|
2303.13494 | Attention! Dynamic Epistemic Logic Models of (In)attentive Agents | Attention is the crucial cognitive ability that limits and selects what
information we observe. Previous work by Bolander et al. (2016) proposes a
model of attention based on dynamic epistemic logic (DEL) where agents are
either fully attentive or not attentive at all. While introducing the realistic
feature that inattentive agents believe nothing happens, the model does not
represent the most essential aspect of attention: its selectivity. Here, we
propose a generalization that allows for paying attention to subsets of atomic
formulas. We introduce the corresponding logic for propositional attention, and
show its axiomatization to be sound and complete. We then extend the framework
to account for inattentive agents that, instead of assuming nothing happens,
may default to a specific truth-value of what they failed to attend to (a sort
of prior concerning the unattended atoms). This feature allows for a more
cognitively plausible representation of the inattentional blindness phenomenon,
where agents end up with false beliefs due to their failure to attend to
conspicuous but unexpected events. Both versions of the model define
attention-based learning through appropriate DEL event models based on a few
and clear edge principles. While the size of such event models grow
exponentially both with the number of agents and the number of atoms, we
introduce a new logical language for describing event models syntactically and
show that using this language our event models can be represented linearly in
the number of agents and atoms. Furthermore, representing our event models
using this language is achieved by a straightforward formalisation of the
aforementioned edge principles. | Gaia Belardinelli, Thomas Bolander | 2023-03-23T17:55:32Z | http://arxiv.org/abs/2303.13494v2 | # Attention!
###### Abstract.
Attention is the crucial cognitive ability that limits and selects what information we observe. Previous work by Bolander et al. (2016) proposes a model of attention based on dynamic epistemic logic (DEL) where agents are either fully attentive or not attentive at all. While introducing the realistic feature that inattentive agents believe nothing happens, the model does not represent the most essential aspect of attention: its selectivity. Here, we propose a generalization that allows for paying attention to subsets of atomic formulas. We introduce the corresponding logic for propositional attention, and show its axiomatization to be sound and complete. We then extend the framework to account for inattentive agents that, instead of assuming nothing happens, may default to a specific truth-value of what they failed to attend to (a sort of prior concerning the unattended atoms). This feature allows for a more cognitively plausible representation of the inattentional blindness phenomenon, where agents end up with false beliefs due to their failure to attend to conspicuous but unexpected events. Both versions of the model define attention-based learning through appropriate DEL event models based on a few and clear edge principles. While the size of such event models grow exponentially both with the number of agents and the number of atoms, we introduce a new logical language for describing event models syntactically and show that using this language our event models can be represented linearly in the number of agents and atoms. Furthermore, representing our event models using this language is achieved by a straightforward formalisation of the aforementioned edge principles.
Dynamic Epistemic Logic; Attention; Inattentional Blindness; Default Values; Syntactic Event Models; Succinctness +
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Journal: Information and Communication
+
Footnote †: journal: Journal: Journal: Journal: Journal: Journal: Journal: Information and Communication
+
+
Footnote †: journal:
are a sort of prior that agents have and use to update their beliefs in case they miss some information. This addition gives us a more cognitively plausible representation of the experimental findings mentioned above, as now agents can default to the non-existence of the gorilla in the video even if they were previously uncertain about it. We introduce a logic for the first model of propositional attention (without defaults), and prove its axiomatization sound and complete. Lastly, we show that our idea of representing edges of event models by edge principles can be generalised to a new type of syntactic event models where events and edges are specified using logical formulas. We show exponential succinctness of these syntactic event models as compared to standard (semantic) event models.
Besides providing insights into how human attention interacts with beliefs, this research also goes towards the improvement of human-AI interaction, as it may help e.g. robots to reason about humans, required in human-robot collaboration settings. As explained by Verbrugge (Verbrugge, 2017), it's potentially dangerous if a robot in a human-robot rescue team makes too optimistic assumptions about the reasoning powers of human team members. The robot might for example falsely rely on a human to have paid attention to a certain danger, where in fact the human didn't. A proactively helpful robot should be able to take the perspective of the human and reason about what the human might or might not have paid attention to, and therefore which false beliefs the human might have. This requires that the robot has a model of the attention system of the human, and how this impacts her beliefs. We believe our models can be used in this way. Concretely, there has already been research on using epistemic planning based on DEL for human-robot collaboration (Crane and Raghavan, 2017), and since the models of this paper are also based on DEL, they lend themselves to immediate integration into such frameworks and systems.
This paper is an extended version of our paper accepted for AAMAS 2023 (paper #1142). It has been extended with the proofs from the supplementary material of the original submission.
## 2. Propositional Attention
### Language
Throughout the paper, we use \(Ag\) to denote a finite set of _agents_, \(At\) to denote a finite set of _propositional atoms_, and we let \(H=\{\mathsf{h}_{\mathsf{a}}p\colon p\in At,a\in Ag\}\) denote the corresponding set of _attention atoms_. With \(p\in At,a\in Ag\), \(\mathsf{h}_{\mathsf{a}}p\in H\) and \(\mathcal{E}\) being a multi-pointed event model1, define the language \(\mathcal{L}\) by:2
Footnote 1: Defined further below. As usual in DEL, the syntax and semantics are defined by mutual recursion (Leskovec et al., 2017).
Footnote 2: So \(\mathcal{L}\) lacks the sets \(Ag\) and \(At\) as parameters, but we’ll keep that dependency implicit throughout the paper.
\[\varphi:=\top\mid p\mid\mathsf{h}_{\mathsf{a}}p\mid\neg\varphi\mid\varphi \land\varphi\mid B_{\mathsf{a}}\varphi\mid\mid\mathcal{E}\varphi.\]
The attention atom \(\mathsf{h}_{\mathsf{a}}p\) reads "agent \(a\) is paying attention to whether \(p\)", \(B_{\mathsf{a}}\varphi\) reads "agent \(a\) believes \(\varphi\)", and the dynamic modality \([\mathcal{E}]\varphi\) reads "after \(\mathcal{E}\) happens, \(\varphi\) is the case". The formulas in \(At\cup H\cup\{\top\}\) are called the _atoms_, and a _literal_ is an atom or its negation. We often write \(\bigwedge S\) to denote the conjunction of a set of formulas \(S\). If \(S\) is empty, we take \(\bigwedge S\) as a shorthand for \(\top\). To keep things simple, we will assume that all consistent conjunction of literals are in a normal form where: (i) each atom occurs at most once; (ii) \(\top\) doesn't occur as a conjunct, unless the formula itself is just \(\top\); and (iii) the literals occur in a predetermined order (ordered according to some total order on \(At\cup H\)). This implies that given any disjoint sets of atoms \(P^{+}\) and \(P^{-}\), there exists a unique conjunction of literals (in normal form) containing all the atoms of \(P^{+}\) positively and all the atoms of \(P^{-}\) negatively. For conjuncts that are _not_ on this normal form, we assume them to always be replaced by their corresponding normal form. For any conjunction of literals \(\varphi=\bigwedge_{1\leq i\leq n}\ell_{i}\) and any literal \(\ell\), we say that \(\varphi\)_contains_\(\ell\) if \(\ell=\ell_{i}\) for some \(i\), and in that case we often write \(\ell\in\varphi\). For any conjunctions of literals \(\varphi\), we define \(Lit(\varphi)\) to be the set of literals it contains, that is, \(Lit(\varphi)=\{\ell\mid\ell\in\varphi\}\). For an arbitrary formula \(\varphi\), we let \(At(\varphi)\) denote the set of propositional atoms appearing in it.
### Kripke Model and Dynamics
We are going to model attention and beliefs using DEL (Leskovec et al., 2017), where static beliefs are modelled by pointed Kripke models, and attention-based belief updates are modelled by multi-pointed event models (our product update and satisfaction definitions will be slightly non-standard due to the multi-pointedness of the event models).
Definition 2.1 (Kripke Model).: A _Kripke model_ is a tuple \(\mathcal{M}=(W,R,V)\) where \(W\neq\emptyset\) is a finite set of _worlds_, \(R:Ag\to\mathcal{P}(W^{2})\) assigns an _accessibility relation_\(R_{a}\) to each agent \(a\in Ag\), and \(V:W\to\mathcal{P}(At\cup H)\) is a _valuation function_. Where \(w\) is the _designated world_, we call \((\mathcal{M},w)\) a _pointed Kripke model_.
Definition 2.2 (Event Model).: An _event model_ is a tuple \(\mathcal{E}=(E,Q,pre)\) where \(E\neq\emptyset\) is a finite set of _events_, \(Q:Ag\to\mathcal{P}(E^{2})\) assigns an _accessibility relation_\(Q_{a}\) to each agent \(a\in Ag\) and \(pre:E\to\mathcal{L}\) assigns a _precondition_ to each event \(e\in E\). Where \(E_{d}\subseteq E\) is a set of _designated events_, \((\mathcal{E},E_{d})\) is a _multi-pointed event model_. When \(Ag=\{a\}\) for some \(a\), we usually refer to the single-agent event model \((E,Q,pre)\) as \((E,Q_{a},pre)\).
We will often denote event models by \(\mathcal{E}\) independently of whether we refer to an event model \((E,Q,pre)\) or a multi-pointed event model \(((E,Q,pre),E_{d})\). Their distinction will be clear from context.
Definition 2.3 (Product Update).: Let \(\mathcal{M}=(W,R,V)\) be a Kripke model and \(\mathcal{E}=(E,Q,pre)\) be an event model. The _product update_ of \(\mathcal{M}\) with \(\mathcal{E}\) is the Kripke model \(\mathcal{M}\otimes\mathcal{E}=(W^{\prime},R^{\prime},V^{\prime})\) where:
\(W^{\prime}=\{(w,e)\in W\times E\colon(\mathcal{M},w)\neq pre(e)\}\),4
Footnote 4: We haven’t yet defined satisfaction of formulas in \(\mathcal{L}\). It’s defined in Definition 2.4 below, where we again note the standard mutual recursion used in defining DEL (Leskovec et al., 2017).
\(R^{\prime}_{a}=\{((w,e),(v,f))\in W^{\prime}\times W^{\prime}:(w,v)\in R_{a}\text { and }(e,f)\in Q_{a}\}\),
\(V^{\prime}((w,e))=\{p\in At\cup H\colon w\in V(p)\}\).
Given a pointed Kripke model \((\mathcal{M},w)\) and a multi-pointed event model \((\mathcal{E},E_{d})\), we say that \((\mathcal{E},E_{d})\) is _applicable_ in \((\mathcal{M},w)\) iff there exists a unique \(e\in Ed\) such that \(\mathcal{M},w\neq pre(e)\). In that case, we define the _product update_ of \((\mathcal{M},w)\) with \((\mathcal{E},E_{d})\) as the pointed Kripke model \((\mathcal{M},w)\otimes(\mathcal{E},E_{d})=(\mathcal{M}\otimes\mathcal{E},(w,e))\) where \(e\) is the unique element of \(E_{d}\) satisfying \((\mathcal{M},w)\vDash pre(e)\).
Definition 2.4 (Satisfaction).: Let \((\mathcal{M},w)=((W,R,V),w)\) be a pointed Kripke model. For any \(q\in At\cup H,a\in Ag,\varphi\in\mathcal{L}\) and any
multi-pointed event model \(\mathcal{E}\), satisfaction of \(\mathcal{L}\)-formulas in \((\mathcal{M},w)\) is given by the following clauses extended with the standard clauses for the propositional connectives:
\[\begin{array}{ll}(\mathcal{M},w)\vDash q&\text{iff}\quad q\in V(w);\\ (\mathcal{M},w)\vDash\vDash_{\alpha}\varphi&\text{iff}\quad(\mathcal{M},w) \vDash\varphi\text{ for all }(w,v)\in R_{a};\\ (\mathcal{M},w)\vDash[\mathcal{E}]\varphi&\text{iff}\quad\text{if }\mathcal{E} \text{ is applicable in }(\mathcal{M},w)\text{ then}\\ &(\mathcal{M},w)\otimes\mathcal{E}\vDash\varphi.\end{array}\]
We say that a formula \(\varphi\) is _valid_ if \((\mathcal{M},w)\vDash\varphi\) for all pointed Kripke models \((\mathcal{M},w)\), and in that case we write \(\vDash\varphi\).
Example 2.5 ().: Ann and Bob are watching the Invisible Gorilla video (Gorilla et al., 2016). Unbeknownst to Ann, Bob has already seen the video, so he knows the correct answer is 15 and that a clearly visible gorilla will pass by. Ann instead has no information about these things, as she has never seen that video. However, she likes riddles and tests of this sort, in which she gets absorbed very easily. Bob knows that, and thus he also knows that she will completely focus on counting the passages only, without realising that there is a gorilla, and thereby thinking to be paying attention to everything happening in the video, just as Bob. This situation is represented in Figure 1. We have \((\mathcal{M},w)\vDash B_{a}h_{a}g\wedge\neg h_{a}g\). Ann believes she is paying attention to whether there is a gorilla or not, but she isn't.
## 3. Principles for Attention Dynamics
In this section, we first present the existing attention model (Gorilla et al., 2016). We then propose an alternative representation using our edge principles, introduce a variant, and, finally, generalize to multiple propositions (capturing that agents can pay attention to subsets of \(At\)).
### The Existing Model and our Version of it
As in (Gorilla et al., 2016), attention is represented as a binary construct where agents can either be paying attention to everything that happens or to nothing. The language they adopt is as the language above, except for their attention atoms \(h_{a}\), \(a\in Ag\), that are not relativised to propositional formulas. The intended meaning of such atoms is that the agent pays attention to everything, so they can be expressed in our language by letting \(h_{a}\), \(a\in Ag\), be an abbreviation of the formula \(\bigwedge_{p\in At}h_{a}p\). Let \(H^{\prime}=\{h_{a}\colon a\in Ag\}\). Then \(H^{\prime}\cup At\) is the set of "atoms" on which their language is based. The static part of their model is a Kripke model, where it is assumed that agents are _attention introspective_, namely for all \(w,v\in W,a\in Ag\), and \(p\in At\), if \((w,v)\in R_{a}\) then \(h_{a}(p)\in V(w)\) iff \(h_{a}(p)\in V(v)\). The dynamics are given by the following event models. These event models represent situations in which any formula can be announced, true or false, and attentive agents will come to believe it.
Definition 3.1 (Event Model \(\mathcal{E}(\varphi)\), (Gorilla et al., 2016)).: Given a \(\varphi\in\mathcal{L}\), the multi-pointed event model \(\mathcal{E}(\varphi)=((E,Q,pre),E\setminus\{\tau\})\) is defined by:
\(E=\{(i,J)\colon i\in\{0,1\}\text{ and }J\subseteq Ag\}\cup\{\tau\}\);
\(Q_{a}=\{((i,J),(1,K))\colon i\in\{0,1\},J,K\subseteq Ag\text{ and }a\in J\}\cup\)
\(\{(i,J),\tau\colon i\in\{0,1\},J\subseteq Ag\text{ and }a\notin J\}\);
\(pre\colon E\to\mathcal{L}\) is defined as follows, for \(J\subseteq Ag\):
* \(pre((0,J))=\neg\varphi\wedge\bigwedge_{a\in J}h_{a}\wedge\neg_{a\not\in J} \neg_{h}a\);
* \(pre((1,J))=\varphi\wedge\bigwedge_{a\in J}h_{a}\wedge\neg_{a\not\in J}\neg_{h}a\);
* \(pre(\tau_{\tau})=\top\).
This event model contains \(2^{\lfloor Ag/\rfloor+1}+1\) events (Gorilla et al., 2016). The preconditions of these events express whether the announced \(\varphi\) is true (i.e., whether it occurs positively or negatively in the precondition) and whether each agent \(a\) is attentive or not (i.e., whether \(h_{a}\) occurs positively or negatively in the precondition). We now briefly explain the intuition behind the edges of the model, but refer to (Gorilla et al., 2016) for more details. The elements of \(Q_{a}\) of the form \(((i,J),(1,K))\) encode the following: Provided that agent \(a\) is attentive (i.e., \(a\in J\)), she believes that any event with precondition \(\varphi\) could be the actual one. The elements of \(Q_{a}\) of the form \(((i,J),s_{\top})\) then encodes: If instead she is not paying attention (i.e., \(a\notin J\)), she keeps the beliefs she had before the announcement (represented by the event \(s_{\top}\) having the precondition \(\top\). The \(s_{\top}\) event induces a copy of the original model, thereby modeling the "skip" event where nothing happens).
In the following, for any set \(S\), we use \(id_{S}\) to denote the identity function on \(S\), i.e., \(id_{S}(s)=s\), for all \(s\in S\). From now on, most of our event models will be of a particular form where the set of events is a set of (conjunctive) formulas and where preconditions are given by the identify function on \(E\), i.e., \(pre=id_{E}\) (meaning that the events are their own preconditions). Our principle-based version of \(\mathcal{E}(\varphi)\) is then the following.
Definition 3.2 (Principle-Based Event Model \(\mathcal{E}^{\prime}(\varphi)\)).: Given a \(\varphi\in\mathcal{L}\), the multi-pointed event model \(\mathcal{E}^{\prime}(\varphi)=((E,Q,id_{E}),E\setminus\{\tau\})\) is: \(E=\{\psi\wedge\bigwedge_{a\in J}h_{a}\wedge\neg_{a\not\in J}\neg_{a}\colon\psi \in\{\varphi,\neg\varphi\},J\subseteq Ag\}\cup\{\top\}\);
\(Q_{a}\) is such that \((e,f)\in Q_{a}\) iff all the following are true:
* Basic Attentiveness: if \(h_{a}\in e\), then \(\varphi\in f\);
* Inertia: if \(h_{a}\notin e\), then \(f=\top\).
The _edge principles_ of the model above are Basic Attentiveness and Inertia, describing the conditions under which there is an edge from \(e\) to \(f\) for agent \(a\), that is, what an agent considers possible after the announcement. By Basic Attentiveness, paying attention implies that, in all events considered possible, the announcement is true--and hence attentive agents believe what is announced. By Inertia, inattentive agents believe nothing happened, namely they maintain the beliefs they had before the announcement was made.
Figure 1. The pointed Kripke model \((\mathcal{M},w)\). In the figure, \(p\) stands for “the players in the video pass the ball 15 times”, \(g\) for “a clearly visible gorilla crosses the scene”. We use the following conventions. Worlds are represented by sequences of literals true at the world. The model above has 5 worlds, 4 of which are inside the inner dashed box. Designated worlds are underlined. Whenever a world appears inside a dashed box, all the literals in the label of that box are also true in the world—and if the label is underlined, all worlds inside are designated. In this model, \(h_{b}p\) and \(h_{b}g\) hold in all worlds, and additionally, \(h_{a}p\) and \(h_{a}g\) hold in the worlds of the inner box. The accessibility relations are represented by labelled arrows. An arrow from (or to) the border of a dashed box means that there is an arrow from (or to) all the events inside the box.
Note that we have exactly the same set of event preconditions in \(\mathcal{E}^{\prime}(\varphi)\) as in \(\mathcal{E}(\varphi)\). The difference is just that we define the events to be their own preconditions, which is possible since all pairs of events have distinct and mutually inconsistent preconditions. It's easy to check that \(\mathcal{E}(\varphi)\) and \(\mathcal{E}^{\prime}(\varphi)\) also have the same edges, hence the models are isomorphic. The following proposition shows this.
**Proposition 3.3**.: \(\mathcal{E}(\varphi)\) _of Definition 3.1 and \(\mathcal{E}^{\prime}(\varphi)\) of Definition 3.2 are isomorphic._
Proof.: We already concluded that the two models have the same set of preconditions, and that all events have distinct preconditions. We then just need to show that for all \(a\in Ag\) and all events \(e,f\in E\) of \(\mathcal{E}(\varphi)\), we have \((e,f)\in Q_{a}\) in \(\mathcal{E}(\varphi)\) iff \((pre(e),pre(f))\in Q_{a}\) in \(\mathcal{E}^{\prime}(\varphi)\). To see this, consider first an edge in \(Q_{a}\) of \(\mathcal{E}(\varphi)\). It's either of the form \(((i,J),(1,K))\) for some \(i\in\{0,1\},J,K\subseteq Ag\) and \(a\in J\) or it's of the form \(((i,J),s_{\top})\) for some \(i\in\{0,1\},J\subseteq Ag\) and \(a\notin J\). According to Definition 3.1, an edge of the first form is an edge from an event with precondition \(\neg\varphi\wedge\wedge_{a\in J}h_{a}\wedge\neg_{a\not\in J}\neg_{a}\) or \(\varphi\wedge_{a\in J}h_{a}\wedge\neg_{a}\) to an event with precondition \(\varphi\wedge\bigwedge_{a\in K}h_{a}\wedge\neg_{a\not\in K}\neg_{a}\). Such an edge clearly satisfies Basic Attentiveness (since \(\varphi\) is a conjunct of the target of the edge) and Inertia (the condition \(a\in J\) for the source event implies that \(h_{a}\) is contained in the precondition of the source, and hence Inertia holds trivially). This shows that edges in \(\mathcal{E}(\varphi)\) of the first type are also edges in \(\mathcal{E}^{\prime}(\varphi)\). The argument for edges of the second type is similar, but here the condition of the source is \(a\notin J\), meaning that Basic Attentiveness instead is trivial, and we only need to show Inertia. According to Definition 3.1, an edge of the second type is an edge from an event with precondition \(\neg\varphi\wedge\bigwedge_{a\in J}h_{a}\wedge\neg_{a\not\in J}\neg_{a}\) or \(\varphi\wedge\bigwedge_{a\in J}h_{a}\wedge\neg_{a\not\in J}\neg_{a}\) (as before) to an event with precondition \(\top\). Since \(a\notin J\), we have that \(h_{a}\) is not contained in the precondition of the source event. Inertia then requires that the precondition of the target is \(\top\), but that we already concluded. So Inertia holds, as required.
For the other direction, we start with an edge \((e,f)\in Q_{a}\) of \(\mathcal{E}^{\prime}(\varphi)\) satisfying both Basic Attentiveness and Inertia, and show that it is of one of the two types in \(\mathcal{E}(\varphi)\). We split into cases depending on whether \(h_{a}\in e\) or not. If \(h_{a}\in e\), then by Basic Attentiveness, \(\varphi\in f\). Let \(f\) denote the set of agents for which \(h_{a}\) occurs positively in \(e\), and let \(K\) denote the same set for \(f\). Since \(h_{a}\in e\), we get \(a\in J\). Let \(i=0\) if \(\neg\varphi\) occurs in \(e\), otherwise let \(i=1\). Then \(e=pre((i,J))\), using the notation from Definition 3.1. Since \(\varphi\in f\), we have that \(f=pre((1,K))\). By Definition 3.1, \(Q_{a}\) contains an edge from \((i,J)\) to \((1,K)\). This covers the case where \(h_{a}\in e\). Consider now the case \(h_{a}\notin e\). By Inertia, \(f=\top\). Define \(J\) and \(i\) as before from \(e\). Then, as before, \(e=pre((i,J))\). Since \(f=\top\), \(f=pre(s_{\top})\). By Definition 3.1, \(Q_{a}\) contains an edge from \((i,J)\) to \(s_{\top}\), and we're done.
Compare the edge specification from \(\mathcal{E}(\varphi)\) with the one from \(\mathcal{E}^{\prime}(\varphi)\). We are defining the same set of edges, but whereas the definition of \(Q_{a}\) in \(\mathcal{E}(\varphi)\) does not make it immediately clear what those edges are encoding, we believe that our definition of \(Q_{a}\) in \(\mathcal{E}^{\prime}(\varphi)\) does. It is simply two basic principles, one specifying what events are considered possible by the agents paying attention (Basic Attentiveness), and another specifying the same for those not paying attention (Inertia). Even though from a technical viewpoint it is not a big step to introduce such principles, we find it helpful to be able to specify the relevant event models in a clear and concise manner. This makes it easier to use the model and build on it--as should become evident when we later generalise the event model.
#### 3.1.1. Modified model
We now introduce a variant of the event model \(\mathcal{E}^{\prime}(\varphi)\) from Def. 3.2, one that is more appropriate for the types of scenarios that we would like to be able to model.
_Trutful announcements_.: As the present work aims at modeling (noise-free) attention to external stimuli from the environment, in particular visual attention, the first assumption we give up is that announcements may be false. More precisely, we assume that if an agent pays attention to \(p\) and the truth-value of \(p\) is being revealed, then the agent sees the true truth-value of \(p\). The new event model for announcing \(\varphi\) should then only contain events where \(\varphi\) is true:
\[E=\{\varphi\wedge\bigwedge_{a\in J}h_{a}\wedge\bigwedge_{a\in J}\neg_{a}:J \subseteq Ag\}\cup\{\top\}.\]
_Learning that you were attentive_.: An assumption we have already given up is attention introspection, so in our models the agents may falsely believe to be paying attention (see Example 2.5). In this setting, it is very plausible to assume that, besides learning what the true event is, attentive agents also learn that they were attentive. This does not happen in the event model \(\mathcal{E}^{\prime}(\varphi)\). We thus substitute Basic Attentiveness with the following principle:
* - Attentiveness: if \(h_{a}\in e\), then \(h_{a},\varphi\in f\).
Summing up, the event model where announcements are truthful and attentive agents learn that they paid attention, looks as follows.
_Definition 3.4_ (Trutful and Introspective Event Model \(\mathcal{E}^{\prime\prime}(\varphi)\)).: Given \(\varphi\in\mathcal{L}\), the multi-pointed event model \(\mathcal{E}^{\prime\prime}(\varphi)=((E,Q,id_{E}),E\setminus\{\top\})\) is defined by:
\(E=\{\varphi\wedge\bigwedge_{a\in J}h_{a}\wedge\bigwedge_{a\not\in J}\neg_{a}:J \subseteq Ag\}\cup\{\top\}\);
\(Q_{a}\) is such that \((e,f)\in Q_{a}\) iff all the following are true:
* Attentiveness:s: if \(h_{a}\in e\), then \(h_{a},\varphi\in f\);
* Inertia: if \(h_{a}\notin e\), then \(f=\top\);
The event model \(\mathcal{E}^{\prime\prime}(p\wedge g)\) with \(Ag=\{a,b\}\) is shown in Figure 2.
### Event Models for Propositional Attention
In this section, we introduce event models for agents that only pay attention to subsets of \(At\). As our main aim is to model attention to external stimuli, we are interested in modeling the "announcement" of a conjunction of literals \((\neg)p_{1}\wedge\cdots\wedge(\neg)p_{n}\), which we interpret
Figure 2. The event model \(\mathcal{E}^{\prime\prime}(p\wedge g)\) with \(Ag=\{a,b\}\). As our event models will have conjunctive preconditions, all distinct, and our events are their own preconditions, we can represent events by lists of formulas, the formulas contained in the event precondition. All other conventions are as for Kripke models (see Fig. 1).
as the parallel exposure to multiple stimuli (the truth value of all \(p_{i}\) being revealed concurrently). It could for instance be that we see a video that has 15 ball passes and a gorilla passing by, and that would correspond to the "announcement" \(p\wedge g\), cf. Example 2.5.
Definition 3.5 (Propositional Attention Event Model \(\mathcal{F}(\varphi)\)).: Let \(\varphi=\ell(p_{1})\wedge\cdots\wedge\ell(p_{n})\in\mathcal{L}\), where for each \(p_{i}\), either \(\ell(p_{i})=p_{i}\) or \(\ell(p_{i})=\neg p_{i}\). The multi-pointed event model \(\mathcal{F}(\varphi)=((E,Q,id_{E}),E_{d})\) is defined by:
\[E=\{\bigwedge_{p\in S}\ell(p)\wedge\bigwedge_{d\in A\varphi}( \bigwedge_{p\in X_{a}}\mathrm{h}_{a}p\wedge\bigwedge_{p\in S\setminus X_{a}} \neg\mathrm{h}_{a}p):\\ S\subseteq At(\varphi)\text{ and for all }a\in Ag,X_{a}\subseteq S\}\]
\(Q_{a}\) is such that \((e,f)\in Q_{a}\) iff all the following hold for all \(p\):
* [noitemsep,topsep=0pt]
* Attentiveness: if \(\mathrm{h}_{a}p\in e\) then \(\mathrm{h}_{a}p,\ell(p)\in f\);
* Inertia: if \(\mathrm{h}_{a}p\notin e\) then \(\ell(p)\notin f\);
* \(E_{d}=\{\psi\in E\colon\ell(p)\in\psi,\text{ for all }\ell(p)\in\varphi\}\).
In \(\mathcal{F}(\varphi)\) we have, for each subset of literals in \(\varphi\), an event containing those literals in the precondition. For those literals, the event also specifies whether each agent is paying attention to it or not. In this way, events account for all possible configurations of attention to any subset of the announcement and for the learning of truthful information regarding it. The edges are again given by two simple principles. Attentiveness states that if an agent pays attention to a specific atom, then she learns the literal in the announcement corresponding to it and that she was paying attention to it. Inertia says that if an agent doesn't pay attention to an atom, then she will not learn anything about it. As we take announcements as truthful revelations, the set of designated events only contains events where all the announced literals are true. The event model \(\mathcal{F}(\varphi)\) with \(\varphi=p\wedge g\) and \(Ag=\{a,b\}\) is shown in Figure 3.
Example 3.6 ().: Continuing Example 2.5, Ann and Bob have finished watching the Invisible Gorilla video (event model \(\mathcal{F}(p\wedge g)\)). As Bob expected, Ann learns that there are 15 ball passes, but she still doesn't know anything about whether there is a gorilla in the video, and believes Bob is in the same situation as herself. The pointed Kripke model \((\mathcal{M}^{\prime},w^{\prime})=(\mathcal{M},w)\otimes\mathcal{F}(p\wedge g)\) in Figure 4 (left) represents the situation after exposure to the video, i.e., after the revelation of \(p\wedge g\). Ann has only learnt about \(p\) and still has no information about \(g\). We thus have \((\mathcal{M}^{\prime},w^{\prime})\vDash_{a}p\wedge_{a}g\wedge_{a}\neg g\). Moreover, she wrongly believes Bob too hasn't received any information about the gorilla, so \((\mathcal{M}^{\prime},w^{\prime})\vDash_{b}g\wedge B_{a}(\neg B_{b}g\wedge \neg B_{b}\neg g)\).
## 4. Axiomatization
We move to the axiomatization of our logic and show that it is sound and complete. The axiomatization is given by the set of axioms and inference rule of Table 1. It comprises standard axioms and inference rules for normal modal logic as well as reduction axioms. All propositional reduction axioms in Table 1 are for state-eliminating updates, in that they are relativized to the announced \(\varphi\). Then, the only non-standard axiom we introduce is the one expressing the consequences of attention-dependent announcements for what concerns agents' beliefs. Where \(\varphi=\ell(p_{1})\wedge\cdots\wedge\ell(p_{n})\) is the announced formula, the axiom is the following:
\[[\mathcal{F}(\varphi)]B_{a}\psi\leftrightarrow(\varphi\to \bigvee_{S\subseteq At(\varphi)}(\bigwedge_{p\in S}\mathrm{h}_{a}p\wedge \bigwedge_{p\in At(\varphi)\setminus S}\neg\mathrm{h}_{a}p)\\ \to B_{a}([\mathcal{F}(\bigwedge_{p\in S}\ell(p))]\psi)))\]
The axiom can be read as saying that after exposure to the revelation of \(\varphi\), agent \(a\) believes that only the conjunction of literals from \(\varphi\) to which she was paying attention to has been revealed.
To prove soundness and completeness of the axiomatization in Table 1, we will use the following lemma, which shows that updating a Kripke model \((\mathcal{M},w)\) with event model \(\mathcal{F}(\varphi)\) where \(\varphi=\ell(p_{1})\wedge\cdots\wedge\ell(p_{n})\) is announced, or updating it with \(\mathcal{F}(\bigwedge_{p\in S}\ell(p))\), where \(\bigwedge_{p\in S}\ell(p)\) are the literals from \(\varphi\) that agent \(a\) is paying attention to at \((\mathcal{M},w)\), yields updates \((\mathcal{M},w)\otimes\mathcal{F}(\varphi)\) and \((\mathcal{M},w)\otimes\mathcal{F}(\bigwedge_{p\in S}\ell(p))\) that are bisimilar from agent \(a\)'s perspective.
In what follows, events containing all the announced literals will be called "maximal". We will use notation \(Q_{a}[\varepsilon]\) to indicate
Figure 4. Pointed Kripke models \((\mathcal{M}^{\prime},w^{\prime})=(\mathcal{M},w)\otimes\mathcal{F}(p\wedge g)\) (left) and \((\mathcal{M}^{\prime\prime},w^{\prime\prime})=(\mathcal{M},w)\otimes \mathcal{E}(p\wedge g,d)\) (right), where the default map \(d\) is \(d_{a}(p)=d_{b}(p)=\top\) and \(d_{a}(g)=d_{b}(g)=\neg g\). Worlds inaccessible from the designated world are not shown.
Figure 3. The event model \(\mathcal{F}(p\wedge g)\) with \(Ag=\{a,b\}\). Solid arrows are for agent \(a\), dotted for agent \(b\). A small Python program for computing the above edges from the edge principles can be found here: [https://tinyurl.com/5ekjmsud](https://tinyurl.com/5ekjmsud).
the states (worlds or events) that are \(Q_{a}\)-accessible from \(e\), i.e., \(Q_{a}[e]=\{f\colon(e,f)\in Q_{a}\}\). Lastly, if \(\varphi,\psi\in\mathcal{L}\) are conjunctions of literals, we will say that \(\psi\in\varphi\) iff \(Lit(\psi)\subseteq Lit(\varphi)\). In that case, we will also say that \(\varphi\) contains \(\psi\).
**Lemma 4.1**: _For any pointed Kripke model \((\mathcal{M},\mathsf{w})\) with \((\mathcal{M},\mathsf{w})\vDash\varphi\), and any \(a\in Ag\), consider the unique \(S\subseteq At(\varphi)\) that is such that \((\mathcal{M},\mathsf{w})\vDash\varphi\). Then, the updated models \((\mathcal{M},\mathsf{w})\otimes\mathcal{F}(\bigwedge_{\rho\in S}\ell(p))=((W^ {\varphi},R^{\varphi\otimes},V^{\varphi\otimes}),(\mathsf{w},e^{\prime}))\) and \((\mathcal{M},\mathsf{w})\otimes\mathcal{F}(\varphi)=((W^{\varphi},R^{\varphi },V^{\varphi\otimes}),(\mathsf{w},e))\) are such that:_
1. \(R^{\varphi}_{a}[(\mathsf{w},e)]=R^{\varphi\otimes}_{a}[(\mathsf{w},e^{\prime})]\)_;_
2. _for all_ \((v,f)\in R^{\varphi}_{a}[(\mathsf{w},e)]\)_, there exists a bisimulation between_ \((\mathcal{M}^{\varphi},(v,f))\) _and_ \((\mathcal{M}^{\varphi\otimes},(\mathsf{a},f))\)_, notation_ \((\mathcal{M}^{\varphi},(v,f))\) __\(\mathsf{(M}^{\varphi\otimes},(\mathsf{a},f))\)_,_6__
Footnote 6: The notion of bisimulation for Kripke model is standard, see e.g., [5].
Let \((\mathcal{M},\mathsf{w})=((W,R,V),w)\) be a pointed Kripke model. We will use the same notation as in the previous proof for \(\varphi_{S}\), for \(\mathcal{F}(\varphi)=((E,Q,pre),E_{d})\) and \(\mathcal{F}(\varphi_{S})=((E^{\prime},Q^{\prime},pre^{\prime}),E^{\prime}_{d})\). For the \(\varphi\)- and \(\varphi_{S}\)-updates of \((\mathcal{M},\mathsf{w})\) we will use the notation introduced in the statement of the lemma, if not otherwise stated.
Assume that \((\mathcal{M},\mathsf{w})\vDash\varphi\). Then \(\mathcal{F}(\varphi)\) and \(\mathcal{F}(\varphi_{S})\) are applicable to \((\mathcal{M},\mathsf{w})\), so \((\mathcal{M},\mathsf{w})\otimes\mathcal{F}(\varphi)=(\mathcal{M}^{\varphi },(\mathsf{w},e))\) and \((\mathcal{M},\mathsf{w})\otimes\mathcal{F}(\varphi_{S})=(\mathcal{M}^{\varphi \otimes},(\mathsf{w},e^{\prime}))\) exist. Now let \(S\subseteq At(\varphi)\) be the unique \(S\) that is such that \((\mathcal{M},\mathsf{w})\vDash(\bigwedge_{\rho\in S}\mathsf{h}_{\alpha}p \wedge\bigwedge_{\rho\in At(\varphi)\setminus S}\neg\mathsf{h}_{\alpha}p)\), for some \(a\in Ag\).
(1) We first show that \(R^{\varphi}_{a}[(\mathsf{w},e)]=R^{\varphi\otimes}_{a}[(\mathsf{w},e^{\prime})]\), proving the two inclusions separately.
(\(\mathsf{\Rightarrow}\)) Let \((v,f)\in R^{\varphi}[(\mathsf{w},e)]\). This means that \(v\in R_{a}[\mathsf{w}]\) and \(f\in Q_{a}[e]\). Then, to reach the desired result that \((v,f)\in R^{\varphi\otimes}[(\mathsf{w},e^{\prime})]\), we only need to show that \(f\in Q^{\prime}_{a}[e^{\prime}]\), as then we would have that \(v\in R_{a}[\mathsf{w}]\) and \(f\in Q^{\prime}_{a}[e^{\prime}]\), and since \((\mathcal{M},\mathsf{u})\vDash pre(f)\) then \((v,f)\in W^{\varphi\otimes}\) and we could conclude that \((v,f)\in R^{\varphi\otimes}[(\mathsf{w},e^{\prime})]\). We show that \(f\in Q^{\prime}_{a}[e^{\prime}]\) by showing that \(f\) is such that \(f\in E^{\prime}\) and that it satisfies the requirements that Attentiveness and Inertia pose to belong to \(Q^{\prime}_{a}[e^{\prime}]\), i.e., it contains the needed formulas.
So let's first see what formulas \(f\) contains. By initial assumption, \((\mathcal{M},\mathsf{w})\vDash(\bigwedge_{\rho\in S}\mathsf{h}_{\alpha}p \wedge\bigwedge_{\rho\in At(\varphi)\setminus S}\neg\mathsf{h}_{\alpha}p)\). As \((w,e)\in W^{\varphi}\), then by product \(\bigwedge_{\rho\in At(\varphi)\setminus S}\neg\mathsf{h}_{\alpha}p\) and maximality of \(e\), it holds that \(\bigwedge_{\rho\in S}\mathsf{h}_{\alpha}p\wedge\bigwedge_{\rho\in At(\varphi )\setminus S}\neg\mathsf{h}_{\alpha}p)\in e\). Then by Attentiveness and \(\bigwedge_{\rho\in S}\mathsf{h}_{\alpha}p\in e\), we know that \(\bigwedge_{\rho\in S}(\ell(p)\wedge\mathsf{h}_{\alpha}p)\in f\). Moreover, as \(\bigwedge_{\rho\in At(\varphi)\setminus S}\neg\mathsf{h}_{\alpha}p\in e\), then by def. of event model for propositional attention (in particular by definition of its set of events) for all \(p\in At(\varphi)\setminus S\), \(\mathsf{h}_{\alpha}p\notin e\) and so by Inertia, \(f\) doesn't contain \(\ell(p)\), for all \(p\in At(\varphi)\setminus S\), which then also means that \(\mathsf{h}_{\alpha}p\notin f\) for all such \(p\in At(\varphi)\setminus S\), by def. of event models for propositional attention. Hence, \(f\) is such that \(\bigwedge_{\rho\in S}(\ell(p)\wedge\mathsf{h}_{\alpha}p)\in f\) as well as, for all \(p\in At(\varphi)\setminus S\), \(\mathsf{h}_{\alpha}p\in At(\varphi)\setminus S\), and \(f(p)\notin f\).
Now let's see what is required to belong to \(Q^{\prime}_{a}[e^{\prime}]\). Since by initial assumption \((\mathcal{M},\mathsf{w})\vDash\bigwedge_{\rho\in S}\mathsf{h}_{\alpha}p\) for some \(S\subseteq At(\varphi)\) and since \((w,e^{\prime})\in W^{\varphi\otimes}\), then by product update definition and maximality of \(e^{\prime}\), it holds that \(\bigwedge_{\rho\in S}\mathsf{h}_{\alpha}p\in e^{\prime}\). Then we can use Attentiveness to see that in order to belong to \(Q^{\prime}_{a}[e^{\prime}]\) an event must contain \(\bigwedge_{\rho\in S}(\ell(p)\wedge\mathsf{h}_{\alpha}p)\). Moreover, since all events in \(Q^{\prime}_{a}[e^{\prime}]\) are events from \(\mathcal{F}(\varphi_{S})\) then they contain only literals and attention atoms from \(\varphi_{S}\). So to belong to \(Q^{\prime}_{a}[e^{\prime}]\), and thus to \(E^{\prime}\), an event must not contain \(\ell(p)\), \(\mathsf{h}_{\alpha}p\), for all \(p\in At(\varphi)\setminus S\). Hence, to belong to \(Q^{\prime}_{a}[e^{\prime}]\), an event \(f^{\prime}\) must be such \(\bigwedge_{\rho\in S}(\ell(p)\wedge\mathsf{h}_{\alpha}p)\in f^{\prime}\) as well as, for all \(p\in At(\varphi)\setminus S\), \(\mathsf{h}_{\alpha}p\notin f^{\prime}\) and \(f(p)\notin f^{\prime}\). This is exactly what we have with \(f\) and since Attentiveness and Inertia are the only requirements to satisfy to be part of \(Q^{\prime}_{a}[e^{\prime}]\), then and \(f\in Q^{\prime}_{a}[e^{\prime}]\).
Hence, we have that if \(f\in Q_{a}[e]\) then \(f\in Q^{\prime}_{a}[e^{\prime}]\). Above we assumed that \((v,f)\in R^{\varphi}[(\mathsf{w},e)]\), i.e., that \(v\in R_{a}[w]\) and \(f\in Q_{a}[e]\). This now implies that \(v\in R_{a}[w]\) and \(f\in Q^{\prime}_{a}[e^{\prime}]\), and since \((\mathcal{M},\mathsf{v})\vDash pre(f)\) and so \((v,f)\in W^{\varphi\otimes}\), then by def. of product update that \((v,f)\in R^{\varphi\otimes}_{a}[(\mathsf{w},e^{\prime})]\).
(\(\Leftarrow\)) This proof proceed analogously to the above proof of the other inclusion.
We can conclude that \(R^{\varphi}_{a}[(\mathsf{w},e)]=R^{\varphi\otimes}_{a}[(\mathsf{w},e^{\prime})]\).
(2) We now show that for all \((v,f)\in R^{\varphi}_{a}[(\mathsf{w},e)]\), \((\mathcal{M}^{\varphi},(v,f))\)\(\mathsf{(M}^{\varphi\otimes},(\mathsf{a},f))\). Consider a bisimulation \(\mathcal{Z}\subset(\mathsf{w}^{\varphi\otimes}\mathsf{k}^{\prime\prime\prime \prime\prime\prime\prime})\) defined by \((u^{\prime},g^{\prime})\in\mathcal{Z}[(\mathsf{u},g)]\) iff \(u=u^
Proof.: _Completeness:_ It proceeds by usual reduction arguments (Kripke, 1973). _Soundness:_ We show that axioms and inferences rules from Table 1 are valid. Axioms and inference rules for normal modal logic are valid in pointed Kripke models, by standard results (Kripke, 1973). As our product update is of the state-eliminating kind, the propositional reduction axioms are valid (Kripke, 1973). Thus, we only need to show the validity of the reduction axiom for attention-based belief updates. We prove the two directions separately.
Let \((\mathcal{M},w)=((W,R,V),w)\) be a pointed Kripke model. We use the same notation as in the previous proof for \(\varphi_{S}\), for \(\mathcal{F}(\varphi)\) and \(\mathcal{F}(\varphi_{S})\), and for the updates \((\mathcal{M}^{\varphi},(w,e))\) and \((\mathcal{M}^{\varphi_{S}},(w,e^{\prime}))\).
(\(\Rightarrow\)) In this direction we want to prove that if we assume \((\mathcal{M},w)\vDash[\mathcal{F}(\varphi)]B_{a}\psi\) for some arbitrary \(a\in Ag\), then it follows \(B_{a}(\bigwedge_{p\in S}\mathsf{r}(\varphi))\rightarrow[\mathcal{F}( \varphi_{S})]\). We will show that the claim follows straightforwardly from Lemma 4.1. Let \((\mathcal{M},w)\vDash[\mathcal{F}(\varphi)]B_{a}\psi\) for some arbitrary \(a\in Ag\), let \((\mathcal{M},w)\vDash[\mathcal{F}(\varphi)]\) and let \(S\subseteq At(\varphi)\) be the unique \(S\) such that \((\mathcal{M},w)\vDash\bigwedge_{p\in S}\mathsf{h}_{B}p\wedge\bigwedge_{p\in At (\varphi)\setminus S}\neg\mathsf{h}_{B}p\). As \(\varphi\) then \(\mathcal{F}(\varphi)\) is applicable in \((\mathcal{M},w)\) and \((\mathcal{M}^{\varphi_{S}},(w,e^{\prime}))\) exist. As \((\mathcal{M},w)\vDash[\mathcal{F}(\varphi)]B_{a}\psi\), then we know, by semantics of the dynamic modality and by applicability of \(\mathcal{F}(\varphi)\) to \((\mathcal{M},w)\), that \((\mathcal{M}^{\varphi},(w,e))\vDash B_{a}\psi\), and so, by semantics of belief modality, for all \((\mathsf{o},f)\in R_{a}^{\varphi}[(w,e)]\), \((\mathcal{M}^{\varphi},(\mathsf{o},f))\vDash\psi\). As our assumptions here are the same assumptions made in Lemma 4.1, we can then use that Lemma to obtain that \(R_{a}^{\varphi}[(w,e)]=R_{a}^{\varphi_{S}}[(w,e^{\prime})]\), and that for all \((\mathsf{o},f)\in R_{a}^{\varphi}[(w,e)]\), \((\mathcal{M}^{\varphi_{S}},(f))\vDash[\mathcal{M}^{\varphi_{S}},(w,f))\). By standard results, bisimulation implies modal equivalence (see e.g., (Kripke, 1973)). Hence, it follows that for all \((\mathsf{o},f)\in R_{a}^{\varphi_{S}}[(w,e^{\prime})]\), \((\mathcal{M}^{\varphi_{S}},(w,f))\vDash\psi\). This means that for all \(\mathsf{o}\in R_{a}[w]\) and all \(f\in Q_{a}^{\prime}[e^{\prime}]\) that are such that \((\mathsf{o},f)\in W^{\varphi_{S}}\), \((\mathcal{M}^{\varphi_{S}},(\mathsf{o},f))\vDash\psi\).
Now we have two cases: for any \(\mathsf{o}\in R_{a}[w]\), either \(\mathcal{F}(\varphi_{S})\) is applicable in \((\mathcal{M},\mathsf{o})\) or it is not. If it is not applicable, we can directly conclude that \((\mathcal{M},\mathsf{o})\vDash[\mathcal{F}(\varphi_{S})]\psi\), by semantics of dynamic modality, and since this holds for an arbitrary \(\mathsf{o}\in R_{a}[w]\), then \((\mathcal{M},w)\vDash B_{a}([\mathcal{F}(\varphi_{S})]\psi)\), by semantics of belief modality. Now consider the case in which \(\mathcal{F}(\varphi_{S})\) is applicable in \((\mathcal{M},\mathsf{o})\). In this case, we need to show that for any \(f\in Q_{a}^{\prime}[e^{\prime}]\) with \((\mathsf{o},f)\in W^{\varphi}\), \(f\) is maximal, i.e., \(f\in E_{d}^{\prime}\), to then be able to infer, by semantics of dynamic modality, that for all \(\mathsf{o}\in R_{a}[w]\), \((\mathcal{M},\mathsf{o})\vDash[\mathcal{F}(\varphi_{S})]\psi\). To that goal notice that since \((\mathcal{M},w)\vDash\bigwedge_{p\in S}\mathsf{h}_{B}p\), then by maximality of \(\epsilon^{\prime}\) with respect to \(\varphi_{S}\) and product update definition, \(\bigwedge_{p\in S}\mathsf{h}_{B}p\in e^{\prime}\), and so by \(\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A} \mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\mathsf{A}\
circumstances. Similarly, if \(q\) means "a gorilla is passing by", then agent \(a\) might have \(\neg q\) as the default value: if the occurrence of a gorilla is not paid attention to, the agent will believe there was none. Finally, if \(q\) takes default value \(\top\), it means that the agent doesn't default to any value, but preserves her previous beliefs. Maybe she has no strong beliefs about whether all the basket ball players are wearing white, and hence if \(q\) denotes that they are all wearing white, her default value for \(q\) would be \(\top\). We can think of default values as representing some kind of qualitative priors: They encode what an agent believes about what normally occurs in a given situation, and where those beliefs are sufficiently strong to let agent update her beliefs using these priors even when no direct evidence for or against them is observed (paid attention to).
**Definition 5.1** (Default Event Model \(\mathcal{E}(\varphi,d)\)): Suppose \(\varphi=\ell(p_{1})\wedge\cdots\wedge\ell(p_{n})\), and suppose that \(d\) is a _default map_: to each agent \(a\) and atom \(p_{i}\), \(d\) assigns a _default value_\(d_{a}(p_{i})\in\{p_{i},\neg p_{i},\top\}\). The _default event model_\(\mathcal{E}(\varphi,d)=((E,Q,id_{E}),E_{d})\) is:
\[\begin{array}{l}E=\{\bigwedge\limits_{p\in S}\ell(p)\wedge\bigwedge\limits_ {p\in At(\varphi)\setminus S}d_{b}(p)\wedge\bigwedge\limits_{a\in Ag}( \bigwedge\limits_{p\in X_{a}}\neg p\wedge\bigwedge\limits_{p\in S\setminus X _{a}}\neg\mathsf{h}_{a}p):\\ b\in Ag,S\subseteq At(\varphi)\text{ and for all }a\in Ag,X_{a}\subseteq S\} \end{array}\]
\(Q_{a}\) is such that \((e,f)\in Q_{a}\) iff all the following hold for all \(p\):
* [noitemsep,topsep=0pt]
* Attentiveness: if \(\mathsf{h}_{a}p\in e\) then \(\mathsf{h}_{a}p,\ell(p)\in f\);
* Defaulting: if \(\mathsf{h}_{a}p\notin e\) then \(d_{a}(p)\in f\).
\(E_{d}=\{\psi\in E:\ell(p)\in\psi,\text{ for all }\ell(p)\in\varphi\}\).
Default event models differ from event models for propositional attention in that if an event in a default model does not contain a literal from the announced formula, then it contains its default value for one of the agents. Each event contains default values for one agent only, so that no event may contain contradicting default values. The accessibility relations are given by similar principles as above, with the difference that the second principle is now called Defaulting, and this principle implies that inattentive agents only consider possible the default values of what they left unattended. Note that defaults are common knowledge among the agents (the event model doesn't encode any uncertainty about the default map \(d\)). Figure 4 (right) illustrates the revised update of our initial model with the default event model representing Ann seeing the video. In lack of attention to \(g\), she defaults to \(\neg g\), the intuition being that she believes that she would see the gorilla had it been there. She comes to believe there is no gorilla: \((\mathcal{M}^{\prime\prime},w^{\prime\prime})\vDash B_{a}\neg g\).
_Axiomatization._ The axiomatization of the logic for propositional attention with defaults is given by the same axioms as in Table 1, except for the axiom for belief dynamics which is replaced by the following axiom where inattentive agents adopt the default option for the unattended atoms (where \(\varphi=\ell(p_{1})\wedge\cdots\wedge\ell(p_{n})\)). For \(\varphi_{Sd}=\bigwedge_{p\in S}\ell(p)\wedge\bigwedge_{p\in At(\varphi) \setminus S}d_{a}(p)\), call the resulting table _Table 2_:
\[\begin{array}{l}[\mathcal{E}(\varphi,d)]B_{a}\psi\leftrightarrow(\varphi \rightarrow\bigvee_{S\subseteq At(\varphi)}((\bigwedge\limits_{p\in S} \mathsf{h}_{a}p\wedge\bigwedge\limits_{p\in At(\varphi)\setminus S}\neg \mathsf{h}_{a}p)\\ \to B_{a}(\{\mathcal{E}(\varphi_{Sd},d)\}|\psi))\end{array}\]
To prove soundness and completeness, we need a lemma similar to Lemma 4.
**Lemma 5.2**: _For any pointed Kripke model \((\mathcal{M},w)\) with \((\mathcal{M},w)\vDash\varphi\), and for any \(a\in Ag\), consider the \(S\subseteq At(\varphi)\) that is such that \((\mathcal{M},w)\vDash\wedge_{p\in S}\mathsf{h}_{a}p\wedge\bigwedge_{p\in At( \varphi)\setminus S}\neg\mathsf{h}_{a}p\). Let \(\varphi_{Sd}=\bigwedge_{p\in S}\ell(p)\wedge\bigwedge_{p\in At(\varphi) \setminus S}d_{a}(p)\). The updated models \((\mathcal{M},w)\otimes\mathcal{E}(\varphi_{Sd},d)=((W^{\varphi_{Sd}},R^{ \varphi_{Sd}},V^{\varphi_{Sd}}),(w,e^{\prime}))\) and \((\mathcal{M},w)\otimes\mathcal{E}(\varphi,d)=((W^{\varphi_{S}},R^{\varphi _{S}},V^{\varphi_{S}}),(w,e))\) are such that_
1. \(R^{\varphi}_{a}[(w,e)]=R^{\varphi_{Sd}}_{a}[(w,e^{\prime})]\)__
2. _For all_ \((\varphi,f)\in R^{\varphi}_{a}[(w,e)]\)_,_ \((\mathcal{M}^{\varphi},(v,f))\vDash(\mathcal{M}^{\varphi_{Sd}},(v,f))\)_._
The proofs of both (1) and (2) proceed analogously to the proofs of (1) and (2) of Lemma 4.1, respectively. We hence only show left to right of (1). We follow similar notational conventions as in Lemma 4.1, letting \(\mathcal{E}(\varphi,d)=((E,Q,pre),E_{d})\) and \(\mathcal{E}(\varphi_{Sd},d)=((E^{\prime},Q^{\prime},pre^{\prime}),E^{\prime}_{ d})\).
Let \((v,f)\in R^{\varphi}[(w,e)]\). This means that \(v\in R_{a}[w]\) and \(f\in Q_{a}[e]\). Then, to reach the desired result that \((v,f)\in R^{\varphi_{Sd}}[(w,e^{\prime})]\), we only need to show that \(f\in Q^{\prime}_{a}[e^{\prime}]\), as then we would have that \(v\in R_{a}[w]\) and \(f\in Q^{\prime}_{a}[e^{\prime}]\), and since \((\mathcal{M},v)\vDash pre(f)\) then \((v,f)\in W^{\varphi_{S}}\) and we could conclude that \((v,f)\in R^{\varphi_{Sd}}[(w,e^{\prime})]\). Similarly to the proof above, we show this by showing that \(f\) is such that \(f\in E^{\prime}\) and that \(f\) satisfies the requirements that Attentiveness and Defaulting pose to belong to \(Q^{\prime}_{a}[e^{\prime}]\), i.e., it contains the needed formulas.
So let's first see what formulas \(f\) contains. By initial assumption, \((\mathcal{M},w)\vDash(\bigwedge_{p\in S}\mathsf{h}_{a}p\wedge\bigwedge_{p\in At( \varphi)\setminus S}\neg\mathsf{h}_{a}p)\) for some \(S\subseteq At(\varphi)\). As \((w,e)\vDash W^{\varphi}\), then by product update definition and maximality of \(e\), it holds that \((\bigwedge_{p\in S}\mathsf{h}_{a}p\wedge\bigwedge_{p\in At(\varphi)\setminus S} \neg\mathsf{h}_{a}p)\in e\). Then by Attentiveness and \(\bigwedge_{p\in S}\mathsf{h}_{a}p\in e\), we know that \(\bigwedge_{p\in S}(\ell(p)\wedge\mathsf{h}_{a}p)\in f\). Moreover, as \(\bigwedge_{p\in At(\varphi)\setminus S}\neg\mathsf{h}_{a}p\in e\), then by def. of event model for propositional attention with defaults (in particular by definition of its set of events) for all \(p\in At(\varphi)\setminus S\), \(\mathsf{h}_{a}p\notin e\) and so by Defaulting, \(f\) contains \(\bigwedge_{p\in At(\varphi)\setminus S}d_{a}(p)\), which then implies that \(\mathsf{h}_{a}p\notin f\) for all such \(p\in At(\varphi)\setminus S\), by def. of event models for propositional attention with defaults. Hence, \(f\) is such that \(\bigwedge_{p\in S}(\ell(p)\wedge\mathsf{h}_{a}p)\wedge\bigwedge_{p\in At(\varphi) \setminus S}d_{a}(p)\in f\), and, for all \(p\in At(\varphi)\setminus S\), \(\mathsf{h}_{a}p\notin f\).
Now let's see what is required to belong to \(Q^{\prime}_{a}[e^{\prime}]\). Since by initial assumption \((\mathcal{M},w)\vDash\wedge_{p\in S}\mathsf{h}_{a}p\) and since \((w,e^{\prime})\in W^{\varphi_{Sd}}\), then by product update definition and maximality of \(e^{\prime}\), it holds that \(\bigwedge_{p\in S}\mathsf{h}_{a}p\in e\). Then we can use Attentiveness to see that in order to belong to \(Q^{\prime}_{a}[e^{\prime}]\) an event must contain \(\bigwedge_{p\in S}(\ell(p)\wedge\mathsf{h}_{a}p)\). Moreover, as \(\bigwedge_{p\in At(\varphi)\setminus S}\neg\mathsf{h}_{a}p\in e\), then by Defaulting, all events in \(Q^{\prime}_{a}[e^{\prime}]\) must contain \(\bigwedge_{p\in At(\varphi)}d_{a}(p)\) which implies, by the way events with defaults are defined, that they must not contain \(\mathsf{h}_{a}p\) for all such \(p\in At(\varphi)\setminus S\). Hence, to belong to \(Q^{\prime}_{a}[e^{\prime}]\), an event \(f^{\prime}\) must be such \(\bigwedge_{p\in S}(\ell(p)\wedge\mathsf{h}_{a}p)\wedge\bigwedge_{p\in At(\varphi) \setminus S}d_{a}(p)\in f^{\prime}\) as well as, for all \(p\in At(\varphi)\
for the defaults introduced in the beginning of this section, we now have the following.
Theorem 5.3 ().: _The axiomatization in Tbl. 2 is sound and complete._
Proof.: _Completeness_: It proceeds by usual reduction arguments (Kri
The formula \(\psi\Rightarrow\mathsf{e}\) is read as "\(\psi\) implies the precondition of the (current) event" and \(\mathsf{e}\Rightarrow\psi\) as "the precondition of the (current) event implies \(\psi\). We will use \(\mathsf{e}\Leftrightarrow\psi\) as shorthand for \(\psi\Rightarrow\mathsf{e}\land\mathsf{e}\Rightarrow\psi\). Formulas of \(\mathcal{L}_{\mathcal{E}}\) are to be evaluated in single-agent event models, since we are going to specify the edge principles for each agent \(a\) by a separate formula \(\varphi_{a}\) of \(\mathcal{L}_{\mathcal{E}}\).
Definition 6.1 (Satisfaction).: Let \(\mathcal{E}=(E,Q,pre)\) be a single-agent event model over \(\mathcal{L}\) (so \(Q\subseteq E^{2}\)). For any \(e\in E\), satisfaction of \(\mathcal{L}_{\mathcal{E}}\)-formulas in \(\mathcal{E}\) is given by the following clauses extended with the standard clauses for the propositional connectives:
\[\begin{array}{lcl}(\mathcal{E},\mathsf{e})\vDash\varphi\Rightarrow\mathsf{e }&\text{iff}&\vDash\eta\succ pre(\mathsf{e});\\ (\mathcal{E},\mathsf{e})\vDash\varphi\Rightarrow\psi&\text{iff}&\vDash\eta\; pre(\mathsf{e})\rightarrow\psi;\\ (\mathcal{E},\mathsf{e})\vDash\Box\varphi&\text{iff}&(\mathcal{E},\mathsf{f}) \vDash\varphi\text{ for all }(e,f)\in Q.\end{array}\]
A formula \(\psi\) is called _valid_ in \(\mathcal{E}=(E,Q,pre)\) if \((\mathcal{E},\mathsf{e})\vDash\psi\) holds for all \(e\in E\). We then write \(\mathcal{E}\vDash\psi\). To have a convient notation for reasoning about what holds true for a single event with precondition \(\varphi\in\mathcal{L}\), we introduce the following notation, where \(\psi\in\mathcal{L}_{\mathcal{E}}\):
\[\varphi\vDash\psi\quad\text{iff}\quad((\{\varphi\},\emptyset,\mathsf{h}id_{ \{\varphi\}}),\varphi)\vDash\psi\]
Note that the \(\mathsf{e}\) in the syntax is bound to the event \(\mathsf{e}\) at which the formula is evaluated. So \(\mathsf{e}\Rightarrow p\rightarrow\mathsf{\mathbbm{e}}\Rightarrow\neg p\) means that if the precondition of the current event implies \(p\), then the precondition of any accessible event implies \(\neg p\). Concerning the notation \(\varphi\vDash\psi\), note that we for instance have \(p\land q\vDash\mathsf{e}\Rightarrow p\land\mathsf{e}\Rightarrow q\): Both \(p\) and \(q\) are implied by an event with precondition \(p\land q\). Note that the \(\Rightarrow\mathsf{e}\) operator is not truth-functional: For instance we have \(\mathsf{T}\vDash e\Rightarrow(p\lor\neg p)\), but we don't have \(\mathsf{T}\vDash e\Rightarrow p\lor\mathsf{e}\Rightarrow\neg p\).
Example 6.2.: Consider the event model \(\mathcal{E}^{\prime}(\varphi)\) of Definition 3.2 for some \(\varphi\in\mathcal{L}\) where \(Ag=\{a\}\). By Inertia, if \(e\) is an event not containing \(\mathsf{h}_{a}\), then for any other event \(f\) with \((e,f)\in Q_{a}\), we have \(f=\mathsf{T}\). We can express this using an \(\mathcal{L}_{\mathcal{E}}\)-formula: \(\neg\,\mathsf{e}\Rightarrow\mathsf{h}_{a}\rightarrow\mathsf{\mathbbm{e}}\Leftrightarrow\mathsf{T}\). The formula says: if \(\mathsf{h}_{a}\) is not implied by the precondition of the current event, then any accessible event has a precondition equivalent to \(\mathsf{T}\). The formula is simply Inertia expressed in \(\mathcal{L}_{\mathcal{E}}\), and we have \(\mathcal{E}^{\prime}(\varphi)\vDash\neg\,\mathsf{e}\Rightarrow\mathsf{h}_{a} \rightarrow\mathsf{\mathbbm{e}}\Leftrightarrow\mathsf{T}\).
When trying to come up with a new way of representing event models syntactically, there is a trade-off between generality and expressivity on one side and succinctness and elegance on the other. The more general a class of event models we want to be able to describe, the more complex the language might have to be and the longer and more complicated the formulas might become. Here we will aim for keeping things simple, even if it implies less generality. For instance, opposite the approach of [10], we decided not to include propositional atoms in \(\mathcal{L}_{\mathcal{E}}\) for referring to the names of specific events. This limits expressivity, as then the language can only distinguish events by their preconditions and can not represent distinct events with the same precondition. However, for the event models of this paper, this is not a limitation.
We move to define our syntactic event models. To make the distinction clear, we will now refer to the standard event models of Definition 2.2 as _semantic event models_.
Definition 6.3 ().: A syntactic event model is a pair \(\mathcal{G}=(\psi_{E},(\psi_{a})_{a\in Ag})\), where all the \(\psi\) formulas belong to \(\mathcal{L}_{\mathcal{E}}\). The semantic event model \(\mathcal{H}=(E,Q,id_{E})\)_induced_ by \(\mathcal{G}\) is defined as follows:
* \(E=\{\varphi\in\mathcal{L}:\varphi\text{ is a conjunction of literals s.t. }\varphi\vDash\varphi_{E}\}\);
* For all \(a\in Ag\), \(Q_{a}\) is the largest subset of \(E^{2}\) satisfying \((E,Q_{a},id_{E})\vDash\psi_{a}\). If such a unique largest set doesn't exist, let \(Q_{a}\) be the empty set.
Where \(\psi_{E_{d}}\in\mathcal{L}_{\mathcal{E}}\), we call \((\mathcal{G},\psi_{E_{d}})\) a _syntactic multi-pointed event model_. The _induced_ multi-pointed event model of \((\mathcal{G},\psi_{E_{d}})\) is \((\mathcal{H},E_{d})\) where \(\mathcal{H}\) is the event model induced by \(\mathcal{G}\) and \(E_{d}=\{\varphi\in\mathcal{L}:\varphi\text{ is a conjunction of literals s.t. }\varphi\vDash\psi_{E_{d}}\}\).
Example 6.4.: Consider again the event model \(\mathcal{E}^{\prime}(\varphi)\) of Definition 3.2, where we here let \(\varphi=q\), assume \(At=\{q\}\) and assume \(Ag\) to be any set of agents. Then \(\mathcal{E}^{\prime}(\varphi)\) is induced by the syntactic event model \(\mathcal{G}=(\psi_{E},(\psi_{a})_{a\in Ag})\) defined as follows:
\(\psi_{E}=\mathsf{e}\Leftrightarrow\mathsf{T}\vee\{(\mathsf{e}\Rightarrow q \lor\mathsf{e}\Rightarrow\neg q)\land\land_{a\in Ag}((\mathsf{e}\Rightarrow \mathsf{h}_{a})\vee(\mathsf{e}\Rightarrow\mathsf{h}_{a}))\]
\(\psi_{a}=(\mathsf{e}\Rightarrow\mathsf{h}_{a}\rightarrow\mathsf{\mathbbm{e}} \Rightarrow\mathsf{\mathbbm{e}}\Rightarrow q)\land(\neg\,\mathsf{e}\Rightarrow \mathsf{h}_{a}\rightarrow\mathsf{\mathbbm{e}}\Leftrightarrow\mathsf{T})\).
The definition of \(\psi_{E}\) states that any event is either (equivalent to) \(\top\) or else: 1) it implies either \(q\) or \(\neg q\) and, 2) for all \(a\in Ag\), it implies either \(\mathsf{h}_{a}\) or \(\neg\mathsf{h}_{a}\). Note that since the induced event model is always a model over a set of conjunctive preconditions, we can reformulate this as follows: \(\psi_{E}\) states that any event is either \(\top\) or else 1) it contains either \(q\) or \(\neg q\) and, 2) for all \(a\in Ag\), it contains either \(\mathsf{h}_{a}\) or \(\neg\mathsf{h}_{a}\). Comparing with Definition 3.2, we see that this is exactly how we defined the set of events of this model. Concerning \(\psi_{a}\), we earlier concluded that the second conjunct expresses Inertia. The first conjunct expresses Basic Attentiveness.
The _size_ of a syntactic event model is the sum of the lengths of the formulas it consists of. We say that two semantic event models \(\mathcal{E}=(E,Q,pre)\) and \(\mathcal{E}^{\prime}=(E^{\prime},Q^{\prime},pre^{\prime})\) are _equivalent_ if there exists \(e\in E,e^{\prime}\in E^{\prime}\) such that for all pointed Kripke models \(\mathcal{M}=((W,V,R),w)\) and all formulas \(\varphi\in\mathcal{L}\), \(\mathcal{M}\otimes\mathcal{E},(w,e)\vDash\varphi\) iff \(\mathcal{M}\otimes\mathcal{E}^{\prime},(w,e^{\prime})\vDash\varphi\).7 We can now prove an exponential succinctness result for syntactic event models. We show that for all \(n\geq 1\), we can construct a particular syntactic event model \(\mathcal{G}(n)\) that can't be represented by any semantic event model with less than \(2^{n}\) events.
Footnote 7: We could have defined this notion equivalently in terms of bisimulations [10], but as we haven’t defined bisimulations in this paper, we choose this equivalent formulation [12].
Proposition 6.5 (Exponential succinctness).: _There exists syntactic event models \(\mathcal{G}(n)\), \(n\geq 1\), such that all of the following holds:_
* \(\mathcal{G}(n)\) _has size_ \(O(n)\)_._
* _The semantic event model_ \(\mathcal{H}(n)\) _induced by_ \(\mathcal{G}(n)\) _has_ \(2^{n}\) _events (and is hence of size_ \(\Omega(2^{n})\)_)._
* _Any other semantic event model that is equivalent to_ \(\mathcal{H}(n)\) _will have at least_ \(2^{n}\) _events._
_Furthermore, we can construct the \(\mathcal{G}(n)\) so that they use only one agent and where \(n\) is the number of atomic propositions._
Proof.: For each \(n\geq 1\), let \(\mathcal{G}(n)\) denote the syntactic event model of Example 6.4 with \(Ag=\{1,\ldots,n\}\). Let \(\mathcal{H}(n)\) denote the semantic event model induced by \(\mathcal{G}(n)\). The induced event model \(\mathcal{H}(n)\) is the one defined in Definition 3.2 that we already concluded to have at least \(2^{n}\) events (due to there being one event per subset of \(\{\mathsf{h}_{a}:a\in Ag\}\), the subset containing the \(\
\(\psi_{E}\) is of size \(O(n)\): the inner-most disjunction is repeated once for each agent, but everything else is of fixed size. The formula \(\psi_{a}\) is also of fixed size, it simply has size (length) 22. We however need one of these formulas for each agent, so in total \((\psi_{a})_{a\in A\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! |
2302.02868 | Lyman continuum leaker candidates among highly ionised, low-redshift
dwarf galaxies selected from HeII | Contemporary research suggests that the reionisation of the intergalactic
medium (IGM) in the early Universe was predominantly realised by star-forming
(proto-)galaxies (SFGs). Due to observational constraints, our knowledge on the
origins of sufficient amounts of ionising Lyman continuum (LyC) photons and the
mechanisms facilitating their transport into the IGM remains sparse. Recent
efforts have thus focussed on the study of local analogues to these
high-redshift objects.
We used archival spectroscopic SDSS DR12 data to select a sample of low-z He
II 4686 emitters and restricted it to a set of SFGs with an emission line
diagnostic sensitive to the presence of an AGN, which serves as our only
selection criterion. Our final sample consists of eighteen low-mass,
low-metallicity dwarf galaxies which appear to be predominantly ionised by
stellar sources. We find large O32 ratios and [S II] deficiencies, which
provide strong indications for these galaxies to be LyC Emitters (LCEs). At
least 40% of these objects are candidates for featuring cosmologically
significant LyC escape fractions >10%. Their SFHs exhibit strong similarities
and almost all galaxies appear to contain an old (>1 Gyr) stellar component,
while also harbouring a young, two-stage (~10 Myr and <1 Myr) starburst, which
we speculate might be related to LyC escape.
The properties of the compact emission line galaxies presented here align
well with those observed in many local LCEs. In fact, our sample may prove as
an extension to the rather small catalogue of local LCEs, as the extreme
interstellar medium (ISM) conditions we find are assumed to facilitate LyC
leakage. Notably, all of our eighteen candidates are significantly closer
(z<0.1) than most established LCEs. If the inferred LyC photon loss is genuine,
this demonstrates that selecting SFGs from He II 4686 is a powerful selection
criterion in the search for LCEs. | Adam U. Enders, Dominik J. Bomans, Anna Wittje | 2023-02-06T15:36:12Z | http://arxiv.org/abs/2302.02868v1 | Lyman continuum leaker candidates among highly ionised, low-redshift dwarf galaxies selected from He ii
###### Abstract
Context:Contemporary research suggests that the reionisation of the intergalactic medium (IGM) in the early Universe was predominantly realised by star-forming (proto-)galaxies (SFGs). Due to observational constraints, our knowledge on the origins of sufficient amounts of ionising Lyman continuum (LyC) photons and the mechanisms facilitating their transport into the IGM remains sparse. Recent efforts have thus focussed on the study of local analogues to these high-redshift objects.
Aims:We aim to acquire a set of very low-redshift SFGs that exhibit signs of a hard radiation field being present. A subsequent analysis of their emission line properties is intended to shed light on how the conditions prevalent in these objects compare to those predicted to be present in early SFGs that are thought to be LyC emitters (LCEs).
Methods:We used archival spectroscopic SDSS DR12 data to select a sample of low-redshift He ii 4686 emitters and restricted it to a set of SFGs with an emission line diagnostic sensitive to the presence of an active galactic nucleus, which serves as our only selection criterion. We performed a population spectral synthesis with zero to reconstruct these galaxies' star-formation histories (SFHs). Utilising the spectroscopic information at hand, we constrained the predominant ionisation mechanisms in these galaxies and inferred information on ISM conditions relevant for the escape of LyC radiation.
Results:Our final sample consists of eighteen ionised, metal-poor galaxies (IMPs). These low-mass (\(6.2\leq\log{(M_{\star}/\mathrm{M}_{\odot})}\leq 8.8\)), low-metallicity (\(7.54\leq\log{(\mathrm{O/H})}+12\leq 8.13\)) dwarf galaxies appear to be predominantly ionised by stellar sources. We find large [O iii] 5007/[O ii] 3727 ratios and [S ii] 6717/6731/H\(\alpha\) deficiencies, which provide strong indications for these galaxies to be LCEs. At least 40% of the objects are candidates for featuring cosmologically significant LyC escape fractions \(\geq 10\)%. The IMPs' SFHs exhibit strong similarities and almost all galaxies appear to contain an old (\(>\)1 Gyr) stellar component, while also harbouring a young, two-stage (\(\sim\)10 Myr and \(<\)1 Myr) starburst, which we speculate might be related to LyC escape.
Conclusions:The properties of the compact emission line galaxies presented here align well with those observed in many local LCEs. In fact, our sample may prove as an extension to the rather small catalogue of local LCEs, as the extreme interstellar medium (ISM) conditions we find are assumed to facilitate LyC leakage. Notably, all of our eighteen candidates are significantly closer (\(z<0.1\)) than most established LCEs. If the inferred LyC photon loss is genuine, this demonstrates that selecting SFGs from He ii 4686 is a powerful selection criterion in the search for LCEs.
## 1 Introduction
About 12.8 Gyr ago, at a redshift of \(z\sim 6\), the last major phase transition of our Universe's intergalactic medium (IGM) from neutral gas to ionised plasma was complete (e.g. Fan et al., 2006; McGreer et al., 2015). In the period preceding this, known as the Epoch of Reionisation (EoR), the young galaxies at that time must have contained a population of objects that have leaked large amounts of Lyman continuum (LyC) photons (\(\lambda<912\) A) into intergalactic space in order to account for this transition.
Precisely which astrophysical objects exhibit properties that qualify them as LyC emitters (LCEs, often also referred to as LyC leakers) is a topic of debate in contemporary astrophysical research. While in the low-redshift Universe active galactic nuclei (AGN) readily provide the energy required to explain the IGM's ionised state (Haardt and Madau, 2012), their number density drops steeply with increasing redshift (e.g. McGreer et al., 2013; Kulkarni et al., 2019). Thus, unless a signficant population of low-luminosity AGN existed in the early Universe (Madau and Haardt, 2015), star-forming galaxies (SFGs) must have provided the majority of ionising radiation during this era (Robertson et al., 2015; Mitra et al., 2018). Here, some researchers advocate that luminous, massive SFGs play a dominant role (Sharma et al., 2016; Naidu et al., 2020; Marques-Chaves et al., 2021, 2022), but many arguments seem to be in favour of the faint, low-mass class of dwarf galaxies (DGs), as they are the most abundant class of galaxy with properties that may facilitate the transport of LyC photons into the IGM (e.g. Wise et al., 2014; Finkelstein et al., 2019). Ultimately though, the magnitude of their contribution as well as the mechanisms driving the loss of ionising radiation into the IGM remain a topic of debate.
In part, this uncertainty is owed to the poorly understood intrinsic properties of these galactic systems, which in conjunction need to result in a Lyman continuum escape fraction (\(f_{\mathrm{esc}}\) (LyC))
of some \(10-20\) % to provide a plausible scenario for reionisation (e.g. Ouchi et al., 2009; Robertson et al., 2013; Stark, 2016; Naidu et al., 2020). In young galaxies, most of the intrinsic LyC flux is provided by OB stars, and the total amount of high energy photons can be elevated by an overabundance of these sources due to intense starburst episodes, but likewise due to variations of the initial mass function (IMF, e.g. Hopkins and Beacom, 2006; Wise and Cen, 2009). As studies of integrated galaxy properties strongly hinge on current stellar models, one needs to take into account that for low-metallicity systems, these models are backed by few to no observations (the latter certainly being true for Population III stars, whose impact on a galaxy's LyC output is significant, Schaerer 2002, 2003) and consequently introduce a high amount of uncertainty. Parameters governing stellar evolution such as rotation and binarity introduce further uncertainty, as they may significantly alter the spectral energy distribution (SED) of a given star, particularly in the ultra-violet (UV) regime (e.g. Eldridge and Stanway, 2009). Additionally, other sources capable of boosting a galaxy's LyC output need to be taken into consideration, with the most prominent contenders being Wolf-Rayet (WR) stars (Schaerer, 1996; Smith et al., 2002), high-mass X-ray binaries (HMXBs, Garnett et al., 1991; Sana et al., 2012; Schaerer et al., 2019), and fast radiative shocks (Garnett et al., 1991; Dopita and Sutherland, 1996; Thuan and Izotov, 2005; Plat et al., 2019). This list can be expanded by including other sources such as post-AGB stars and the nuclei of planetary nebulae (Binette et al., 1994), but these bear less relevance here due to the insufficient evolutionary timescale of these systems.
Likewise, our insights regarding the mechanisms facilitating the transport of ionising LyC radiation into the IGM remain sparse. Here, the distribution of the interstellar medium (ISM) in different stages of ionisation certainly plays a significant role in LyC escape, and several plausible models backed by observations have been suggested. Such scenarios include almost entirely density-bounded SFGs (Nakajima and Ouchi, 2014; de Barros et al., 2016), a low-density ISM permeated by optically thick HI clumps (the picket-fence model, Heckman et al., 2011; Gronke et al., 2016), or an optically thick ISM riddled by low-density, highly ionised cavities and tunnels carved by supernovae (SNe) and galactic winds (Zackrisson et al., 2013; Behrens et al., 2014). Especially the latter scenario introduces a strong viewing angle dependence, as the ionised tunnels would then have a strong directional preference, mediated by the host galaxy's gravitational potential gradient and thus, its morphology (Zastrow et al., 2013; Cen and Kimm, 2015). We provide a sketch of different LyC escape scenarios in Fig. 1, for illustrative purposes.
Generally speaking, studying the properties of early galaxies remains a challenging observational task, owed to the apparent faintness and the low spatial resolution that can be achieved with present-day telescopes for such high-redshift objects. In particular, the intrinsically faint UV regime of the SED is heavily affected by extinction, and beyond redshifts of \(z\sim 4\), the detection of LyC photons is rendered highly unlikely due to absorption by neutral hydrogen on the line of sight (Madau and Dickinson, 2014).
In order to gain further insights into the characteristics of early SFGs, a promising approach to circumvent these problems is achieved by studying local counterparts which share similar properties. Notably, the starbursts among the luminous compact galaxies (LCGs, Izotov et al., 2011) and compact SFGs (CSFGs, Izotov et al., 2021), characterised by H\(\beta\) equivalent widths (EWs) of EW(H\(\beta\))\(>\)100 A, have many commonalities to high-\(z\) SFGs, such as low metallicities, low stellar masses, and large specific star-formation rates (sSFRs). The first results from observations with the _James Webb Space Telescope_ suggest similar findings for galaxies well in the EoR, further reinforcing the argument that the study of local analogues offers many insights into the high-\(z\) Universe (e.g. Trussler et al., 2022; Endsley et al., 2022; Schaerer et al., 2022; Topping et al., 2022; Rhoads et al., 2023).
If SFGs were the dominant driver of reionisation, we can reasonably expect to find LCEs among such populations of local analogues. Indeed, studies of nearby, low-metallicity DGs have revealed high-ionisation nebular emission lines, such as O iii], C iv, or He ii (e.g. Senchyna et al., 2017; Berg et al., 2019), requiring energies well in excess of those necessary to ionise hydrogen (13.6 eV). Not only is this another link to several high-\(z\) SFGs (e.g. Sobral et al., 2015; Stark et al., 2015; Mainiail et al., 2017), but also a strong motivator for the search for LCEs among these objects, as a hard radiation field is evidently present. We would like to point out that this connection has been acknowledged before, for instance by Berg et al. (2019) and Perez-Montero et al. (2020).
Over the last few years, a significant, albeit small sample of low-\(z\) galaxies with direct detection of LyC leakage has been compiled (Bergvall et al., 2006; Leitet et al., 2013; Borthakur et al., 2014; Leitherer et al., 2016; Izotov et al., 2016, 2018, 2018, 2019; Wang et al., 2019; Malkan and Malkan, 2021; Izotov et al., 2021, 2021; Flury et al., 2022). A particularly high detection rate of LyC escape was achieved by Izotov et al. (2016, 2016) among the so-called green pea galaxies (GPs, Cardamone et al., 2009), which have been shown to be a subset of the LCGs (Izotov et al., 2011).
Whereas many LCEs do show nebular He ii 4686 emission, recent works have demonstrated the absence of a correlation between spectral hardness (as traced by He ii 4686/H\(\beta\)) and \(f_{\rm esc}\) (LyC) (Marques-Chaves et al., 2022). This does not imply, however, that the inverse is true, and drawing a sample of SFGs from high-ionisation emission lines may yield an appreciable fraction of LCEs, which we intend to explore in this paper. This idea is further reinforced by the fact that several spectra of high redshift LCE candidates feature intense high-ionisation UV emission lines (e.g. Schaerer et al., 2022; Naidu et al., 2022; Saxena et al., 2022).
It is to be noted that high-ionisation lines are often difficult to reproduce in spectral modelling of SFGs. Emission of He ii 4686, anti-correlated with metallicity (Schaerer et al., 2019) and utilised in this work, appears to be particularly challenging. Often assumed to be a spectral signature of WR stars, Shirazi and Brinchmann (2012) found no evidence of WR stars being present in \(\sim 40\)% of their He ii 4686 emitting SFG sample. Similar situations arise for example in the works of Stasinska et al. (2015) and Schaerer et al. (2019), who find that current stellar models cannot reproduce the observed He ii 4686 intensities, and the inclusion of non-stellar sources such as HMXBs or shocks are necessary to bring their model SEDs in line with observations.
In this work, we present our findings from legacy data of the Sloan Digital Sky Survey (SDSS, Eisenstein et al., 2011), adding a total of 18 SFGs to the list of high-\(z\) analogues, selected from their He ii 4686 emission lines. Inferred from indirect tracing methods, virtually all of these galaxies are promising candidates to expand the list of local LCEs, and their recent star-formation histories (SFHs) may provide an explanation as to why their LyC radiation can escape into the IGM.
The paper is structured as follows: in Sect. 2, we describe the dataset and sample selection, while the analysis methods are covered in Sect. 3. We present the properties we derived for the galaxies we selected in Sect. 4, and summarise our results and augment them by a few concluding remarks in Sect. 5.
Throughout this paper, we assume a flat \(\Lambda\)CDM cosmology with \(H_{0}=67.4\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.315\)(Planck Collaboration et al., 2020).
## 2 Data
### Dataset
In order to obtain a sample of SFGs with He ii 4686 emission, we used the Baryon Oscillation Spectroscopic Survey (BOSS, Dawson et al., 2013) dataset from the SDSS's third run (SDSS-III, Eisenstein et al., 2011) in their twelfth data release (DR12), the final data release in the SDSS-III. The choice of this dataset is motivated by an advantage particular to DR12, as opposed to the more recent data releases, which is found in the augmentation by Thomas et al. (2013). They have published a value-added catalogue derived from fits of model spectra to the data, in which they provide refined line fluxes for the most prominent emission lines amongst other spectroscopic properties. This allows for a fairly efficient scouring of the database for the objects of our interest, whose properties are most readily derived from emission line ratios.
It is to be noted that DGs can be expected to be somewhat underrepresented in this dataset. This is evidently owed to the primary science goal of the BOSS, which has an intentional bias towards luminous, and hence massive galaxies (Dawson et al., 2013). However, their observation runs allowed for a fraction of ancillary targets deviating from their main science goal (Dawson et al., 2013), among which we deemed it likely to find suitable candidates. Previous works have demonstrated this - Guseva et al. (2017) have used this catalogue to spectroscopically identify 287 metal-deficient DG candidates, and Yang et al. (2017) extracted a sample of analogues to high-\(z\) Lyman \(\alpha\) (Ly\(\alpha\)) emitters by photometric selection (dubbed 'blueberry galaxies').
### Sample selection
Typically, studies concerning SFGs utilise the presence of He ii 4686 emission as an exclusion criterion (e.g. Izotov et al., 2011; Guseva et al., 2017), as this line is indicative of the presence of an AGN. However, not all He ii 4686 emitters have a significant AGN contribution to their radiation field (Shirazi & Brinchmann, 2012), and as we are primarily interested in finding LyC leaking galaxies, helium with its ionisation potential of 54.4 eV is chosen as an excellent indicator for the presence of a hard radiation field.
Based on SDSS DR7 data, Shirazi & Brinchmann (2012) have shown that an AGN contribution to the He ii 4686 emission is negligible (\(<\)10%) if a line ratio relation of
\[\log\left(\frac{\mathrm{He\,\,\textsc{ii}}\,4686}{\mathrm{H}\beta}\right)\leq- 1.22+\frac{1}{8.92\log\left(\mathrm{[N\,\,\textsc{ii}]}\,6583/\mathrm{H} \alpha\right)+1.32} \tag{1}\]
is satisfied. We expect this restriction to be sufficient to ensure the purity of our SFG sample with respect to AGN contamination and thus draw our final sample from Eq. 1 alone. To ensure credibility in the measured emission line fluxes, following Thomas et al. (2013), we required all objects to have an amplitude over noise (AoN) \(>\)2 in each of the utilised emission lines.
We note that as opposed to LCGs (Izotov et al., 2011), the CSFG sample compiled in Izotov et al. (2021) by selection allows for the presence of He ii 4686 emission. Thus, at least a part of our sample is a priori likely to be a subset of the CSFGs.
## 3 Data analysis
### Spectral analysis
After our initial sample selection, we analysed the SDSS spectra using rado(Gomes & Papaderos, 2017). In brief, this code performs a population spectral synthesis by fitting a set of simple stellar populations to a galaxy's spectrum. rado then reconstructs the SFH from these fits, while computing a multitude of derived galaxy properties. This principle method is well established and has been successfully applied in the past to study the physical properties of galaxies (e.g Cid Fernandes et al., 2007). We selected rado as our synthesis code of choice, since as opposed to other commonly used tools such as starlight(Cid Fernandes et al., 2005), rado also accounts for the contribution of nebular continuum emission, which can be assumed to be substantial in emission line galaxies such as those studied in this work. While the overall agreement between starlight and rado appears to be adequate, the study by Cardoso et al. (2019) particularly emphasises the better performance of rado for (starbursting) galaxies with strong nebular emission.
Prior to the analysis, we corrected the input spectra for foreground extinction assuming a Cardelli et al. (1989) extinction curve. We assumed a standard ratio of total to selective extinction
Figure 1: Annotated sketch highlighting several scenarios present in an H ii region around a dominant central object (DCO). Light blue arrows represent LyC photons, where an arrow ending within the extent of the H ii region indicates absorption, and an arrow exceeding the region’s extent represents escape. The top left quadrant depicts the extent of H i and H ii for the density- and ionisation-bounded textbook scenarios, whereas the bottom left quadrant is intended to highlight the role of a more complex gas geometry on LyC escape. The bottom right quadrant serves as a reminder that the spatial distribution of ionising sources further influences the observed LyC flux. Finally, the top right quadrant illustrates that the regions occupied by different ions vary in size, mediated by their respective ionisation potentials (an overview of the ionisation energies of a few select atoms is given in the table at the top right).
of \(R_{V}\equiv A_{V}/E(B-V)=3.1\). For each object, we adopted the V-band extinction values as listed in the NED1, taken from Schlafly & Finkbeiner (2011). Afterwards, the spectra were de-redshifted using the values provided on the website of the SDSS2.
Footnote 1: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)
Footnote 2: [https://www.sdss.org/](https://www.sdss.org/)
As for our choice of the stellar populations, we adopted the models from Bruzual & Charlot (2003) in their latest version3 assuming a universal Kroupa (2001) IMF. We note that this choice is somewhat arbitrary, and the implied evolutionary parameters may be different for the stellar populations present here. In particular, as we find our objects to be compact, star-forming galaxies, binary stellar evolution may impact the galaxies' SEDs, which is covered in the BPASS models by Stanway & Eldridge (2018). Likewise, as stated above, the results are sensitive to the choice of IMF, and it has been suggested that high-z SFGs may be governed by a top-heavy IMF (e.g. Steinhardt et al., 2022), which would have significant implications for LyC leakage (Wise & Cen, 2009). As this paper is of an exploratory nature and observations of deeper optical spectra for future analysis have already been scheduled, we postpone this to a follow-up study and limit this analysis to the models by Bruzual & Charlot (2003).
Footnote 3: [https://www.bruzual.org/bc03/Updated_version_2016/](https://www.bruzual.org/bc03/Updated_version_2016/)
In principle, these models cover a range of metallicities extending to 2.5 \(Z_{\odot}\), spanning 220 unequally spaced time steps from 0.1 Myr - 20 Gyr. In order to limit the range of free parameters and therefore improve the accuracy of the age estimates, we calculated preliminary oxygen abundances as outlined below (Sect. 3.2) from the emission line fluxes provided by Thomas et al. (2013). With the derived metallicities from the emission lines analysis (values in the range of \(Z/Z_{\odot}=0.07-0.22\)), we thus limit the set of stellar populations to a total of 600 model spectra with values of \(Z/Z_{\odot}\in[0.005,0.02,0.2]\) and stellar ages of \(\tau_{\star}=0.1\) Myr - 15 Gyr.
Using this set of models, we then performed the spectral synthesis by fitting the spectral range of \(3400-9000\) A. As we expect our objects to contain very little dust, we initially assumed negligible internal extinction, that is \(A_{\lambda}\sim 0\). We then consider the spectral fits by fado as a reasonable approximation for the best (extinction-corrected) solution; in particular, the emission line fluxes found this way should be adequately corrected for underlying stellar absorption. The (emission) flux ratios of H\(\alpha\), H\(\gamma\) and H\(\delta\) relative to H\(\beta\) then are compared to the expected values from theory, where we consider the Case B values tabulated in Storey & Hummer (1995) for \(t_{\rm e}=15\,000\) K and \(n_{\rm e}=100\) cm\({}^{-3}\) a reasonable match for the bulk of the galaxies studied here. Following Osterbrock & Ferland (2006), we then determine the extinction coefficient \(c\) (H\(\beta\)) as the mean of the extinction coefficients found for the individual line ratios, and subsequently correct the spectra assuming an SMC-like extinction law (Gordon et al., 2003, \(R_{V}=2.74\)). In the cases where we find a slightly negative mean value for \(c\) (H\(\beta\)), we assume no internal extinction, that is \(c\) (H\(\beta\)) = 0.
Following this, we then performed an anewed population spectral synthesis with the dereddened spectra, leaving all other parameters unchanged. During this process, fado performed Gaussian fits for up to 51 prominent emission lines ranging from [Ne v] 3426 to [Fe ii] 8617. After visual inspection of the spectra, we found these fits to be in very good agreement to the line fluxes observed, and hence used these values for each subsequent analysis in this work. The line fluxes along with their corresponding EWs and the derived extinction coefficients are listed in the Appendix in Table B.1.
For the sake of completeness, we note that fado occasionally would shift the peak of a measured emission line by \(\sim 1\) A bluewards. Our visual inspection of the spectra suggested that this offset had no impact on the measured fluxes, and we verified this via manual line fitting using naf's (Tody, 1986) splot, finding excellent agreement between the values obtained.
While rano models a detailed SFH for each galaxy, we additionally calculated the star-formation rate (SFR) one would infer from H\(\alpha\) for the sake of comparability with other studies. Here, we follow Kennicutt & Evans (2012), thus log SFR (H\(\alpha\)) = \(\log L\) (H\(\alpha\)) \(-41.27\). As our analysis (Sect. 4) reveals that the galaxies in our final sample exhibit many similarities in the properties we considered, we further produced a stack of their spectra and analysed it in the same way as the individual objects.
### Abundance determination
To determine the metallicities of the objects considered here, we largely follow the prescriptions as outlined in Perez-Montero (2017). For the determination of the oxygen abundance, we adopt the so-called 'direct' method, as the auroral [O iii] 4363 line is detected in all of our objects. In brief, we first calculate the electron temperature \(t_{\rm e}\) ([O iii]) in the high-excitation zone from the [O iii] lines. The low-excitation zone electron temperature \(t_{\rm e}\) ([O ii]) then is inferred from \(t_{\rm e}\) ([O iii]) and the electron density \(n_{\rm e}\), using the calibration of Hagele et al. (2006). The latter is calculated from the [S ii] 6717,6731 lines whenever possible and otherwise assumed to have a value of 100 cm\({}^{-3}\). This value is comparable to those found in the LCGs of Izotov et al. (2011), who report median electron densities between 90 cm\({}^{-3}\) and 180 cm\({}^{-3}\). These values, along with [O iii] 4959,5007 and [O ii] 3727=[O ii] 3726,3729 fluxes then are used to determine the fractions of singly and doubly ionised oxygen relative to ionised hydrogen.
As the galaxies studied here are highly ionised, we also consider an ionisation correction factor (ICF) compensating for the neglection of O\({}^{3+}\) in this calculation. Again following Perez-Montero (2017), we first calculate the abundance of doubly ionised helium \(y^{2+}\) from He ii 4686 and a weighted mean of the singly ionised helium abundance \(y^{+}\) from He ii 4471,5876,6678,7065, under the assumption that the intrinsic attenuation is small and the optical depth function \(f_{\lambda}(n,t,\tau)\)(Olive & Skillman, 2004) can be approximated as \(f_{\lambda}\approx 1\)(Perez-Montero, 2017). We note that for a handful of objects, the \(y^{+}\) values derived from He i 6678 would deviate from the values obtained from the other He i lines by up to 12 orders of magnitude. In those cases, we excluded these unphysical values from the calculations and limit our evaluation to He i 4471,5876,7065. Using the calibrations from Perez-Montero et al. (2020, their Appendix A), \(y^{+}\) and \(y^{2+}\) then are converted into \({\rm ICF}\) \(\left({\rm O}^{+}+{\rm O}^{2+}\right)\), and the total oxygen abundance is calculated as \({\rm O}/{\rm H}={\rm ICF}\left({\rm O}^{+}+{\rm O}^{2+}\right)\times\left({ \rm O}^{+}/{\rm H}^{+}+{\rm O}^{2+}/{\rm H}^{+}\right)\).
For our preliminary oxygen abundance evaluation (see above, Sect. 3.1), this was not possible for nine objects due to the absence of reliable measurements of various required emission line fluxes in the catalogue of Thomas et al. (2013). To determine the oxygen abundance, we then used the empirical relation between oxygen abundance and \(t_{\rm e}\) ([O iii]) as calibrated in Perez-Montero et al. (2021). In case both methods were not applicable, we adopted the strong-line method by Pagel et al. (1979), also known as the R23 method. As this diagnostic is double-valued
and does not allow a conclusive oxygen abundance determination in the range of 8.0 \(\leq\) log \(\rm(O/H)\) + 12 \(\lesssim\) 8.3 (Perez-Montero, 2017), we first computed a rough estimate of the metallicity from [N ii] 6583/H\(\alpha\)(Perez-Montero & Contini, 2009), and consequently applied the upper or lower branch of the R23 method.
From the values obtained during the ICF calculations outlined above, we also determine the helium abundance, assuming that He/H = \(\rm\left(He^{+}+He^{2+}\right)/H^{+}=y^{+}+y^{2+}\). The nitrogen abundance relative to oxygen, again following Perez-Montero (2017)4, was calculated from [N ii] 6583, [O ii] 3727, and \(t_{\rm e}\) ([O ii]), assuming that N\({}^{+}\)/N = O\({}^{+}\)/O.
Footnote 4: We note that the prescription in Pérez-Montero (2017, their Eq. 51) includes a typo in the second-to-last summand, where 0.687\({}_{\rm e}\) ([O ii])\({}^{-1}\) should be a subtrahend rather than a factor.
### LyC leakage diagnostics
To this date, direct measurements of the LyC flux shortwards of 912 A are the only fully reliable diagnostic for LyC leakage and derivations of \(f_{\rm esc}\) (LyC). However, significant efforts to relate other observable quantities to LyC escape have been made in the last few years. Particularly promising is the study of Ly\(\alpha\), whose line profile is related to the ISM structure (e.g. Gronke et al., 2016), and the peak separation of the double-peaked Ly\(\alpha\) line emerging from a clumpy ISM may relate to \(f_{\rm esc}\) (LyC) (Verhamme et al., 2015). Izotov et al. (2020) provide a calibration relating the Ly\(\alpha\) escape fraction \(f_{\rm esc}\) (Ly\(\alpha\)) to \(f_{\rm esc}\) (LyC), but caution this is a preliminary relation inferred from low sample statistics.
In case of LyC escape being realised through ionised low density tunnels in the ISM, several FUV metal absorption lines may display residual flux at their line centre (Heckman et al., 2001). As this residual flux then directly relates to the covering fraction of neutral gas, they can be used to infer a prediction for \(f_{\rm esc}\) (LyC) (Chisholm et al., 2018).
These diagnostics, however, require information from the ultraviolet portion of the spectrum, a wavelength range that for objects in the low-redshift regime is accessible only to the _Cosmic Origins Spectrograph_ (COS) aboard the Hubble Space Telescope (HST). No such data is available for our objects, and even the Mg ii 2796,2803 lines that provide strong evidence for the possibility of LyC escape as they trace an optically thin medium (Chisholm et al., 2020) were not or not reliably measured here.
In the optical regime, there are a few emission line diagnostics that relate to conditions favourable for LyC escape. A promising approach to look for LyC leakers is to select galaxies with large O\({}_{32}\) = [O iii] 5007/[O ii] 3727 flux ratios, which indicates optically thin gas with a high degree of ionisation (Jaskot & Oey, 2013), and using it as a selection criterion has lead to several successful detections of genuine LyC leakers (e.g. Izotov et al., 2016, 2018, 2018). Chisholm et al. (2018) even provide a tentative calibration relating this line ratio to \(f_{\rm esc}\) (LyC), but as Jaskot et al. (2019) note, a high O\({}_{32}\) flux ratio alone is an insufficient LyC leakage diagnostic, as the line ratio can be modulated by the structure of the ISM. Nakajima et al. (2020) confirm that such an elevated line ratio is indeed a necessary, yet not sufficient condition for a galaxy to be a LyC Leaker. Still, the results presented in Flury et al. (2022) show that along with the (related) parameter \(\Sigma_{\rm SFR,H\beta}\), O\({}_{32}\) to date is the best optical diagnostic relating to \(f_{\rm esc}\) (LyC). A similar pre-selection of LyC leaker candidates may be possible with the [Ne iii] 3869/[O ii] 3727 ionisation parameter diagnostic (Levesque & Richardson, 2014), which has the additional benefit of being less affected by internal reddening.
Another characteristic many LyC leakers seem to share is a deficiency in [S ii] 6717,6731/H\(\alpha\). Sulphur with its rather low ionisation potential of 10.36 eV causes these lines to predominantly originate in the partially ionised zone beyond the edge of the fully ionised gas, which is less pronounced in density-bounded nebulae and thus correlates with LyC leakage (Alexandroff et al., 2015). Wang et al. (2019) have demonstrated this as a powerful selection tool for LyC leaker candidates and expanded upon this diagnostic in Wang et al. (2021) with the sample of the recent Low-Redshift LyC Survey (Flury et al., 2022), where they provide a calibration in the [O iii] 5007/H\(\beta\) vs. [S ii] 6717,6731/H\(\alpha\) parameter space for typical emission line galaxies. A deviation in [S ii] 6717,6731/H\(\alpha\) from this ridge line (denoted as \(\rm\Delta[S\,{\textsc{ii}}]\)) statistically correlates with a larger \(f_{\rm esc}\) (LyC), even though it does not allow for an accurate determination of this parameter.
In conclusion, there is no definite, independent method to accurately predict a galaxy's \(f_{\rm esc}\) (LyC) as of now, but the application of a multitude of diagnostics correlated with ionising photon loss as outlined above provide strong selection criteria for genuine LyC leakers. Here, we make use of the O\({}_{32}\) based calibration by Chisholm et al. (2018) to obtain an estimate of \(f_{\rm esc}\) (LyC), that is
\[f_{\rm esc}\left(\rm LyC\right)=\left(0.0017\pm 0.0004\right)\rm O_{32}^{2}+ \left(0.005\pm 0.007\right). \tag{2}\]
An alternative calibration can be found in Izotov et al. (2018), yielding significantly larger values of \(f_{\rm esc}\) (LyC) in galaxies with large O\({}_{32}\).
## 4 Sample properties
### Global sample characteristics
Of a total of 1 381 398 objects present in the dataset, 1 504 (\(\sim\) 0.01%) satisfied the AoN limitation imposed upon the He ii 4686, H\(\beta\), H\(\alpha\), and [N ii] 6583 diagnostic lines. Within this subset, an initial sample of 22 galaxies satisfied our selection criterion given by Eq. 1. After our renewed emission-line fitting, we found that for two of those objects, the He ii 4686 / H\(\beta\) ratios had been understimated in the catalogue of Thomas et al. (2013), and the diagnostic by Shirazi & Brinchmann (2012) reveals these to comprise a significant AGN contribution (red diamonds in Fig. 2). Furthermore, when considering the diagnostic diagram first introduced by Baldwin et al. (1981, hereafter BPT) (Fig. 3), commonly used to separate SFGs from AGN, we find the majority of the remaining objects to occupy the same region in the high-ionisation tail of the diagram. Two galaxies are found to be clearly offset from this population (yellow circles in Fig. 3) and their line ratios are more in line with those found in typical SFGs, which justifies their exclusion from our final sample. For completeness, and also as each of these outliers exhibit interesting characteristics in their own right, we discuss them in Appendix A. Our final sample now consists of 18 objects, which we dub the ionised, metal poor galaxies (IMPs). Within the diagnostic diagram used for the selection of our sample (Fig. 2), these reside in a well-defined locus significantly offset from the bulk of He ii 4686 emitters, suggesting a similar ionisation mechanism other than an AGN is prevalent in these galaxies.
To the best of our knowledge, none of these objects have previously been studied individually in the literature. We list their coordinates and redshifts along with several general properties in Table 1. For reasons of legibility, we reduce the galaxy name to the first four+four digits of the SDDS identifier.
We find the IMPs ubiquitously reside at low redshifts (\(0.01\la z\la 0.09\)), which can be attributed to a selection effect as we require a reliably measured He ii 4686 line. All of them are readily classified as DGs, with stellar masses ranging from \(6.2\la 1\) log (\(M_{\star}/\mathrm{M}_{\odot}\)) \(\la 8.8\). They are characterised by moderate to large sSFRs of \(\sim 0.74-137\) Gyr\({}^{-1}\) with a median of \(\sim 7.09\) Gyr\({}^{-1}\), which serves as a first indicator that the IMPs blend well into the population of LCGs and CSFGs (Izotov et al., 2011, 2021a), where similar values are reported. This likeness is further supported by the large values we find for EW(H\(\beta\)) and
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline ID & RA & DEC & \(z^{\mathrm{d}}\) & \(r_{\mathrm{peto}}^{\mathrm{mg}}\) & \(r_{\mathrm{peto}}^{\mathrm{HB}}\) & \(M_{\star}\)\({}^{\mathrm{d}}\) & SFR(H\(\alpha\))\({}^{\mathrm{e}}\) & sSFR/ & SFR\({}_{\mathrm{B}}\)\({}^{\mathrm{g}}\) & sSFR\({}_{\mathrm{B}}\)\({}^{\mathrm{g}}\) \\ \hline J0006+0255 & 00:06:06.53 & 02:55:04.13 & 0.09382828 & 1.59 & 2.85 & 8.48 & 1.118 & -8.43 & 1.158 & -8.42 \\ J0028+3035 & 00:28:33.55 & 30:35:54.14 & 0.07451165 & 2.65 & 3.88 & 8.59 & 0.707 & -8.74 & 0.660 & -8.77 \\ J0131+0210 & 01:31:38.15 & 02:10:14.58 & 0.05806014 & 2.12 & 2.44 & 6.66 & 0.564 & -6.91 & 0.577 & -6.90 \\ J0138+1114 & 01:38:27.65 & 11:14:34.45 & 0.06219218 & 1.43 & 1.77 & 6.95 & 1.228 & -6.86 & 1.271 & -6.85 \\ J0150+1643 & 01:50:46.25 & 16:43:26.43 & 0.05909079 & 1.65 & 1.94 & 7.82 & 1.316 & -7.70 & 1.281 & -7.71 \\ J0744+1858 & 07:44:52.39 & 18:58:30.01 & 0.05227381 & 3.35 & 3.53 & 8.25 & 1.272 & -8.14 & 1.265 & -8.14 \\ J0753+2820 & 07:53:25.26 & 28:20:12.74 & 0.06752457 & 3.94 & 5.24 & 8.35 & 0.867 & -8.41 & 0.821 & -8.43 \\ J0809+4918 & 08:09:42.74 & 49:18:21.46 & 0.07809167 & 1.72 & 2.63 & 8.40 & 1.470 & -8.23 & 1.447 & -8.24 \\ J1037+2325 & 10:37:28.61 & 23:25:29.81 & 0.01058074 & 5.27 & 1.21 & 7.10 & 0.009 & -9.13 & 0.009 & -9.13 \\ J1109+3429 & 11:09:20.89 & 34:29:02.90 & 0.066791757 & 2.48 & 3.31 & 8.83 & 0.662 & -9.01 & 0.649 & -9.02 \\ J1141+6059 & 11:41:46.74 & 60:59:43.13 & 0.01096911 & 6.93 & 1.46 & 6.88 & 0.027 & -8.45 & 0.026 & -8.47 \\ J1311+3750 & 13:11:35.82 & 37:50:58.73 & 0.05381261 & 3.95 & 4.24 & 7.70 & 1.657 & -7.48 & 1.643 & -7.48 \\ J1313+6044 & 13:13:31.26 & 60:44:54.52 & 0.07113630 & 2.52 & 3.52 & 8.11 & 3.104 & -7.62 & 2.871 & -7.65 \\ J1338+4213 & 13:38:42.16 & 42:13:38.20 & 0.00863322 & 2.83 & 0.52 & 6.20 & 0.011 & -8.16 & 0.011 & -8.16 \\ J1411+0550 & 14:11:13.40 & 05:50:50.57 & 0.04940260 & 1.61 & 1.58 & 7.99 & 1.715 & -7.75 & 1.607 & -7.78 \\ J1528+2318 & 15:28:26.53 & 23:18:43.10 & 0.05948849 & 2.97 & 3.56 & 8.54 & 0.705 & -8.69 & 0.665 & -8.71 \\ J1556+1818 & 15:56:47.51 & 18:18:25.57 & 0.09052273 & 2.28 & 3.94 & 7.82 & 1.964 & -7.53 & 2.041 & -7.51 \\ J1608+0413 & 16:08:01.16 & 04:13:23.85 & 0.06322286 & 1.50 & 1.89 & 7.42 & 0.690 & -7.58 & 0.640 & -7.61 \\ \hline \end{tabular} 1
\end{table}
Table 1: Global properties of the galaxies studied in this work.
Figure 3: BPT diagram including the photoionisation limit from Kewley et al. (2001) (red dash-dotted line) and the AGN separation line from Kauffmann et al. (2003) (green dashed line). Symbols are the same as in Fig. 2.
Figure 2: Shirzai & Brinchmann (2012) diagram including their AGN diagnostic (green dashed line). Blue squares represent the IMPs, whereas the position of their stack is shown as a black downward-facing open triangle. Red diamonds and yellow circles mark the AGN contaminated galaxies and star-forming objects classified as outliers (see text). The grey scale density diagram represents all eligible galaxies from the SDSS DR12 with AoN>2 in the utilised emission lines, binned in \(0.02\times 0.02\) dex bins for the line ratios shown. Lower contrast indicates lower bin occupancy.
EW([O iii] 5007) (Fig. 4), the former of which already points towards the IMPs harbouring a young starburst.
### Morphology
The SDSS image cutouts of the final sample are shown in Fig. 5. While most of the IMPs are only marginally resolved, they exhibit diverse morphological features. A few of them appear almost circular in projection (arguably most pronounced in J1411+0550), whereas others display signs of anisotropic extended emission. A variety of possible causes may serve to explain this, such as the galaxies' intrinsic irregular morphology, large-scale outflows, or a recent merger, which is difficult to deduce from the SDSS images alone. Occasionally, we see more than one bright, star-forming knot (e.g. in J0006+0255, J1141+6059) which lends further credibility to a merging scenario, albeit it is equally plausible these are isolated regions of increased star-forming activity within the same galactic system.
One feature all IMPs certainly appear to share is their compactness. The SDSS pipeline lists petrosian radii (\(r\)-band) between \(1.43\arcsec\) and \(6.93\arcsec\) for the IMPs, which translate to sizes of \(r_{\rm petro}\sim 0.52-5.2\) kpc at their given redshifts (calculated using Ned Wright's Cosmology Calculator (Wright 2006) with cosmological parameters from Planck Collaboration et al. 2020), with a median value of \(r_{\rm petro}\sim 2.74\) kpc. Note that most of these objects have been flagged _MANYETRO_ by the SDSS pipeline, meaning several possible values for \(r_{\rm petro}\) were found and those listed are the maximum values. Consequently, these are best interpreted as upper limits to the true (petrosian) sizes. We further point out that for several (5/18) of the IMPs, the radii (see Table 1) exceed the \(3\arcsec\) fibre diameter of the BOSS spectrograph (Smee et al. 2013), hence the properties derived for these objects exclude the contribution from the galactic outskirts (to varying degrees).
Their compactness further reinforces the notion that the IMPs can be considered a subclass of the LCGs, discovered by Izotov et al. (2011). For most of the objects, the same conclusion can immediately be drawn regarding the CSFGs (Izotov et al. 2021a), which (amongst other criteria) were selected as galaxies with \(r_{\rm petro}\leq 3\arcsec\). The conversion to linear scales of the IMPs' radii (see Table 1) reveals these likewise are suitable objects to fulfil the compactness criterion imposed on the CSFGs.
### Integrated nebular characteristics
The properties we derived from the observed emission lines (listed in Table 2) of our sample are presented in Table 2. Particularly noteworthy is the fact that the choice of stellar population models allows fado to successfully reproduce the He ii 4686 line for all objects studied here, without any need of invoking 'exotic' radiative sources as discussed in the Sect. 1.
The IMPs are ubiquitously characterised by large electron temperatures in the high-ionisation regime, with values typically found in the range of \(t_{\rm e}\) ([O iii]) \(\sim 12\,750-17\,670\) K. As we selected the IMPs as highly ionised galaxies evident from He ii 4686 emission, this is reflected in the relatively large values of the ionisation parameter-sensitive line ratio O\({}_{32}\), spanning a range of O\({}_{32}=2.3-24\). Considering their respective error ranges, 16 of the IMPs are consistent with a ratio of O\({}_{32}>5\), a value large enough to indicate a potential escape of LyC (Izotov et al. 2016a,b, 2018a). A galaxy's ionisation parameter can primarily be elevated by two means, then being a large intrinsic flux of ionising photons or low densities in the ISM. While the former criterion is a given here, the large scatter in O\({}_{32}\) is suggestive of an additional mechanism influencing these values. An obvious candidate for this are density-bounded conditions, where [O ii] emission is diminished. We explore this scenario further in Sect. 4.7.
The electron densities derived from [S ii] are found to be consistent with the low values typically found in SFGs and lie between \(n_{\rm e}=20-470\) cm\({}^{-3}\) when accounting for the full error range, subsequently justifying our assumption of \(n_{\rm e}=100\) cm\({}^{-3}\) for the galaxies where no physically meaningful value for \(n_{\rm e}\) could be derived. In terms of oxygen abundance the IMPs are markedly sub-solar, with abundances ranging between 7% and 28% \(Z_{\odot}\) at a median of 14.4% \(Z_{\odot}\), assuming \(\log{\rm(O/H)_{\odot}}+12=8.69\) (Asplund et al. 2009). This is comparable to, yet slightly lower than the LCGs and CSFGs from Izotov et al. (2011, 2021a), who find median values in \(\log{\rm(O/H)}+12\) of 8.11 (26% \(Z_{\odot}\)) and 8.0 (20% \(Z_{\odot}\)), respectively.
The chemical enrichment history as traced by N/O exhibits no apparent peculiarities with respect to values typically found in SFGs. The relative nitrogen abundance is strongly intertwined with a galaxy's SFH, as SNe of young, massive stars expel large amounts of oxygen into the ISM, thereby decreasing N/O. Evolved low- and intermediate-mass stars, on the other hand, can substantially increase N/O via hot bottom burning during their AGB phase (Vincenzo et al. 2016). Here, as evident from Fig. 6, the values we derived are broadly consistent with \(\log{\rm(N/O)}\sim-1.6\), where \(\log{\rm(N/O)}\) plateaus for low-metallicity SFGs (Vincenzo et al. 2016). In particular, the absence of a deviation towards elevated N/O values with respect to other galaxies with similar O/H suggests no dominant older stellar component is present in the IMPs. However, we note that with the large errors associated with N/O alongside the fact that other mechanisms such as gas accretion and galactic winds can alter this ratio, this conclusion remains fairly uncertain at this point.
The fact that the IMPs are otherwise quite extraordinary objects with respect to typical SFGs is reflected in their position in the O\({}_{32}\) vs. R23 diagram (Fig. 7), a diagnostic diagram trac
Figure 4: Distribution of EW(H\(\beta\)) (top) and EW([O iii] 5007) (bottom) for the IMPs, binned in 50 Å and 500 Å bins, respectively.
ing the ionisation parameter and metallicity. Not only are they clearly offset from the bulk of the DR12 SFGs, but they also share roughly the same parameter space as many known LyC leakers, which we explore further in Sect. 4.7. Lastly, we remark that the visual inspection of the spectra suggests that at least two of the IMPs (J1311+3750 and J1608+0413) are contenders for studying the rarely detected [Fe v] 4227 emission line (e.g. Thuan & Izotov, 2005; Izotov et al., 2017), but as the SDSS spectra are increasingly noisy towards the blue end, we cannot claim a reliable detection with the data at hand.
### Balmer decrements
A seemingly peculiar behaviour of the IMPs is found in their Balmer decrements (the line ratios of hydrogen Balmer lines relative to H\(\beta\)). Typically, one would expect these to be in good agreement to the theoretical values after accounting for internal reddening, as it is common procedure to scale the extinction law of choice by the H\(\beta\) extinction coefficient, which itself is a mean value derived from comparison of theoretically expected to observed Balmer decrements.
As evident from Fig. 8, several of the Balmer decrements in the IMPs disagree with the theoretical predictions. For reference, we show the Case A and B values derived by Storey & Hummer (1995) for \(n_{\rm e}=100\) cm\({}^{-3}\) and \(t_{\rm e}=15\,000\) K, which is broadly consistent with what we found for the IMPs. We note that for the low densities present here, the assumption of tenfold higher values for \(n_{\rm e}\) (1 000 cm\({}^{-3}\)) has virtually no effect on the line ratios. Likewise, varying the temperature by \(\pm 5\,000\) K only moderately affects the line ratios and does not alleviate the tension presented here.
A marked deviation to higher relative fluxes is found in H\(\alpha\)/H\(\beta\), but let us consider the Balmer lines of higher order first. A significant deviation is observed in H\(\epsilon\)/H\(\beta\), where the values measured are a factor of \(\sim 2\) larger than expected. This finding has a rather straightforward explanation, as H\(\epsilon\) at 3970 A is located at roughly the same wavelength as [Ne iii] 3967 and Ca ii H at 3969 A. The line fitting catalogue contained in fado does not encompass Ca ii emission, and its fitting routine would not fit the [Ne iii] 3967 line in any of the objects studied here, which
Figure 5: Mosaic of the SDSS _gri_ cutouts of our final sample obtained from the SDSS SkyServer at [http://skyserver.sdss.org/](http://skyserver.sdss.org/). Each cutout spans a region of \(12.5\arcsec\times 12.5\arcsec\), where north is up and east is to the left.
Figure 6: Nitrogen vs. oxygen abundance, with symbols representing the same objects as in Fig. 2. The SDSS DR12 galaxies (black scatter diagram) are limited to SFGs with appropriate A\(\alpha\)N in the line fluxes relevant for the determination of N/O and O/H with the direct method. The green dashed line is the generic scaling relation from Nicholls et al. (2017), i.e. \(\log\left({\rm N/O}\right)=\log\left(10^{-1.732}+10^{\log 0.01/h+2.19}\right)\).
we ascribe to a resolution effect. The values we report here for He are thus likely a blend of H\(\epsilon\), [Ne iii] 3967, and interstellar Ca ii H emission, whereas the latter's contribution is expected to be small. An additional source of uncertainty that becomes evident here is the neglection of interstellar Ca ii H absorption, whose omission may negatively impact our stellar population fit. While the spectra at hand do not have the resolution sufficient to reliably disentangle these lines, by comparison to the fiducial Case B values we predict relative emission line fluxes of \(100\times\) ([Ne iii] 3967 + Ca ii H\({}_{\rm em+abs}\)) / H\(\beta=9-20\) with a median of 16 in the IMPs.
H\(\delta\)/H\(\beta\) and H\(\gamma\)/H\(\beta\) appear to be skewed to slightly lower line ratios, though the median deviation from the theoretical value is reasonably small at 4.1 % and 1.8 %, respectively. Considering the non-quantified error sources present here, such as the choice of extinction law and its application to an integrated galactic
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & J0006+0255 & J0028+3035 & J0131+0210 & J0138+1114 & J0150+1643 & J0744+1858 \\ \hline \(t_{\rm e}\) ([O iii]) / K & 16530 \(\pm\) 610 & 17670 \(\pm\) 550 & 14550 \(\pm\) 810 & 16230 \(\pm\) 610 & 16770 \(\pm\) 650 & 15320 \(\pm\) 460 \\ \(t_{\rm e}\) ([O ii]) / K & 15780 \(\pm\) 10000 & 10960 \(\pm\) 4100 & 15040 \(\pm\) 14000 & 11700 \(\pm\) 5000 & 11570 \(\pm\) 2700 & 11510 \(\pm\) 2500 \\ \(n_{\rm e}\) ([S ii]) /cm\({}^{-3}\) & 50 \(\pm\) 310 & 380 \(\pm\) 570 & 30 \(\pm\) 400 & 260 \(\pm\) 480 & 290 \(\pm\) 280 & 250 \(\pm\) 250 \\ (O\({}^{+}\)/H\({}^{+}\)) \(\times\) 10\({}^{5}\) & 0.6 \(\pm\) 1.1 & 1.6 \(\pm\) 2.4 & 0.9 \(\pm\) 2.6 & 1 \(\pm\) 1.7 & 0.9 \(\pm\) 0.9 & 0.98 \(\pm\) 0.82 \\ \(\left({\rm O}^{2+}/{\rm H}^{+}\right)\times 10^{5}\) & 4.42 \(\pm\) 0.37 & 5.07 \(\pm\) 0.37 & 7.7 \(\pm\) 1.1 & 6.36 \(\pm\) 0.56 & 5.05 \(\pm\) 0.44 & 8.59 \(\pm\) 0.63 \\ \(y^{+}\times 10^{2}\) & 7.56 \(\pm\) 0.16 & 5.34 \(\pm\) 0.02 & 7.59 \(\pm\) 0.18 & 6.217 \(\pm\) 0.009 & 7.731 \(\pm\) 0.013 & 6.932 \(\pm\) 0.002 \\ \(y^{+}\times 10^{2}\) & 0.1 \(\pm\) 1.2 & 0.1 \(\pm\) 1.3 & 0.1 \(\pm\) 1.6 & 0.1 \(\pm\) 1.5 & 0.1 \(\pm\) 1.3 & 0.05 \(\pm\) 0.92 \\ \({\rm ICF}\left({\rm O}^{+}+{\rm O}^{2+}\right)\) & 1 \(\pm\) 0.14 & 1.01 \(\pm\) 0.21 & 1 \(\pm\) 0.17 & 1 \(\pm\) 0.2 & 1 \(\pm\) 0.14 & 0.99 \(\pm\) 0.11 \\ \(\log\) (O/H) + 12 & 7.7 \(\pm\) 0.12 & 7.83 \(\pm\) 0.18 & 7.93 \(\pm\) 0.16 & 7.87 \(\pm\) 0.14 & 7.77 \(\pm\) 0.09 & 7.98 \(\pm\) 0.07 \\ \(\log\) (He/H) & \(-\)1.12 \(\pm\) 0.07 & \(-\)1.26 \(\pm\) 0.11 & \(-\)1.12 \(\pm\) 0.09 & \(-\)1.2 \(\pm\) 0.1 & \(-\)1.11 \(\pm\) 0.07 & \(-\)1.16 \(\pm\) 0.06 \\ \(\log\) (N/O) & \(-\)1.59 \(\pm\) 0.28 & \(-\)1.72 \(\pm\) 0.23 & \(-\)1.46 \(\pm\) 0.67 & \(-\)1.72 \(\pm\) 0.51 & \(-\)1.66 \(\pm\) 0.26 & \(-\)1.55 \(\pm\) 0.15 \\ O\({}_{32}\) & 5.91 \(\pm\) 0.77 & 10.69 \(\pm\) 0.85 & 5.8 \(\pm\) 7.3 & 13.7 \(\pm\) 8.2 & 13.2 \(\pm\) 5.7 & 17.2 \(\pm\) 1.2 \\ \(\Delta\)[S ii] & \(-\)0.34 \(\pm\) 0.04 & \(-\)0.33 \(\pm\) 0.07 & \(-\)0.13 \(\pm\) 0.05 & \(-\)0.56 \(\pm\) 0.06 & \(-\)0.54 \(\pm\) 0.02 & \(-\)0.17 \(\pm\) 0.03 \\ \(f_{\rm esc}\) (LyC)\({}^{b}\) & (0.064) & (0.199) & (0.062) & (0.323) & (0.303) & (0.509) \\ & J0753+2820 & J0809+4918 & J1037+2325 & J1109+3429 & J1141+6059 & J1311+3750 \\ \hline \(t_{\rm e}\) ([O iii]) / K & 14970 \(\pm\) 590 & 15760 \(\pm\) 820 & 16300 \(\pm\) 1100 & 14100 \(\pm\) 490 & 12750 \(\pm\) 360 & 16650 \(\pm\) 360 \\ \(t_{\rm e}\) ([O ii]) / K & 13910 \(\pm\) 6800 & 14080 \(\pm\) 7200 & 16010 \(\pm\) 15000 & 12940 \(\pm\) 8900 & 12120 \(\pm\) 7400 & 11660 \(\pm\) 3200 \\ \(n_{\rm e}\) ([S ii]) / cm\({}^{-3}\) & 80 \(\pm\) 300 & 90 \(\pm\) 310 & 30 \(\pm\) 430 & 100 \(\pm\) 500\({}^{a}\) & 100 \(\pm\) 500\({}^{a}\) & 270 \(\pm\) 320 \\ (O\({}^{+}\)/H\({}^{+}\)) \(\times\) 10\({}^{5}\) & 0 \(\pm\) 2 & 1 \(\pm\) 1.6 & 0.7 \(\pm\) 1.9 & 0 \(\pm\) 4 & 2.3 \(\pm\) 5.1 & 0.64 \(\pm\) 0.79 \\ \(\left({\rm O}^{2+}/{\rm H}^{+}\right)\times 10^{5}\) & 7.03 \(\pm\) 0.69 & 5.52 \(\pm\) 0.67 & 4.4 \(\pm\) 0.7 & 6.87 \(\pm\) 0.63 & 11.16 \(\pm\) 0.88 & 6.77 \(\pm\) 0.36 \\ \(y^{+}\times 10^{2}\) & 7.737 \(\pm\) 0.005 & 8.206 \(\pm\) 0.023 & 6.88 \(\pm\) 0.34 & 6.950 \(\pm\) 0.009 & 7.14 \(\pm\) 0.08 & 6.775 \(\pm\) 0.003 \\ \(y^{+}\times 10^{2}\) & 0.1 \(\pm\) 1.5 & 0.1 \(\pm\) 1.5 & 0.1 \(\pm\) 1.8 & 0.1 \(\pm\) 1.5 & 0.1 \(\pm\) 1.3 & 0.1 \(\pm\) 1.1 \\ \({\rm ICF}\left({\rm O}^{+}+{\rm O}^{2+}\right)\) & 1 \(\pm\) 0.16 & 1 \(\pm\) 0.15 & 1 \(\pm\) 0.22 & 1 \(\pm\) 0.18 & 1 \(\pm\) 0.15 & 1 \(\pm\) 0.13 \\ \(\log\) (O/H) + 12 & 7.92 \(\pm
spectrum, implicitly assuming the same extinction properties independent of the line of sight, some deviations are not unexpected. With the strongest outliers deviating in the line ratios by a mere 6.5% and 4.2%, we consider these values to be in decent agreement with the expectations.
Revisiting H\(\alpha\)/H\(\beta\), we find these line ratios to be up to 8.1% larger than the reported Case B values. While the mean deviation is moderately low at \(\sim 3.4\)%, about a third of our sample shows line ratios of H\(\alpha\)/H\(\beta\gtrsim 2.9\), reminiscent of what is typically found in AGN. We note that as we find indications for an optically thin ISM throughout this paper, Case A conditions might even be the better approximation here, but this only worsens the disagreement to a median deviation of 4.5% at a maximum of 10.9%. These values are highly suggestive of an additional low-energy phenomenon that is able to elevate the electron bound by hydrogen to \(n=3\), resulting in a net increase in H\(\alpha\) flux without altering the higher-order line ratios. We speculate that collisional excitation in a large-scale galactic wind or conversely, during the accretion of large volumes of extragalactic gas, may account for this observation. Ultimately, deeper high-resolution spectra are needed to appropriately characterise the ISM dynamics that would be supportive of this argument. Alternatively, it could turn out to be an effect of technical rather than astrophysical nature, and may be the result of the best fit from population spectral synthesis being a mix of stellar populations that underpredict the total amount of stellar H\(\alpha\) absorption.
In Table 1 we list the recalculated (s)SFR(H\(\alpha\)) values denoted as (s)SFR\({}_{\rm B}\) based on the assumption that H\(\alpha\) shows such an flux excess and the Case B values better represent the star-formation induced H\(\alpha\) flux. While this slightly diminishes the values we find (to a median sSFR of \(\sim 7.08\) Gyr\({}^{-1}\)), it does not qualitatively alter our conclusion. The same is true for the diagnostics incorporating H\(\alpha\) (Figs. 2 and 3).
### Star-formation histories
The population synthesis by fado reveals the IMPs to have undergone similar star formation episodes in the past. We show the mass- and luminosity-weighted SFHs of the IMPs in Figs. 9 and 10. As evident from their SFHs, the IMPs typically contain a population of old stars at \(\sim 1-10\) Gyr that dominate in mass. This is different for J0131\(+\)0210, J0138\(+\)1114, and J1556\(+\)1818, whose SEDs are consistent with that of a galaxy no older than \(\sim 5-120\) Myr.
The SFHs subsequent to the formation of the old population, where present, vary from object to object. In about a third of the sample, the majority of the recent Gyr is a period of relative quiescence, whereas the other galaxies have undergone one or more bursts of star-formation during that time.
The most recent SFH, however, appears to be virtually the same for most of the IMPs: roughly \(5-15\) Myr ago, star formation in the IMPs had a major resurgence, where a young, typically luminous stellar population was formed. This then is followed by one or several bursts several Myr later, in which the currently youngest stellar generation with ages of \(0.1-1.3\) Myr was formed. In two of the IMPs (J1037\(+\)2325, J1109\(+\)3429), the youngest stellar population is less dominant with respect to those of the remaining galaxies, providing less than \(\sim 0.1\)% of the
Figure 8: Balmer decrements derived for the IMPs after accounting for foreground and internal extinction. The horizontal lines mark the expected theoretical values calculated by Storey & Hummer (1995) for \(n_{\rm e}=100\) cm\({}^{-3}\) and \(t_{\rm e}=15\,000\) K, assuming Case A (dotted line) and Case B (dashed line) conditions. The \({}^{*}\) symbol signifies a likely blend of multiple lines (see text).
Figure 7: Ionisation parameter and metallicity sensitive log (O\({}_{\rm 22}\)) = log ([O iii] 5007/[O iii] 3727) vs. log (R23) = log ([O ii] 3727 + [O iii] 4959 + [O iii] 5007) /H\(\beta\) diagram. Symbols are the same as in Fig. 2. Additional markers were added for several low-\(z\) LyC leakers, i.e. the GPs from Izotov et al. (2016a,b, 2018a,b) (green upward triangles) and low-mass galaxies from Izotov et al. (2021b) (cyan left-facing triangles). Also shown are the intermediate-\(z\) LyC leakers _Ion2_ from de Barros et al. (2016) (orange pentagram), four galaxies from Fletcher et al. (2019) (dark red hexagons, where we converted their listed [O iii] 4959,5007 values to [O iii] 5007 assuming [O iii] 5007/[O iii] 4959=2.98, Storey & Zeippen 2000), and AUDFS01 from Saha et al. (2020) (purple right-facing triangle). The SDSS DR12 galaxies (black scatter diagram) are limited to those identified as SFGs with the BPT diagnostics from Kewley et al. (2001) and Kauffmann et al. (2003).
Figure 9: SFHs of the first half of the IMPs. The top panel in each plot pair shows the SFH weighted by the stellar population’s luminosity at 4020 Å, whereas the bottom panel shows the mass-weighted SFH. Here, the black arrow marks the time where half of the galaxy’s mass was assembled, as indicated by the \(t_{1/2}\) inset. The bar colours correspond to the stellar metallicity as indicated by the numbers above each plot pair, in units of \(Z_{\odot}\).
Astronomy, Astrophysics, and Space Science, 100, 101-102
Figure 10: Same as Fig. 9 for the second half of the IMPs.
galaxies' total mass. The overall pattern, however, is the same as in the other objects.
The situation we find in the majority of the IMPs is remarkably similar to that in other metal-poor DGs, where this two-stage starburst scenario appears to be linked to superbubbles and galactic outflows, for instance in NGC 1569 (Heckman et al. 1995; Vallenari & Bomans 1996; Origlia et al. 2001) or NGC 5253 (Monreal-Ibero et al. 2010; Calzetti et al. 2015). Several LCGs, such as the GPs studied by Amorin et al. (2012), have been shown to have a similar SFH. In particular, we highlight the likeness to some well-studied (candidate) LCEs. A prominent example is ESO 338-IG04, whose SFH has been reconstructed from the study of 124 star clusters by Ostlin et al. (2003). While they found a significant number of Gyr old clusters to be present, a population of massive star clusters was formed 8 - 11 Myr ago, followed by the formation of numerous young stars within the last 2 Myr. A similar situation is found in the GP analogue NGC 2366, where a 3 Myr star cluster has carved a superbubble into the ISM which is now ionised by the young \(<\)1 Myr stellar population in the H ii region Mrk 71 (Micheva et al. 2017). Tololo 1247-232 likewise was found to contain a young, two-stage starburst at 12 Myr and \(<\)4 Myr by Micheva et al. (2018), who suggest this feature might be common in LCEs. Zastrow et al. (2013) similarly conclude that a multi-stage, young starburst may be required for the transmission of LyC radiation into the IGM.
### Ionising sources
While strong He ii 4686 emission is a feature typically seen in galaxies with an AGN producing a hard radiation field, we are fairly confident in excluding this possibility for the IMPs as per our selection criterion. This conclusion is further supported by the absence of spectral features characteristic of an AGN, such as broadened emission lines. Hence, it is most worthwhile to investigate the origin driving this highly energetic emission, as it may shed light on the mechanisms facilitating LyC escape in low-metallicity SFGs during the EoR.
Judging from their low oxygen abundances and large SFRs, it stands to reason that the stellar contribution to the high-energy part of the spectrum mainly comes from a young starburst. Our reconstructed SFHs from Sect. 4.5 confirm this, thus we are confident that OB stars contribute significantly to the ionisation of the ISM. However, similar to other LCGs (e.g. Amorin et al. 2012; Clarke et al. 2021), the IMPs appear to harbour older stellar populations (Sect. 4.5). Due to the exposed, hot cores of remnant stars one may reasonably expect to find here, the intrinsic LyC production in the IMPs is likely elevated by these older populations.
Features associated with WR stars, such as the 'blue WR bump' that one might expect at \(\sim\) 4650 A (e.g. Schaerer et al. 1999), are remarkably absent even in the stacked spectrum of the IMPs (Fig. 11, 2nd panel in the top row). There is a tentative detection of [Fe iii] 4658 in this wavelength regime, but as the WR bump typically is a broad spectral feature and we furthermore detect a rather strong [Fe ii] 4986 signal (see Fig. 11, bottom panel), we deem this feature likely is of nebular origin. In essence, unless a significant population of low-metallicity nitrogen WR stars (in which this feature is pronouncedly fainter than in higher-metallicity WRs, Crowther & Hadfield 2006) is present in our sample, WR stars as the origin of ionising radiation can be excluded. Further affirmation for this is found in the absence of the'red WR-bump' (Fig. 11, 3rd panel in the top row), a carbon Wolf-Rayet feature of broad C iv 5808 emission (Schaerer et al. 1999). This conclusion is in line with other studies of SFGs, where the relative amount of WR stars reportedly decreases with metallicity (e.g. Guseva et al. 2000; Shirazi & Brinchmann 2012). Alternatively, as WR stars constitute a late evolutionary stage of massive stars, this again hints at the fact that the most recent starburst in the IMP galaxies is still young, which in turn further affirms that OB stars producing ionising radiation are present here.
A similar argument can be made for the presence of HMXBs, as these objects require one of the binary partners to have collapsed into a compact object, such as a neutron star or a black hole. Then again, the most massive of stars evolve rapidly and are known to leave behind such compact objects at the end of their lives. Considering the most recent starbursts shown in Figs. 9 and 10, a large fraction of OB stars created at the onset of the recent star-forming episodes have already evolved into their remnant form. Thus, given a sufficient fraction of massive stars have been born in a binary configuration with an orbit close enough so that mass overflow from their companion can take place, their accretion disks can in principle contribute significant amounts of high-energy photons even in a relatively young starburst. In the case of the IMPs no archival data from Chandra or XMM-Newton is available, so we currently cannot deduce anything about the presence or absence of HMXBs. Future investigations should take this into account, as HMXBs appear to be more abundant in metal-poor dwarf galaxies and thus may contribute significantly to the ionising radiation field (Brorby et al. 2014).
Higher resolution and at best, spatially resolved data will be required to conclusively study the contribution of shocks to the ionisation of the ISM. The principle requirements for the occurence of shocks are given in star-forming regions, as kinematic phenomena with supersonic motion can for example be found in stellar winds and the expanding shells of SNe. For a preliminary evaluation, Fig. 12 shows the IMPs in three commonly used BPT diagrams, where we overplotted the model calculations from Allen et al. (2008). Shown are the tracks from their SMC model (shock front plus photoionised precursor at a pre-shock density of \(n=1\) cm\({}^{-3}\)), as that model at \(\log\left(\mathrm{O/H}\right)+12=8.03\) most closely matches the metallicity found in the IMPs. As evident from these diagrams, even the most favoured models with shock velocities \(v_{\mathrm{s}}=400\) km s\({}^{-1}\) and a magnetic field of \(B=10\)\(\mu\)G predict line ratios strongly differing from the ones found in the IMPs. Given that we are considering integrated spectra, this is not surprising, as the ISM as a whole is obviously affected by a superposition of many ionising mechanisms and shocks can be expected to be more localised phenomena. Still, we can conclude that the ISM's ionisation is unlikely to be dominated by shocks, although their contribution in supposed galactic outflow channels may be more pronounced than suggested in the diagnostics used here. Future infrared observations may aid in disentangling the shock component from the other ionisation mechanisms and provide another avenue for comparisons to high-\(z\) objects (e.g. Brinchmann 2022, and references therein).
### LyC leakage
The lack of UV spectra for the objects studied here makes a direct determination of their LyC escape fraction impossible. That said, the IMPs as He ii 4686 emitters clearly have a significant intrinsic production of photons beyond the Lyman edge \(<\) 912 A. This by itself already qualifies these galaxies as remarkable objects for astrophysical research, and in conjunction with the large
O\({}_{32}\) values as previously discussed makes them suitable candidates for studies concerning LyC leakage, hence we explore the possibility a little further with the optical data at hand.
To get a feel for the relevance in terms of LyC leakage, a first guess for \(f_{\rm esc}\) (LyC) can be obtained from Eq. 2. As pointed out in Sect. 3.3, O\({}_{32}\) as a LyC leakage diagnostic is problematic, and the calibration from Chisholm et al. (2018) is based on low sample statistics, so we caution against an overinterpretation of the results. Performing the calculation yields LyC escape fractions between 1.4% and 98% in the IMP sample, the upper bound constituted by J1311+3750. Taken at face value, this would mean that eight of the eighteen IMPs were to be characterised by a cosmologically significant escape fraction \(>\)10%, with eight more candidates residing at 4-10%. The fraction of strong LCEs increases to 10/18 if one utilises the O\({}_{32}-f_{\rm esc}\) (LyC) relation from Izotov et al. (2018a), and all but two galaxies are found to have \(f_{\rm esc}\) (LyC) \(>\) 4 %.
We note that the inferred LyC escape fraction of our sample shows no correlation with the relative strength of He ii 4686 (Fig. 14), which is in line with the few other studies considering this potential connection (Guseva et al., 2020; Marques-Chaves et al., 2022). As their authors have stated, this implies that the physical properties traced by He ii 4686 do not govern the LyC escape fraction. However, we remark that the IMPs show considerably larger He ii 4686/H\(\beta\) ratios than most established LCEs. Considering these galaxies selected as He ii emitters show strong indications for appreciable escape fractions, we suggest that a mechanism responsible for the _detectability_ of He ii may be related to LyC escape. Considering that the bulk of He ii emission will originate close to the sources of hard radiation, an efficient
Figure 11: 3400 Å \(-\) 9600 Å region of the stack of eighteen redshift-corrected SDSS spectra of the IMP sample, binned in 1 Å bins and truncated in flux density \(f\) to enhance the visibility of weak spectral features (bottom row). The coloured boxes indicate the regions shown in the blow-ups in the top row, centred at [Fe v] 4227, He ii 4686, and C iv 5808 (blue, green, and red insets, respectively). Noteworthy emission lines (both present and absent) are marked by coloured vertical lines. The SDSS wavelengths were converted to vacuum wavelengths following Morton (1991).
Figure 12: Three commonly used BPT diagrams for shock diagnostics; symbols are the same as in Fig. 2. Solid lines show the shock+precursor SMC model grids from Allen et al. (2008) with their parameters (shock velocity \(v_{\rm s}\) and transverse magnetic field \(B\)) coloured as indicated by the colour bars.
removal of neutral gas is a strong contender here, which we explore further below.
The indications for LyC leakage in the IMPs are further reinforced by the strong deficit in [S ii] \(6717,6731/\)H\(\alpha\) with respect to typical SFGs, and following the definition of Wang et al. (2021), they span a range between \(\Delta\)[S ii] \(=0.026\) and \(-0.82\) (Table 2). The markedly negative values indicate that large volumes in the IMPs may be density-bounded, and as such provide conditions strongly favourable for LyC leakage. Figure 13 shows their position in the plot as worked out in Wang et al. (2021) alongside with a few other known LyC leakers, demonstrating their similarity in this parameter space.
In principle, such conditions could likewise arise in an ionisation-bounded nebula, given a hard enough radiation field
Figure 14: Spectral hardness as traced by He ii \(4686\)/H\(\beta\) vs. LyC escape fraction \(f_{\rm esc}\) (LyC). Blue squares represent the IMPs, whereas the position of their stack is shown as a black downward-facing open triangle. We show no error bars in \(f_{\rm esc}\) (LyC), as the values we infer are mere estimates from O\({}_{32}\). The stacks from the Low-redshift Lyman Continuum Leaker Survey as provided in Marques-Chaves et al. (2022) are shown as orange circles, and the LCEs studied in Guseva et al. (2020) as green diamonds.
Figure 13: log ([O iii] \(5007\)/H\(\beta\)) vs. log ([S ii] \(6717,6731\)/H\(\alpha\)) diagram including the [S ii] ridge line from Wang et al. (2021) (green dashed line). Symbols are the same as in Fig. 7. Left: Panel highlighting the IMPs’ positions in the parameter space covered by SDSS DR12 galaxies. Right: Zoom-in on the region occupied by the IMPs.
Figure 15: log (O\({}_{32}\)) = log ([O iii] \(5007\)/[O ii] \(3727\)) vs. \(\Delta\)[S ii] diagram. Symbols are the same as in Figure 7.
such that the inner regions where the higher-ionisation emission lines originate span a larger spatial extent and as such, dominate the line flux ratio relative to that of a lower ionisation species. If this were the case, the [S ii] deficiency could be the result of most of the sulphur being doubly ionised. In the spectra of the IMPs, we detect both [S iii] 9069 and [S iii] 9531 lines (redshift permitting). While some variation is present, the overall ratio of S\({}_{32}=\) ([S iii] 9069 + [S iii] 9531) /[S ii] 6717, 6731 shows no obvious excess. For our stacked spectrum, we find \(\log{\rm S}_{32}\sim 0.3\), which at \(\log{\rm([O\textsc{iii}]\,5007/H\beta)}\sim 0.94\) appears to be compatible with the extrapolation of the running median of \(z=0\) SFGs as presented in Sanders et al. (2020, their Fig. 4). Judging from this, we conclude that our previous findings are primarily governed by the ISM structure rather than the hardness of the radiation field.
Equally remarkable is the area the IMPs reside in when comparing the two best optical LyC leakage indications, O\({}_{32}\) and \(\Delta\)[S ii] (Fig. 15), also adapted from Wang et al. (2021). Here, the IMPs are well offset from the bulk of the DR12 galaxies, and roughly occupy the same region as many confirmed local LyC leakers. While Wang et al. (2021) report no statistically significant correlation between a galaxy's \(f_{\rm esc}\) (LyC) and \(\log{\rm(O_{32})}\) vs. \(\Delta\)[S ii], it is evident that galaxies with large escape fractions generally reside in the upper left of this diagram (Wang et al., 2021, their Fig. 3), which is equally true for the IMPs.
Our reconstruction of the SFHs (Sect. 4.5) offers a tempting explanation how these conditions may have come to be. While the IMPs appear to actively form young stars, a major burst in star formation has occured within the last \(\sim 10\) Myr in a majority of these objects. Within the timespan following this burst, all newly formed massive stars have since exploded as SNe, which constitutes a kinematic phenomenon that may have driven a significant fraction of the galaxies' ISM out of their shallow potential wells. The ionising photons provided by the younger stellar generation, whose formation process may well have been triggered by or benefitted from these SN shock waves, will then find ISM conditions strongly favourable for their transmission into the IGM.
While some of the IMPs apparently lack the 10 Myr population, their youngest stellar generation is systematically more massive than that of the two-stage starburst galaxies. Here, the increased ionising flux due to the larger number of OB stars (relative to galaxy mass) may be responsible for a reduction in the neutral gas fraction in these galaxies. Alternatively, the larger number of SNe from massive (O) stars may already have provided ISM conditions similar to what we deduced for the rest of the IMPs.
Our attempts to fit an additional broad Gaussian component to some of the stronger emission lines (e.g. H\(\alpha\), [O iii] 5007) did not yield any appreciable results. If such a component, indicative of a large-scale outflow (e.g. Castaneda et al., 1990), is present in the spectra of the IMPs, deeper observations are required for its detection. In about half of our sample, we can reproduce the H\(\alpha\) and [O iii] 5007 lines with a double-Gaussian fit, where the peak separations indicate a velocity of \(\sim 80\) km s\({}^{-1}\) in the blue-shifted component. For an unambiguous detection of these components, however, higher resolution spectra are needed.
## 5 Summary and conclusion
By applying a single selection criterion based on the He ii 4686 recombination line (Eq. 1), we have extracted a sample of twenty galaxies from the SDSS DR12. Their position in the BPT diagram (Fig. 3) reveals that eighteen (90%) of these are clustered together in the respective parameter space, suggesting similar inherent properties, which our subsequent analysis confirms. These previously undiscussed galaxies, dubbed 'IMPs', are revealed to be metal-poor, star-forming dwarf galaxies. Their highly ionised ISM appears to be predominantly photoionised by a young stellar population, with an additional contribution of an older (\(1-10\) Gyr), evolved population in most of the galaxies. A dominant contribution from AGN, shocks, and WR stars can be ruled out at this stage, whereas the presence or absence of HMXBs remains an open question for now.
We find the IMPs with their low stellar masses, large sSFRs and markedly sub-solar metallicity blend well into the local populations of LCGs and CSFGs (Izotov et al., 2011, 2021), which share many properties with high-\(z\) SFGs. This likeness is further reinforced by their large values in EW(H\(\beta\)) and EW([O iii] 5007) along with their compact morphology. In all of these properties, however, the IMPs typically occupy the tail of the distribution found in the CSFGs, them being low-mass, low-metallicity objects with emission lines of large EWs, implying these galaxies are fairly extreme objects among the population of local analogues to high-\(z\) SFGs. This is notably similar to the CSFG subset constituted by the 'blueberry galaxies', which themselves are low-\(z\) LCE candidates with \(\log{\rm(\textit{M}_{\star}/\textit{M}_{\odot})}\sim 7\) and \(\log{\rm(O/H)}+12\sim 7.7\)(Yang et al., 2017).
Judging from the elevated O\({}_{32}\) values and pronounced [S ii] deficiency, our results strongly point towards large density-bounded volumes of ISM within the IMPs. This leads us to suspect that a majority of them are very likely LyC leakers, making them excellent candidates for follow-up studies with respect to LyC photon loss. Ultimately, future acquisition of spectroscopic FUV data is inevitable to unequivocally confirm this. Direct detection of LyC radiation will require the next generation of FUV instruments, though, as the sensitivity curve of HST's COS does not permit the detection of faint LyC at these low redshifts.
The IMPs' recent SFHs are found to be similar and offer a tempting scenario that may explain how conditions favourable for LyC leakage have come to be. Most of the galaxies have formed a stellar population which constitutes \(\sim 0.1\%-10\%\) of their total stellar mass some \(5-15\) Myr ago. This timescale is sufficient for the massive stars of this population to have exploded as SNe by now, and the kinetic energy injected into the surrounding matter by these may have served as a mechanism for clearing low-density cavities in the ISM, through which LyC photons then may ultimately escape into the IGM.
If found to be true, these galaxies then would constitute the most close-by LyC leakers, in parts only superseded by Haro 11 (Bergvall et al., 2006), Mrk 54 (Leitherer et al., 2016) and Tol 1247-232 (Leitet et al., 2013). Moreover, it would demonstrate the validity of using high-ionisation nebular lines as a selection criterion in search of LCEs. Our O\({}_{32}\)-based estimates of \(f_{\rm esc}\) (LyC) suggest a detection rate of significant (\(>\)10%) escape fractions for at least eight (40%) of the studied SFGs, though the [S ii] deficient nebular emission indicates that the remaining galaxies likewise are good LCE candidates.
###### Acknowledgements.
Many thanks go to our anonymous referee, whose diligent report greatly improved the quality of this paper. The authors would like to express their gratitude towards P. Papaderos for providing further insights into the inner working at too. Additionally, we thank L. Dirks for many helpful discussions. AUE, DIB and WX acknowledge funding from the German Science Foundation DFG, via the Collaborative Research Center SFB1491 'Cosmic Interacting Matters - From Source to Signal.' This research has been made possible due to the immense data collected in the SDSS project. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is [http://www.sdsds3.org/](http://www.sdsds3.org/) SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona,
the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. All plots in this paper have been created with MATLAB R20210 ([https://mathworks.com/products/matlab.html](https://mathworks.com/products/matlab.html)), except those in Figs. 9 and 10, which are part of the default nano output.
|
2301.04949 | A Formal Power Series Approach to Multiplicative Dynamic Feedback
Interconnection | The goal of the paper is multi-fold. First, an explicit formula is derived to
compute the non-commutative generating series of a closed-loop system when a
(multi-input, multi-output) plant, given in Chen--Fliess series description is
in multiplicative output feedback interconnection with another system, also
given as Chen--Fliess series. Furthermore, it is shown that the multiplicative
dynamic output feedback connection has a natural interpretation as a
transformation group acting on the plant. A computational framework for
computing the generating series for multiplicative dynamic output feedback is
devised utilizing the Hopf algebras of the coordinate functions corresponding
to the shuffle group and the multiplicative feedback group. The pre--Lie
algebra in multiplicative feedback is shown to be an example of Foissy's
com-pre-Lie algebras indexed by matrices with certain structure. | Kurusch Ebrahimi-Fard, G. S. Venkatesh | 2023-01-12T11:39:35Z | http://arxiv.org/abs/2301.04949v2 | # A formal power series approach to multiplicative dynamic feedback connection
###### Abstract.
The goal of the paper is multi-fold. The first of which is to derive an explicit formula to compute the generating series of a closed-loop system when a plant, given in a Chen-Fliess series description is in multiplicative output feedback connection with another system given in Chen-Fliess series description. Further, the objective extends in showing that the multiplicative dynamic output feedback connection has a natural interpretation as a transformation group acting on the plant. A computational framework for computing the generating series for multiplicative dynamic output feedback is devised utilizing the dual Hopf algebras corresponding to the shuffle group and the multiplicative feedback group.
###### Contents
* 1 Introduction
* 2 Preliminaries: Formal Power Series
* 2.1 Shuffle Product
* 3 Bialgebra and Hopf algebra: Preliminaries
* 3.1 Algebra
* 3.2 Coalgebra
* 3.3 Bialgebra
* 3.4 Hopf Algebra
* 4 Unshuffle Hopf algebra and its Coaction
* 4.1 Unshuffle Hopf Algebra
* 4.2 Gradation of Bialgebra
* 4.3 Coaction of \(H_{\mathfrak{u}}\)
* 5 Chen-Fliess Series and its Interconnections
* 5.1 Chen-Fliess Series
* 5.2 Interconnections of Chen-Fliess Series: Parallel and Cascade Connections
* 5.3 Cascading of Chen-Fliess with Multiplicative Feedforward of Input
* 5.4 Multiplicative Dynamic Output Feedback Group
* 6 Chen-Fliess Series Under Multiplicative Dynamic Output Feedback
* 7 Invariance of Class and Relative Degree under multiplicative dynamic feedback connection
* 8 Computational Framework for Multiplicative Mixed Composition & Dynamic Feedback Product
* 8.1 Hopf Algebra Corresponding to the Multiplicative Dynamic Feedback Subgroup
* 8.2 Coaction of Hopf algebra \(H\) on Algebra of Coordinate Map
* 8.3 Coaction of Hopf algebra \(H\) on the Hopf algebra \(H_{\mathfrak{u}}\)
* 8.4 Coproduct, Antipode Computations and Grading of Hopf algebra \(H\)
* 9 Conclusions and Future work
## 1. Introduction
The objective of the document is two fold and works with the Chen-Fliess functional series [11]. There is no need that these input-output systems have a state space realization and thus, the results presented here are independent of any state space embedding when a realization is possible [11]. Firstly, let \(F_{c}\) and \(F_{d}\) be two nonlinear input-output systems represented by Chen-Fliess series. It was shown in [10] that the _additive feedback_ interconnection of two such systems result in a Chen-Fliess series description for the closed-loop system. The convergence of the closed-loop system was characterized in [11]. An efficient computation of the generating series for closed-loop system is facilitated through a combinatorial Hopf algebra [10], [11], [12]. The feedback product formula and its computation were used to solve system inversion problems [10] and trajectory generation problems [13].
However, when the nature of interconnection becomes _multiplicative feedback_, the similar set of questions persist in general. It is known that, in single-input single-output (SISO) setting, the closed-loop system in the affine feedback case (of which multiplicative feedback is a special case) has a Chen-Fliess series description and the computation of feedback formula is facilitated through a combinatorial Hopf algebra [10]. The present document, in one part, shows that even in multi-input multi-output (MIMO) setting the closed-loop system under multiplicative feedback has a Chen-Fliess series representation and provides an explicit expression of the closed-loop generating series which will be called as _multiplicative dynamic feedback product_. Furthermore, it will be shown that this feedback product has a natural interpretation as a transformation group acting on the plant. The algorithmic framework for the computation of the multiplicative dynamic feedback product formula for a general MIMO case is devised using the dual Hopf algebras corresponding to the shuffle product and to the multiplicative dynamic output feedback group. The characterization of convergence of the Chen-Fliess series for the closed-loop system is deferred for future work.
The paper is organized as follows. The next section provides a summary of the concepts related to non-commutative formal power series, Hopf algebra, Chen-Fliess series and their interconnections. The Section 5.4 builds the pivotal _multiplicative dynamic output feedback group_. The Hopf algebra construction corresponding to the shuffle group is drafted in Section 4. Section 6 is where the multiplicative dynamic feedback connection is analyzed. The invariance of relative degree under multiplicative output feedback is asserted in Section 7. The framework for computing the feedback product is devised in Section 8 and is demonstrated using examples. The conclusions of the paper and directions for future work is given in the last section.
## 2. Preliminaries: Formal Power Series
A finite nonempty set of noncommuting symbols \(X=\{x_{0},x_{1},\ldots,x_{m}\}\) is called an _alphabet_. Each element of \(X\) is called a _letter_. Any finite sequence, \(\eta=x_{i_{1}}\cdots x_{i_{k}}\), of letters from \(X\) is called a _word_ over \(X\) and its _length_ is \(|\eta|=k\). The set \(X^{*}\) of all words includes the empty word, denoted \(\emptyset\in X^{*}\) and \(X^{+}:=X^{*}\backslash\emptyset\), and forms a monoid under catenation. Any mapping \(c:X^{*}\to\mathbb{R}^{\ell}\) is called a _formal power series_. The value of \(c\) at \(\eta\in X^{*}\) is denoted by \((c,\eta)\) and called the _coefficient_ of \(\eta\) in \(c\). Normally, \(c\) is written as a formal sum \(c=\sum_{\eta\in X^{*}}(c,\eta)\eta.\) A series \(c\) is _proper_ when the coefficient \((c,\emptyset)=0\) else it is a _non-proper_ series. The _support_ of \(c\) is the set \(\operatorname{supp}(c)\) containing all words having nonzero coefficients. The _order_ of \(c\), denoted \(\operatorname{ord}(c)\), is the length of the minimal length word in its support. The
collection of all formal power series over \(X\) is denoted by \(\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\). The \(i^{th}\) component of a vector \(v\in\mathbb{R}^{\ell}\) is denoted by \(v_{i}\) and consequently the \(i^{th}\) component of a series \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) is denoted by \(c_{i}\) viz. \(\left(c_{i},\eta\right)=\left(c,\eta\right)_{i}\).
A series \(c^{\prime}\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) is called a subseries of \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) if there exists another series \(c^{\prime\prime}\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) such that the intersection \(\operatorname{supp}\left(c^{\prime}\right)\cap\operatorname{supp}\left(c^{ \prime\prime}\right)\) is empty and the series \(c\) can be decomposed as \(c=c^{\prime}+c^{\prime\prime}\).
**Definition 2.1**.: _Let \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\), then the natural part of the series \(c\) is the subseries denoted by \(c_{N}\) such that \(c=c_{N}+c_{F}\) and \(\operatorname{supp}\left(c_{F}\right)\subseteq X^{*}\setminus\{x_{0}^{k}:k \in\mathbb{N}_{0}\}\). The subseries \(c_{F}\) is called as forced part of the series \(c\)._
Definition 2.1 asserts that the forced part \(c_{F}\) of a series \(c\) should not contain any word formed by the letter \(x_{0}\) alone, including the empty word \(\emptyset\). For the remainder of the document, \(\mathbb{R}^{\ell}\) is given the structure of a unital commutative ring under _Hadamard_ or pointwise product viz. \(\left(xy\right)_{i}=x_{i}y_{i}\) with \(\leavevmode\hbox{\small 1\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize \kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.
**Example 2.1**.: _Let \(X=\{x_{0},x_{1}\}\) and \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) described as \(c=1-x_{1}\). Then the shuffle inverse is computed as:_
\[c^{\boldsymbol{\shuffle}-1} =\sum_{k\in\mathbb{N}_{0}}\left(1-(1-x_{1})\right)^{\boldsymbol{ \shuffle}k}\] \[=\sum_{k\in\mathbb{N}_{0}}x_{1}^{\boldsymbol{\shuffle}k}=\sum_{k \in\mathbb{N}_{0}}k!x_{1}^{k}.\]
_Therefore, \(c^{\boldsymbol{\shuffle}-1}=1+x_{1}+2x_{1}^{2}+6x_{1}^{3}+\cdots+n!x_{1}^{n}+\cdots\)._
Observe that \((c\boldsymbol{\shuffle}d,\emptyset)=(c,\emptyset)\,(d,\emptyset)\). Hence, the set
\[M_{\boldsymbol{\shuffle}}=\{\,\mbox{\rm 1l}+c\,:c\in\mathbb{R}_{p}^{n}\, \langle\langle X\rangle\rangle\},\]
where \(c\) is a proper series in \(\mathbb{R}^{n}\langle\langle X\rangle\rangle\), forms a subgroup of the shuffle group. The group \(M_{\boldsymbol{\shuffle}}\) is vital in the design of a computational framework of multiplicative dynamic feedback product as explained in Section 8.
The set \(\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) is endowed with ultrametric structure where the metric \(\kappa\) is defined as
\[\kappa(c,d)=\sigma^{\operatorname{ord}(c-d)},\]
for \(c,d\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) and \(\sigma\in\,]0,1[\). For brevity, \(\kappa(c,0)\) is written as \(\kappa(c)\), and \(\kappa(c,d)=\kappa(c-d)\). The ultrametric space \((\mathbb{R}^{\ell}\langle\langle X\rangle\rangle,\kappa)\) is Cauchy complete [Berstel & Reutenauer(1988)]. The following definition of contraction maps between metric spaces will be useful.
**Definition 2.3**.: _Given metric spaces \((E,d)\) and \((E^{\prime},d^{\prime})\), a map \(f:E\longrightarrow E^{\prime}\) is said to be a strong contraction map if \(\forall s,t\in E\), it satisfies the condition \(d^{\prime}(f(s),f(t))\leq\alpha d(s,t)\) where \(\alpha\in[0,1[\). If \(\alpha=1\), then the map \(f\) is said to be a weak contraction map or a non-expansive map._
## 3. Bialgebra and Hopf algebra: Preliminaries
The goal is to provide the definitions of algebraic structures such as algebra, coalgebra, bialgebra and Hopf algebra [Abe(2004), Sweedler(1969)]. We let \(K\) be a commutative ring with identity \(1_{K}\).
### Algebra
The definition of an algebra can be facilitated through the category of modules. It allows to define the concept of a coalgebra (the dual notion) with ease.
**Definition 3.1**.: _An algebra over \(K\) is a \(K\)-module \(\mathscr{A}\) along with the morphisms of \(K\)-modules \(\boldsymbol{m}:\mathscr{A}\otimes\mathscr{A}\longrightarrow\mathscr{A}\), called the multiplication or product map, and \(\eta:K\longrightarrow\mathscr{A}\), called the unit map, such that the following diagrams are commutative._
(1)
_The tuple \((\mathscr{A},\mathbf{m},\eta)\) is called a \(K\)-algebra._
The commutative diagrams (1) mean that a \(K\)-algebra \(\mathscr{A}\) must satisfy the following properties:
1. The product map \(\mathbf{m}\) must be associative.
2. The scalar multiplication through the \(\eta\) map must have a unit.
The concept of a \(K\)-algebra morphism is defined next.
**Definition 3.2**.: _Let \((\mathscr{A},\mathbf{m},\eta)\), \((\mathscr{A}^{\prime},\mathbf{m}^{\prime},\eta^{\prime})\) be \(K\)-algebras. A map \(f:\mathscr{A}\longrightarrow\mathscr{A}^{\prime}\) is called a \(K\)-algebra morphism provided the following diagrams commute._
**Definition 3.3**.: _Let \(P\) and \(Q\) be modules over \(K\). The twisting morphism \(\tau\) of \(K\)-modules is \(\tau:P\otimes Q\longrightarrow Q\otimes P\) with_
\[\tau(p\otimes q)=q\otimes p\quad\forall\ q\in Q,p\in P.\]
A \(K\)-algebra \(\mathscr{A}\) is commutative if and only if the following diagram commutes.
A \(K\)-algebra \(\mathscr{A}\) is a graded algebra if the underlying \(K\)-module structure is graded viz. \(\mathscr{A}=\bigoplus_{n\in\mathbb{N}_{0}}\mathscr{A}_{n}\), where \(\mathscr{A}_{n}\) is a \(K\)-module for all \(n\in\mathbb{N}_{0}\) such that \(\mathbf{m}\left(\mathscr{A}_{m}\otimes\mathscr{A}_{n}\right)\subseteq\mathscr{A}_{m +n}\), for all \(m,n\in\mathbb{N}_{0}\). The graded \(K\)-algebra is connected if \(\eta:K\longrightarrow\mathscr{A}_{0}\) is a \(K\)-algebra isomorphism.
### Coalgebra
The notion of a \(K\)-coalgebra is a categorical structure dual to that of a \(K\)-algebra.
**Definition 3.4**.: _A \(K\)-coalgebra \(\mathscr{C}\) is a \(K\)-module with the \(K\)-module morphisms \(\Delta:\mathscr{C}\longrightarrow\mathscr{C}\otimes\mathscr{C}\), called the comultiplication or coproduct map, and \(\epsilon:\mathscr{C}\longrightarrow K\), called the counit map, such that the following diagrams commute._
(2)
_\(\Delta\)_\(\Delta\)_\(\Delta\otimes\mathscr{C}\)_\(\Delta\otimes\mathscr{C}\otimes\mathscr{C}\)_\(\Delta\otimes\mathscr{C}\otimes\mathscr{C}\)_\(\Delta\otimes\
_The tuple \((\mathscr{C},\Delta,\epsilon)\) is called a \(K\)-coalgebra._
The commutative diagrams (2) imply that a \(K\)-coalgebra \(\mathscr{C}\) must satisfy the following properties:
1. The coproduct map \(\Delta\) must be coassociative.
2. The counit map \(\epsilon\) is the categorical dual to the unit map \(\eta\) for a \(K\)-algebra.
The coalgebra \(\mathscr{C}\) is called cocommutative if the following diagram commutes,
where \(\tau\) is the twisting morphism given in Definition 3.3. Sweedler's notation is very useful in representing the coproduct map and is adopted in Sections 4 and 8.
**Definition 3.5**.: [Sweedler(1969)]_. Given the \(K\)-coalgebra tuple \((\mathscr{C},\nabla,\epsilon)\) and an element \(c\in\mathscr{C}\), then the Sweedler notation for the coproduct_
\[\Delta(c)=\sum_{(c)}c_{(1)}\otimes c_{(2)},\]
_where \(c_{(1)},c_{(2)}\in\mathscr{C}\) are the components of the tensors resulting from the coproduct of \(c\)._
Next, the definition of a \(K\)-coalgebra morphism is given.
**Definition 3.6**.: _Let \((\mathscr{C},\Delta,\epsilon)\), \((\mathscr{C}^{\prime},\Delta^{\prime},\epsilon^{\prime})\) be \(K\)-coalgebras. A map \(f:\mathscr{C}\longrightarrow\mathscr{C}^{\prime}\) is called a \(K\)-coalgebra morphism provided the following diagrams commute._
### Bialgebra
The bialgebra structure over a commutative ring is fundamental for defining a Hopf algebra. A bialgebra is an amalgamation of the algebra and coalgebra structures such that both are compatible with each other.
**Definition 3.7**.: _A bialgebra \(H\) over \(K\) is a tuple \((H,\boldsymbol{m},\eta,\Delta,\epsilon)\) such that_
1. \(H\) _is a_ \(K\)_-module._
2. \((H,\boldsymbol{m},\eta)\) _is a_ \(K\)_-algebra, where_ \(\boldsymbol{m}\) _and_ \(\eta\) _are the product and unit maps, respectively._
3. \((H,\Delta,\epsilon)\) _is a_ \(K\)_-coalgebra, where_ \(\Delta\) _and_ \(\epsilon\) _are the coproduct and counit maps, respectively._
_such that the following diagrams commute._
(3)
(4)
(5)
The diagrams (3) and (4) state that the product map \(\boldsymbol{m}\) and the unit map \(\eta\) are \(K\)-coalgebra morphisms, while the coproduct map \(\Delta\) and the counit map \(\epsilon\) are \(K\)-algebra morphisms. Diagram (5) describes that the unit map \(\eta\) is a section of the counit map \(\epsilon\) in the category of \(K\)-modules.
### Hopf Algebra
Hopf algebras are an important class of bialgebras. A Hopf algebra is a bialgebra equipped with a particular \(K\)-linear map called antipode.
**Definition 3.8**.: _A Hopf algebra \(H\) over \(K\) is a tuple \((H,\boldsymbol{m},\eta,\Delta,\epsilon,S)\) such that the following conditions are satisfied:_
1. \((H,\boldsymbol{m},\eta,\Delta,\epsilon)\) _is a_ \(K\)_-bialgebra._
2. \(S:H\longrightarrow H\) _is a_ \(K\)_-linear map such that the following diagram commutes._
(6)
An element \(a\in H\) is called _group-like_ if \(\Delta(a)=a\otimes a\) and thus \(a\not\in\)ker\((\epsilon)\), where ker\((.)\) represents the kernel of a \(K\)-module map. A graded Hopf algebra \(H=\bigoplus_{n\in\mathbb{N}_{0}}H_{n}\) is _connected_ if and only if \(H_{0}\cong K\eta(1_{K})\) as \(K\)-modules. Equivalently, a graded Hopf algebra \(H\) is connected if and only if \(H^{+}:=\bigoplus_{k\geq 1}H_{k}\) is isomorphic to ker\((\epsilon)\) as \(K\)-modules viz. \(\eta\circ\epsilon=\mathbf{id}_{H_{0}}\) and zero otherwise. For simplicity denote \(\boldsymbol{m}\,(a,b):=ab\), for all \(a,b,\in H\). Using Sweedler's
notation, diagram (6) implies that for all \(c\in H\),
\[\sum_{(c)}S\left(c_{(1)}\right)c_{(2)}=\sum_{(c)}c_{(1)}S\left(c_{(2)}\right)= \epsilon\left(c\right)1_{H}\;,\]
where \(1_{H}\) is the multiplicative unit of the Hopf algebra \(H\). The computation of the antipode of an element \(c\) becomes easier when the algebra structure of \(H\) is graded and connected.
**Theorem 3.1**.: _If the Hopf algebra \(H\) is graded and connected, then the antipode can be computed for any \(a\in H^{+}:=\bigoplus_{k\geq 1}H_{k}\) as_
\[S(a)=-a-\sum a^{\prime}_{(1)}S(a^{\prime}_{(2)}),\]
_where the summation is taken over all components of the reduced coproduct \(\Delta^{\prime}\) defined as:_
\[\Delta^{\prime}\left(a\right):=\Delta\left(a\right)-a\otimes\eta\left(1_{K} \right)-\eta\left(1_{K}\right)\otimes a.\]
## 4. Unshuffle Hopf algebra and its Coaction
The goal of this section is to explain and illustrate the computational framework to compute the shuffle product of two series and the shuffle inverse using the coordinate maps of the series. The framework is well-developed in the literature [10] and was utilized in study of interconnections of Chen-Fliess series [21], Venkatesh & Gray(2022), Venkatesh & Gray(2021), Gray, et al.(2014b), Gray, et al.(2014a)].
### Unshuffle Hopf Algebra
We construct a dual Hopf algebra reflecting the group structure of \(M_{\boldsymbol{\omega}}\) as defined in Section 2. The antipode constructed in the Hopf algebra provides a framework for computing the shuffle inverse of purely improper series \(c\).
Let the set \(W_{b}\subset\mathbb{R}^{m}\langle\langle X\rangle\rangle^{*}\) (dual module of \(\mathbb{R}^{m}\langle\langle X\rangle\rangle\)) be defined as the collection of coordinate maps:
\[W_{b}=\{a_{\eta}\::a_{\eta}(c)=(c,\eta),\ \eta\in X^{*},c\in\mathbb{R}^{m} \langle\langle X\rangle\rangle\}.\]
Define \(W\) to be the free \(\mathbb{R}^{m}\)-module spanned by the set \(W_{b}\). Let \(H_{\boldsymbol{\omega}}\) denote the reduced symmetric algebra generated by the module \(W\). The \(\mathbb{R}^{m}\)-algebra \(H_{\boldsymbol{\omega}}\) can equivalently be seen as the polynomial algebra of coordinate maps (corresponding to non-empty words) of \(\mathbb{R}^{m}\langle\langle X\rangle\rangle\). The unit map \(\xi:\mathbb{R}^{m}\longrightarrow H_{\boldsymbol{\omega}}\) is defined by \(\xi(\,\hbox{1\kern-2.5ptl})=a_{\emptyset}\). Observe that \(a_{\emptyset}:c\mapsto\hbox{1\kern-2.5ptl}\), for all \(c\in M_{\boldsymbol{\omega}}\). By construction, \(H_{\boldsymbol{\omega}}\) is an \(\mathbb{R}^{m}\)-associative, commutative and unital algebra with addition and scalar multiplication defined, respectively, as
\[(a_{\eta}+a_{\zeta})(c) =a_{\eta}(c)+a_{\zeta}(c)\] \[(ka_{\eta})(c) =k(a_{\eta}(c)),\]
where \(c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) and \(k\in\mathbb{R}^{m}\), and product
\[\boldsymbol{m}(a_{\eta},a_{\zeta})(c)=a_{\eta}(c).a_{\zeta}(c),\]
for \(c\in M_{\boldsymbol{\omega}}\). Then \(H_{\boldsymbol{\omega}}\) is equipped with a coproduct \(\hat{\Delta}_{\boldsymbol{\omega}}:H_{\boldsymbol{\omega}}\longrightarrow H _{\boldsymbol{\omega}}\bigotimes H_{\boldsymbol{\omega}}\) such that \(\hat{\Delta}_{\boldsymbol{\omega}}a_{\eta}(c,d)=(c\,\boldsymbol{\omega}d,\eta)\), for all \(c,d\in M_{\boldsymbol{\omega}}\) and \(\eta\in X^{*}\). The counit map \(\epsilon:H_{\boldsymbol{\omega}}\longrightarrow\mathbb{R}^{m}\) is defined as
\[\epsilon(h)=\begin{cases}\,\hbox{1\kern-2.5ptl}\,:\,h=a_{\emptyset}\\ \,0\,:\,\text{otherwise}.\end{cases}\]
Since the shuffle product is associative and commutative, thus dually the coproduct \(\hat{\Delta}_{\boldsymbol{\omega}}\) is coassociative and cocommutative. Therefore, \((H_{\boldsymbol{\omega}},\boldsymbol{m},\xi,\hat{\Delta}_{\boldsymbol{\omega} },\epsilon)\) forms a \(\mathbb{R}^{m}\)-bialgebra. The
following lemma is vital in the framework for computing both shuffle product and dynamic feedback group product. Define a collection of linear endomorphisms \(\{\theta_{i}\}_{i=0}^{m}\) on \(W\)
\[\theta_{i} :W \longrightarrow W\] \[a_{\eta} \longmapsto a_{x_{i}\eta},\]
for all \(x_{i}\in X\), \(\eta\in X^{*}\). Thus \(\theta_{i}\left(a_{\eta}\right)\left(c\right)=a_{\eta}\left(x_{i}^{-1}\left(c \right)\right)\).
The coproduct \(\hat{\Delta}_{\boldsymbol{\upmu}}\) can be recursively constructed as defined in the following proposition.
**Proposition 4.1**.: [11] _On the module \(W\)_
\[\hat{\Delta}_{\boldsymbol{\upmu}}\circ\theta_{k}=\left(\theta_{k}\otimes \mathbf{id}+\mathbf{id}\otimes\theta_{k}\right)\circ\hat{\Delta}_{\boldsymbol {\upmu}},\]
_for all \(i=1,2,\ldots,m\) and \(k=0,1,\ldots,m\) with base case being \(\hat{\Delta}_{\boldsymbol{\upmu}}a_{\emptyset}=a_{\emptyset}\otimes a_{\emptyset}\)._
Proposition 4.1 infers that the maps \(\theta_{i}\), for \(i=1,2,\ldots,m\), are coderivations on the underlying coalgebra of \(H_{\boldsymbol{\upmu}}\).
We note that the unshuffle coproduct \(\hat{\Delta}_{\boldsymbol{\upmu}}\) was utilized in the design of an algorithmic framework for computation of Wiener-Fliess composition product and subsequently additive static feedback product [12], Venkatesh & Gray(2022), Venkatesh(2021)] and also in the computation of shuffle-rational series from its representation [12]. Moreover, the unshuffle coproduct was also crucial in the computational framework for the multivariate additive output feedback [10], Gray, et al.(2014b)] and for SISO affine output feedback [10].
Let \(\{\pi^{i}\}_{i=1}^{m}\) be the collection of co-ordinate projection maps on the module \(W\) defined as
\[a_{\eta}^{i}(c):=\pi^{i}(a_{\eta})(c)=(c,\eta)_{i}=(c_{i},\eta),\]
for all \(\eta\in X^{*}\). Thus, define the following notation
\[\hat{\Delta}_{\boldsymbol{\upmu}}^{j}a_{\eta}^{i}:=(\pi^{i}\otimes\pi^{j}) \circ\hat{\Delta}_{\boldsymbol{\upmu}}a_{\eta}.\]
Note that the projection maps \(\{\pi^{i}\}_{i=1}^{m}\) commute with the maps \(\{\theta_{j}\}_{j=0}^{m}\) viz. \(\theta_{i}\left(a_{\eta}^{j}\right)=a_{x_{i}\eta}^{j}\). The significance of these notations are well-reflected in the computational framework in Section 8. The following example is to demonstrate the result of Proposition 4.1 for a few words.
**Example 4.1**.: _A few examples of the computation of deshuffle coproduct \(\hat{\Delta}_{\boldsymbol{\upmu}}\) on \(W\) (akin to Example 4.3) using Proposition 4.1 are given as follows(indices \(i=1,2,\ldots,m\) and \(k,s=0,1,\ldots,m\)):_
\[\hat{\Delta}_{\boldsymbol{\upmu}}^{j}a_{x_{k}}^{i} =a_{x_{k}}^{i}\otimes a_{\emptyset}^{j}+a_{\emptyset}^{i}\otimes a _{x_{k}}^{j}.\] \[\hat{\Delta}_{\boldsymbol{\upmu}}^{j}a_{x_{k}x_{k}}^{i} =a_{x_{k}x_{k}}^{i}\otimes a_{\emptyset}^{j}+2a_{x_{k}}^{i}\otimes a _{x_{k}}^{j}+a_{\emptyset}^{i}\otimes a_{x_{k}x_{k}}^{j}.\] \[\hat{\Delta}_{\boldsymbol{\upmu}}^{j}a_{x_{k}x_{s}}^{i} =a_{x_{k}x_{s}}^{i}\otimes a_{\emptyset}^{j}+a_{x_{k}}^{i}\otimes a _{x_{s}}^{j}+a_{x_{s}}^{i}\otimes a_{x_{k}}^{j}+a_{\emptyset}^{i}\otimes a_{x_ {k}x_{s}}^{j}.\]
The connected \(\mathbb{R}^{m}\)-bialgebra \(H_{\boldsymbol{\upmu}}\) is endowed with an antipode map \(S_{\boldsymbol{\upmu}}\) given as:
\[S_{\boldsymbol{\upmu}} :H_{\boldsymbol{\upmu}}\longrightarrow H_{\boldsymbol{\upmu}}\] \[a_{\eta} \mapsto S_{\boldsymbol{\upmu}}a_{\eta}\]
such that \(S_{\boldsymbol{\upmu}}a_{\eta}\left(c\right)=\left(c^{\boldsymbol{\upmu}-1},\eta\right)\), for \(\eta\in X^{*}\), \(c\in M_{\boldsymbol{\upmu}}\).
### Gradation of Bialgebra \(H_{\boldsymbol{\mathsf{u}}}\)
The Hopf algebra \(H_{\boldsymbol{\mathsf{u}}}\) can be equipped with a grading such that it is connected and all its homogeneous components are finite-dimensional.
**Definition 4.1**.: _Given \(\eta\in X^{+}\), define the degree of \(a_{\eta}\) as \(\deg\left(a_{\eta}\right)=|\eta|\)._
1. _Define gradation on the_ \(\mathbb{R}^{m}\)_-module_ \(W\) _viz._ \[W=\bigoplus_{k\geq 1}W_{k},\] _where_ \(W_{k}\) _is the free_ \(\mathbb{R}^{m}\)_-module spanned by the_ \(a_{\eta}\) _of_ \(\deg\left(a_{\eta}\right)=k\)_._
2. _The gradation on the module_ \(W\) _induces a graded structure on the algebra_ \(H_{\boldsymbol{\mathsf{u}}}\) _as_ \[H_{\boldsymbol{\mathsf{u}}}=\bigoplus_{n\in\mathbb{N}_{0}}\hat{H}_{n},\] _with_ \(\hat{H}_{0}\cong\mathbb{R}^{m}\) _in the category of_ \(\mathbb{R}^{m}\)_-modules._
The following proposition asserts that the above gradation is connected and all its homogeneous components are finite-dimensional.
**Proposition 4.2**.: _Given the gradation for the Hopf algebra \(H_{\boldsymbol{\mathsf{u}}}\),_
1. \(H_{\boldsymbol{\mathsf{u}}}\) _is a graded and connected Hopf algebra viz._ \[\hat{\Delta}_{\boldsymbol{\mathsf{u}}}\left(\hat{H}_{n}\right)\subseteq \bigoplus_{\begin{subarray}{c}i+j=n\\ i,j\geq 0\end{subarray}}\hat{H}_{i}\otimes\hat{H}_{j}.\]
2. _For all_ \(k\)_: define_ \(w_{k}=\dim\left(W_{k}\right)\) _and_ \(F_{W}=\sum_{k\geq 1}w_{k}Z^{k}\) _is the geometric series given by_ \[F_{W}=\frac{1}{1-mZ}\,,\] _where_ \(m=|X|\) _and for all_ \(k\geq 1\)_:_ \[w_{k}=\dim\left(W_{k}\right)=m^{k}.\]
3. _Define_ \(F_{\hat{H}}=\sum_{n\geq 1}h_{n}Z^{n}\) _where_ \(h_{n}=\dim(\hat{H}_{n})\) _then_ \[F_{\hat{H}}=\prod_{k=1}^{\infty}\frac{1}{\left(1-Z^{k}\right)^{w_{k}}}.\]
_Proof:_
1. The Hopf algebra \(H_{\boldsymbol{\mathsf{u}}}\) follows from the fact that if \(\gamma(\neq\eta,\zeta)\in\operatorname{supp}(\eta\boldsymbol{\mathsf{u}}\,\zeta)\) then \[\deg\left(\gamma\right)=|\gamma|=|\eta|+|\zeta|=\deg\left(\eta\right)+\deg \left(\zeta\right),\] for all \(\eta,\zeta,\gamma\in X^{*}\).
2. Define the formal power series \[F(Z_{0},Z_{1},\ldots,Z_{m}) =\sum_{k\geq 1}\sum_{\begin{subarray}{c}i_{0},i_{1},\ldots,i_{m} \geq 0\\ i_{0}+i_{1}+\cdots+i_{m}=k\end{subarray}}\#\{\eta:|\eta|_{x_{j}}=i_{j}\,\forall \,j=0,1,2,\ldots,m\}Z_{0}^{i_{0}}Z_{1}^{i_{1}}\cdots Z_{m}^{i_{m}}\] \[=\frac{\left(Z_{0}+Z_{1}+\cdots+Z_{m}\right)}{1-\left(Z_{0}+Z_{1} +\cdots+Z_{m}\right)}.\]
Since each letter contributes equally to the degree (viz. length), thus \[F_{W}=F(Z,Z,\ldots,Z)=\frac{mZ}{1-mZ}.\]
3. The proposition follows from the item 2 as \(\hat{H}\) is the symmetric algebra generated by the \(\mathbb{R}^{m}\)-module \(W\).
**Example 4.2**.: _The dimensions of the homogeneous components of the graded module \(W\) (up to \(k=10\)) and the graded algebra \(H_{\boldsymbol{\mathfrak{u}}}\) for \(m=2\) viz when \(X=\{x_{0},x_{1}\}\) is tabulated in Table 1._
_The sequence \(\{\dim(\hat{H}_{k})\}_{k\in\mathbb{N}_{0}}\) is the sequence \(A034899\) in [1] which corresponds to the number of multisets of binary words of total length \(n\)._
### Coaction of \(H_{\boldsymbol{\mathfrak{u}}}\)
The subsection explains the coaction of the Hopf algebra \(H_{\boldsymbol{\mathfrak{u}}}\) (4.1) on the algebra of coordinate functions. It is utilized subsequently to develop an algorithm to compute the multiplicative mixed composition product explained in Section 5.2 and dynamic feedback product as defined in Theorem 6.2. Let \(W\) to be the \(\mathbb{R}^{m}\)-module as described in Section 4.1. Let \(S^{+}\left(W\right)\) denote the reduced symmetric algebra generated by the module \(W\). The non-unital \(\mathbb{R}^{m}\)-algebra \(S^{+}(W)\) are equivalently the polynomials without constant term of coordinate maps of \(\mathbb{R}^{m}\langle\langle X\rangle\rangle\). By construction \(S^{+}(W)\) has a non-unital \(\mathbb{R}^{m}\)-associative, commutative algebra structure with addition, scalar multiplication and product defined, respectively, as
\[(a_{\eta}+a_{\zeta})(c) =a_{\eta}(c)+a_{\zeta}(c)\] \[(ka_{\eta})(c) =k(a_{\eta}(c))\]
where \(c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), and
\[\boldsymbol{m}(a_{\eta},a_{\zeta})(c)=a_{\eta}(c).a_{\zeta}(c),\]
where \(c\in M_{\boldsymbol{\mathfrak{u}}}\). The \(\mathbb{R}^{m}\)-algebra \(S^{+}(W)\) is isomorphic to the algebra structure of \(H_{\boldsymbol{\mathfrak{u}}}\) with forgetting of the unit map \(\xi\). The right coaction map \(\rho_{\boldsymbol{\mathfrak{u}}}:S^{+}\left(W\right)\longrightarrow S^{+} \left(W\right)\otimes H_{\boldsymbol{\mathfrak{u}}}\) is recursively defined on the module \(V\) as given by the following proposition.
**Proposition 4.3**.: _For all \(i=0,1,2,\ldots,m:\)_
\[\rho_{\boldsymbol{\mathfrak{u}}}\circ\theta_{i}=(\theta_{i}\otimes \mathbf{id}+\mathbf{id}\otimes\theta_{i})\circ\rho_{\boldsymbol{\mathfrak{u}}},\]
_with base case being \(\rho_{\boldsymbol{\mathfrak{u}}}a_{\emptyset}=a_{\emptyset}\otimes a_{\emptyset}\)._
Proposition 4.3 might appear as repetition of Proposition 4.1. It is vital to note that Proposition 4.1 is for defining the coproduct of Hopf algebra \(H_{\boldsymbol{\mathfrak{u}}}\), where \(a_{\emptyset}\) is the unit element. Observe that,
\[\rho_{\boldsymbol{\mathfrak{u}}}a_{\eta}^{i}(c,d)=a_{\eta}^{i}(c\boldsymbol{ \mathfrak{u}}d),\]
\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c||} \hline k & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \(\dim\left(W_{k}\right)\) & 1 & 2 & 4 & 8 & 16 & 32 & 64 & 128 & 256 & 512 & 1024 \\ \hline \(\dim(\hat{H}_{k})\) & 1 & 2 & 7 & 20 & 59 & 162 & 449 & 1200 & 3194 & 8348 & 21646 \\ \hline \end{tabular}
\end{table}
Table 1. Dimensions of the homogeneous components of module \(W\) and \(H_{\boldsymbol{\mathfrak{u}}}\) (when \(m=2\))
where \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) (not necessarily in \(M_{\boldsymbol{\shuffle}}\)) and \(d\in M_{\boldsymbol{\shuffle}}\). The coaction \(\rho_{\boldsymbol{\shuffle}}\) thus is a corepresentation of the Hopf algebra \(H_{\boldsymbol{\shuffle}}\) on the algebra \(S^{+}\left(W\right)\) or equivalently, \(\rho_{\boldsymbol{\shuffle}}\) makes \(S^{+}\left(W\right)\), a \(H_{\boldsymbol{\shuffle}}\)-algebra. Let \(\{\pi^{i}\}_{i=1}^{m}\) be the collection of co-ordinate projection maps on the module \(W\) defined as
\[a_{\eta}^{i}(c):=\pi^{i}(a_{\eta})(c)=(c,\eta)_{i}=(c_{i},\eta),\]
for all \(\eta\in X^{*}\) and thus the following notation is well-defined,
\[\rho_{\boldsymbol{\shuffle}}^{j}a_{\eta}^{i}:=(\pi^{i}\otimes\pi^{j})\circ \rho_{\boldsymbol{\shuffle}}a_{\eta}.\]
These notations are very much utilized in developing computational framework for the multiplicative mixed composition product as discussed in Section 8.
**Corollary 4.1**.: _If \(n\in\mathbb{N}_{0}\), then for all \(i=0,1,2,\ldots,m\) and \(j,k=1,2,\ldots,m\) (defining \(x_{j}^{0}:=\emptyset\)):_
\[\rho_{\boldsymbol{\shuffle}}^{j}a_{x_{i}n}^{k}=\sum_{r=0}^{n}{n\choose r}a_{x_ {i}^{r}}^{k}\otimes a_{x_{i}^{n-r}}^{j}\;.\]
_Proof:_ The statement is proved by induction on \(n\in\mathbb{N}_{0}\). The base case (\(n=0\)) follows from Proposition 4.3. Assume the statement is true for \(n=p-1\), then
\[\rho_{\boldsymbol{\shuffle}}^{j}a_{x_{i}^{p}}^{k} =\rho_{\boldsymbol{\shuffle}}^{j}\circ\theta_{i}a_{x_{i}^{p-1}}^{ k}\] \[=(\theta_{i}\otimes\mathbf{id}+\mathbf{id}\otimes\theta_{i}) \circ\Delta_{\boldsymbol{\shuffle}}^{j}a_{x_{i}^{p-1}}^{k}.\]
Using the induction hypothesis,
\[\rho_{\boldsymbol{\shuffle}}^{j}a_{x_{i}^{p}}^{k} =(\theta_{i}\otimes\mathbf{id}+\mathbf{id}\otimes\theta_{i}) \left(\sum_{r=0}^{p-1}{p-1\choose r}a_{x_{i}^{r}}^{k}\otimes a_{x_{i}^{p-1-r }}^{j}\right)\] \[=\sum_{r=1}^{p}{p-1\choose r-1}a_{x_{i}^{r}}^{k}\otimes a_{x_{i} ^{p-r}}^{j}+\sum_{r=0}^{p-1}{p-1\choose r}a_{x_{i}^{r}}^{k}\otimes a_{x_{i}^{p- r}}^{j}.\] \[=\sum_{r=0}^{p}{n\choose r}a_{x_{i}^{r}}^{k}\otimes a_{x_{i}^{p- r}}^{j}.\]
Since the \(S^{+}\left(W\right)\) and \(H_{\boldsymbol{\shuffle}}\) are isomorphic as \(\mathbb{R}^{m}\)-modules, the following lemma states the coaction of \(H_{\boldsymbol{\shuffle}}\) on \(S^{+}\left(W\right)\) and the unshuffle coproduct coincide when the evaluation of coordinate maps are restricted to the group \(M_{\boldsymbol{\shuffle}}\).
**Lemma 4.1**.: _Given \(c,d\in M_{\boldsymbol{\shuffle}}\), \(\eta\in X^{*}\) and \(i=1,2,\ldots,m\),_
\[\hat{\Delta}_{\boldsymbol{\shuffle}}a_{\eta}\left(c,d\right)=\left(c \boldsymbol{\shuffle}d,\eta\right)=\rho_{\boldsymbol{\shuffle}}a_{\eta} \left(c,d\right),\]
_where \(c,d\in M_{\boldsymbol{\shuffle}}\) and \(\hat{\Delta}_{\boldsymbol{\shuffle}}^{i}\) is the coproduct from the bialgebra \(H_{\boldsymbol{\shuffle}}\) constructed in Section 4.3._
**Example 4.3**.: _A few examples of the computation of the coaction map \(\rho_{\boldsymbol{\shuffle}}\) on \(W\) using Proposition 4.3 are given as follows(indices \(i,j=1,2,\ldots,m\) and \(k,s=0,1,\ldots,m\)):_
\[\Delta_{\boldsymbol{\shuffle}}^{j}a_{\emptyset}^{i} =a_{\emptyset}^{i}\otimes a_{\emptyset}^{j}.\] \[\Delta_{\boldsymbol{\shuffle}}^{j}a_{x_{k}}^{i} =a_{x_{i}}^{i}\otimes a_{\emptyset}^{j}+a_{\emptyset}^{i}\otimes a _{x_{i}^{j}}^{j}.\] \[\Delta_{\boldsymbol{\shuffle}}^{j}a_{x_{k}x_{k}}^{i} =a_{x_{k}x_{k}}^{i}\otimes a_{\emptyset}^{j}+2a_{x_{k}}^{i} \otimes a_{x_{k}}^{j}+a_{\emptyset}^{i}\otimes a_{x_{k}x_{k}}^{j}.\]
\[\Delta^{j}_{\boldsymbol{\shuffle}}a^{i}_{x_{k}x_{s}}=a^{i}_{x_{k}x_{s}}\otimes a^{ j}_{\emptyset}+a^{i}_{x_{k}}\otimes a^{j}_{x_{s}}+a^{i}_{x_{s}}\otimes a^{j}_{x_{k}}+a^{i}_ {\emptyset}\otimes a^{j}_{x_{k}x_{s}}.\]
The following example illustrates the application of the deshuffle coproduct \(\Delta_{\boldsymbol{\shuffle}}\) in the computation of the shuffle product of two series.
**Example 4.4**.: _Let \(X=\{x_{0},x_{1}\}\) and \(c,d\in\mathbb{R}^{2}\langle\langle X\rangle\rangle\) described as_
\[c=\begin{bmatrix}1+x_{1}+x_{1}^{2}+x_{1}^{3}+\cdots\\ x_{0}+x_{0}x_{1}+x_{1}^{100}\end{bmatrix}\quad\&\quad d=\begin{bmatrix}1+x_{0}^ {2}+\exp\left(x_{1}\right)\\ 1+x_{0}^{2}x_{1}\end{bmatrix},\]
_where \(\exp(.)\) is the standard exponential function expressed in its Taylor series. Note that \(c\not\in M_{\boldsymbol{\shuffle}}\) but \(d\in M_{\boldsymbol{\shuffle}}\). The coefficient of \(x_{0}x_{1}^{2}\) in series \(c_{2}\boldsymbol{\shuffle}d_{1}\) can be computed as:_
\[\left(c_{2}\boldsymbol{\shuffle}d_{1},x_{0}x_{1}^{2}\right) =\Delta^{1}_{\boldsymbol{\shuffle}}a^{2}_{x_{0}x_{1}^{2}}\left(c,d\right)=(\pi^{2}\otimes\pi^{1})\circ\Delta_{\boldsymbol{\shuffle}}a_{x_{0}x_ {1}^{2}}\left(c,d\right)\] \[=\Delta^{1}_{\boldsymbol{\shuffle}}\circ\theta_{0}a_{x_{1}^{2}} \left(c,d\right).\]
_Using Proposition 4.3,_
\[\left(c_{2}\boldsymbol{\shuffle}d_{1},x_{0}x_{1}^{2}\right)=(\theta_{0} \otimes\boldsymbol{\shuffle}+\boldsymbol{\shuffle}\otimes\theta_{0})\circ \Delta^{1}_{\boldsymbol{\shuffle}}a^{2}_{x_{1}^{2}}\left(c,d\right).\]
_Using Corollary 4.1,_
\[\left(c_{2}\boldsymbol{\shuffle}d_{1},x_{0}x_{1}^{2}\right) =(\theta_{0}\otimes\boldsymbol{\shuffle}+\boldsymbol{\shuffle} \otimes\theta_{0})\circ\left(a^{2}_{x_{1}^{2}}\otimes a^{1}_{\emptyset}+2a^{2 }_{x_{1}}\otimes a^{1}_{x_{1}}+a^{2}_{0}\otimes a^{1}_{x_{1}^{2}}\right)\left(c,d\right)\] \[=\left(a^{2}_{x_{0}x_{1}^{2}}\otimes a^{1}_{\emptyset}+2a^{2}_{x_ {0}x_{1}}\otimes a^{1}_{x_{1}}+a^{2}_{x_{0}}\otimes a^{1}_{x_{1}^{2}}+a^{2}_{ x_{1}^{2}}\otimes a^{1}_{x_{0}}+\] \[\quad 2a^{2}_{x_{1}}\otimes a^{1}_{x_{0}x_{1}}+a^{2}_{\emptyset} \otimes a^{1}_{x_{0}x_{1}^{2}}\right)\left(c,d\right)\] \[=(0)(1)+2(1)(1)+(1)(0.5)+(0)(0)+2(0)(0)+(0)(0)=2.5.\]
_Therefore \((c_{2}\boldsymbol{\shuffle}d_{1},x_{0}x_{1}^{2})=2.5\)._
## 5. Chen-Fliess Series and its Interconnections
The objective of the section is to describe Chen-Fliess series and the necessary non-recursive interconnections of Chen-Fliess series to understand the results about the multiplicative dynamic feedback product in Section 6.
### Chen-Fliess Series
Let \(\mathfrak{p}\geq 1\) and \(t_{0}<t_{1}\) be given. For a Lebesgue measurable function \(u:[t_{0},t_{1}]\to\mathbb{R}^{m}\), define \(\|u\|_{\mathfrak{p}}=\max\{\|u_{i}\|_{\mathfrak{p}}:\ 1\leq i\leq m\}\), where \(\|u_{i}\|_{\mathfrak{p}}\) is the usual \(L_{\mathfrak{p}}\)-norm for a measurable real-valued function, \(u_{i}\), defined on \([t_{0},t_{1}]\). Let \(L_{\mathfrak{p}}^{m}[t_{0},t_{1}]\) denote the set of all measurable functions defined on \([t_{0},t_{1}]\) having a finite \(\|\cdot\|_{\mathfrak{p}}\) norm and \(B_{\mathfrak{p}}^{m}(R)[t_{0},t_{1}]:=\{u\in L_{\mathfrak{p}}^{m}[t_{0},t_{1}]: \|u\|_{\mathfrak{p}}\leq R\}\). Given any series \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\), the corresponding _Chen-Fliess series_ is
\[F_{c}[u](t)=\sum_{\eta\in X^{*}}(c,\eta)\,F_{\eta}[u](t,t_{0}), \tag{7}\]
where \(E_{\mathfrak{b}}[u]=1\) and
\[F_{x_{i}\bar{\eta}}[u](t,t_{0})=\int_{t_{0}}^{t}u_{i}(\tau)F_{\bar{\eta}}[u]( \tau,t_{0})\,d\tau\]
with \(x_{i}\in X\), \(\bar{\eta}\in X^{*}\), and \(u_{0}=1\)[Fliess(1981)]. If there exist constants \(K,M>0\) such that
\[|(c_{i},\eta)|\leq KM^{|\eta|}|\eta|!,\ \ \forall\eta\in X^{*},\ \forall i=1, \ldots,\ell\, \tag{8}\]
then \(F_{c}\) constitutes a well-defined mapping from \(B_{\mathfrak{p}}^{m}(R)[t_{0},\,t_{0}+T]\) into \(B_{\mathfrak{q}}^{\ell}(S)[t_{0},\,t_{0}+T]\) for sufficiently small \(R,T>0\), where the numbers \(\mathfrak{p},\mathfrak{q}\in[1,\infty]\) are conjugate exponents, i.e., \(1/\mathfrak{p}+1/\mathfrak{q}=1\)[11][12]. This map is referred to as a _Fliess operator_. A series \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) obeying the growth condition in (8) is called a _locally convergent_ generating series. The set of all locally convergent generating series is denoted by \(\mathbb{R}_{LC}^{\ell}\langle\langle X\rangle\rangle\). The supremum of the set of all \(\max\{R,T\}\) for which a Fliess operator \(F_{c}\) is a well-defined mapping from \(B_{\mathfrak{p}}^{m}(R)[t_{0},\,t_{0}+T]\) into \(B_{\mathfrak{q}}^{\ell}(S)[t_{0},\,t_{0}+T]\) is called the _radius of convergence_ of the Fliess operator \(F_{c}\) and is denoted by \(\rho\left(F_{c}\right)\). A Fliess operator \(F_{c}\) is called _locally convergent_ if \(\rho\left(F_{c}\right)>0\). If there exist constants \(K,M>0\) and \(\gamma\in[0,1[\) such that
\[|(c_{i},\eta)|\leq KM^{|\eta|}\left(|\eta|!\right)^{\gamma},\,\,\,\forall \eta\in X^{*},\,\,\forall i=1,\ldots,\ell\;, \tag{9}\]
then \(F_{c}\) constitutes a well defined mapping from \(B_{\mathfrak{p}}^{m}(R)[t_{0},\,t_{0}+T]\) into \(B_{\mathfrak{q}}^{\ell}(S)[t_{0},\,t_{0}+T]\) for all \(R,T>0\)[13], Winter-Arboleda, et al.(2015)]. The infimum of all the \(\gamma\in[0,1[\) such that (9) is satisfied for a series \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) is called the _Gevrey order_ of the series \(c\).
A series \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) obeying the growth condition in (9) is called a _globally convergent_ series. The set of all globally convergent series in \(\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) is denoted as \(\mathbb{R}_{GC}^{\ell}\langle\langle X\rangle\rangle\). A Fliess operator \(F_{c}\) is _globally convergent_ if and only if there exists no real number \(M>0\) such that \(\rho\left(F_{c}\right)<M\). Observe that a noncommutative polynomial \(\mathbb{R}\langle X\rangle\) is a globally convergent series with Gevrey degree \(0\). As described above, a series \(c\in\mathbb{R}_{GC}^{\ell}\langle\langle X\rangle\rangle\) is only a sufficient condition for the corresponding Fliess operator \(F_{c}\) to be globally convergent. Necessary conditions are well-detailed in the literature [14, 15]. In the absence of any convergence criterion, (7) only defines an operator in a formal sense.
### Interconnections of Chen-Fliess Series: Parallel and Cascade Connections
Given Chen-Fliess series \(F_{c}\) and \(F_{d}\), where \(c,d\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\), the parallel and product connections satisfy \(F_{c}+F_{d}=F_{c+d}\) and \(F_{c}F_{d}=F_{c\boldsymbol{\omega}d}\), respectively [16, 17]. The parallel and product connections preserve local convergence and hence the interconnected systems has a Fliess operator representation [14, 15]. When Chen-Fliess series \(F_{c}\) and \(F_{d}\) with \(c\in\mathbb{R}^{k}\langle\langle X^{\prime}\rangle\rangle\) and \(d\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) are interconnected in a cascade fashion, where \(|X^{\prime}|=\ell+1\), the composite system \(F_{c}\circ F_{d}\) has a Chen-Fliess series representation \(F_{cod}\), where the _composition product_ of \(c\) and \(d\) is given by
\[c\circ d=\sum_{\eta\in X^{\prime*}}(c,\eta)\,\psi_{d}(\eta)(\mathbf{1}) \tag{10}\]
[17]. Here \(\mathbf{1}\) denotes the monomial \(1\emptyset\), and \(\psi_{d}\) is the continuous (in the ultrametric sense) algebra homomorphism from \(\mathbb{R}\langle\langle X^{\prime}\rangle\rangle\) to the set of vector space endomorphisms on \(\mathbb{R}\langle\langle X\rangle\rangle\), \(\operatorname{End}\left(\mathbb{R}\langle\langle X\rangle\rangle\right)\), uniquely specified by
\[\psi_{d}(x_{i}^{\prime}\eta)=\psi_{d}(x_{i}^{\prime})\circ\psi_{d}(\eta)\]
with \(\psi_{d}(x_{i}^{\prime})(e)=x_{0}(d_{i}\boldsymbol{\omega}e)\), \(i=0,1,\ldots,m\) for any \(e\in\mathbb{R}\langle\langle X\rangle\rangle\), and where \(d_{i}\) is the \(i\)-th component series of \(d\) (\(d_{0}:=\mathbf{1}\)). By definition, \(\psi_{d}(\emptyset)\) is the identity map on \(\mathbb{R}\langle\langle X\rangle\rangle\). The cascade interconnection preserves local convergence and thus the composite has a Fliess operator representation [14]. The linearity of the composition product in the left argument is evident form the definition. However, the following theorem states that the composition product distributes over the shuffle product from the right.
**Theorem 5.1**.: _[_12_]_ _Let \(c,d\in\mathbb{R}^{k}\langle\langle X^{\prime}\rangle\rangle\) and \(e\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\), such that \(|X^{\prime}|=\ell+1\), then \((c\boldsymbol{\omega}d)\circ e=(c\circ e)\,\,\boldsymbol{\omega}\left(d\circ e\right)\)._
Given a series \(e\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\), define a map \(\Upsilon_{e}:\mathbb{R}^{k}\langle\langle X^{\prime}\rangle\rangle\longrightarrow \mathbb{R}^{k}\langle\langle X\rangle\rangle\) defined as \(c\mapsto c\circ e\). Theorem 5.1 infers that \(\Upsilon_{e}\) is an \(\mathbb{R}\)-algebra homomorphism from the shuffle algebra of \(\mathbb{R}^{k}\langle\langle X^{\prime}\rangle\rangle\) to the shuffle algebra of \(\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\). The composition product preserves the purely improper property of the left argument which is stated in the following theorem.
**Theorem 5.2**.: _If \(c\in\mathbb{R}^{k}\langle\langle X^{\prime}\rangle\rangle\) and \(d\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) such that \(|X^{\prime}|=\ell+1\), then \((c\circ d,\emptyset)=(c,\emptyset)\). Hence, if \(c\in\mathbb{R}^{k}_{pi}\left\langle\langle X^{\prime}\rangle\right\rangle\) then \(c\circ d\in\mathbb{R}^{k}_{pi}\left\langle\langle X\rangle\right\rangle\) and vice-versa. Similarly if \(c\) is a proper series then \(c\circ d\) is also a proper series and vice-versa._
_Proof:_ The proof follows immediately from (10).
The composition product is a strong contraction map with respect to its right argument in the ultrametric topology and is stated in the following theorem.
**Theorem 5.3**.: _[_Gray & Li(2005)_]_ _Let \(c\in\mathbb{R}^{k}\langle\langle X^{\prime}\rangle\rangle\) and \(d,e\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\), such that \(|X^{\prime}|=\ell+1\), then \(\kappa\left(c\circ d,c\circ e\right)\leq\sigma\kappa\left(d,e\right)\) where \(\sigma\in[0,1[\)._
### Cascading of Chen-Fliess with Multiplicative Feedforward of Input
The cascade interconnection of a Chen-Fliess series \(F_{c}\) and \(F_{d}\) along with the multiplicative feedforward of the input, as shown in Figure 1, arises primarily in the analysis of multiplicative feedback interconnection discussed in Section 6. A semblance of such an interconnection has appeared in Definition 3.1 of [Gray & Ebrahimi-Fard(2017)], without being explicit and limited to the SISO case. With respect to Figure 1, the map \(u\mapsto y\) viz. \(y=F_{c}[u.F_{d}[u]]\) has Chen-Fliess series representation denoted by \(F_{c\curvearrowright d}\), where \(c\curvearrowright d\) denotes the _multiplicative mixed composition product_ of \(c\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\) and \(d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) defined as
\[c\curvearrowright d=\sum_{\eta\in X^{*}}\left(c,\eta\right)\eta \curvearrowright d:=\sum_{\eta\in X^{*}}\left(c,\eta\right)\bar{\phi}_{d} \left(\eta\right)\left(\mathbf{1}\right). \tag{11}\]
Here, \(\bar{\phi}_{d}:\mathbb{R}\langle\langle X\rangle\rangle\longrightarrow \operatorname{End}\left(\mathbb{R}\langle\langle X\rangle\rangle\right)\) is an \(\mathbb{R}\)-algebra homomorphism such that
\[\bar{\phi}_{d}(x_{0})(e)=x_{0}e\]
and
\[\bar{\phi}_{d}(x_{i})(e)=x_{i}(d_{i}\mathbf{u}e).\]
Recall that \(\mathbb{R}\langle\langle X\rangle\rangle\) is an \(\mathbb{R}\)-algebra under Cauchy product and \(\operatorname{End}\left(\mathbb{R}\langle\langle X\rangle\rangle\right)\). The multiplicative mixed composition defined in (11) asserts that, for all \(\eta\in X^{*}\) and \(d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\),
\[\emptyset\curvearrowright d =\emptyset\] \[x_{0}\eta\curvearrowright d =x_{0}\left(\eta\curvearrowright d\right)\] \[x_{i}\eta\curvearrowright d =x_{i}\left(d_{i}\mathbf{u}(\eta\curvearrowright d))\qquad\quad \forall\,i=1,2,\ldots,m.\]
For later reference, we summarise the properties of (11) in the following
**Theorem 5.4**.: _The multiplicative mixed composition product (11) is linear in its left argument and \((c\curvearrowright d,\emptyset)=(c,\emptyset)\), for all \(c\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\) and \(d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\)._
The following results are already known in the single-input single-output (SISO) setting. However, their multi-input multi-output (MIMO) extensions are straightforward and to avoid reiteration of the proofs, only the statements are provided in this document. The foremost of the theorems asserts that the multiplicative mixed composition product distributes over shuffle product from the right.
**Theorem 5.5**.: _[_Gray & Ebrahimi-Fard(2017)_]_ _Let \(c,d\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\) and \(e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), then \((c\,\underline{\upmu}d)\curvearrowright e=(c\curvearrowright e)\,\underline{ \upmu}\left(d\curvearrowright e)\)._
The inference of Theorem 5.5 is that for any \(e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), the map \(\Gamma_{e}:\mathbb{R}^{p}\langle\langle X\rangle\rangle\longrightarrow\mathbb{R} ^{p}\langle\langle X\rangle\rangle\) given by \(d\mapsto d\curvearrowright e\) is an \(\mathbb{R}\)-algebra endomorphism on the shuffle algebra \(\mathbb{R}^{p}\langle\langle X\rangle\rangle\). The next lemma is essential in proving that multiplicative mixed composition product is a strong contraction map in its right argument in the ultrametric topology.
**Lemma 5.1**.: _[_Gray & Ebrahimi-Fard(2017)_]_ _Let \(\eta\in X^{*}\) and \(d,e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), then \(\kappa\left(\eta\curvearrowright d,\eta\curvearrowright e)\leq\sigma^{ \left\lceil\eta\right\rceil}\kappa\left(d,e\right)\) where \(\sigma\in[0,1[\)._
The following theorem states the strong contraction property of the multiplicative mixed composition product which is an essential result in Section 6.
**Theorem 5.6**.: _[_Gray & Ebrahimi-Fard(2017)_]_ _Let \(d,e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) and \(c\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\), then \(\kappa\left(c\curvearrowright d,c\curvearrowright e)\leq\sigma^{\operatorname {ord}(c^{\prime})}\kappa\left(d,e\right)\), where \(c^{\prime}=c-(c,\emptyset)\), the proper part of \(c\)._
Since \(\operatorname{ord}\left(c^{\prime}\right)\geq 1\) and \(\sigma\in]0,1[\), then from Theorem 5.6, the map \(\bar{\Gamma}_{c}:e\mapsto c\curvearrowright e\) is a strong contraction map in the ultrametric topology. The following lemma is essential in proving the mixed associativity of the composition and multiplicative mixed composition product. The result, along with Theorem 5.7 can be inferred in the SISO setting from Lemma 3.6 in [Gray & Ebrahimi-Fard(2017)], and its extension to the MIMO case is straightforward.
**Lemma 5.2**.: _[_Gray & Ebrahimi-Fard(2017)_]_ _Let \(X^{\prime}=\{x^{\prime}_{0},\ldots,x^{\prime}_{p}\}\) and \(\eta\in{X^{\prime}}^{*}\). Let \(d\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\) and \(e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), then \(\eta\circ\left(d\curvearrowright e\right)=\left(\eta\circ d\right) \curvearrowright e\)._
The following theorem states that the composition product and multiplicative mixed composition product are associative in combination.
**Theorem 5.7**.: _[_Gray & Ebrahimi-Fard(2017)_]_ _Let \(X^{\prime}=\{x^{\prime}_{0},\ldots,x^{\prime}_{p}\}\) and \(c\in\mathbb{R}^{q}\langle\langle X^{\prime}\rangle\rangle\). Let \(d\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\) and \(e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), then \(c\circ\left(d\curvearrowright e\right)=\left(c\circ d\right)\curvearrowright e\)._
### Multiplicative Dynamic Output Feedback Group
The dynamic multiplicative feedback group plays a vital role in computation of the multiplicative dynamic feedback formula, as well as in assessing the feedback as a group action in Section 6. Indeed, consider the cascade interconnection of two Chen-Fliess series \(F_{c}\) and \(F_{d}\) along with their multiplicative feedforward of inputs displayed in Figure 2, where \(c,d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\). The input-output relation of the composite system, \(u\mapsto y\) is \(u.F_{d}[u]F_{c}[u.F_{d}[u]]\) and can be represented by Chen-Fliess series as follows. Consider
\[u.F_{c\star d}[u]:=u.F_{d}[u]F_{c}[u.F_{d}[u]],\]
where the _multiplicative composition product_ of \(c\) and \(d\) is defined as
\[c\star d=d\,\underline{\upmu}\left(c\curvearrowright d\right). \tag{12}\]
The following theorems appeared in [Gray & Ebrahimi-Fard(2017)] in the SISO setting. We underline that the latter restriction is not essential, that is, the statements along with the proofs naturally extend to the MIMO setting.
Figure 1. Cascade connection of Chen–Fliess \(F_{d}\) with \(F_{c}\) along with multiplicative feedforward of input
**Theorem 5.8**.: _[_Gray & Ebrahimi-Fard(2017)_]_ _Let \(c,d,e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), then, \((c\star d)\star e=c\star(d\star e)\)._
Observe that (12) and Theorem 5.8 infer that \(\mathbb{R}^{m}\langle\langle X\rangle\rangle\) forms a non-commutative monoid under multiplicative composition product, with the identity element \(\,\mathrm{1\hskip-2.845276ptl}\). The following theorem states that the multiplicative mixed composition product is a right action on \(\mathbb{R}^{q}\langle\langle X\rangle\rangle\) by the monoid \((\mathbb{R}^{m}\langle\langle X\rangle\rangle,\star,\,\mathrm{1\hskip-2.845276ptl})\).
**Theorem 5.9**.: _[_Gray & Ebrahimi-Fard(2017)_]_ _Let \(c\in\mathbb{R}^{q}\langle\langle X\rangle\rangle\) and \(d,e\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), then \((c\curvearrowleft d)\curvearrowleft e=c\curvearrowleft(d\star e\right)\)._
The prominent question is to find the invertible elements of the monoid \((\mathbb{R}^{m}\langle\langle X\rangle\rangle,\star)\) and the motivation to find the unit elements of the monoid shall be evident in Section 6. Let \(d,e\in\mathbb{R}^{m}_{pi}\left\langle\langle X\rangle\rangle\right\rangle\) and suppose
\[d\star e=\,\mathrm{1\hskip-2.845276ptl}.\]
Observe that \(d\in\mathbb{R}^{m}_{pi}\left\langle\langle X\rangle\right\rangle\) implies \((d\curvearrowleft e)\in\mathbb{R}^{m}_{pi}\left\langle\langle X\rangle\right\rangle\) and using Theorem 5.5,
\[e=(d\curvearrowleft e)^{\,\mathbf{u}\!-\!1}=d^{\,\mathbf{u}\!-\!1} \curvearrowleft e.\]
Hence, for \(e\) to be right inverse of \(d\), the purely improper series \(e\) has to satisfy the fixed point equation
\[e=d^{\,\mathbf{u}\!-\!1}\curvearrowleft e \tag{13}\]
Observe from Theorem 5.6 that the map \(e\mapsto d^{\,\mathbf{u}\!-\!1}\curvearrowleft e\) is a strong contraction in the ultrametric space inferring that (13) has a unique fixed point. Suppose \(e\) is the left inverse of \(d\) viz. \(e\star d\), then a similar procedure shows that \(e\) has to satisfy the equation
\[d=e^{\,\mathbf{u}\!-\!1}\curvearrowleft d \tag{14}\]
Note that if \(e\) is a solution of (13), then \(e\) satisfies (14) and also the converse holds true. Hence, \(e\) is the unique inverse of \(d\) and is given the notation \(d^{\star-1}\) for \(d\in\mathbb{R}^{m}_{pi}\left\langle\langle X\rangle\right\rangle\). Thus, \(\mathbb{R}^{m}_{pi}\left\langle\langle X\rangle\right\rangle\) forms a group under multiplicative composition product, \(\star\), and is termed as the _multiplicative dynamic output feedback group_ and is formally stated in the following theorem.
**Theorem 5.10**.: \(\left(\mathbb{R}^{m}_{pi}\left\langle\langle X\rangle\right\rangle,\star\right)\) _forms a group with the identity element \(\,\mathrm{1\hskip-2.845276ptl}\)._
It is worth noting that [Gray & Ebrahimi-Fard(2017)] proved Theorem 5.10 for one-dimensional case viz. \(m=1\). In light of Theorem 5.10, Theorem 5.5 and (12) one obtains the following relations for \(c\in\mathbb{R}^{m}_{pi}\left\langle\langle X\rangle\right\rangle\):
\[c^{\star-1} =c^{\,\mathbf{u}\!-\!1}\curvearrowleft c^{\star-1}\] \[\left(c^{\star-1}\right)^{\,\mathbf{u}\!-\!1} =c\curvearrowleft c^{\star-1}. \tag{15}\]
The following lemma is essential in defining a subgroup of the multiplicative dynamic output feedback group upon which the computational framework for the multiplicative feedback products is discussed in Section 8.
Figure 2. Cascade connection of Chen–Fliess \(F_{d}\) with \(F_{c}\) along with multiplicative feedforward of their inputs.
**Lemma 5.3**.: _Let \(c,d\in\mathbb{R}_{pi}^{m}\left\langle\left\langle X\right\rangle\right\rangle\), then \(\left(c\star d,\emptyset\right)=\left(c,\emptyset\right)\left(d,\emptyset\right)\)._
_Proof:_ Observe from (12) that,
\[\left(c\star d,\emptyset\right) =\left(d\,\mathbf{u}\,\left(c\curvearrowleft d\right),\emptyset\right)\] \[=\left(c\curvearrowleft d,\emptyset\right)\left(d,\emptyset\right)\]
Since \(\left(c\curvearrowleft d,\emptyset\right)=\left(c,\emptyset\right)\),
\[\left(c\star d,\emptyset\right)=\left(c,\emptyset\right)\left(d,\emptyset \right).\]
Lemma 5.3 thus proves that the set of all series which are of the form \(\mbox{1}\hskip-2.845276pt\mbox{l}+c\), where \(c\) is a proper series, forms a subgroup of the multiplicative dynamic feedback group, which is stated in the following theorem.
**Theorem 5.11**.: _Let \(M=\left\{\mbox{1}\hskip-2.845276pt\mbox{l}+c\,:c\in\mathbb{R}_{p}^{m}\left\langle \left\langle X\right\rangle\right\rangle\right\}\), then \(\left(M,\star,\mbox{1}\hskip-2.845276pt\mbox{l}\right)\) forms a subgroup of the multiplicative dynamic feedback group._
The algorithmic framework for the computation of multiplicative feedback products is fundamentally based on the subgroup \(M\) as asserted in Theorem 5.11. The group \(M\) is isomorphic to the character group of the Hopf algebra \(H\) which is used for computation of feedback and the framework is explained in detail in Section 8.
## 6. Chen-Fliess Series Under Multiplicative Dynamic Output Feedback
Let \(F_{c}\) be a Chen-Fliess series with a generating series \(c\in\mathbb{R}^{q}\langle\left\langle X\right\rangle\rangle\). Assume it is interconnected with a Chen-Fliess series \(F_{d}\) with a purely improper generating series \(d\in\mathbb{R}_{pi}^{m}\left\langle\left\langle X^{\prime}\right\rangle\right\rangle\), as shown in Figure 3. Note that, \(\left|X\right|=m+1\) and \(\left|X^{\prime}\right|=q+1\). The primary goal of this section is to show that the closed-loop system has a Chen-Fliess series representation, say \(y=F_{e}[v]\), where \(e\in\mathbb{R}^{q}\langle\left\langle X\right\rangle\rangle\). If this is the case, then necessarily
\[y =F_{e}[v]=F_{c}[u]=F_{c}[vF_{d}[y]]\] \[=F_{c}[vF_{d}[F_{e}[v]]]=F_{c}[vF_{doe}[v]]\] \[=F_{c\curvearrowleft\{doe\}}[v]\]
for any admissible input \(v\). Therefore, the series \(e\) has to satisfy the fixed point equation
\[e=c\curvearrowleft(d\circ e\right). \tag{16}\]
Observe that, in light of Theorem 5.3 and Theorem 5.6 the map \(e\mapsto c\curvearrowleft(d\circ e\right)\) is a strong contraction map in the ultrametric space and thus (16) has a unique fixed point. The following thoerem establishes the first main result of this section, which follows immediately.
**Theorem 6.1**.: _The series \(c\curvearrowleft(d^{\mathbf{u}-1}\circ c\right)^{\star-1}\in\mathbb{R}^{q} \langle\left\langle X\right\rangle\rangle\) is the unique fixed point of the map \(e\mapsto c\curvearrowleft(d\circ e\right)\)._
_Proof:_ If \(e:=c\curvearrowleft(d^{\mathbf{u}-1}\circ c)^{\star-1}\), then
\[c\curvearrowleft(d\circ e\right)=c\curvearrowleft\left[d\circ\left(c \curvearrowleft(d^{\mathbf{u}-1}\circ c\right)^{\star-1}\right)\right]\]
Using Theorem 5.7 and then Theorem 5.5,
\[c\curvearrowleft(d\circ e\right)=c\curvearrowleft\left[(d\circ c) \curvearrowleft(d^{\mathbf{u}-1}\circ c\right)^{\star-1}\right]\]
\[=c\curvearrow\left[(d\circ c)^{\,\boldsymbol{\upmu}-1}\curvearrow\left(d^{\, \boldsymbol{\upmu}-1}\circ c\right)^{\star-1}\right]^{\boldsymbol{\upmu}-1}.\]
Using Theorem 5.1,
\[c\curvearrow(d\circ e)=c\curvearrow\left[\left(d^{\,\boldsymbol{\upmu}-1} \circ c\right)\curvearrow\left(d^{\,\boldsymbol{\upmu}-1}\circ c\right)^{ \star-1}\right]^{\boldsymbol{\upmu}-1}.\]
Using the relations (15),
\[c\curvearrow(d\circ e) =c\curvearrow\left[\left(\left(d^{\,\boldsymbol{\upmu}-1}\circ c \right)^{\star-1}\right)^{\boldsymbol{\upmu}-1}\right]^{\boldsymbol{\upmu}-1}\] \[=c\curvearrow\left(d^{\,\boldsymbol{\upmu}-1}\circ c\right)^{ \star-1}=e.\]
**Theorem 6.2**.: _Given a series \(c\in\mathbb{R}^{q}\langle\langle X\rangle\rangle\) and a purely improper series \(d\in\mathbb{R}^{m}_{pi}\langle\langle X^{\prime}\rangle\rangle\) (such that \(|X|=m+1\) and \(|X^{\prime}|=q+1\)), then the generating series for the closed-loop system in Figure 3 is given by the multiplicative dynamic feedback product \(c\bar{\otimes}d:=c\curvearrow(d^{\,\boldsymbol{\upmu}-1}\circ c)^{\star-1}\)._
The notion that feedback can described mathematically as a transformation group acting on the plant is well established in control theory [1]. The following theorem describes the situation in the present context.
**Theorem 6.3**.: _The multiplicative dynamic feedback product is a right group action by the multiplicative group \(\left(\mathbb{R}^{m}_{pi}\langle\langle X^{\prime}\rangle\rangle,\,\boldsymbol {\upmu},\,\boldsymbol{1}\right)\) on the set \(\mathbb{R}^{q}\langle\langle X\rangle\rangle\), where \(|X|=m+1\) and \(|X^{\prime}|=q+1\)._
_Proof:_ Let \(c\in\mathbb{R}^{q}\langle\langle X\rangle\rangle\). Observe that from Theorem 6.2,
\[c\bar{\otimes}\,\boldsymbol{1} =c\curvearrow\left(\boldsymbol{1}^{\,\boldsymbol{\upmu}-1}\circ c \right)^{\star-1}\] \[=c\curvearrow\boldsymbol{1}=c.\]
Let \(d_{1},d_{2}\in\mathbb{R}^{m}_{pi}\langle\langle X^{\prime}\rangle\rangle\). It needs to be proven that \(\left(c\bar{\otimes}d_{1}\right)\bar{\otimes}d_{2}=c\bar{\otimes}\left(d_{1} \,\boldsymbol{\upmu}d_{2}\right)\). From Theorem 6.2, observe that
\[\left(c\bar{\otimes}d_{1}\right)\bar{\otimes}d_{2} =\left(c\bar{\otimes}d_{1}\right)\curvearrow\left(d_{2}^{\, \boldsymbol{\upmu}-1}\circ\left(c\bar{\otimes}d_{1}\right)\right)^{\star-1}\] \[=\left(c\curvearrow\left(d_{1}^{\,\boldsymbol{\upmu}-1}\circ c \right)^{\star-1}\right)\curvearrow\left(d_{2}^{\,\boldsymbol{\upmu}-1}\circ \left(c\curvearrow\left(d_{1}^{\,\boldsymbol{\upmu}-1}\circ c\right)^{\star-1} \right)\right)^{\star-1}.\]
Applying Theorem 5.7,
Figure 3. Chen–Fliess series \(F_{c}\) in multiplicative output feedback with Chen-Flies series \(F_{d}\)
Applying Theorem 5.9 and fact that the group inverse is anti-homomorphism with respect to the group product,
\[\left(c\bar{\mathbb{a}}d_{1}\right)\bar{\mathbb{a}}d_{2} =c\curvearrow\left[\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c \right)^{\star-1}\star\left(\left(d_{2}^{\boldsymbol{\shuffle}-1}\circ c \right)\curvearrow\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c\right)^{\star-1 }\right)^{\star-1}\right]\] \[=c\curvearrow\left[\left(\left(d_{2}^{\boldsymbol{\shuffle}-1} \circ c\right)\curvearrow\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c\right)^{ \star-1}\right)\star\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c\right)\right] ^{\star-1}.\]
Applying (12),
Using Theorem 5.9,
\[\left(c\bar{\mathbb{a}}d_{1}\right)\bar{\mathbb{a}}d_{2} =c\curvearrow\left[\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c \right)\boldsymbol{\shuffle}\left(\left(d_{2}^{\boldsymbol{\shuffle}-1}\circ c \right)\curvearrow\left(\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c\right)^ {\star-1}\star\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c\right)\right) \right)\right]^{\star-1}\] \[=c\curvearrow\left(\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c \right)\boldsymbol{\shuffle}\left(\left(d_{2}^{\boldsymbol{\shuffle}-1}\circ c \right)\curvearrow\mathds{1}\right)\right)^{\star-1}\] \[=c\curvearrow\left(\left(d_{1}^{\boldsymbol{\shuffle}-1}\circ c \right)\boldsymbol{\shuffle}\left(d_{2}^{\boldsymbol{\shuffle}-1}\circ c \right)\right)^{\star-1}.\]
In light of Theorem 5.1,
\[\left(c\bar{\mathbb{a}}d_{1}\right)\bar{\mathbb{a}}d_{2} =c\curvearrow\left(\left(d_{1}^{\boldsymbol{\shuffle}-1} \boldsymbol{\shuffle}d_{2}^{\boldsymbol{\shuffle}-1}\right)\circ c\right)^{ \star-1}\] \[=c\curvearrow\left(\left(d_{1}\boldsymbol{\shuffle}d_{2}\right)^{ \boldsymbol{\shuffle}-1}\circ c\right)^{\star-1}.\]
Therefore,
\[\left(c\bar{\mathbb{a}}d_{1}\right)\bar{\mathbb{a}}d_{2}=c\bar{\mathbb{a}} \left(d_{1}\boldsymbol{\shuffle}d_{2}\right).\]
It is worth noting that for the _additive dynamic feedback product_ the transformation group is the additive group \((\mathbb{R}^{m}\langle\langle X^{\prime}\rangle\rangle,+,0)\) while here \(\left(\mathbb{R}^{m}_{pi}\langle\langle X^{\prime}\rangle\rangle,\boldsymbol{ \shuffle},\mathds{1}\right)\) plays the role.
## 7. Invariance of Class and Relative Degree under multiplicative dynamic feedback connection
The notion of relative degree of a plant is very essential and prime in the studies of feedback linearization [11], flatness and system inversion etc. The existence and quantification of relative degree of a interconnection of systems is vital in systems theory. The notion of class and relative degree of a SISO Chen-Fliess series is equivalently characterized by the notion of relative degree of its generating series and the definition was furnished in [10], Gray & Venkatesh(2019)] and the existence and quantification of relative degree of interconnected system of Chen-Fliess series was described in [10], Venkatesh(2021)]. In addition, this definition of relative degree is consistent with the classical definition whenever \(y=F_{c}[u]\) has an input-affine analytic state space realization [10], Gray & Ebrahimi-Fard(2017)]. Let \(X=\{x_{0},x_{1}\}\) and the following definition explains the concept of a class, a weaker notion than the relative degree of a series in \(\mathbb{R}\langle\langle X\rangle\rangle\).
**Definition 7.1**.: [10] _A series \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) is said to be of r-class, denoted by \(\mathscr{C}(c)=r\), if \(\mathrm{supp}(c_{F})\subseteq x_{0}^{r-1}X^{+}\) and \(\mathrm{supp}(c_{F})\nsubseteq x_{0}^{r}X^{+}\). By definition, let \(\mathscr{C}(c)=\infty\) if \(c_{F}=0\)._
The notion of class is _universal_ and is versed in the following theorem.
**Lemma 7.1**.: [Gray & Venkatesh(2019)] _Every series \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) has a class._
Definition 7.1 of class is illustrated in the following example.
**Example 7.1**.: _Let \(c=1+x_{0}x_{1}^{2}+x_{0}^{2}x_{1}\), so that \(c_{F}=x_{0}x_{1}^{2}+x_{0}^{2}x_{1}\). Observe that \(\operatorname{supp}(c_{F})\subseteq x_{0}X^{+}\) but \(\operatorname{supp}(c_{F})\nsubseteq x_{0}^{2}X^{+}\). Thus, \(\mathscr{C}(c)=2\)._
The following lemma is essential in the proof of quantification of class for the multiplicative mixed composition product.
**Lemma 7.2**.: _Let \(c,c^{\prime},d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) such that \(\operatorname{supp}\left(c^{\prime}\right)\not\subseteq x_{0}X^{*}\). Then the following statements are true:_
1. \(x_{0}^{k}\curvearrowright d=x_{0}^{k}\;\;\forall k\in\mathbb{N}_{0}\)_._
2. \(c_{N}\curvearrowright d=c_{N}\) _where_ \(c_{N}\) _is the natural part of the series_ \(c\)_._
3. \(\operatorname{supp}\left(c^{\prime}\curvearrowright d\right)\not\subseteq x _{0}X^{*}\)_._
_Proof:_
1. The proof is by induction on \(k\in\mathbb{N}_{0}\). The base case being \(k=0\) is true viz \(\emptyset\curvearrowright d=\emptyset\) from (11). Assume the proposition is true for \(k=n-1\), then using (11) \[x_{0}^{n}\curvearrowright d=x_{0}\left(x_{0}^{n-1}\curvearrowright d\right)=x _{0}\left(x_{0}^{n-1}\right)=x_{0}^{n}.\] Hence proved by induction on \(\mathbb{N}_{0}\).
2. Observe that from Definition 2.1, \(\operatorname{supp}\left(c_{N}\right)\subseteq\{x_{0}^{k}:k\in\mathbb{N}_{0}\}\). Thus, using the previous statement (1) and Theorem 5.4 it follows that \(c_{N}\curvearrowright d=c_{N}\).
3. Since \(\operatorname{supp}\left(c^{\prime}\right)\not\subseteq x_{0}X^{*}\), there exists a word \(x_{i}\eta\in\operatorname{supp}\left(c^{\prime}\right)\) where \(x_{i}\neq x_{0}\) and \(\eta\in X^{*}\). Using (11), \[x_{i}\eta\curvearrowright d=x_{i}\left(d_{i}\shuffle\left(\eta\curvearrowright d \right)\right).\] Thus, \(\operatorname{supp}\left(x_{i}\eta\curvearrowright d\right)\subseteq x_{i}X^ {*}\), where \(x_{i}\neq x_{0}\). Therefore, \(\operatorname{supp}\left(c^{\prime}\curvearrowright d\right)\not\subseteq x _{0}X^{*}\).
The following theorem quantifies that class is invariant under the multiplicative mixed composition product
**Theorem 7.1**.: _Let \(c,d\in\mathbb{R}\langle\langle X\rangle\rangle\), then \(\mathscr{C}\left(c\curvearrowright d\right)=\mathscr{C}\left(c\right)\)._
_Proof:_ Suppose the series \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) is of \(r\)-class, then the series \(c\) can be written as:
\[c=c_{N}+x_{0}^{r-1}c^{\prime},\]
where \(c^{\prime}\) is a proper series such that \(\operatorname{supp}\left(c^{\prime}\right)\not\subseteq x_{0}X^{*}\). Hence by Theorem 5.4,
\[c\curvearrowright d=\left(c_{N}\curvearrowright d\right)+\left(x_{0}^{r-1}c^{ \prime}\curvearrowright d\right).\]
Using (11),
\[c\curvearrowright d=\left(c_{N}\curvearrowright d\right)+x_{0}^{r-1}\left(c^{ \prime}\curvearrowright d\right).\]
Since \(\operatorname{supp}\left(c^{\prime}\right)\not\subseteq x_{0}X^{*}\), then by applying Lemma 7.2,
\[c\curvearrowright d=c_{N}+x_{0}^{r-1}\left(c^{\prime}\curvearrowright d\right),\]
with \(\operatorname{supp}\left(c^{\prime}\curvearrowleft d\right)\not\subseteq x_{0}X^{*}\). Given that \(c^{\prime}\in\mathbb{R}_{p}\left\langle\left\langle X\right\rangle\right\rangle\), whence \(\operatorname{supp}\left(c\curvearrowleft d\right)_{F}\subseteq x_{0}^{r-1} X^{+}\) and \(\operatorname{supp}\left(c\curvearrowleft d\right)_{F}\not\subseteq x_{0}^{r}X^{+}\). Therefore, \(\mathscr{C}\left(c\curvearrowleft d\right)=r=\mathscr{C}\left(c\right)\).
**Example 7.2**.: _Consider the series \(c\) in Example 7.1, given by \(c=1+x_{0}^{2}x_{1}+x_{0}x_{1}^{2}\) and \(d=1+x_{1}\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\). Using (11), the multiplicative mixed composition product of \(c\) and \(d\) is computed as:_
\[c\curvearrowleft d=1+x_{0}x_{1}^{2}+3x_{0}x_{1}^{3}+3x_{0}x_{1}^{4}+x_{0}^{2}x _{1}+x_{0}^{2}x_{1}^{2}.\]
_Observe that \(\mathscr{C}\left(c\curvearrowleft d\right)=2=\mathscr{C}\left(c\right)\), as in Example 7.1. _
The following theorem asserts that class of a series is preserved under the multiplicative dynamic feedback product which is one of the prime goal of this subsection.
**Theorem 7.2**.: _If \(c\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\) with \(\mathscr{C}\left(c\right)=r\), and \(d\in\mathbb{R}_{pi}\left\langle\left\langle X\right\rangle\right\rangle\), then \(\mathscr{C}\left(c\bar{\operatorname{o}}d\right)=r=\mathscr{C}\left(c\right)\)._
_Proof:_ From Theorem 6.2,
\[c\bar{\operatorname{o}}d=c\curvearrowleft\left(d^{\shuffle-1}\circ c\right)^{ \star-1}.\]
Since \(\mathscr{C}\left(c\right)=r\), whence applying Theorem 7.1,
\[\mathscr{C}\left(c\bar{\operatorname{o}}d\right) =\mathscr{C}\left(c\curvearrowleft\left(d^{\shuffle-1}\circ c \right)^{\star-1}\right)\] \[=r=\mathscr{C}\left(c\right).\]
The preservation of class under the multiplicative dynamic feedback connections as asserted in Theorem 7.2 is further illustrated in the following example.
**Example 7.3**.: _Let \(c,d\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\)\(c=x_{1}\) and \(d=1+\sum_{k\in\mathbb{N}}k!x_{1}^{k}\). Note that the class of series \(\mathscr{C}\left(c\right)=1\). Using Theorem 6.2 the multiplicative feedback product is computed as:_
\[c\bar{\operatorname{o}}d=x_{1}+x_{1}x_{0}x_{1}+3x_{1}x_{0}x_{1}x_{0}x_{1}+4x_ {1}x_{0}^{2}x_{1}^{2}+\cdots.\]
_Infer from Definition 7.1 that \(\mathscr{C}\left(c\bar{\operatorname{o}}d\right)=\mathscr{C}\left(c\right)=1\). _
Finally, the main definition of the section details the concept of relative degree in the context of Chen-Fliess series which is characterized on its generating series.
**Definition 7.2**.: [Gray & Venkatesh(2019)] _A series \(c\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\) has relative degree \(r\) if \(\mathscr{C}(c)=r\) and the word \(x_{0}^{r-1}x_{1}\in\operatorname{supp}(c_{F})\). Otherwise, \(c\) does not have relative degree._
The following theorem asserts the quantification of relative degree under multiplicative mixed composition product.
**Theorem 7.3**.: _If \(c\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\) with relative degree \(r_{c}\) and \(d\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\) be non-proper, then \(c\curvearrowleft d\) has relative degree \(r_{c}\)._
_Proof:_ From Theorem 7.1, \(\mathscr{C}\left(c\curvearrowleft d\right)=r_{c}\). It remains to prove that \(x_{0}^{r_{c}-1}x_{1}\in\operatorname{supp}\left(c\curvearrowleft d\right)\).
Given that \(c\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\) has relative degree \(r_{c}\), then \(c\) can be decomposed as:
\[c=c_{N}+\lambda x_{0}^{r_{c}-1}x_{1}+x_{0}^{r_{c}-1}c^{\prime},\]
where \(\lambda\neq 0\) and \(c^{\prime}\) is a proper series such that \(x_{1}\not\in\operatorname{supp}\left(c^{\prime}\right)\). Then,
\[c\curvearrowleft d=\left(c_{N}+\lambda x_{0}^{r_{c}-1}x_{1}+x_{0}^{r_{c}-1}c^ {\prime}\right)\curvearrowleft d.\]
Applying Theorem 5.4,
\[c\curvearrowleft d=\left(c_{N}\curvearrowleft d\right)+\lambda\left(x_{0}^{r_{ c}-1}x_{1}\curvearrowleft d\right)+\left(x_{0}^{r_{c}-1}c^{\prime}\curvearrowleft d \right)\right.\]
Using (11) and Lemma 7.2,
\[c\curvearrowright d=c_{N}+\lambda x_{0}^{r_{c}-1}x_{1}d+x_{0}^{r_{c}-1}\left(c^{ \prime}\curvearrowright d\right).\]
Since \(d\in\mathbb{R}_{pi}\left\langle\left\langle X\right\rangle\right\rangle \longrightarrow d=\alpha+d^{\prime}\), where \(\alpha\neq 0\) and \(d^{\prime}\) is a proper series. Hence,
\[c\curvearrowright d=c_{N}+\lambda\alpha x_{0}^{r_{c}-1}x_{1}+x_{0}^{r_{c}-1}x _{1}d^{\prime}+x_{0}^{r_{c}-1}\left(c^{\prime}\curvearrowright d\right).\]
Observe from (11), \(x_{1}\not\in\operatorname{supp}\left(c^{\prime}\right)\Longrightarrow x_{1} \not\in\operatorname{supp}\left(c^{\prime}\curvearrowright d\right)\) and also that \(\alpha\lambda\neq 0\).
Therefore \(x_{0}^{r_{c}-1}x_{1}\in\operatorname{supp}\left(c\curvearrowright d\right)\), whence the relative degree of \(c\curvearrowright d\) is \(r_{c}\), when \(d\) is a non-proper series.
The following example illustrates the statement from Theorem 7.3.
**Example 7.4**.: _Let \(c=1+x_{0}^{2}+x_{0}x_{1}+x_{0}^{2}x_{1}\) and \(d=1+x_{1}\). Observe that by Definition 7.2, the relative degree of \(c\) is \(r_{c}=2\) and also that \(d\) is non-proper. The multiplicative mixed composition product of \(c\) and \(d\) to computed as:_
\[c\curvearrowright d=1+x_{0}^{2}+x_{0}x_{1}+x_{0}x_{1}^{2}+x_{0}^{2}x_{1}+x_{0} ^{2}x_{1}^{2}.\]
_Using Definition 7.2, note that the relative degree of \(c\curvearrowright d\) is \(2=r_{c}\)._
The following theorem is the prime objective of this section stating that the relative degree of a series remains invariant under multiplicative dynamic feedback product.
**Theorem 7.4**.: _If \(c\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\) with relative degree \(r_{c}\) and \(d\in\mathbb{R}_{pi}\left\langle\left\langle X\right\rangle\right\rangle\), then the relative degree of \(\left(c\bar{\operatorname{@}}d\right)\) is \(r_{c}\)._
Proof.: Since \(c\in\mathbb{R}\langle\left\langle X\right\rangle\rangle\) and \(d\in\mathbb{R}_{pi}\left\langle\left\langle X\right\rangle\right\rangle\), then by Theorem 6.2,
\[c\bar{\operatorname{@}}d=c\curvearrowright\left(d^{\boldsymbol{\upmu}-1} \circ c\right)^{\star-1}.\]
Observe that \(d\in\mathbb{R}_{pi}\left\langle\left\langle X\right\rangle\right\rangle \Leftrightarrow d^{\boldsymbol{\upmu}-1}\in\mathbb{R}_{pi}\left\langle\left\langle X \right\rangle\right\rangle\). Then by Theorem 5.2\(\left(d^{\boldsymbol{\upmu}-1}\circ c\right)\in\mathbb{R}_{pi}\left\langle \left\langle X\right\rangle\right\rangle\). As per Theorem 5.10, the group inverse
\[\left(d^{\boldsymbol{\upmu}-1}\circ c\right)^{\star-1}\in\mathbb{R}_{pi}\left \langle\left\langle X\right\rangle\right\rangle.\]
Hence by Theorem 7.3,
\[c\bar{\operatorname{@}}d=c\curvearrowright\left(d^{\boldsymbol{\upmu}-1} \circ c\right)^{\star-1}.\]
has relative degree \(r_{c}\).
The invariance of the relative degree of a Chen-Fliess series under multiplicative dynamic feedback connections as stated in Theorem 7.4 is illustrated through the following example.
**Example 7.5**.: _Consider the Example 7.3 again where \(c=x_{1}\) and \(d=1+\sum_{k\in\mathbb{N}}k!x_{1}^{k}\). Observe that by Definition 7.2, the relative degree of \(c\) is \(r_{c}=1\). The multiplicative feedback product is computed as:_
\[c\bar{\operatorname{@}}d=x_{1}+x_{1}x_{0}x_{1}+3x_{1}x_{0}x_{1}x_{0}x_{1}+4x_{ 1}x_{0}^{2}x_{1}^{2}+\cdots\]
_Infer that the relative degree of \(c\bar{\operatorname{@}}d=1=r_{c}\) as stated in Theorem 7.4._
## 8. Computational Framework for Multiplicative Mixed Composition & Dynamic Feedback Product
The goal of this section is to describe the computational framework for multiplicative dynamic feedback product as explained in Section 6. The section further illustrates the framework with examples but prior to that it is imperative to understand the dual bialgebra and Hopf algebra constructions corresponding to the multiplicative dynamic output feedback group.
### Hopf Algebra Corresponding to the Multiplicative Dynamic Feedback Subgroup
The goal of the subsection is to construct a dual Hopf algebra reflecting the group structure of the multiplicative dynamic feedback subgroup \(M\) as asserted in Theorem 5.11. The group inverse is computed the antipode of the constructed Hopf algebra and thus provides a computational framework to compute the multiplicative dynamic feedback group inverse. As a recall, the group \(M\) is defined as
\[M=\{\,\mbox{1}\!\mbox{l}+d\,:d\in\mathbb{R}_{p}^{m}\,\langle\langle X\rangle \rangle\},\]
where \(\mbox{1}\!\mbox{l}=\left[1\cdots 1\,1\right]^{T}\in\mathbb{R}^{m}\). In light of Theorem 5.11, \((M,\star)\) forms a subgroup of the multiplicative dynamic feedback group. The algebra structure is same as the algebra of \(H_{\mathbf{\mathfrak{u}}}\) in Section 4.1. Let the set \(W_{b}\subset\mathbb{R}^{m}\langle\langle X\rangle\rangle^{*}\) (dual module of \(\mathbb{R}^{m}\langle\langle X\rangle\rangle\)) be defined as the collection of coordinate maps defined as:
\[W_{b}=\{a_{\eta}\,:a_{\eta}(c)=(c,\eta)\,:\,\eta\in X^{*}\},\]
where \(c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\). Define \(W\) to be the free \(\mathbb{R}^{m}\)-module spanned by the set \(W_{b}\). Let \(H\) denote the reduced symmetric algebra generated by the module \(W\). The unit map \(\xi:\mathbb{R}^{m}\longrightarrow W\) is defined by \(\xi(\mbox{1}\!\mbox{l})=a_{\emptyset}\). Note that \(a_{\emptyset}\,(c)=\mbox{1}\!\mbox{l}\,\forall c\in M\). By construction \(H\) is an \(\mathbb{R}^{m}\)-associative, commutative and unital algebra with addition, scalar multiplication and product defined, respectively, as
\[(a_{\eta}+a_{\zeta})(c) =a_{\eta}(c)+a_{\zeta}(c)\] \[(ka_{\eta})(c) =k(a_{\eta}^{i}(c))\] \[\boldsymbol{m}(a_{\eta},a_{\zeta})(c) =a_{\eta}(c)a_{\zeta}(c),\]
where \(c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\). Then \(H\) is given a coproduct \(\Delta_{H}:H\longrightarrow H\bigotimes H\) such that for all \(c,d\in M\): \(\Delta_{H}a_{\eta}^{i}(c,d)=a_{\eta}^{i}(c\star d)=\left(\left(c\star d\right) _{i},\eta\right)\,\forall\eta\in X^{+}\). The counit map \(\epsilon:H\longrightarrow\mathbb{R}\) is defined as
\[\epsilon(h)=\begin{cases}\mbox{1}\!\mbox{l}\,:\,h=a_{\emptyset}\\ 0\,:\,\mbox{otherwise.}\end{cases}\]
Since \(\circ\) is associative (from Theorem 5.8), thus by the dual the coproduct \(\Delta_{H}\) is coassociative. Therefore, \((H,\boldsymbol{m},\xi,\Delta_{H},\epsilon)\) forms a \(\mathbb{R}^{m}\)-bialgebra. Owing to the group structure of \((M,\circ)\), the bialgebra \(H\) is equipped with antipode \(S\) defined as:
\[Sa_{\eta}\,(c)=a_{\eta}\left(c^{\star-1}\right)=\left(c^{\star-1},\eta\right),\]
for all \(i=1,2,\ldots,m\) and \(\eta\in X^{+}\). Hence \(H\) is a \(\mathbb{R}^{m}\)-Hopf algebra. The computation of coproduct \(\Delta_{H}\) is well-understood through the right coaction of Hopf algebra \(H\) on the Hopf algebra \(H_{\mathbf{\mathfrak{u}}}\). Prior to that, it is imperative to understand the right coaction of Hopf algebra \(H\) on the non-unital algebra of coordinate functions.
### Coaction of Hopf algebra \(H\) on Algebra of Coordinate Map
The subsection explains the coaction of the Hopf algebra \(H\) defined in Section 8.1 on the algebra of coordinate functions. The results in this subsection are utilized subsequently to explain the coaction of \(H\) on the bialgebra \(H_{\boldsymbol{\mathfrak{u}}}\), particularly in proofs in Section 8.3. The right coaction of the Hopf algebra \(H\) is on \(\mathbb{R}^{m}\)-algebra of coordinate maps \(S^{+}\left(W\right)\) constructed in Section 4.3.
The right coaction map \(\tilde{\Delta}:S^{+}\left(W\right)\longrightarrow S^{+}\left(W\right) \bigotimes H\) is defined such that for all \(c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), \(d\in M\) and \(\eta\in X^{*}\),
\[\tilde{\Delta}a_{\eta}\left(c,d\right)=\left(c\curvearrowright d,\eta\right). \tag{17}\]
The map \(\tilde{\Delta}\) being a right coaction map is a reflection of Theorem 5.9. It remains to show how the coaction map \(\tilde{\Delta}\) is computed on \(S^{+}(W)\), for which it is sufficient to define its computation on the module \(W\). Observe that for all \(a_{\eta}\in W\),
\[\tilde{\Delta}a_{\eta}=\left[\tilde{\Delta}\circ\pi^{1}\ \tilde{\Delta} \circ\pi^{2}\ \cdots\ \tilde{\Delta}\circ\pi^{m}\right]^{t}a_{\eta}.\]
On the dual side, the above statement infers that for all \(c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), \(d\in M\) and \(\eta\in X^{*}\),
\[\left(c\curvearrowright d,\eta\right)=\left[\left(\left(c\curvearrowright d \right)_{1},\eta\right)\,\cdots\left(\left(c\curvearrowright d\right)_{m}, \eta\right)\right]^{t}.\]
Hence, the notation \(\tilde{\Delta}a_{\eta}^{i}:=\tilde{\Delta}\circ\pi^{i}a_{\eta}\) for all \(\eta\in X^{*}\) and \(i=1,2,\ldots,m\). The following proposition provides a recursive definition to compute \(\tilde{\Delta}\) on the module \(V\) viz to compute the \(\tilde{\Delta}\left(a_{\eta}^{j}\right)\,\forall\eta\in X^{*}\) and \(j=1,2,\ldots,m\).
**Proposition 8.1**.: _For all \(i=1,\ldots,m\):_
1. \(\tilde{\Delta}a_{\emptyset}^{i}=a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}\)_._
2. \(\tilde{\Delta}\circ\theta_{0}=\left(\theta_{0}\otimes\mathbf{id}\right)\circ \tilde{\Delta}\)_._
3. \(\tilde{\Delta}\circ\theta_{i}=\left(\theta_{i}\otimes\boldsymbol{m}\right) \circ\left(\tilde{\Delta}\otimes\mathbf{id}\right)\circ\rho_{\boldsymbol{ \mathfrak{u}}\boldsymbol{\mathfrak{u}}}^{i}\)_,_
_where \(\rho_{\boldsymbol{\mathfrak{u}}\boldsymbol{\mathfrak{u}}}\) is the coaction map of Hopf algebra \(H_{\boldsymbol{\mathfrak{u}}}\) on \(S^{+}\left(W\right)\) as defined in Section 4.3._
_Proof:_ Observe that \(\forall c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) and \(d\in M\),
\[c=\left(c,\emptyset\right)+\sum_{j=0}^{m}x_{j}\left(x_{j}^{-1}\left(c\right) \right).\]
Hence by Theorem 5.4,
\[c\curvearrowright d=\left(c,\emptyset\right)+x_{0}\left(x_{0}^{-1}\left(c \right)\curvearrowright d\right)+\sum_{j=1}^{m}x_{j}\left(d_{j}\boldsymbol{ \mathfrak{u}}\left(x_{j}^{-1}\left(c\right)\curvearrowright d\right)\right). \tag{18}\]
The proof for each of the statement as follows:
1. Let \(c,d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\). From (17) and (18), \[\tilde{\Delta}a_{\emptyset}^{i}\left(c,d\right) =\left(\left(c\curvearrowright d\right)_{i},\emptyset\right)\] \[=\left(c_{i}\curvearrowright d,\emptyset\right)=\left(c_{i}, \emptyset\right).1=a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}\left(c,d\right).\] Therefore, \(\tilde{\Delta}a_{\emptyset}^{i}=a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}\).
2. Let \(c,d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\), \(\eta\in X^{*}\) and \(\forall\,j=1,2,\ldots,m\). Then, \[\left(\tilde{\Delta}\circ\theta_{0}\right)a_{\eta}^{j}\left(c,d\right) =\left(\left(c\curvearrowleft d\right)_{j},x_{0}\eta\right)\] \[=\left(x_{0}^{-1}\left(c\curvearrowleft d\right)_{j},\eta\right)\] From (18), \[\left(\tilde{\Delta}\circ\theta_{0}\right)a_{\eta}^{j}\left(c,d\right) =\left(x_{0}^{-1}\left(c_{j}\right)\curvearrowleft d,\eta\right)\] \[=\tilde{\Delta}a_{\eta}^{j}\left(x_{0}^{-1}\left(c\right),d\right)\] \[=\left(\theta_{0}\otimes\mathbf{id}\right)\circ\tilde{\Delta}a_{ \eta}\left(c,d\right).\] Therefore, \(\tilde{\Delta}\circ\theta_{0}=\left(\theta_{0}\otimes\mathbf{id}\right)\circ \tilde{\Delta}\).
3. Let \(c,d\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) and \(\eta\in X^{*}\). Then \(\,\forall\,i,j=1,2,\ldots,m\), \[\left(\tilde{\Delta}\circ\theta_{i}\right)a_{\eta}^{j}\left(c,d\right) =\left(\left(c\curvearrowleft d\right)_{j},x_{i}\eta\right)\] \[=\left(x_{i}^{-1}\left(c\curvearrowleft d\right)_{j},\eta\right)\] From (18), \[\left(\tilde{\Delta}\circ\theta_{i}\right)a_{\eta}^{j}\left(c,d\right) =\left(d_{i}\underrightarrow{\mathbf{u}}x_{i}^{-1}\left(c_{j} \right)\curvearrowleft d,\eta\right)\] \[=\rho_{\mathbf{u}}^{i}a_{\eta}^{j}\left(x_{i}^{-1}\left(c \right)\curvearrowleft d\right)\] \[=\rho_{\mathbf{u}}^{i}a_{\eta}^{j}\left(x_{i}^{-1}\left(c \right)\curvearrowleft d,d\right)\] \[=\left(\tilde{\Delta}\otimes\mathbf{id}\right)\circ\rho_{ \mathbf{u}}^{i}a_{\eta}^{j}\left(x_{i}^{-1}\left(c\right),d,d\right)\] \[=\left(\theta_{i}\otimes\mathbf{m}\right)\circ\left(\tilde{\Delta} \otimes\mathbf{id}\right)\circ\rho_{\mathbf{u}}^{i}a_{\eta}^{j}\left(c,d \right).\] Therefore, \(\tilde{\Delta}\circ\theta_{i}=\left(\theta_{i}\otimes\mathbf{m}\right)\circ \left(\tilde{\Delta}\otimes\mathbf{id}\right)\circ\rho_{\mathbf{u}}^{i}\)\(\forall i=1,2,\ldots,m\).
**Example 8.1**.: _A few examples of the computation of \(\tilde{\Delta}\) on \(V\) using Proposition 8.1 are given as follows(indices \(i,j,k=1,2,\ldots,m\).):_
\[\tilde{\Delta}a_{\emptyset}^{i} =a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}.\] \[\tilde{\Delta}a_{x_{0}}^{i} =a_{x_{0}}^{i}\otimes a_{\emptyset}^{i}.\] \[\tilde{\Delta}a_{x_{i}}^{j} =a_{x_{i}}^{j}\otimes a_{\emptyset}^{i}.\] \[\tilde{\Delta}a_{x_{0}}^{i} =a_{x_{0}}^{i}\otimes a_{\emptyset}^{i}.\] \[\tilde{\Delta}a_{x_{0}x_{i}}^{j} =a_{x_{0}x_{i}}^{j}\otimes a_{\emptyset}^{i}.\] \[\tilde{\Delta}a_{x_{0}x_{i}}^{j} =\left(a_{x_{i}x_{0}}^{j}\otimes a_{\emptyset}^{i}\right)+\left( a_{x_{i}}^{j}\otimes a_{x_{0}}^{i}\right).\]
\[\tilde{\Delta}a_{x_{i}x_{j}}^{k}=\left(a_{x_{i}x_{j}}^{k}\otimes a_{\emptyset}^{j} a_{\emptyset}^{i}\right)+\left(a_{x_{i}}^{k}\otimes a_{x_{j}}^{i}\right).\]
The coaction map \(\tilde{\Delta}\) thus provides a framework to compute the multiplicative mixed composition product and multiplicative dynamic feedback group product whenever \(c\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) and \(d\in M\subsetneq\mathbb{R}^{m}\langle\langle X\rangle\rangle\). For computing the multiplicative mixed composition product for \(c\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\) and \(d\in M\subsetneq\mathbb{R}^{m}\langle\langle X\rangle\rangle\) where \(p=m\),
1. If \(p<m\), then define \(\check{c}\in\mathbb{R}^{m}\langle\langle X\rangle\rangle\) such that \(\check{c}_{i}=c_{i}\;\forall\,i=1,2,\ldots,p\) and \(\check{c}_{i}=0\;\forall\,i=p+1,p+2,\ldots,m\). Then for all \(\eta\in X^{*}\), \[\left(\left(c\curvearrowleft d\right)_{i},\eta\right)=\tilde{\Delta}a_{\eta} ^{i}\left(\check{c},d\right)\quad\forall i=1,2,\ldots,p.\] Note that \(\left(\check{c}\curvearrowleft d\right)_{j}=0\;\forall j=p+1,p+2,\ldots,m\).
2. If \(p>m\), then this can be reduced to Case 1 by performing computations component wise viz computing \(c_{i}\curvearrowleft d\) for all \(i=1,2,\ldots,p\).
Thus the computational framework to compute the multiplicative mixed composition product of \(c\in\mathbb{R}^{p}\langle\langle X\rangle\rangle\) and \(d\in M\), denoted by \(c\curvearrowleft d\) for arbitrary \(p\) and \(m\) is well-defined via the coaction map \(\tilde{\Delta}\). The computations of the coproduct \(\Delta_{H}\) and antipode \(S\) (defined in Section 8.1) are well-understood once the right coaction of Hopf algebra \(H\) on Hopf algebra \(H_{\boldsymbol{u}\boldsymbol{u}}\).
### Coaction of Hopf algebra \(H\) on the Hopf algebra \(H_{\boldsymbol{u}\boldsymbol{u}}\)
The objective of the subsection is to define the right coaction map of Hopf algebra \(H\) on the unshuffle Hopf algebra \(H_{\boldsymbol{u}\boldsymbol{u}}\) defined in Section 4.1. The right coaction is pivotal in computation of the coproduct and antipode of Hopf algebra \(H\) which in turn are essential to compute the multiplicative dynamic feedback product.
The right coaction map of \(H\) on \(H_{\boldsymbol{u}\boldsymbol{u}}\) is defined to be \(\tilde{\Delta}_{H}:H_{\boldsymbol{u}}\longrightarrow H_{\boldsymbol{u} \boldsymbol{u}}\bigotimes H\) such that for all \(c,d\in M\) (the underlying sets of \(M\) and \(M_{\boldsymbol{u}\boldsymbol{u}}\) are identical) and \(\eta\in X^{*}\),
\[\tilde{\Delta}_{H}a_{\eta}\left(c,d\right)=\left(c\curvearrowleft d,\eta \right). \tag{19}\]
Observe that the algebra of coordinate functions \(S^{+}(W)\) and \(H_{\boldsymbol{u}\boldsymbol{u}}\) are isomorphic as \(\mathbb{R}^{m}\)-modules. Thus it is vital to understand the relationship between the operator \(\tilde{\Delta}\) operating on the module \(S^{+}(W)\) and operator \(\tilde{\Delta}_{H}\) operating on \(H_{\boldsymbol{u}\boldsymbol{u}}\), which is stated in the following lemma.
**Lemma 8.1**.: _If \(c,d\in M\), then for all \(\eta\in X^{*}\)_
\[\tilde{\Delta}_{H}a_{\eta}\left(c,d\right)=\tilde{\Delta}a_{\eta}\left(c,d \right).\]
_Proof:_ If \(c,d\in M\) and \(\eta\in X^{+}\),
\[\tilde{\Delta}_{H}a_{\eta}\left(c,d\right)=\left(c\curvearrowleft d,\eta \right)=\tilde{\Delta}a_{\eta}\left(c,d\right).\]
Despite the statement of Lemma 8.1, it is vital to understand the difference between the coaction maps \(\tilde{\Delta}\) and \(\tilde{\Delta}_{H}\). The coaction map \(\tilde{\Delta}_{H}\) is compatible with the Hopf algebra structure of \(H_{\boldsymbol{u}\boldsymbol{u}}\) viz.
\[\boldsymbol{m}_{1,3,24}\circ\left(\tilde{\Delta}_{H}\otimes\tilde{\Delta}_{H} \right)\circ\Delta_{\boldsymbol{u}\boldsymbol{u}} =\left(\Delta_{\boldsymbol{u}\boldsymbol{u}}\otimes\mathbf{id} \right)\circ\tilde{\Delta}_{H},\]
\[\tilde{\Delta}_{H}\circ S=\left(S_{\boldsymbol{u}\boldsymbol{u}}\otimes \mathbf{id}\right)\circ\tilde{\Delta}_{H},\]
where \(\boldsymbol{m}_{1,3,24}=\left(\boldsymbol{m}\otimes\boldsymbol{m}\right) \circ\left(\mathbf{id}\otimes\tau\otimes\mathbf{id}\right)\).
Thus the coaction map \(\tilde{\Delta}_{H}\) makes \(H_{\mathbf{u}}\) a comodule-Hopf algebra over \(H\). Equivalently, the coaction map \(\tilde{\Delta}_{H}\) is a corepresentation of Hopf algebra \(H\) over unshuffle Hopf algebra \(H_{\mathbf{u}}\). Similar to Section 8.2, for all \(a_{\eta}\in W\),
\[\tilde{\Delta}_{H}a_{\eta}=\left[\tilde{\Delta}_{H}\circ\pi^{1}\ \tilde{\Delta}_{H}\circ\pi^{2}\ \cdots\ \tilde{\Delta}_{H}\circ\pi^{m}\right]a_{\eta}.\]
The map to compute the \(\tilde{\Delta}_{H}\left(a_{\eta}^{j}\right)\,\forall\eta\in X^{*}\) and \(j=1,2,\ldots,m\) is g module \(W\) is stated in the following proposition.
**Proposition 8.2**.: _For all \(i,j=1,2\ldots,m\) and \(\eta\in X^{*}\):_
1. \(\tilde{\Delta}_{H}a_{\emptyset}^{i}=a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}\)_._
2. \(\tilde{\Delta}_{H}\circ\theta_{0}a_{\eta}^{j}=(\theta_{0}\otimes\mathbf{id}) \circ\tilde{\Delta}_{H}a_{\eta}^{j}\)_._
3. \(\left(\tilde{\Delta}_{H}\circ\theta_{i}\right)a_{\eta}^{j}=(\theta_{i}\otimes \boldsymbol{m})\circ\left(\tilde{\Delta}_{H}\otimes\mathbf{id}\right)\circ \Delta_{\mathbf{u}\mathbf{u}}^{i}a_{\eta}^{j}\)_,_
_where \(\Delta_{\mathbf{u}\mathbf{u}}\) is the unshuffle coproduct defined in Section 4.1._
_Proof:_ Observe that \(\forall c\in M\),
\[c=\,\mathds{1}+\sum_{j=0}^{m}x_{j}\left(x_{j}^{-1}\left(c\right)\right).\]
Hence by Theorem 5.4,
\[c\curvearrowright d=\,\mathds{1}+x_{0}\left(x_{0}^{-1}\left(c\right) \curvearrowright d\right)+\sum_{j=1}^{m}x_{j}\left(d_{j}\shuffle\left(x_{j}^{ -1}\left(c\right)\curvearrowright d\right)\right). \tag{20}\]
The proof for each of the statement as follows:
1. Let \(c,d\in M\). From (19) and (20), \[\tilde{\Delta}_{H}a_{\emptyset}^{i}\left(c,d\right) =\left(\left(c\curvearrowright d\right)_{i},\emptyset\right)\] \[=\left(c_{i}\curvearrowright d,\emptyset\right)=1=(c_{i}, \emptyset)(d_{i},\emptyset)\] \[=a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}(c,d).\] Therefore, \(\tilde{\Delta}_{H}a_{\emptyset}^{i}=a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}\).
2. Let \(c,d\in M\), \(\eta\in X^{*}\) and \(\forall\,j=1,2,\ldots,m\). Then, \[\left(\tilde{\Delta}_{H}\circ\theta_{0}\right)a_{\eta}^{j}\left(c,d\right) =\left(\left(c\curvearrowright d\right)_{j},x_{0}\eta\right)\] \[=\left(x_{0}^{-1}\left(c\curvearrowright d\right)_{j},\eta\right)\] Observe that \(x_{0}^{-1}\left(c\right)\) may not belong to \(M\) and from (20), \[\left(\tilde{\Delta}_{H}\circ\theta_{0}\right)a_{\eta}^{j}\left(c,d\right) =\left(x_{0}^{-1}\left(c_{j}\right)\curvearrowright d,\eta\right)\] \[=\tilde{\Delta}a_{\eta}^{j}\left(x_{0}^{-1}\left(c\right),d\right)\] \[=(\theta_{0}\otimes\mathbf{id})\circ\tilde{\Delta}_{q}\left(c,d \right).\]
Since \(\eta\in X^{+}\) and \(c,d\in M\), then by Lemma 8.1
\[\left(\tilde{\Delta}_{H}\circ\theta_{0}\right)a_{\eta}^{j}\left(c,d\right)=\left( \theta_{0}\otimes\mathbf{id}\right)\circ\tilde{\Delta}_{H}a_{\eta}\left(c,d \right).\]
Therefore, \(\tilde{\Delta}_{H}\circ\theta_{0}=\left(\theta_{0}\otimes\mathbf{id}\right) \circ\tilde{\Delta}_{H}\).
3. Let \(c,d\in M\) and \(\eta\in X^{*}\). Then \(\,\forall\,i,j=1,2,\ldots,m\), \[\left(\tilde{\Delta}_{H}\circ\theta_{i}\right)a_{\eta}^{j}\left(c,d\right) =\left(\left(c\curvearrowleft d\right)_{j},x_{i}\eta\right)\] \[=\left(x_{i}^{-1}\left(c\curvearrowleft d\right)_{j},\eta\right)\] From (20), \[\left(\tilde{\Delta}_{H}\circ\theta_{i}\right)a_{\eta}^{j}\left(c,d\right)= \left(d_{i}\,\underline{\upomega}x_{i}^{-1}\left(c_{j}\right)\curvearrowleft d,\eta\right).\] Since \(x_{i}^{-1}\left(c\right)\) may not belong to group \(M\) (also \(M_{\mathbf{u}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
\[=\hat{\Delta}_{\textbf{u}}^{i}a_{\eta}^{i}(c\curvearrowleft d,d)\] \[=\left(\tilde{\Delta}_{H}\otimes\textbf{id}\right)\circ\hat{ \Delta}_{\textbf{u}}^{i}a_{\eta}^{i}\left(c,d,d\right)\] \[=\left(\textbf{id}\otimes\textbf{m}\right)\circ\left(\tilde{ \Delta}_{H}\otimes\textbf{id}\right)\circ\hat{\Delta}_{\textbf{u}}^{i}a_{\eta}^ {i}\left(c,d\right).\]
Proposition 8.3 asserts that the computation of coproduct \(\Delta_{H}\) on the module \(W\) (subsequently on the algebra \(H\)) can be carried out post the computation of the operator \(\tilde{\Delta}_{H}\) on \(W\). The computation of the coproduct \(\Delta_{H}\) for the some coordinate maps are given as follows:
\[\Delta_{H}a_{\emptyset}^{i} =a_{\emptyset}^{i}\otimes a_{\emptyset}^{i}.\] \[\Delta_{H}a_{x_{0}}^{i} =\left(a_{x_{0}}^{j}\otimes a_{\emptyset}^{i}\right)+\left(a_{ \emptyset}^{i}\otimes a_{x_{0}}^{i}\right).\] \[\Delta_{H}a_{x_{i}}^{j} =\left(a_{x_{i}}^{j}\otimes a_{\emptyset}^{i}a_{\emptyset}^{j} \right)+\left(a_{\emptyset}^{j}\otimes a_{x_{i}}^{j}\right).\] \[\Delta_{H}a_{x_{0}}^{i} =\left(a_{x_{0}^{2}}^{i}\otimes a_{\emptyset}^{i}\right)+2\left( a_{x_{0}}^{i}\otimes a_{x_{0}}^{i}\right)+\left(a_{\emptyset}^{i}\otimes a_{x_{0} ^{2}}^{i}\right).\] \[\Delta_{H}a_{x_{0}x_{i}}^{j} =\left(a_{x_{0}x_{i}}^{j}\otimes a_{\emptyset}^{j}\right)+\left( a_{x_{0}}^{j}\otimes a_{x_{i}}^{j}\right)+\left(a_{x_{i}}^{j}\otimes a_{ \emptyset}^{i}a_{x_{0}}^{j}\right)+\left(a_{\emptyset}^{j}\otimes a_{x_{0}x_{i }}^{j}\right).\] \[\Delta_{H}a_{x_{i}x_{0}}^{j} =\left(a_{x_{i}x_{0}}^{j}\otimes a_{\emptyset}^{i}a_{\emptyset}^{j }\right)+\left(a_{x_{i}}^{j}\otimes a_{x_{0}}^{i}a_{\emptyset}^{j}\right)+ \left(a_{x_{i}}^{j}\otimes a_{\emptyset}^{i}a_{x_{0}}^{j}\right)+\left(a_{x_{0} }^{j}\otimes a_{x_{i}}^{j}\right)+\left(a_{\emptyset}^{j}\otimes a_{x_{i}x_{0 }}^{j}\right).\] \[\Delta_{H}a_{x_{i}x_{j}}^{k} =\left(a_{x_{i}x_{j}}^{k}\otimes a_{\emptyset}^{j}a_{\emptyset}^{ i}a_{\emptyset}^{k}\right)+\left(a_{x_{i}}^{k}\otimes a_{x_{j}}^{i}a_{ \emptyset}^{k}\right)+\left(a_{x_{i}}^{k}\otimes a_{\emptyset}^{i}a_{x_{j}}^{k }\right)+\left(a_{x_{j}}^{k}\otimes a_{\emptyset}^{j}a_{x_{i}}^{k}\right)+ \left(a_{\emptyset}^{k}\otimes a_{x_{i}x_{j}}^{k}\right).\]
If \(m=2\) (two input-two output MIMO case) viz. \(X=\{x_{0},x_{1},x_{2}\}\), then from above computations
\[\Delta_{H}a_{x_{1}x_{2}}=\begin{bmatrix}\left(a_{x_{1}x_{2}}^{1}\otimes\left(a _{\emptyset}^{1}\right)^{2}a_{\emptyset}^{2}\right)+\left(a_{x_{1}}^{1}\otimes a _{x_{2}}^{1}a_{\emptyset}^{1}\right)+\left(a_{x_{1}}^{1}\otimes a_{\emptyset}^ {1}a_{x_{2}}^{1}\right)+\left(a_{x_{2}}^{1}\otimes a_{\emptyset}^{2}a_{x_{1}}^ {1}\right)+\left(a_{\emptyset}^{1}\otimes a_{x_{1}x_{2}}^{1}\right)\\ \left(a_{x_{1}x_{2}}^{2}\otimes a_{\emptyset}^{1}\left(a_{\emptyset}^{2}\right) ^{2}\right)+\left(a_{x_{1}}^{2}\otimes a_{x_{2}}^{1}a_{\emptyset}^{2}\right)+ \left(a_{x_{1}}^{2}\otimes a_{\emptyset}^{1}a_{x_{2}}^{2}\right)+\left(a_{x_{2 }}^{2}\otimes a_{\emptyset}^{2}a_{x_{1}}^{2}\right)+\left(a_{\emptyset}^{2} \otimes a_{x_{1}x_{2}}^{2}\right)\end{bmatrix}\]
which can be rewritten as
\[\Delta_{H}a_{x_{1}x_{2}}=\left(a_{x_{1}x_{2}}\otimes\left(a_{ \emptyset}^{1}a_{\emptyset}^{2}\,\textbf{l}\right)\!a_{\emptyset}\right)+\left( a_{x_{1}}\otimes\left(a_{x_{2}}^{1}\,\textbf{l}\right)\!a_{\emptyset}\right)+\left(a_{x_{1}} \otimes\left(a_{\emptyset}^{1}\,\textbf{l}\right)\!a_{x_{2}}\right)+\] \[\left(a_{x_{2}}\otimes\left(a_{\emptyset}^{2}\,\textbf{l}\right)\! a_{x_{1}}\right)+\left(a_{\emptyset}\otimes a_{x_{1}x_{2}}\right),\]
where \(\,\textbf{l}=[1\ 1]^{t}\). It is vital to observe that the term \(\left(a_{x_{1}x_{2}}\otimes(a_{\emptyset}^{1}a_{\emptyset}^{2}\,\textbf{l})a_{ \emptyset}\right)\) is a primitive term of the coproduct as \(a_{\emptyset}^{1}a_{\emptyset}^{2}\,\textbf{l}\cong\,\textbf{l}\) since \(a_{\emptyset}\) is the unit of \(H\).
The following corollary is resultant of the Proposition 8.2 to the words of the form \(x_{0}^{n}\) for all \(n\geq 0\).
**Corollary 8.1**.: _If \(n\in\mathbb{N}_{0}\), then for all \(i=1,2,\ldots,m\) (defining \(x_{0}^{0}:=\emptyset\)):_
\[\tilde{\Delta}_{H}a_{x_{0}^{n}}^{i} =a_{x_{0}^{n}}^{i}\otimes a_{\emptyset}^{i}.\] \[\Delta_{H}a_{x_{0}^{n}}^{i} =\sum_{k=0}^{n}\binom{n}{k}a_{x_{0}^{k}}^{i}\otimes a_{\emptyset}^{ i}a_{x_{0}^{n-k}}^{i}.\]
_Proof:_ The proof is by induction on \(n\in\mathbb{N}_{0}\). The base case \((n=0)\) :
\[\tilde{\Delta}_{H}a_{\emptyset}^{i}=a_{\emptyset}^{i}\otimes a_{\emptyset}^{i},\]
is proved in Proposition 8.1. Assume the statement is true for \(n=k\), then
\[\tilde{\Delta}a^{i}_{x_{0}^{k+1}}=\left(\tilde{\Delta}\circ\theta_{0}\right)a^{i} _{x_{0}^{k}}.\]
Using Proposition 8.1,
\[\tilde{\Delta}a^{i}_{x_{0}^{k+1}} =\left(\theta_{0}\otimes\mathbf{id}\right)\circ\tilde{\Delta}a^{i }_{x_{0}^{k}}\] \[=\left(\theta_{0}\otimes\mathbf{id}\right)\{a^{i}_{x_{0}^{k}} \otimes a^{i}_{\emptyset}\}\] \[=a^{i}_{x_{0}^{k+1}}\otimes a^{i}_{\emptyset}.\]
Hence proved by induction on \(n\in\mathbb{N}_{0}\) that: \(\tilde{\Delta}a^{i}_{x_{0}^{n}}=a^{i}_{x_{0}^{n}}\otimes 1\). Observe that from Proposition 7.1
\[\Delta a^{i}_{x_{0}^{n}}=\left(\mathbf{id}\otimes\mathbf{m}\right)\circ\left( \tilde{\Delta}\otimes\mathbf{id}\right)\circ\Delta^{i}_{\underline{\mathbf{u}}}a^{ i}_{x_{0}^{n}}.\]
Using Corollary 4.1,
\[\Delta a^{i}_{x_{0}^{n}} =\left(\mathbf{id}\otimes\mathbf{m}\right)\circ\left(\tilde{\Delta} \otimes\mathbf{id}\right)\left(\sum_{k=0}^{n}\binom{n}{k}a^{i}_{x_{0}^{k}} \otimes a^{i}_{x_{0}^{n-k}}\right)\] \[=\left(\mathbf{id}\otimes\mathbf{m}\right)\left(\sum_{k=0}^{n}\binom{ n}{k}\tilde{\Delta}a^{i}_{x_{0}^{k}}\otimes a^{i}_{x_{0}^{n-k}}\right)\] \[=\left(\mathbf{id}\otimes\mathbf{m}\right)\left(\sum_{k=0}^{n}\binom{ n}{k}a^{i}_{x_{0}^{k}}\otimes a^{i}_{\emptyset}\otimes a^{i}_{x_{0}^{n-k}}\right)\] \[=\sum_{k=0}^{n}\binom{n}{k}a^{i}_{x_{0}^{k}}\otimes a^{i}_{ \emptyset}a^{i}_{x_{0}^{n-k}}.\]
Proposition 8.3 asserted that the calculation of coproduct \(\Delta_{H}\) is carried out post the computation of \(\tilde{\Delta}_{H}\). However the converse is also true viz. the computation of \(\tilde{\Delta}_{H}\) can be carried out if the evaluation fo the coproduct \(\Delta_{H}\) is known a priori which is well-asserted in the following proposition.
**Proposition 8.4**.: _For all \(\eta\in X^{+}\) and for all \(i=1,2,\ldots,m\),_
\[\tilde{\Delta}_{H}a^{i}_{\eta}\left(c,d\right)=\left(\mathbf{id}\otimes\mathbf{m} \right)\circ\left(\Delta_{H}\otimes S_{\underline{\mathbf{u}}}\right)\circ\hat{ \Delta}^{i}_{\underline{\mathbf{u}}}a^{i}_{\eta}.\]
_Proof:_ Given \(c,d\in M\), by Theorem (12)
\[\left(c\star d\right)=\left(d\underline{\mathbf{u}}\left(c\curvearrowleft d\right) \right).\]
Observe that \(d\in M\) implies that \(d\) is shuffle invertible. Thus for any \(\eta\in X^{+}\),
\[\left(\left(c\curvearrowleft d\right)_{i},\eta\right)=\left(d^{\underline{\bm {u}}-1}_{i}\underline{\mathbf{u}}\left(c\star d\right)_{i},\eta\right),\]
for all \(i=1,2,\ldots,m\). Hence,
\[\left(\left(c\curvearrowleft d\right)_{i},\eta\right) =\tilde{\Delta}_{H}a^{i}_{\eta}\left(c,d\right)\] \[=\hat{\Delta}^{i}_{\underline{\mathbf{u}}}a^{i}_{\eta}\left(c\star d,d^{\underline{\mathbf{u}}-1}\right).\] \[=\left(\Delta_{H}\otimes S_{\underline{\mathbf{u}}}\right)\circ\hat{ \Delta}^{i}_{\underline{\mathbf{u}}}a^{i}_{\eta}\left(c,d,d\right).\] \[=\left(\mathbf{id}\otimes\mathbf{m}\right)\circ\left(\Delta_{H} \otimes S_{\underline{\mathbf{u}}}\right)\circ\hat{\Delta}^{i}_{\underline{\mathbf{u }}}a^{i}_{\eta}\left(c,d\right).\]
The key point of the Proposition 8.4 is the shuffle-invertibility of a series \(c\in M\)
The goal of this subsection is to provide a graded structure on the \(\mathbb{R}\)-module \(W\) and consequently on the underlying \(\mathbb{R}\)-module structure of the Hopf algebra \(H\) such that \(H\) is connected and the homogeneous components of \(H\) are finite-dimensional.
**Definition 8.1**.: _Given a word \(\eta\in X^{+}\), denote the degree of the word as \(\deg\left(\eta\right)\) and define \(\deg\left(\eta\right)=\left|\eta\right|\) and for all \(k\geq 1\):_
\[X_{k}:=\{a_{\eta}:\deg\left(\eta\right)=k\}.\]
1. _Define gradation on the_ \(\mathbb{R}\)_-module_ \(W\) _viz._ \[W=\bigoplus_{k\geq 1}W_{k},\] _where_ \(W_{k}\) _is the free_ \(\mathbb{R}\)_-module spanned by_ \(X_{k}\)_._
2. _The gradation on the module_ \(W\) _induces a graded structure on the algebra_ \(H\) _as_ \[H=\bigoplus_{n\in\mathbb{N}_{0}}H_{n},\] _with_ \(H_{0}\cong\mathbb{R}\) _in the category of_ \(\mathbb{R}\)_-modules._
The following lemma aids in proving that the gradation in Definition 8.1 makes the Hopf algebra \(H\) is well-defined.
**Lemma 8.2**.: _If \(\eta\in X^{*}\) such that \(\deg\left(\eta\right)=n\) then_
\[\tilde{\Delta}_{H}\left(a_{\eta}\right)\in\bigoplus_{i+j=n}W_{i}\otimes H_{j},\]
_for all \(k=1,2,\ldots,m\)._
Proof.: The following observations will help in proving the lemma.
1. The map \(\{\theta_{i}\}_{i=0}^{m}\) is a homogeneous operator of degree \(1\) on the module \(W\). If \(\deg\left(\eta\right)=\left|\eta\right|=n\) for some \(\eta\in X^{*}\), then \(\left|x_{i}\eta\right|=n+1\) for all \(i=0,1,\ldots,m\). Hence, \[\theta_{i}:W_{n}\longrightarrow W_{n+1}\] for all \(i=0,1,\ldots,m\) and \(n\geq 1\).
2. Observe that if \(\eta,\zeta,\gamma\in X^{*}\) such that \(\left|\gamma\right|=n\) and \(\gamma\in\operatorname{supp}\left(\eta\boldsymbol{\
By Proposition 8.2, \[\tilde{\Delta}_{H}a_{\eta^{\prime}}=(\theta_{0}\otimes\mathbf{id})\circ\tilde{ \Delta}_{H}a_{\eta}.\] Since \(a_{\eta}\in W_{k}\), then by the induction hypothesis \(\tilde{\Delta}_{H}\left(a_{\eta}\right)\subseteq\bigoplus_{i+j=k}W_{i}\otimes H _{j}\). Then, \[\left(\theta_{0}\otimes\mathbf{id}\right)\left(\bigoplus_{i+j=k}W_ {i}\otimes H_{j}\right) \subseteq\bigoplus_{i+j=k}W_{i+1}\otimes H_{j}\] \[\subseteq\bigoplus_{i+j=k+1}W_{i}\otimes H_{j}.\] Thus, \(\tilde{\Delta}_{H}a_{\eta^{\prime}}\in\bigoplus_{i+j=k+1}W_{i}\otimes H_{j}\) where \(\left|\eta^{\prime}\right|=k+1\).
2. Let \(\eta^{\prime}=x_{i}\eta\) where \(\left|\eta\right|=k\) and \(x_{i}\neq x_{0}\). Then from Proposition 8.2, \[\left(\tilde{\Delta}_{H}\circ\pi_{j}\right)a_{\eta^{\prime}} =\left(\theta_{i}\otimes\mathbf{m}\right)\circ\left(\tilde{\Delta}_{ H}\otimes\mathbf{id}\right)\circ\left(\pi_{j}\otimes\pi_{i}\right)\circ \tilde{\Delta}_{\mathbf{u}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Observe that from Proposition 8.2,
\[\Delta_{H}\circ\pi_{i}=\left(\mathbf{id}\otimes\boldsymbol{m}\right)\circ\left( \tilde{\Delta}_{H}\otimes\mathbf{id}\right)\circ\left(\pi_{i}\otimes\pi_{i} \right)\circ\tilde{\Delta}_{\boldsymbol{\omega}}.\]
Thus (by grouping them along the coordinate \(i\)),
\[\Delta_{H}=\left(\mathbf{id}\otimes\boldsymbol{m}\right)\circ\left(\tilde{ \Delta}\circ\mathbf{id}\right)\circ\tilde{\Delta}_{\boldsymbol{\omega}}.\]
Hence,
\[\Delta_{H}(W_{n}) =\left(\mathbf{id}\otimes\boldsymbol{m}\right)\circ\left(\tilde{ \Delta}\circ\mathbf{id}\right)\circ\tilde{\Delta}_{\boldsymbol{\omega}}(W_{n})\] \[\subseteq\left(\mathbf{id}\otimes\boldsymbol{m}\right)\circ \left(\tilde{\Delta}\circ\mathbf{id}\right)(W\otimes W)_{n}.\]
Using Proposition 8.2,
\[\Delta_{H}(W_{n}) \subseteq\left(\mathbf{id}\otimes\boldsymbol{m}\right)(W\otimes H \otimes W)_{n}\] \[\subseteq(W\otimes H)_{n}.\]
Therefore, the intermediate statement holds true viz.
\[\Delta_{H}\left(W_{n}\right)\subseteq\bigoplus_{i+j=n}W_{i}\otimes H_{j}\quad \forall\,n\geq 0.\]
The statement of the theorem then holds true as \(\Delta\) is an \(\mathbb{R}^{n}\)-algebra morphism from \(H\) to \(H\otimes H\).
Thus Proposition 8.5 asserts that the grading defined on the Hopf algebra \(H\) in Definition 8.1 is well-defined and connected. The homogeneous components are finite-dimensional and dimensions respect the Proposition 4.2 since the bialgebras \(H\) and \(H_{\boldsymbol{\omega}}\) are isomorphic with respect to the underlying graded algebraic structures.
The following example is rework of the Example 4.10 in [Gray & Ebrahimi-Fard(2017)] acting as a check for the computation of feedback group inverse in one-dimensional case.
**Example 8.2**.: _Let \(c=1-x_{1}\,\in\mathbb{R}\langle\langle X\rangle\rangle\). The series \(c^{\circ-1}=1+\cdots+\cdots\). Using the recursive computation formula for antipode as in Theorem 3.1_
\[a_{x_{1}}(c^{\circ-1})=Sa_{x_{1}}\left(c\right)=-a_{x_{1}}(c)=1.\]
_Observe that_
\[\Delta^{\prime}_{H}a_{x_{1}^{2}}=3a_{x_{1}}\otimes a_{x_{1}}.\]
_Thus,_
\[a_{x_{1}^{2}}\left(c^{\circ-1}\right) =Sa_{x_{1}^{2}}\left(c\right)\] \[=-a_{x_{1}^{2}}-3a_{x_{1}}.Sa_{x_{1}}=-a_{x_{1}^{2}}+3a_{x_{1}}^{ 2}.\]
_Therefore, \(a_{x_{1}^{2}}\left(c^{\circ-1}\right)=0+3(1)^{2}=3\). In similar fashion the reduced coproduct of \(a_{x_{1}}^{3}\) is_
\[\Delta^{\prime}_{H}a_{x_{1}^{3}}=4a_{x_{1}}\otimes a_{x_{1}^{2}}+6a_{x_{1}^{2} }\otimes a_{x_{1}}+3a_{x_{1}}\otimes a_{x_{1}}^{2}.\]
_Thus,_
\[a_{x_{1}^{3}}\left(c^{\circ-1}\right) =\left[-a_{x_{1}^{3}}-4a_{x_{1}}.Sa_{x_{1}^{2}}-6a_{x_{1}^{2}}.Sa _{x_{1}}-3a_{x_{1}}.\left(Sa_{x_{1}}\right)^{2}\right]\left(c\right)\] \[=0-4(-1)(3)-6(0)(-1)-3(-1)(1)^{2}=15.\]
_Therefore \(c^{\circ-1}=1+x_{1}+3x_{1}^{2}+15x_{1}^{3}+105x_{1}^{4}+\cdots\). The result matches exactly with that of Example 4.10 in [Gray & Ebrahimi-Fard(2017)]._
## 9. Conclusions and Future work
It was shown that the closed-loop system of a plant in Chen-Fliess series description in multiplicative output feedback with another system, given by Chen-Fliess series, has a Chen-Fliess series representation. An explicit expression of the closed-loop generating series was derived and the multiplicative dynamic feedback connection has a natural interpretation as a transformation group acting on the plant. A computational framework has been devised utilizing the dual Hopf algebras corresponding to the shuffle group and multiplicative output dynamic feedback group. Future work will be to address the solemn problem regarding the local convergence of the both multiplicative dynamic and static output feedback connections and to identify both the multiplicative dynamic and static feedback invariants.
|
2305.10213 | The geometry of the hot corona in MCG-05-23-16 constrained by X-ray
polarimetry | We report on the second observation of the radio-quiet active galactic
nucleus (AGN) MCG-05-23-16 performed with the Imaging X-ray Polarimetry
Explorer (IXPE). The observation started on 2022 November 6 for a net observing
time of 640 ks, and was partly simultaneous with NuSTAR (86 ks). After
combining these data with those obtained in the first IXPE pointing on May 2022
(simultaneous with XMM-Newton and NuSTAR) we find a 2-8 keV polarization degree
$\Pi$ = 1.6 $\pm$ 0.7 (at 68 per cent confidence level), which corresponds to
an upper limit $\Pi$ = 3.2 per cent (at 99 per cent confidence level). We then
compare the polarization results with Monte Carlo simulations obtained with the
MONK code, with which different coronal geometries have been explored
(spherical lamppost, conical, slab and wedge). Furthermore, the allowed range
of inclination angles is found for each geometry. If the best fit inclination
value from a spectroscopic analysis is considered, a cone-shaped corona along
the disc axis is disfavoured. | D. Tagliacozzo, A. Marinucci, F. Ursini, G. Matt, S. Bianchi, L. Baldini, T. Barnouin, N. Cavero Rodriguez, A. De Rosa, L. Di Gesu, M. Dovciak, D. Harper, A. Ingram, V. Karas, D. E. Kim, H. Krawczynski, G. Madejski, F. Marin, R. Middei, H. L. Marshall, F. Muleri, C. Panagiotou, P. O. Petrucci, J. Podgorny, J. Poutanen, S. Puccetti, P. Soffitta, F. Tombesi, A. Veledina, W. Zhang, I. Agudo, L. A. Antonelli, M. Bachetti, W. H. Baumgartner, R. Bellazzini, S. D. Bongiorno, R. Bonino, A. Brez, N. Bucciantini, F. Capitanio, S. Castellano, E. Cavazzuti, C. T. Chen, S. Ciprini, E. Costa, E. Del Monte, N. Di Lalla, A. Di Marco, I. Donnarumma, V. Doroshenko, S. R. Ehlert, T. Enoto, Y. Evangelista, S. Fabiani, R. Ferrazzoli, J. A. Garcia, S. Gunji, J. Heyl, W. Iwakiri, S. G. Jorstad, P. Kaaret, F. Kislat, T. Kitaguchi, J. J. Kolodziejczak, F. La Monaca, L. Latronico, I. Liodakis, S. Maldera, A. Manfreda, A. P. Marscher, F. Massaro, I. Mitsuishi, T. Mizuno, M. Negro, C. Y. Ng, S. L. O'Dell, N. Omodei, C. Oppedisano, A. Papitto, G. G. Pavlov, A. L. Peirson, M. Perri, M. Pesce Rollins, M. Pilia, A. Possenti, B. D. Ramsey, J. Rankin, A. Ratheesh, O. J. Roberts, R. W. Romani, C. Sgrò, P. Slane, G. Spandre, D. A. Swartz, T. Tamagawa, F. Tavecchio, R. Taverna, Y. Tawara, A. F. Tennant, N. E. Thomas, A. Trois, S. S. Tsygankov, R. Turolla, J. Vink, M. C. Weisskopf, K. Wu, F. Xie, S. Zane | 2023-05-17T13:45:08Z | http://arxiv.org/abs/2305.10213v1 | # The geometry of the hot corona in MCG-05-23-16 constrained by X-ray polarimetry
###### Abstract
We report on the second observation of the radio-quiet active galactic nucleus (AGN) MCG-05-23-16 performed with the _Imaging X-ray Polarimetry Explorer_ (_IXPE_). The observation started on 2022 November 6 for a net observing time of 640 ks, and was partly simultaneous with _NuSTAR_ (86 ks). After combining these data with those obtained in the first _IXPE_ pointing on May 2022 (simultaneous with _XMM-Newton_ and _NuSTAR_) we find a 2-8 keV polarization degree \(\Pi=1.6\pm 0.7\) (at 68 per cent confidence level), which corresponds to an upper limit \(\Pi=3.2\) per cent (at 99 per cent confidence level). We then compare the polarization results with Monte Carlo simulations obtained with the monc code, with which different coronal geometries have been explored (spherical lamppost, conical, slab and wedge). Furthermore, the allowed range of inclination angles is found for each geometry. If the best fit inclination value from a spectroscopic analysis is considered, a cone-shaped corona along the disc axis is disfavoured.
keywords: galaxies: active - galaxies: Seyfert - polarization - X-rays:galaxies - X-rays: individual: MCG-05-23-16
## 1 Introduction
The large amount of energy released by AGNs is widely thought to be generated in a very compact and central region via accretion onto a supermassive black hole (SMBH, Rees 1984; Antonucci 1993). The optical/UV radiation emitted by the accretion disc is partly redirected towards the X-ray band (primary emission) through a process known as Comptonization, which involves multiple scattering in a cloud of hot electrons, generally called the corona (Sunyaev & Titarchuk 1980; Haardt & Maraschi 1991; Zdziarski et al. 2000; Zdziarski & Gierlinski 2004; Done et al. 2007). These structures are characterized by high electron temperatures (\(kT_{\rm e}\) usually ranging from tens to hundreds keV) and moderate Thomson optical depths (\(r\), Petrucci et al. 2001; Perola et al. 2002; Dadina 2007; Panessa et al. 2011; De Rosa et al. 2012; Ricci et al. 2017; Marinucci et al. 2018; Tortosa et al. 2018; Middei et al. 2019). Despite being a key element in understanding the energy generation mechanism of AGNs, the morphology of the corona, which may hold clues to its physical origins, remains a matter of debate. While in principle spectroscopic techniques can provide information on the coronal geometry, even the best observations, while providing valuable information on its physical parameters such as temperature and optical depth, fall short of distinguishing between different geometrical configurations (Zhang
et al., 2019; Tortosa et al., 2018). Currently, some constraints on the coronal morphology have been derived using time lags techniques (such as reverberation mapping, Uttley et al., 2014; Fabian et al., 2017; Caballero-Garcia et al., 2020), but many aspects remain to be determined. In this context, X-ray polarimetry represents a fundamental tool in order to investigate the coronal properties and constrain its geometry, because different morphologies of the emitting region produce different polarization signatures.
Several geometrical models have been proposed for the corona. In this work we consider the following: spherical lamppost, conical outflow, slab corona and wedge-shaped hot accretion flow. The spherical lamppost consists of an isotropic spherical source located on the spin axis of the SMBH (Matt et al., 1991; Wilkins and Fabian, 2012; Ursini et al., 2022) and it is defined by its radius and its height above the SMBH. This configuration is expected to produce a low polarization degree (PD= \(0-2\) per cent) with the polarization angle (PA) perpendicular to the accretion disc axis (Utsini et al., 2022). The conical outflow is commonly associated with an aborted jet (Henri and Petrucci, 1997; Ghisellini et al., 2004; Ursini et al., 2022). According to this model, radio-quiet AGNs have central SMBHs powering outflows and jets which may propagate only for a short distance, if the velocity of the ejected material is sub-relativistic and smaller than the escape velocity. This configuration is expected to produce somewhat larger (up to 6 per cent) polarization degree, also in this case perpendicular to the accretion disc axis (Utsini et al., 2022). In the slab corona scenario the hot medium is assumed to be uniformly distributed above the cold accretion disc. This geometry can be realised in the scenario where magnetic loops rise high above the disc plane and dissipate energy via reconnection (Liang, 1979; Haardt and Maraschi, 1991; Beloborodov, 2017). This configuration can produce polarization degree up to 14 per cent (Poutanen and Svensson, 1996; Ursini et al., 2022; Gianolli et al., 2023). In this case the polarization angle is parallel to the accretion disc axis. The wedge is, finally, similar to the slab but with the height increasing with the radius. In this scenario the'standard' accretion disc is thought to be truncated at a certain radius, while the corona represents some type of a 'hot accretion flow', possibly extending to the innermost stable circular orbit (ISCO, Esin et al., 1997; Schnittman and Krolik, 2010; Poutanen et al., 1997; Yuan and Narayan, 2014; Poutanen et al., 2018; Ursini et al., 2020). It is expected to produce intermediate (up to 5 per cent, depending on the specific assumed configuration) polarization degree, parallel to the accretion disc axis. This configuration is considered in detail in Sect. 4.
MCG-05-23-16 is a nearby (\(z=0.0085\), Wegner et al., 2003) Seyfert 1.9 galaxy (Veron et al., 1980) with broad emission lines in the infrared (Goodrich et al., 1994). It is a relatively bright X-ray source (\(F_{2-10}=7-10\times 10^{-11}\)erg cm\({}^{-2}\)s\({}^{-1}\), Mattson and Weaver, 2004) showing moderate cold absorption (\(N_{\rm H}\sim 10^{22}\) cm\({}^{-2}\)). It has been widely studied in the X-ray band (Beckmann et al., 2008; Molina et al., 2013), and its high energy cut-off (\(E_{\rm C}\)) and coronal physical parameters, i.e. temperature and Thomson optical depth, are quite well estimated (Balokovic et al., 2015). The SMBH mass (\(M_{\rm BH}=2\times 10^{7}\) M\({}_{\odot}\)) has been estimated via X-ray variability (Ponti et al., 2012), and it is consistent with the virial mass derived from the infrared lines (Onori et al., 2017). From the observations of MCG-05-23-16 performed with _XMM-Newton_, _NuSTAR_ and _IXPE_ in May 2022, Marinucci et al. (2022) found, assuming a simple cut-off power law for the primary continuum, a spectral index \(\Gamma=1.85\pm 0.01\) and a high energy cut-off \(E_{\rm C}=120\pm 15\) keV, leading to an electron temperature \(kT_{\rm e}=25\pm 2\) keV and \(\tau=1.27\pm 0.08\) if the cut-off power law is replaced by the comptonization model compps (Poutanen and Svensson, 1996) and a uniform slab geometry for the corona is assumed. Moreover, a 4.7 per cent upper limit (99 per cent c.l. for one parameter of interest) for the polarization degree was obtained.
In this paper we present and discuss the second _IXPE_ observation of MCG-5-23-16, performed in November 2022 in coordination with _NuSTAR_. The combined analysis of the data collected in the 2022 May and November observations are also discussed. The results are then compared with Monte Carlo simulations of the expected polarization properties for different geometries of the corona.
The paper is organized as follows: in Sect. 2 we discuss the data reduction procedure, in Sect. 3 we present the spectropolarimetric data analysis, in Sect. 4 we present Monte Carlo simulations designed to calculate the expected polarization for different geometries and, finally, the results are summarized in Sect. 5.
## 2 Observations and data reduction
_IXPE_(Weisskopf et al., 2022) observed MCG-05-23-16 twice, in May and November 2022. The first _IXPE_ observation and the simultaneous _XMM-Newton_ and _NuSTAR_ data are presented in Marinucci et al. (2022). These spectra, with updated response matrices, are also used in this work. The second pointing started on November 6, and had a net exposure time of 642 ks. Cleaned level 2 event files were produced and calibrated using standard filtering criteria with the dedicated ftools tasks and the latest calibration files available in the _IXPE_ calibration database (CALDB 20220303). \(I\), \(Q\) and \(U\) Stokes background spectra were extracted from source-free circular regions with a radius of 100 arcSect. Extraction radii for the \(I\) Stokes spectra of the source were computed via an iterative process which leads to the maximization of the Signal-to-Noise Ratio (SNR) in the 2-8 keV energy band, similar to the approach described in Piconcelli et al. (2004). We therefore adopted circular regions centered on the source with radii of 62 arcsec for the three DUs. The net exposure times are 641.7 ks and the same extraction radii were then applied to the \(Q\) and \(U\) Stokes spectra. We used a constant energy binning of 0.2 keV for \(Q\), \(U\) Stokes spectra and required a SNR higher than 5 in each spectral channel, in the intensity spectra. \(I\), \(Q\), \(U\) Stokes spectra from the three DUs are always fitted independently in the following, but we will plot them together using the setp group command in xspec, for the sake of visual clarity. Background represents 2.0, 1.8 and 2.1 per cent of the total DU1, DU2 and DU3 \(I\) spectra, respectively. We followed the formalism discussed in Strohmayer (2017) and used the weighted analysis method presented in Di Marco et al. (2022) (parameter stokes=Neff in xselect). The summed background subtracted light curves for the two _IXPE_ poitings are shown in Fig. 1.
_NuSTAR_(Harrison et al., 2013) observed MCG-05-23-16, with its two coaligned X-ray telescopes with corresponding Focal Plane Module A (FPMA) and B (FPMB), on 2022 November 11. The total elapsed time is 164.6 ks. The Level 1 data products were processed with the _NuSTAR_ Data Analysis Software (NuSTARADAS) package (v. 2.1.2). Cleaned event files (level 2 data products) were produced and calibrated using standard filtering criteria with the nuppileile task and the latest calibration files available in the _NuSTAR_ calibration database (CALDB 20221020). Extraction radii for the source and background spectra were 40 arcsec and 60 arcsec, FPMA spectra were binned in order not to over-sample the instrumental resolution more than a factor of 2.5 and to have a SNR greater than 5 in each spectral channel, the same energy binning was then applied to the FPMB spectra. The net observing times for the FPMA and the FPMB data sets are 85.7 ks and 84.9 ks, respectively. The summed background subtracted FPMA and FPMB light curves are shown in Fig. 1. We adopt the cosmological parameters \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\),
0.73 and \(\Omega_{\rm m}=0.27\), i.e. the default ones in xspec 12.12.1 (Arnaud, 1996). Errors correspond to the 90 per cent confidence level for one interesting parameter (\(\Delta\chi^{2}=2.7\)), if not stated otherwise.
## 3 Data analysis
### _IXPE_ analysis
Initially, we conducted a preliminary examination of the _IXPE_ data through the utilization of a baseline model, consisting of an absorbed power law convolved with a constant polarization kernel: \(\rm{const}\times\rm{polos}\times\rm{stras}\times\rm{powerlaw}\). We fit this model in the \(2-8\) keV energy range simultaneously to the \(I\), \(Q\) and \(U\) spectra collected by the 3 _IXPE_ Detector Units (DUs) during the second observation of MCG-05-23-16 (2022 November 6; 640ks). In all cases where we only use _IXPE_ data the adoption of a more complex model is unnecessary. In fact, the reduced chi-square is always close to unity. At the 68 per cent c.l for one parameter of interest, we obtained a polarization degree \(\Pi=1.1\pm 0.9\) per cent and a polarization angle \(\Psi=57^{\circ}\pm 27^{\circ}\). This translates into a 99 per cent c.l. upper limit to the polarization degree of \(\Pi=3.3\) per cent. In Fig. 2 we show the \(Q\) and \(U\) spectra used to perform this first analysis (along with the model and the residuals), while in Fig. 3 the contour plot between \(\Pi\) and \(\Psi\) is shown. An alternative, model-independent, analysis of the polarization cubes with the software xspecbssim(Baldini et al., 2022) gives consistent results.
We then performed a combined analysis of the _IXPE_\(I\), \(Q\) and \(U\) spectra collected on May and November 2022, using the same model. We notice a significant variation of the primary continuum spectral index between the two pointings. This variation is only found in the _IXPE_ data. For this reason, we consider it as a calibration issue concerning the first pointing of MCG-05-23-16 by this instrument. For this reason, it is not possible to sum the two observations together. We therefore proceeded to conduct a combined analysis leaving untied the spectral indexes of the two pointings. We obtained (at 68 per cent c.l. for one interesting parameter) a polarization degree \(\Pi=1.6\pm 0.7\) per cent and a polarization angle \(\Psi=53^{\circ}\pm 13^{\circ}\). This translate into a polarization degree upper limit (at 99 per cent c.l.) \(\Pi=3.2\) per cent
Figure 1: _IXPE_, _NuSTAR_ and _XMM-Newton_ light curves of the two observing campaign of MCG-05-23-16 are shown. Data counts from DU1, DU2 and DU3 on board on _IXPE_ and from FPMA/B on board on _NuSTAR_ have been summed. The full energy bands of the three satellites have been used and we adopted a 3 ks time binning.
cent. This represents a significant improvement with respect to the results obtained for the May 2022 observation alone. In Table 1 the best-fit values of the polarization degree and angle obtained using only _IXPE_ dataset are shown.
### _XMM-Newton_, _NuSTAR_ and _IXPE_ combined analysis
As a next step we performed a spectropolarimetric analysis combining the 2-8 keV _IXPE_ spectra (May+November), the 2-10 keV _XMM-Newton_ spectrum (May) and the 3-79 keV _NuSTAR_ spectra (May+November). Taking advantage of the previous analysis of the May observations (Marinucci et al., 2022), we used the following model:
\(\rm const\times\tau_{\rm tbabs}[polconst\times ztabs\timescutoff+vashift(polconst \times\times\tau_{\rm kerrdisk}+polconst\times\times\tau_{\rm kulver})]\),
where the constant component is needed to cross-calibrate the data set collected by the different detectors (DU1, DU2, DU3, FPMA, FPMB and EPIC pn). The primary continuum is modeled using a simple power law with a high energy exponential cut of (cutoff+), while tbabs is used to model the Galactic absorption, using a column density \(N_{\rm H}=7.8\times 10^{20}\,\rm cm^{-2}\)(H14PI Collaboration et al., 2016). The reflection from distant (and neutral) material (such as the external regions of the accretion disc and the torus) is modeled using xillver (Garcia et al., 2013). The spectral index and high energy cut off in the reflection model are linked to those of the primary emission. The Fe abundance is set equal to the solar value and the inclination angle to \(\theta=30^{\circ}\). The kerrdisk component (Brenneman and Reynolds, 2006) is used to deal with some residuals close to 6.4 keV, which may be interpreted as a Fe K\(\alpha\) line from the inner part of the accretion disc, broadened by relativistic effects. For the _XMM-Newton_ spectrum we added a vashift component (which simply provides a shift in energy) in order to deal with the energy of the narrow Fe K\(\alpha\) line, which is inconsistent with being 6.4 keV in the host galaxy rest frame. This effect is only found in the pn (and not in the MOS), so we conclude that it is likely due to calibration issues. We noticed a similar effect also in _NuSTAR_, with an increasing deviation between the first and the second pointing. For this reason, we added a vashift component here too, attributing the effect to instrument degradation in time. In kerrdisk, the black hole spin is fixed to \(a=0.998\), since the fit is largely insensitive to this parameter. Moreover, we fixed the disc emissivity profile to \(\epsilon(r)\propto r^{-3}\). The rest frame energy of the line was fixed to 6.4 keV and the inner radius of the disc to its previously found best-fit value (\(37R_{\rm G}\), as found by Reeves et al., 2007)1. In order to deal with calibration issues that affect the spectral index of May _IXPE_ observation, we modified, as in Marinucci et al. (2022), the response files gain in the \(I\) spectra (using the gain fit command). Finally, as done for the _IXPE_ analysis, we untied the primary continuum spectral indices between the two observations.
Footnote 1: A complete and detailed spectroscopic analysis of these datasets, including the relativistic effects, will be presented in a forthcoming paper (Serafinelli et al, in prep.)
Each main spectral component (i.e. primary continuum and reflection) is associated with a different polarization. The Fe K\(\alpha\) line is expected to be unpolarized (Goosmann and Matt, 2011; Marin, 2018), while the Compton reflection continuum contributes little in the _IXPE_ band pass (Marin et al., 2018). For these reasons, after checking the insensitivity of the fit to variations of these parameters, we fix the polarization of kerrdisk and xillver to zero for simplicity (see also Marinucci et al., 2022). We get only an upper limit (at 99 per cent c.l. for one interesting parameter) for the polarization degree of the primary continuum of \(\Pi=3.3\) per cent. At 68 per cent of c.l., we retrieve a polarization degree and angle of \(\Pi=1.6\pm 0.7\) per cent and \(\Psi=53^{\circ}\pm 12^{\circ}\), respectively. The fit is not ideal (\(\chi^{2}/\)dof= 2381/2259 (see Fig. 4) but, since there is no evidence from the residuals of missing or wrong components in the model, we attribute it to an imperfect cross calibration between the three instruments.
\begin{table}
\begin{tabular}{l c c c} \hline Parameter & May 2022 & Nov 2022 & May+Nov 2022 \\ \hline \(\Pi\) (\%) & \(2.2\pm 1.7\) & \(1.1\pm 0.9\) & \(1.6\pm 0.7\) \\ \(\Psi\) (deg) & \(50\pm 24\) & \(57\pm 27\) & \(53\pm 13\) \\ \(\Pi\) (\%) & \(\leq 4.7\) & \(\leq 3.3\) & \(\leq 3.2\) \\ \hline \end{tabular} _Note_: The errors are shown at 68 per cent and the upper limits at 99 per cent confidence level for one parameter of interest.
\end{table}
Table 1: Polarimetric properties of MCG-05-23-16 obtained with _IXPE_.
Figure 3: Contour plot between the polarization degree \(\Pi\) and angle \(\Psi\) for the November 2022 data. Purple, pink and orange regions correspond, respectively, to 68, 90 and 99 per cent confidence levels for two parameters of interest.
Figure 2: _IXPE_\(Q\) (purple crosses) and \(U\) (orange crosses) grouped Stokes spectra of the second IXPE pointing (November 2022) of MCG-05-23-16 are shown with residuals, along with the corresponding best-fitting model.
In Table 2 we summarize the best-fitting values for all the free parameters of this last and complete analysis (with errors at 68 per cent c.l.). In Fig. 5 we show the contour plot of the polarization degree and angle of the continuum component, as well as a comparison with the contour plot from the May observation alone (Marinucci et al. (2022)).
## 4 Monte Carlo simulations
To interpret the polarization results, we perform detailed numerical simulations with the Monte Carlo code monok(Zhang et al., 2019), following the approach of Ursini et al. (2022) (where spherical lamp-post, conical outflow and slab have been already explored). We focus here on the so-called concave wedge geometry which, similarly to the slab, gives rise to polarization angles parallel to the accretion disc axis. A wedge configuration could potentially solve some of the theoretical issues that arise when using geometries such as the slab or the sphere (Stern et al., 1995; Done et al., 2007; Poutanen et al., 2018). The wedge geometry is defined by three parameters: an inner radius (\(R_{\rm in}\)), an outer radius (\(R_{\rm out}\)), and an opening angle (\(\alpha\)) (see Fig. 6). We assume the inner radius to coincide with the Innermost Stable Circular Orbit (ISCO), which depends on the SMBH spin value (6 \(R_{\rm G}\) for \(a=0\) and 1.24 \(R_{\rm G}\) for \(a=0.998\)). Unlike the slab configuration, the height of the wedge increases with radius. In this configuration the accretion disc is assumed to be truncated at a certain radius, while the corona represents a 'hot accretion flow', extending to the ISCO. The density profile of the wedge corona is uniform and the Thomson optical depth is computed radially. Finally, the accretion disc truncation radius can either coincide with the external edge of the corona or reach lower values, down to the ISCO. In Figure 6 a sketch of the wedge corona is shown.
We perform Monte Carlo simulations for a total of 8 parameter combinations, considering only the external disc scenario with
\begin{table}
\begin{tabular}{c c} \hline \hline Parameter & Best fitting value \\ \hline \(N_{\rm H}\) [cm\({}^{-2}\)] & \((1.30\pm 0.02)\times 10^{22}\) \\ \hline \(\Gamma_{\rm CutOffR}\) (May) & \(1.84\pm 0.01\) \\ \(\Gamma_{\rm CutOffR}\) (Nov) & \(1.85\pm 0.01\) \\ \hline \(E_{\rm C}\) [keV] & \(120^{+9}_{-5}\) \\ \hline \(\Pi_{\rm CutOffR}\) [\%] & \(1.6\pm 0.7\) \\ \(\Psi_{\rm CutOffR}\) [deg] & \(53\pm 12\) \\ \(\Pi_{\rm NLL}=\Pi_{\rm NERR}\) [\%] & 0 \\ \(\Psi_{\rm NLL}=\Psi_{\rm NERR}\) [deg] & 0 \\ \hline \(v_{\rm shift}\) [km s\({}^{-1}\)] \\ _XMM-Newton_ & \(2.2^{+0.3}_{-0.4}\times 10^{3}\) \\ _NuSTAR_ (May) & \(3.4^{+0.9}_{-0.4}\times 10^{3}\) \\ _NuSTAR_ (Nov) & \(5.5^{+1.0}_{-0.8}\times 10^{3}\) \\ \hline \(\theta_{\rm max}\) [deg] & \(61^{+4}_{-1}\) \\ \(a\) & 0.998 \\ \(R_{\rm in}\) [\(R_{\rm G}\)] & 37 \\ \(\theta_{\rm ind}\) [deg] & 30 \\ \hline \multicolumn{3}{c}{NORMALIZATION CONSTANTS} \\ \(N_{\rm CutOffR}\), & \((2.52\pm 0.02)\times 10^{-2}\) \\ \(N_{\rm NALL}\) & \((2.0\pm 0.1)\times 10^{-4}\) \\ \(N_{\rm NAAA}\) & \((3.6\pm 0.3)\times 10^{-5}\) \\ \hline \multicolumn{3}{c}{\(F_{2-10}\) [erg cm\({}^{-2}\)s\({}^{-1}\)]} \\ _XMM-Newton_ & \((7.48\pm 0.01)\times 10^{-11}\) \\ _NuSTAR_ (Nov) & \((1.12\pm 0.02)\times 10^{-10}\) \\ \hline \(L_{2-10}\) [erg s\({}^{-1}\)] & \((1.70\pm 0.01)\times 10^{43}\) \\ \(R\) & \(0.42\pm 0.03\) \\ \hline \(\chi^{2}\)/dof & 2381/2259 \\ \hline \end{tabular} _Note:_ The errors at 68 per cent c.l. for one parameter of interest. \(\Pi\) and \(\Psi\) of xillayer and xierdicks are set equal to 0. Parameters without error have been frozen in the fit. The spectral index for the first observation is obtained applying the gain fit command. \(R\) is the reflection fraction defined as the ratio between the 20–40 keV fluxes of the Compton reflection and the primary component.
\end{table}
Table 2: Best-fitting parameters for the _XMM-Newton_, _NuSTAR_ and _IXPE_ May+November 2022 combined data set.
Figure 4: The EPIC pn (May 2022), _NuSTAR_ (May+November 2022) and _IXPE_ **1** (May+November 2022) spectra together with the best-fitting model (_upper panel_), and the residuals (_lower panel_).
Figure 5: Comparison between the polarization degree \(\Pi\) and angle \(\Psi\) contour plots from the combined (May+November 2022) observations _XMM-Newton_, _NuSTAR_ and _IXPE_ (_saturated plot_) and the first (May 2022) observation only (_pale plot_). Purple, pink and orange regions represent, respectively, the 68, 90 and 99 per cent confidence levels for two parameters of interest.
uniform coronal density (a detailed analysis of the various wedge configurations is beyond the scope of this paper and will be presented in a following paper). The simulations are run for two values of the SMBH spin (\(a=0\) and \(a=0.998\)). In both cases, we set the inner radius to the ISCO, i.e. 6 \(R_{\rm O}\) for the static black hole and 1.24 \(R_{\rm O}\) for the maximally rotating black hole. We test four different opening angles (\(15^{\circ}\), \(30^{\circ}\), \(45^{\circ}\) and \(60^{\circ}\)). We set the coronal electron temperature to 25 keV, as measured by Balokovic et al. (2015) and Marinucci et al. (2022). After setting the electron temperature, for each geometrical configuration we find the optical depth that fits the spectrum best in the _IXPE_ band pass (i.e. 2-8 keV) when we replace the cut-off power law with the spectra obtained with MONN in the best-fit model retrieved in Sect. 3.2. In Table 3 we summarize the physical and geometrical parameters we assume in the simulations. For all the simulations we perform, we assume a mass of the SMBH of \(M_{\rm BH}=2\times 10^{7}M_{\odot}\) and an Eddington ratio of 0.1 (Ponti et al., 2012). Finally, we set the initial polarization (i.e. the polarization of the optical/UV radiation emitted by the accretion disc) as appropriate for a pure scattering, plane-parallel, semi infinite atmosphere (Chandrasekhar, 1960).
The polarization angle is found to be always parallel to the accretion disc axis. The degree of polarization is up to 5-6 per cent for the smaller opening angles, showing no significant variations with energy in the 2-8 keV energy range. In all tested cases, we notice a decrease in the degree of polarization for larger opening angles, as the geometry becomes closer to a sphere, for which zero polarization is expected. Finally, we notice a slight increase in PD between the static and the maximum spinning black hole cases. In Fig. 7 we show the polarization degree as a function of the cosine of the inclination angle (\(\mu=\cos\theta_{\rm disc}\)).
## 5 Conclusions
Constraining the geometry of the comptonizing corona in AGNs is one of the main goals of _IXPE_. So far, three radio-quiet, unobscured AGNs have been observed: MCG-05-23-16 (Marinucci et al., 2022), NGC 4151 (Gianolli et al., 2023) and IC 4329A (Ingram et al. in prep.). The first observation of MCG-05-23-16 put constraints on the polarization degree of the primary continuum (\(\Pi\leq 4.7\) per cent) and found a hint of alignment between the polarization angle and the accretion disc spin axis. In NGC 4151 a clear detection has been obtained, with a polarization degree of \(\Pi=4.9\pm 1.1\) per cent and a polarization angle parallel to the disc axis (as probed by the radio jet). These results disfavour a lamppost geometry (Gianolli et al., 2023)
In this paper we have analysed the second pointing of MCG-05-23-16 performed by _IXPE_ on November 2022, also combining this observation with the first one (May, 2022), and using _XMM-Newton_ and _NuSTAR_ data taken contemporaneously. The results were then compared with theoretical simulations performed with the Comptonization Monte Carlo code MONN. The combined analysis led to a significant decrease of the upper limit to the polarization degree of the primary continuum, which is now \(\Pi\leq 3.2\) per cent (to be compared with \(\Pi\leq 4.7\) per cent from the first observation only, Marinucci et al., 2022).
_Hubble Space Telescope_'s WFPC2 images showed that the ionization cone of MCG-05-23-16 has a roughly \(40^{\circ}\) position angle, as probed by [O iii] emission (Ferruit et al., 2000). Let us assume it as a marker for the Narrow Line Region (NLR), and that the NLR elongation axis is perpendicular to the accretion disc. Even if the polarization angle is formally unconstrained, given that we do not have a firm polarization detection, our analysis nevertheless suggests a statistical preference for a polarization angle in the \(\sim\)\(50^{\circ}\) direction (see Fig. 5). This is a hint that the polarization of the primary emission is aligned with the NLR and so parallel to the accretion disc axis, similar to what was found in NGC 4151 (Gianolli et al., 2023).
Let us now use the PD-PA contour plots to put constraints on the geometrical parameters of the corona. In Fig. 8 we plot, superimposed to the contour plots, the polarization degree and angle from MONN simulations for four different geometries. The results for the lamppost, cone and slab are taken from Ursini et al. (2022) and all assume a static black hole, a coronal temperature of 25 keV and the optical depth which best reproduces the observed MCG-05-23-16 spectrum analyzed by Marinucci et al. (2022). In the absence of any independent constraint on the source inclination, we cannot formally exclude any geometry, as, for low enough angles, any of them can reproduce a polarization degree close to zero. For the slab and the wedge cases (which have polarization angles parallel to the disc axis), the effective upper limit is 3.2 per cent, and we can constrain
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(kT_{\rm e}\) [keV] & SMBH spin & \(R_{\rm in}\) [\(R_{\rm O}\)] & \(\alpha\) [deg] & \(\tau\) & PD\({}_{\rm max}\) [\%] \\ \hline \multirow{8}{*}{25} & \multirow{8}{*}{0} & \multirow{8}{*}{6} & 15 & 6.8 & 5 \\ & & & 30 & 4.2 & 4 \\ & & & 45 & 3.3 & 3 \\ \cline{1-1} & & & 60 & 2.8 & 2.3 \\ \cline{1-1} & & & 15 & 8.3 & 5.8 \\ \cline{1-1} & & & 30 & 5.1 & 4.3 \\ \cline{1-1} & & & 45 & 3.8 & 3.2 \\ \cline{1-1} & & & 60 & 3.2 & 3.2 \\ \hline \hline \end{tabular} _Note_. In the last column the maximum polarization degree (PD\({}_{\rm max}\)) resulting from the simulations is reported.
\end{table}
Table 3: Coronal input parameters for the MONN simulations.
Figure 6: The wedge corona. This geometry is characterized by an inner and an outer radius and an opening angle (measured from the accretion disc plane). In the left configuration, the inner radius of the accretion disc coincides with the outer radius of the corona, while in the right configuration it extends into the corona itself. The \(R\) and \(z\) axes represent the radial and the vertical coordinates.
the source inclination to be lower than \(40^{\circ}\) assuming the slab geometry. If we instead assume the wedge geometry, the allowed range of source inclinations depends also on opening angle \(\alpha\). We see from Fig. 7 that for \(\alpha\geq 45^{\circ}\), the predicted polarisation degree is always below our observational upper limit, thus leaving the source inclination unconstrained. On the other hand, assuming a static SMBH, constrains the inclination to be either below \(50^{\circ}\) or above \(80^{\circ}\) for \(\alpha=30^{\circ}\) and to be lower than \(50^{\circ}\) for \(\alpha=15^{\circ}\). Assuming instead a maximally spinning SMBH, constrains the inclination to be lower than about \(40^{\circ}\) for both \(\alpha=30^{\circ}\) and \(\alpha=15^{\circ}\).
For coronal geometries that predict polarization angles perpendicular to the disc axis, the upper limit on polarization degree is much more stringent (\(\Pi\leq 0.5\) per cent). In this scenario, if we consider the cone-shaped corona, we can constrain the source inclination to be lower than \(20^{\circ}\). Finally, considering the lamppost geometry, since it predicts a very low PD for all inclinations, no constraints could be obtained. Information on the inclination angle, however, can in principle be obtained by modeling the reflection component. Sefarfinelli at al. (in prep.) found the inclination of MCG-05-23-16 to be constrained in the \(30^{\circ}-50^{\circ}\) range. If we assume these values, Fig. 8 shows that the cone-shaped corona is disfavoured.
Figure 8: Comparison between moxk simulations and the contour plot of the combined analysis presented in Sect. 3.2. Different coronal geometries are shown: slab (in light green) and spherical lamppost (in blue) in the _left panel_, wedge (in magenta) and cone (in red) in the _right panel_. Regions of the plot filled with pale colours represent the expected \(\Pi\) for all the possible inclinations of the source, while the saturated ones represent the expected degree for inclinations in the range \(30^{\circ}-50^{\circ}\), as found in Serafinelli et al. (in prep.). The black-dotted line at \(40^{\circ}\) represents the supposed elongation of the NLR (which is the expected polarization angle in the slab and wedge geometries), while the black-dotted line at \(-50^{\circ}\) represents the direction orthogonal to the NLR, (the expected polarization angle for the lamppost and the cone).
Figure 7: Polarization degree from the moxk simulations in the case of a wedge-shaped corona as a function of the cosine of the inclination angle (\(\mu=\cos\theta_{\rm disc}\), where \(\mu=0\) and \(\mu=1\) represent the edge-on and face-on views of the source, respectively. _Left panel_: static SMBH (\(\alpha=0\)) cases. _Right panel_: maximally spinning SMBH (\(\alpha=0.998\)). Purple, black, red and blue lines correspond to the \(15^{\circ}\), \(30^{\circ}\), \(45^{\circ}\) and \(60^{\circ}\) opening angles cases, respectively). The green regions represent the allowed values of the polarization degree (see Sect.3.2).
## Acknowledgements
The _Imaging X ray Polarimetry Explorer_ (_IXPE_) is a joint US and Italian mission. The US contribution is supported by the National Aeronautics and Space Administration (NASA) and led and managed by its Marshall Space Flight Center (MSFC), with industry partner Ball Aerospace (contract NNM15AA18C). The Italian contribution is supported by the Italian Space Agency (ASI) through contract ASI-OHBI-2017-12-I.0, agreements ASI-INAF-2017-12-H0 and ASI-INFN-2017.13-H0, and its Space Science Data Center (SSDC), and by the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy. This research used data products provided by the _IXPE_ Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), at NASA Goddard Space Flight Center (GSFC). Part of the French contribution is supported by the Scientific Research National Center (CNRS) and the French Space Agency (CNES). MD, VK and JPod thank for the support from the GACR project 21-06825X and the institutional support from RVO:67985815. I.A. acknowledges financial support from the Spanish "Ministerio de Ciencia Innovacion" (MCINN) through the "Center of Excellence Severo Ochoa" award for the Instituto de Astrofisica de Andalucia-CSIC (SEV-2017-0709) and through grants AYA2016-80889-P and PID2019-107847RB-C44.
## Data Availability
The data analyzed in this work are either publicly available at the HEASARC database or available from the corresponding author upon request.
|
2306.09212 | CMMLU: Measuring massive multitask language understanding in Chinese | As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context. | Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin | 2023-06-15T15:49:51Z | http://arxiv.org/abs/2306.09212v2 | # CMMLU: Measuring massive multitask language understanding in Chinese
###### Abstract
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.1
Footnote 1: The data and evaluation code are available at [https://github.com/haonan-li/CMMLU](https://github.com/haonan-li/CMMLU)
## 1 Introduction
Large-scale language models (LLMs) have made remarkable advancements in natural language processing and artificial intelligence, revolutionizing the field (Zhang et al., 2022; Scao et al., 2022; Zeng et al., 2023; Touvron et al., 2023; OpenAI, 2023; Wu et al., 2023; Taori et al., 2023; Li et al., 2023a). However, assessing the knowledge and reasoning abilities encoded in these models has become increasingly challenging, especially with the proliferation of LLMs that generate fluent and plausible responses.
To this end, researcher has created various benchmarks from different aspects (Wang et al., 2019, 2019; Lin et al., 2022; Zellers et al., 2019; Hendrycks et al., 2021; Chen et al., 2021). Specially, Hendrycks et al. (2021) proposed MMLU, a benchmark encompasses various tasks ranging from elementary mathematics and computer science to management and law, which can be used to comprehensively measures LLM capabilities by leveraging pre-training knowledge. Due to its format of multiple-choice questions, which facilitates evaluation, and its inclusion of a wide variety of subjects, it has become widely used as a fundamental assessment of the knowledge encoded by LLMs. However, this benchmark is in English, which limits its ability to assess LLMs for other languages. Although some researchers (OpenAI, 2023) have made efforts to automatically translate it for evaluating LLMs' knowledge and abilities in other languages, the inherent bias towards U.S. culture in the dataset renders it inappropriate and even impossible to accurately assess the real-world applications of LLMs in diverse cultures and languages.
In this paper, we propose CMMLU (Figure 2), a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context. CMMLU covers a
Figure 1: Comparison between MMLU, CLUE and our CMMLU.
wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within humanities and social sciences. Many of these tasks are not easily translatable from other languages due to their specific contextual nuances and wording. Furthermore, numerous tasks within CMMLU have answers that are specific to China and may not be universally applicable or considered correct in other regions or languages.
We assess ChatGPT and most advanced open-sourcing multilingual- and Chinese-oriented LLMs on CMMLU. The evaluation results reveal that the majority of these models struggle to achieve an accuracy score above 40%, while the random guess accuracy stands at 25%. Notably, ChatGPT demonstrates an average accuracy rate of 55%. These findings highlight the considerable room for improvement in LLMs regarding Chinese knowledge and language understanding.
To gain a deeper understanding of the models' proficiency in handling Chinese knowledge, we conduct comprehensive analysis experiments. We first focus on examining the models' performance across various subjects. It becomes evident that all models exhibit imbalanced performance across different subjects, with comparatively higher scores in humanities and social sciences, but lower scores in China-specific and STEM subjects. To delve further into the matter, we explore the effectiveness of chain-of-thought prompts and few-shot examples in aiding the models' comprehension of tasks and enhancing their reasoning abilities. Additionally, we investigate the impact of model size on performance, analyze the relationship between question length and difficulty, and explore two specific question types that existing language models have not yet effectively addressed.
## 2 Related Works
Benchmarking plays a crucial role in evaluating AI development, particularly in the domain of LLMs. While benchmarks such as GLUE Wang et al. (2019) and SuperGLUE Wang et al. (2019) have made significant progress in evaluating natural language understanding (NLU) tasks, their primary focus is on assessing language skills. As a result, they have been less commonly used as benchmarks for LLMs as more and more models are able to generate fluent and plausible language. Meanwhile, various benchmarking datasets have been proposed to evaluate LMs' abilities in different aspects, including reading comprehension Rajpurkar et al. (2018); Kwiatkowski et al. (2019); Li et al. (2022), summarization Hermann et al. (2015), commonsense reasoning Clark et al. (2018); Talmor et al. (2019); Sakaguchi et al. (2020), mathematical reasoning Hendrycks et al. (2021); Cobbe et al. (2021), and code generation Chen et al. (2021); Austin et al. (2021). However, some recent work Goyal et al. (2022); Liu et al. (2023) have demonstrate that LLM can perform even better than human or human annotators on some tasks such as summarization, leading to a re-evaluation of the appropriateness of using these benchmarks. In order to comprehensively assess the capabilities of LLMs, some benchmarks have incorporated massive multi-task evaluations in their framework Hendrycks et al. (2021); Liang et al. (2022); Srivastava et al. (2023). An example is MMLU Hendrycks et al. (2021), which includes multiple domains and tasks based on real-world exams. It has gained wide usage in evaluating LLMs due to its standardized and simplified format, comprehensive nature, and real-world relevance. However, All these benchmarks are primarily focused on the English language.
Given that Chinese is the language with the highest number of speakers worldwide, several benchmarks have been proposed to evaluate Chinese language models. Following the footsteps of GLUE and SuperGLUE, Xu et al. (2020) introduced CLUE, a pioneering large-scale Chinese NLU benchmark that is widely used today. They have also recently proposed SuperCLUE Xu and others from SuperCLUE team (2023), which specifically focuses on LLMs. A comparison of MMLU, CULE and our CMMLU is shown in Figure 1. Recently, there has been several Chinese benchmarks that following MMLU style, while all of these are concurrent work to ours. In detail, Zeng (2023) presented MMCU, a test set that covers four major domains (medicine, law, psychology, and education), with a particular focus on medicine and education. AGIEval Zhong et al. (2023) focuses on Chinese standardized exams, such as college entrance exams, math competitions, and lawyer qualification tests. Huang et al. (2023) released C-Eval, a benchmark that incorporates questions across four difficulty levels of elementary school, middle school, high school and college. Another benchmark, M3KE Liu et al. (2023), collects 71
tasks from the Chinese education examination system, which has the same difficulties coverage with C-Eval.
Compared to these benchmarks, our proposed benchmark has several distinct features. Firstly, it includes more than 10 subjects that may not typically appear in standard exams but are relevant to people's daily life, such as _Chinese food culture_, _Chinese driving rule_, etc. Secondly, it covers not only China-specific knowledge but also general world knowledge that a Chinese individual should be familiar with, such as subjects of _world religion_, _world history_, _global facts_, and more. Lastly, we have made our data completely public, enabling communities to freely and conveniently utilize and evaluate their models.
## 3 Cmmlu
Task OverviewWe created an extensive multi-task test in Chinese, which covers diverse areas of knowledge, including the humanities, social sciences, STEM (science, technology, engineering, and mathematics), and other areas that are important in human daily life. In includes the common test questions such as mathematics, physics, chemistry that the answers are consistent with different languages or areas, and also includes several tasks that the answers are very area-dependent, such as _Chinese driving rule_, _Chinese food culture_, _Chinese teacher qualification_. The questions in these tasks involve lots of China-related knowledge, and can test a model's understanding and adaptability to China. In addition, CMMLU also contains tasks that can only expressed in Chinese, such as _ancient Chinese language_ and _Chinese literature_. The terms and concepts involved in these tasks are heavily rely on Chinese expression and almost impossible to be obtained from translation. The full subject list, concepts that each subject test the number of questions, and the statistics of questions and answer length are provided in Appendix A.
Data collectionThis dataset contains 11,528 questions and across 67 subjects. Each subject have at least 105 questions, which we split into a few-shot development set with 5 questions, and a test set with more than 100 questions.
We hire four annotators who held undergraduate degrees or higher qualifications to manually collected these questions and answers from freely available resources, with a rate of 50 CNY per hour. To prevent our questions from appearing in the training set of LLMs, we make effort to utilize internal resources within the education system (non-publicly available materials), mock exam questions, and questions from quiz shows. The entire collection process takes around 250 hours.
FormatEach question in the dataset is a multiple-choice questions with 4 choices and only one choice as the correct answer, see Figure 3 for an example. The questions can be expressed in fill in the
Figure 2: CMMLU task overview.
blank (by choosing the correct option) style, or direct question format. For math formulas, chemical formulas and some other math expressions, we use a mixture of LaTeX and plan text by 50:50, where plane text were only allowed if a expression is commonly used and not prone to ambiguity (judge by annotator). For instance, the chemical expression for water can be written as plane text "H2O", or in LaTeX format "SH_{2}OS".
Quality CheckingTo further check data quality, we sampled 5% questions with answers for each subject, and conduct detailed verification though online resources. Finally, we identified an average mislabeling rate of approximately 2%. Based on the evaluation results in Section 4 that most models are struggle to achieve and average accuracy of 40%, we believe such an error rate will not significantly impact the overall evaluation accuracy.
## 4 Experiments
To provide an overview of existing open-sourced LLMs on language understanding within the context of Chinese, we evaluate 18 advanced LLMs in different sizes, language orients, and stages (pre-trained or fine-tuned). We analyze their performance and investigated several factors that could affect the performance of LLMs on this knowledge-centric benchmark.
SetupOur goal is to assess the knowledge leveraged by an LLM during pre-training and/or fine-tuning. For open sourced models, we follow MMLU to obtain the probability of the next tokens after the prompt and selected the one with the highest probability among 'A', 'B', 'C', and 'D' as the model's choice. For non-open sourced models such as ChatGPT, we generated output and used a series of regular expressions to extract the model's choice.2 We employed both zero-shot (do not input examples) and few-shot (input with few examples) settings.
Footnote 2: If nothing matched by regular expression, we assign a random choice among ‘A’, ‘B’,‘C’, ‘D’ to make fair comparison,
PromptWe introduce each question with the phrase "["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["][""]["]["]["]["]["]["]["]["]["]["]["]["]["]["][""]["]["]["][""]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["][""]["]["]["]["]["]["]["]["]["]["]["]["]["][""]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["][""]["]["]["][""]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["]["][""]["][""]["]["][""]["][""]["][""]["][""]["]["][""]["][""]["][""]["]["][""]["][""]["]["][""]["][""]["]["]["]["]["][""]["][""][""]["][""][""]["]["][""]["][""]["]["]["]["][""]["][""]["][""][""][""]["][""][""]["][""]["][""][""]["][""]["][""]["][""]["][""]["][""][""]["]["]["][""]["][""]["]["][""][""]["][""]["][""]["][""]["][""]["][""]["][""]["][""][""][""]["][""][""]["][""][""]["][""][""]["][""][""][""][""][""][""][""]["][""][""]["][""][""][""]["][""][""][""][""][""]["][""][""]["][""]["][""][""]["][""][""]["][""][""][""]["][""]["][""]["][""][""]["][""][""][""][""][""][""][""][""][""][""]["][""][""][""][""][""][""][""][""][""][""][""][""][""][""][""][""][""][""][""][""]["][""][""][""][""][""][""][""][""][""]["""][""][""][""][""][""]["][""]["""][""][""][""][""][""][""][""][""][""][""][""][""]["][""][""][""][""][""][""][""][""][""][""]["][""][""][""][""][""][""]["][""][""][""][""]["][""][""]["][""][""][""]["][""][""][""][""][""]["][""][""][""][""][""]["][""][""]["][""][""]["][""]["][""][""]["][""][""]["][""][""][""][""]["][""][""][""]["""]["][""][""][""]["][""]["][""][""]["][""]["][""][""]["][""][""][""][""][""]["][""]["][""][""]["][""]["][""][""]["]["][""]["][""][""][""][""]["][""][""]["][""][""][""][""][""][""][""]["][""][""][""][""]["][""]["]["""][""][""][""][""][""][""][""]["][""][""]["][""][""][""][""][""][""][""][""][""][""]["""][""][""]["""][""][""][""]["""][""][""][""]["""][""][""][""][""]["""][""][""][""][""][""][""][""][""][""]["""][""][""]["""][""][""][""][""]["]["""][""]["][""][""][""]["""][""][""][""][""]["""][""][""]["""][""][""][""][""]["""]["""][""]["""][""][""]["""][""]["""][""][""][""]["""][""]["""]["""]["""][""]["""]["""][""][""]["""]["""]["""][""][""]["""]["""]["""]["""]["""]["""][""""][""]["""]["""]["""][""""]["""]["""][""""][""""][""""]["""][""""][""""]["""""]["""""]["""""][""""""]["""""]["""""]["""""][""""""]["""
2023), Bactrian-X-7B (Li et al., 2023a), Falcon-7B/40B (Almazrouei et al., 2023); (2) Chinese-oriented models: MOSS-SFT-16B (OpenLMLab, 2023), Chinese-LLaMA-7B/13B (Cui et al., 2023), Chinese-GLM-10B (Du et al., 2022), ChatGLM-6B (Zeng et al., 2023) and BatGPT-15B (Li et al., 2023b). The details about these models are introduced in Appendix D.
### Main Results
Table 1 shows the performance of all models under the five-shot setting. Since the zero-shot results are overall slightly lower than the five-shot results, we provide them in Appendix B.1.
By modelFrom the first block of the table we observe the following: (1) Small pre-trained models (BLOOM-7B, LLaMA-7B) without Chinese-specific training or fine-tuning achieve random accuracy of approximately 25%. While the performance of these pre-trained models increases as the model size increases; (2) For those multilingual models, fine-tuning using Chinese resources consistently improves their performance on the benchmark (Bactrian-LLaMA vs. LLaMA, BLOOMZ vs. BLOOM); (3) Falcon-40B is the best open-sourced model among these in our benchmark, achieving an accuracy of 41.45%. However, there is still a significant gap between Falcon-40B and ChatGPT, which is the overall best model that achieves an accuracy of 55.51%.
From the second block of the table, we find that: (1) Among the Chinese specific LLMs, ChatGLM-6B demonstrates the best overall performance with the smallest model size. We attribute it to the high quality of its training data; (2) Chinese-LLaMA and Bactrian-LLaMA have similar overall performance, given that Chinese-LLaMA conduct second phase of pre-training in Chinese and vocabulary expansion based on LLaMA, while Bactrian-LLaMA only conduct instruction-following fine-tuning. This indicates that the second phase of pre-training with new data does not enrich the knowledge encoded in the model, although they may enable the models to generate more fluent and plausible Chinese responses.
By subjectFrom the perspective of subject type, all models exhibit better performance in humanities, social sciences, and other subjects compared to STEM subjects, which we believe is due to the inherent difficulty of STEM topics. Additionally, all models' performance in the China-specific category is relatively weak, slightly surpassing their performance in STEM subjects but significantly lagging behind other categories. Notably, models with Chinese-specific pre-training or fine-tuning (models in the second section) show smaller performance gaps between China-specific and other categories.
We compare the performance of the best-performing Chinese model, ChatGLM, with best
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & STEM & Humanities & Social Science & Other & China-specific & Average \\ \hline ChatGPT & **47.81** & **55.68** & **56.50** & **62.66** & **50.69** & **55.51** \\ LLaMA-65B & 34.47 & 40.24 & 41.55 & 42.88 & 37.00 & 39.80 \\ Falcon-40B & 33.33 & 43.46 & 44.28 & 44.75 & 39.46 & 41.45 \\ LLaMA-30B & 29.69 & 33.68 & 34.08 & 37.40 & 30.68 & 33.63 \\ Bacterial-LLaMA-13B & 27.52 & 32.47 & 32.27 & 35.77 & 31.56 & 31.88 \\ LLAMA-13B & 29.84 & 30.96 & 31.60 & 33.07 & 30.65 & 31.36 \\ BLOOMZ-7B & 30.56 & 39.10 & 38.59 & 40.32 & 37.15 & 37.04 \\ BLOOM-7B & 25.40 & 25.39 & 24.72 & 25.10 & 24.38 & 25.11 \\ Bactrian-LLaMA-7B & 24.83 & 27.48 & 27.70 & 28.37 & 26.99 & 27.08 \\ LLaMA-7B & 25.41 & 26.86 & 26.45 & 26.85 & 26.29 & 26.36 \\ Falcon-7B & 26.20 & 26.20 & 25.43 & 24.92 & 25.34 & 25.66 \\ \hline MOSS-SFT-16B & 27.23 & 30.41 & 28.84 & 32.56 & 28.68 & 29.57 \\ BatGPT-15B & **33.49** & 35.38 & 36.31 & **42.14** & 37.00 & 36.72 \\ Chinese-LLaMA-13B & 27.12 & 33.18 & 34.87 & 35.10 & 32.97 & 32.63 \\ Chinese-GLM-10B & 25.49 & 27.05 & 27.42 & 29.21 & 28.05 & 27.26 \\ Chinese-LLaMA-7B & 25.79 & 27.45 & 26.35 & 26.06 & 25.45 & 26.36 \\ ChatGLM-6B & 32.35 & **39.22** & **39.65** & 38.62 & **37.70** & **37.48** \\ \hline Random & 25.00 & 25.00 & 25.00 & 25.00 & 25.00 & 25.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Five-shot average accuracy of all models. We report average accuracy over subjects within each categories. “Average” = average over all subjects. The first block are multilingual- or English-oriented models, the second block are Chinese-oriented models. Models in each block are ranked by model size.
performing multilingual model, ChatGPT, for each subject. We categorize the subjects and present the results in Figure 4. The numerical results can be found in Appendix B.2. From the figure, we note that the model's performance appears to be unbalanced, excelling in certain subjects such as world history but struggling in others like mathematics. We observe that _ancient Chinese_ and _college actuarial science_ are the most challenging subjects for both ChatGLM and ChatGPT, yielding random results, while the _legal and moral basis_ is one of the easiest subject for both models. Comparing the two models, we find that in most cases, ChatGPT outperforms ChatGLM by a significant margin. In _machine learning_ and _computer security_ subjects, ChatGPT achieves nearly twice the accuracy of ChatGLM. However, in the Chinese-specific category, ChatGLM's performance is notably closer to that of ChatGPT. It even surpasses ChatGPT in two subjects: _Chinese history_ and _high school politics_. We believe this is because ChatGLM has encountered distinct data source compared to ChatGPT, particularly within the China-specific category. These findings suggest that it is important to find suitable data sources for multilingual LLMs to accommodate users with different language backgrounds.
### Analysis
In order to gain a comprehensive understanding of the LLM's performance under various conditions, we have explored three factors that may enhance the model's performance and three factors that could potentially diminish its performance. Specifically, we have investigated whether the following factors can improve the model's performance: (1) utilizing chain-of-thought prompts, (2) increasing the number of input examples, and (3) employing larger-sized models within the same family. Conversely, we have explored whether the following factors make the task more challenging for LLMs: (4) questions with longer lengths, (5) questions containing negation words, and (6) questions with sub-options within them. We choose the models based on the overall performance from Table 1. For most analysis, we use top-3 multilingual models: ChatGPT, Falcon-40B, LLaMA-65B, and top-2 Chinese-oriented models: ChatGLM-6B and BatGPT-15B.
(please provide the correct answer choice directly)" to "
culty, and display the relation between question difficulty and question length of Falcon-40B in Figure 7. We conduct a linear regression and find that the correlation between question length and true label confidence is slightly positive.
**Are questions with negation more challenging?** Previous research has pointed out that language models may struggle with negation (Kassner and Schutze, 2020; Hosseini et al., 2021). To investigate whether this issue also exists in the context of Chinese language, we first employ string matching to classify the test set into questions with and without negation words. We then compare the performance of different models on these subsets. The results are presented in Table 3.
From the table we find that all models perform less effectively on questions containing negative words compared to other questions, which aligns with findings from previous studies highlighting this common limitation of large language models. Notably, among these models, ChatGPT exhibits the smallest performance gap between the negation and non-negation subsets, with only a 1% difference, while Falcon-40B shows a gap of approximately 16% in the zero-shot setting. An interesting observation is that for models without fine-tuning, few-shot examples alleviate the performance drop for negation questions. This leads us to infer that these models (LLaMA-65B and Falcon-40B) have already acquired substantial knowledge during pretraining. The subsequent instruction-following tuning or reinforcement learning from human feedback can assist them in effectively addressing negation questions, thereby enhancing their overall capabilities.
**Are questions with sub-options more challenging?** There is a typical question types in all kinds of Chinese exams called "sub-option questions". These questions include a main statement along with multiple sub-options, and inquire about the count, order, or selecting of the sub-options, which requiring model to have deeper reasoning and inference skills (see example in Figure 8). The sub-options in CMMLU can appear in different formats, such as "a, b, c..., "1, "2, "3...". We classify the data into two subsets based on sub-option presence, and display evaluation results in Table 4.
We observe that all these LLMs performed weaker on sub-options questions compared to those without sub-options. In particular, ChatGPT demonstrats a significant decrease of approximately 20 points in performance for sub-option questions, while the other models experience a decline ranging from 5% to 15%. Another finding is that performance gap between sub-option ques
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{0-shot} & \multicolumn{2}{c}{5-shot} \\ \cline{2-5} & w/ & w/o & w/ & w/o \\ \hline ChatGPT & 34.73 & 53.91 & 33.97 & 56.42 \\ BatGPT-15B & 32.82 & 38.83 & 26.72 & 36.87 \\ ChatGLM-6B & 30.53 & 41.26 & 29.01 & 37.82 \\ \hline LLaMA-65B & 26.72 & 35.36 & 28.63 & 40.26 \\ Falcon-40B & 23.66 & 38.76 & 29.01 & 42.07 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average accuracy classified by questions w/ and w/o sub-options.
Figure 8: An example of questions with sub-options. Example from high school geography.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{0-shot} & \multicolumn{2}{c}{5-shot} \\ \cline{2-5} & w/ & w/o & w/ & w/o \\ \hline ChatGPT & 52.32 & 53.63 & 55.04 & 56.01 \\ BatGPT-15B & 31.04 & 39.66 & 29.44 & 37.44 \\ ChatGLM-6B & 33.52 & 41.92 & 31.12 & 38.41 \\ \hline LLaMA-65B & 23.04 & 36.64 & 37.20 & 40.34 \\ Falcon-40B & 24.32 & 40.12 & 35.52 & 42.53 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average accuracy classified by questions w/ and w/o negation.
Figure 7: The relation between question length (in sub-tokens) and true label confidence from Falcon-40B.
tions and other questions exceeds 10% for all multilingual models, while it remains below 10% for Chinese-oriented models. We believe this is because there are more such case in the training data for the Chinese language.
## 5 Conclusion
We introduce CMMLU, a groundbreaking benchmark designed to assess the multi-task language understanding capabilities in Chinese. Our experimental findings reveal substantial opportunities for improvement within existing large language models. Through extensive analysis, we identify several factors that impact model performance and propose actionable directions for enhancing LLMs. We are confident that our benchmark dataset and analytical insights will empower researchers to effectively evaluate and design Chinese LLMs.
|
2304.07446 | Evidence for Misalignment Between Debris Disks and Their Host Stars | We place lower limits on the obliquities between debris disks and their host
stars for 31 systems by comparing their disk and stellar inclinations. While
previous studies did not find evidence for misalignment, we identify 6 systems
with minimum obliquities falling between ~30{\deg}-60{\deg}, indicating that
debris disks can be significantly misaligned with their stars. These
high-obliquity systems span a wide range of stellar parameters with spectral
types K through A. Previous works have argued that stars with masses below 1.2
$M_\odot$ (spectral types of ~F6) have magnetic fields strong enough to realign
their rotation axes with the surrounding disk via magnetic warping; given that
we observe high obliquities for relatively low-mass stars, magnetic warping
alone is likely not responsible for the observed misalignment. Yet, chaotic
accretion is expected to result in misalignments of ~20{\deg} at most and
cannot explain the larger obliquities found in this work. While it remains
unclear how primordial misalignment might occur and what role it plays in
determining the spin-orbit alignment of planets, future work expanding this
sample is critical towards understanding the mechanisms that shape these
high-obliquity systems. | Spencer A. Hurt, Meredith A. MacGregor | 2023-04-15T01:37:10Z | http://arxiv.org/abs/2304.07446v2 | # Evidence for Misalignment Between Debris Disks and Their Host Stars
###### Abstract
We place lower limits on the obliquities between debris disks and their host stars for 31 systems by comparing their disk and stellar inclinations. While previous studies did not find evidence for misalignment, we identify 6 systems with minimum obliquities falling between \(\sim\)30\({}^{\circ}-60^{\circ}\), indicating that debris disks can be significantly misaligned with their stars. These high-obliquity systems span a wide range of stellar parameters with spectral types K through A. Previous works have argued that stars with masses below 1.2 \(M_{\odot}\) (spectral types of \(\sim\)F6) have magnetic fields strong enough to realign their rotation axes with the surrounding disk via magnetic warping; given that we observe high obliquities for relatively low-mass stars, magnetic warping alone is likely not responsible for the observed misalignment. Yet, chaotic accretion is expected to result in misalignments of \(\sim\)20\({}^{\circ}\) at most and cannot explain the larger obliquities found in this work. While it remains unclear how primordial misalignment might occur and what role it plays in determining the spin-orbit alignment of planets, future work expanding this sample is critical towards understanding the mechanisms that shape these high-obliquity systems.
0000-0002-4880-7088]Spencer A. Hurt
0000-0002-4882-7885]Meredith A. MacGregor
## 1 Introduction
The Sun's equatorial plane is well-aligned with the ecliptic, having an obliquity of \(7.155\pm 0.002^{\circ}\)(Beck & Giles, 2005). Most of the major solar system bodies move in nearly the same plane, suggesting that the planets formed from a protoplanetary disk that was rotating in the same direction as the early Sun. It has commonly been thought that other planetary systems form similarly and that exoplanet orbital axes should be closely aligned with their stars' spin axes. However, observational techniques such as the Rossiter-McLaughlin effect (Queloz et al., 2000; Shporer & Brown, 2011; Triaud, 2018), Doppler shadows (Albrecht et al., 2007; Zhou et al., 2016), and gravity darkened transits (Barnes, 2009; Ahlers et al., 2020) have measured large spin-orbit angles for many extra-solar systems (Albrecht et al., 2022).
Possible mechanisms responsible for spin-orbit misalignment generally fall into three categories: primordial misalignment, post-formation misalignment, and changes in the stellar spin axis that are independent of planet formation. The first, primordial misalignment, suggests that a protoplanetary disk is misaligned with its star's rotation axis and that planets with large spin-orbit angles form in situ. Processes that could misalign the disk include chaotic accretion (where the late arrival of material from the molecular cloud warps or tilts the disk; Bate et al., 2010; Thies et al., 2011; Fielding et al., 2015; Bate, 2018; Takaishi et al., 2020), magnetic warping (when the Lorentz force between a young star and ionized inner disk magnifies any initial misalignments; Lai et al., 2011; Foucart & Lai, 2011), and secular processes involving an inclined stellar or planetary companion (Borderies et al., 1984; Lubow & Ogilvie, 2000; Batygin, 2012; Matsakos & Konigl, 2017). Post-formation misalignment implies that after formation, gravitational interactions alter a planet's orbit. This could occur via planet-planet scattering (Malmberg et al., 2011; Beauge & Nesvorny, 2012) or secular processes like Kozai-Lidov cycles (Naoz, 2016) or disk-driven resonance (Petrovich et al., 2020). Both primordial and post-formation misalignment could also occur via stellar clustering, which appears to have a strong influence on the architecture of planetary systems (Tristan & Isella, 2019; Winter et al., 2020; Rodet & Lai, 2022) and may be commonplace (Yep & White, 2022). Thirdly, it has been proposed that stars with convective cores and radiative envelopes can reorient themselves without an external torque due to internal gravity waves generated at the radiative-convective boundary (Rogers et al., 2012, 2013).
Hot Jupiters--massive planets on very short orbits--frequently appear misaligned with hot, rapidly-rotating stars that generally fall above the Kraft break (\(\sim\)6200 K; Kraft, 1967) while low-mass planets appear misaligned with both cool and hot stars (Winn et al., 2010; Schlaufman, 2010; Albrecht et al., 2022). It has been suggested that hot Jupiters first enter high-obliquity orbits regardless of their host stars' properties; however, tidal interactions between
the massive, close-orbiting planets and the relatively thick convective envelopes found in stars below the Kraft break realign the stellar spin axes with the hot Jupiters' orbits. The mechanisms responsible for spin-orbit misalignment may help reveal how these exotic planets form. While formation in situ through core accretion may be possible (Batygin et al., 2016), it would be challenging for enough material to accumulate and develop into a planet that close to a star. Instead, if a massive planet formed far from its star, it could move to a short orbit via disk-driven migration or high-eccentricity tidal migration (Dawson and Johnson, 2018). If the hot Jupiter were primordially misaligned, this would indicate disk-driven migration, whereas post-formation misalignment could result from high-eccentricity tidal migration.
Constraints on which mechanisms actually contribute to spin-orbit misalignment can be placed using the observed distribution of obliquities and trends across system parameters. Additional constraints on primordial misalignment can be placed using observations of circumstellar disks and their stars. Watson et al. (2011) first compared stellar inclinations to disk inclinations for 8 systems with spatially resolved debris disks, while Greaves et al. (2014) later did the same for 10 systems imaged by the _Herschel_ satellite. Neither found evidence for misalignment, but both had limited samples and predate many spatially resolved images of disks taken by the Atacama Large Millimeter/submillimeter Array (ALMA), Hubble Space Telescope (HST), and Gemini Planet Imager (GPI) that can robustly measure disk inclinations. Davies (2019) compared inclinations for resolved disks (mostly protoplanetary) in the \(\rho\) Ophiuchus and Upper Scorpius star forming regions, finding that a third of systems are potentially misaligned. Davies (2019) used these contrasting results to raise the additional question of whether or not debris disks preserve the preceding protoplanetary disks' geometry and if star-disk-planet interactions or the formation of a debris disk can change the star-disk obliquity.
In this work, we study the star-disk alignment for an expanded sample of 31 resolved debris disks. In Section 2, we outline our methods, including the sample selection and measurements made. We then discuss our results in Section 3. Finally, in Section 4, we conclude our findings.
## 2 Methods
We assembled a list of spatially resolved debris disks from the literature, excluding circumbinary and circumtriple disks to simplify our analysis. We then identified systems with published stellar inclinations (\(i_{s}\)) or the data necessary to measure the inclination available, leaving a sample of 31 targets that can be found in Table 1.
Effective temperatures (\(T_{\rm eff}\)), masses (\(M\)), and radii (\(R\)) were taken from the _TESS_ Input Catalog (TIC; Stassun et al., 2018; Paegert et al., 2021) for most stars in our sample. Given the majority resolved debris disks are located around nearby, bright stars, the TIC adopted most of these parameters from large spectroscopic catalogs, avoiding the challenges of color-temperature relationships discussed in Stassun et al. (2018), particularly for the coolest stars (\(T_{\rm eff}<3800\) K). Parallaxes are known for all objects in our sample, providing precise measurements of radius in the TIC. 4 targets (AU Mic, Vega, \(\beta\) Leonis, and \(\beta\) Pictoris) that were either missing measurements or were reported without uncertainties were supplemented using values found elsewhere in the literature. The number of confirmed planets in each system was additionally determined by searching the NASA Exoplanet Archive Confirmed Planets Table (NASA Exoplanet Archive, 2019). 8 out of the 31 systems have at least one confirmed planet.
To determine the projected rotational velocities (\(v\sin i\)) of our sample, we adopted published values for each target. If multiple values were found, we adopted the measurement made using the highest-resolution spectrograph. Only 1 object (\(\beta\) Leonis) had a \(v\sin i\) reported without uncertainties; we assume 10% error bars on this measurement, typical for the uncertainties in our sample. 2 objects (GJ 581 and HD 23484) had upper limits on their projected rotational velocities and were treated as such in our analysis. We note that spectral line broadening from rotation is degenerate with turbulence in the stellar atmosphere and the \(v\sin i\) measurements in this work use a variety of modeling frameworks to account for macroturbulence, possibly introducing unknown systematics to our analysis.
Archival rotation periods were gathered for 26 objects in our sample. We also directly measured the rotation period for stars that displayed quasiperiodic variations in the Pre-Search Data Conditioned Simple Aperture Photometry (PDCSAP) light curves produced by the _TESS_ Science Processing Operations Center (SPOC), which have been corrected for instrumental systematics (Stumpe et al., 2012; Smith et al., 2012; Stumpe et al., 2014). Each photometric time series was modeled using a Gaussian process (GP), which are commonly used to represent rotational modulation induced by active regions rotating in and out of view (Haywood et al., 2014; Rajpaul et al., 2015). We used the rotation kernel implemented in celerite2 that combines two dampened simple harmonic oscillators with periods of \(P\) and \(P/2\) to capture the stochastic variability in a star's rotation signal (Foreman-Mackey et al., 2017; Foreman-Mackey, 2018).
Using _TESS_ data, we measured the rotation period of 18 stars, 5 of which had no previously published measurements; all of these measurements agree with either our archival values or the rotation period relationship in Noyes et al. (1984). For each of these 18 targets, we used the rotation periods and uncertainties determined using _TESS_ data as they are well-constrained and measured under a standard framework. These periods, along with the projected rotational velocities and the corresponding uncertainties, can be found in Table 1 while the _TESS_ light curves, GP models, and rotation period posteriors are shown in Appendix A.
We then determined the stellar inclination for each target with a known radius, rotation period, and \(v\sin i\) using the projected rotational velocity method, where the inclination is given by
\[i=\arcsin\left(\frac{v\sin i}{v}\right)=\arcsin\left(\frac{v\sin i}{2\pi R/P} \right). \tag{1}\]
As discussed by Masuda and Winn (2020), \(v\sin i\) and \(v\) are not independent from each other, complicating the statistical inference of \(i\). A simple technique accounting for this is to to use a Markov chain Monte Carlo (MCMC) process with a uniform prior on \(\cos i\) and measurement informed priors on \(R\), \(P\), and \((2\pi R/P)\,\sqrt{1-\cos^{2}i}\)(Albrecht et al., 2022). This approach is also advantageous because it easily accounts for uncertainties in our measurements of \(R\), \(P\), and \(v\sin i\). Given that measurements of \(v\sin i\) are typically made from a star's spectral absorption lines and require that broadening from rotation be distinguished from other sources, including turbulence in the stellar atmosphere or instrumental resolution, the projected rotational velocity method is often subject to systematic uncertainties. Therefore, we adopted stellar inclinations previously determined using more accurate methods such as interferometry (Vega and \(\beta\) Leonis), asteroseismology (\(\beta\) Pictoris), and starspot tracking (\(\epsilon\) Eridani) whenever possible. Interferometry and asteroseismology also allow us to expand our sample to early-type stars with weak, often undetectable rotational modulation.
We conducted a literature search for disk inclinations (\(i_{d}\)), selecting values that were the most well-constrained, typically corresponding to images with the highest spatial resolution. Most of these images were taken using ALMA, HST, and GPI, although the uncertainties on the inclinations vary widely as the spatial resolution is highly dependent on instrument configuration and distance. Inclinations can also be determined more precisely for edge-on disks than face-on disks.
Figure 1: Disk inclinations plotted against the stellar inclinations for each system in our sample. The color of each point corresponds to the absolute value of the median of \(\Delta i=i_{d}-i_{s}\), where blue indicates a well-aligned system and red indicates a large misalignment. Capped error bars represent the range of values that a parameter falls within (assuming a uniform distribution) while the rest represent the 68% credibility interval.
To better understand whether the star and disk might be misaligned, we calculated the difference between the disk and stellar inclinations (\(\Delta i=i_{d}-i_{s}\)), the absolute value of which gives the minimum star-disk misalignment; because we are unable to determine the position angle of the stellar rotation axis or the direction of the disk and stellar angular momenta, we are unable to calculate the full obliquity. For systems with stellar inclinations determined using the projected rotational velocity technique, we assumed the MCMC posterior distribution; for the systems with archival measurements, a sample of stellar inclinations were drawn from Gaussian distributions. Similarly, we drew a sample of disk inclinations using either uniform or Gaussian distributions when appropriate. We then took the differences between our samples of disk and stellar inclinations and adopted the median value along with lower and upper uncertainties representative of the 68% credibility interval. These differences, along with the stellar and disk inclinations, are given in Table 2.
## 3 Results and Discussion
### Comparing Disk and Stellar Inclinations
25 systems appear to be closely aligned with disk and stellar inclinations consistent of being within 10\({}^{\circ}\) of each other (although large uncertainties mean that some of these systems could still be misaligned). There are several exceptions; most notably, HD 10647, HD 138813, HD 191089, HD 30447, \(\epsilon\) Eridani, and \(\tau\) Ceti all have misalignments ranging roughly between 30\({}^{\circ}\) and 60\({}^{\circ}\). If stars and their disks are well-aligned, we would expect to see a monotonic, increasing relationship between disk inclination and stellar inclination in Figure 1. We test how well-aligned systems tend to be by calculating the Spearman rank-order correlation coefficient (\(r_{S}\)) for our data set. Using the median values for our inclinations, we find a coefficient of 0.62 with a p-value of 0.0002; however, this does not reflect the broad uncertainties on some of the inclination measurements. For each disk and stellar inclination, we drew a random sample and calculated a new coefficient and the corresponding p-value 10\({}^{4}\) times. The 68% credibility interval for \(r_{S}\) was \(.54\pm 0.08\) with p-values of \(0.0008^{+0.0036}_{-0.00069}\). These values for \(r_{S}\) are notably lower than the coefficient of 0.82 found by Watson et al. (2011) and indicate that while there is a positive correlation between disk and stellar inclinations, they are not always well-aligned.
It is important to keep in mind that disk and stellar inclinations can only put a lower limit on misalignment and that a full analysis requires knowledge of both the disk and stellar position angle on the sky plane. Further, inclinations do not indicate the directions that the star is rotating and the disk material is orbiting; if they are moving in opposite directions, the misalignment between the disk and star would be much greater than calculated. Given that systems such as K2-290--a strong candidate for primordial misalignment--have co-planar planets in retrograde orbits, this may be a significant bias (Hjorth et al., 2021).
While Watson et al. (2011) and Greaves et al. (2014) did not find signs of star-disk misalignment in their sample of debris disks, Davies (2019) observed misalignment of protoplanetary disks at a rate slightly higher than seen in our analysis (\(\sim\)33%); however, we note that they observed much smaller misalignments, typically less than 30\({}^{\circ}\). This indicates that the star-disk misalignment may not decrease as the disk transitions, as suggested by Davies (2019), and raises the question of whether misalignment increases as the system evolves. It is possible that mechanisms such as stellar flybys can incline debris disks (Moore et al., 2020) while processes such as accretion onto the star are unlikely to realign the system.
Figure 2 shows the mass, radius, \(v\sin i\), and rotation period of each star in our sample versus the effective temperature. Figure 3 shows the difference between the disk and stellar inclinations as a function of system parameters. In these plots, we see most of the star-disk systems are well-aligned aside from the 6 mentioned above. The misaligned systems are not clustered around any specific \(T_{\rm eff}\) or mass, suggesting that misalignment may occur regardless of stellar type, although there are not enough stars to make definitive conclusions. We also do not observe misalignment occurring more frequently with the presence of known planets; yet, many substellar objects in these debris disk systems may easily be undetectable. Finally, the 6 significantly misaligned systems span a wide range of ages (\(\sim\)7 Myr to 5.8 Gyr; Mamajek and Hillenbrand, 2008; Bell et al., 2015; Pecaut and Mamajek, 2016; Shkolnik et al., 2017; Nielsen et al., 2019); this is not surprising given that primordial misalignment is expected to occur during the protoplanetary disk stage, well before debris disks form.
### Implications for Primordial Misalignment
If magnetic warping were responsible for spin-orbit misalignment, Spalding and Batygin (2015) argue that misalignment should occur more frequently around stars with masses greater than 1.2 \(M_{\odot}\); this is because lower-mass, young stars
would be able to realign their stellar spin axes with the surrounding disks due to their stronger magnetic fields. As seen in Figure 3, 2 of the significantly misaligned systems (\(\epsilon\) Eridani and \(\tau\) Ceti) have stellar masses below 1.2 \(M_{\odot}\) while 2 (HD 10647 and HD 191089) have masses very close to this limit, suggesting that magnetic warping alone is not a viable mechanism for disk misalignment.
If chaotic accretion were at play, subsequent accretion of disk material onto the star is expected to reduce the misalignment to values lower than 20\({}^{\circ}\) by the time planets begin to form (Takaishi et al., 2020). Not only does this fail to describe the distribution of obliquities observed for exoplanets, but it does not match the \(\sim\)\(30^{\circ}-60^{\circ}\) misalignments shown in Figure 3. We do see systems with small potential misalignments near or below 20\({}^{\circ}\), including HD 107146, HD 129590, HD 145560, HD 202917, HD 206893, HD 35650, HD 377, and \(\beta\) Leonis, but the large uncertainties on our
Figure 2: Stellar parameters for the systems in our sample. The color of each point corresponds to the absolute value of the median of \(\Delta i=i_{d}-i_{s}\), where blue indicates a well-aligned system and red indicates a large misalignment. The top left figure shows mass versus \(T_{\rm eff}\), the top right shows radius versus \(T_{\rm eff}\), the bottom left gives \(v\sin i\) versus \(T_{\rm eff}\), and the bottom right gives the rotation period versus \(T_{\rm eff}\).
obliquity measurements make it difficult to determine whether low-obliquity systems are truly misaligned. Additionally, without knowing the position angles of each star, we cannot definitively comment on whether star-disk misalignment commonly falls near 20\({}^{\circ}\).
While the significantly misaligned disks could have been torqued out of alignment by an inclined stellar or planetary companion, this mechanism is unable to explain the observed distribution of spin-orbit obliquities (Zanazzi and Lai, 2018; Albrecht et al., 2022). Ultimately, it is unclear what mechanisms can misalign disks around their stars; further, because we do not know of many planets in these systems, we are unable to determine whether the same mechanisms could be responsible for spin-orbit misalignment. As discussed in Section 3.1, mechanisms such as stellar flybys may incline debris disks in addition to planetary orbits, and the obliquities measured in this work may not reflect a system's primordial architecture.
Figure 3: The difference between the disk and stellar inclinations (\(i_{d}-i_{s}\)) plotted versus different system parameters. The color of each point corresponds to the median value of the minimum obliquity. The top left shows \(T_{\rm eff}\) on the bottom axis while the right has the mass, the bottom left shows \(v\sin i\), and the bottom right the number of confirmed planets in the system.
## 4 Conclusions
We investigate the alignment of resolved debris disks with their stars, placing a lower limit on their obliquities by comparing stellar and disk inclinations. With recent resolved images of disks taken by ALMA, HST, and GPI, along with rotation periods measured using _TESS_, we were able to include 31 systems in our analysis, more than 3 times as large as the samples included in previous studies of debris disks.
While there formerly was little evidence for misalignment between debris disks and their stars, we find 6 systems with disk and stellar inclinations separated by \(\sim\)30\({}^{\circ}-60^{\circ}\). This indicates that these evolved disks are frequently misaligned with their stars; although, systems are more often well-aligned than not. Given that we observe such large minimum obliquities, some mechanism other than chaotic accretion needs to be at play. We also see misaligned systems with stellar masses below or near 1.2 \(M_{\odot}\), suggesting that magnetic warping alone cannot be responsible for misalignment. Because resonant processes that could torque the disk out of alignment fail to explain the distribution of spin-orbit obliquities, it remains unclear what role primordial misalignment could play in shaping planetary systems. Further, it is unknown whether these disk obliquities truly reflect the structure of the preceding protoplanetary disk.
Future work needs to expand the number of debris disk hosts with inclination measurements, helping constrain the characteristics of misaligned systems. Few stars in our sample have known planetary companions and no confirmed hot Jupiter systems are currently known to contain circumstellar debris; searching for dust in confirmed planetary systems could help better understand whether the mechanisms that misalign disks with their stars are also responsible for spin-orbit misalignment.
Existing methods to measure stellar position angle cannot be applied to the vast majority of debris disk hosts (Le Bouquin et al., 2009; Lesage and Wiedemann, 2014), meaning the full obliquity between a disk and its star cannot be measured. As mentioned by Watson et al. (2011), a full Bayesian analysis accounting for this limitation could place more useful upper limits on the misalignment, similar to the framework presented in Fabrycky and Winn (2009) for spin-orbit angles. Regardless, the lower limits on misalignment presented in this work help better understand the geometry of debris disks, and future observations will improve our understanding of the mechanisms that shape and misalign these systems.
We thank the referee for thorough and insightful feedback that greatly improved the quality of this paper. We also thank Ruth Angus and Megan Bedell for a helpful discussion on how to measure stellar rotation periods and Ann-Marie Madigan and Carolyn Crow for thoughtful conversation about our results. M.A.M. acknowledges support for this work from the National Aeronautics and Space Administration (NASA) under award number 19-ICAR19_2-0041. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work made use of the SIMBAD database (operated at CDS, Strasbourg, France), NASA's Astrophysics Data System Bibliographic Services. The TIC data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute and can be accessed via 10.17909/fwdt-2x66. This research has made use of the VizieR catalog access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the VizieR service was published in A&AS 143, 23. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. astropy(Astropy Collaboration et al., 2018), exoplanet(Foreman-Mackey et al., 2021), matplotlib(Hunter, 2007), numpy(Harris et al., 2020), PyMC3(Salvatier et al., 2016), SciPy(Virtanen et al., 2020)
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & \(\dot{\imath}_{\rm disk}\) & Disk Imaging Facility & \(\dot{\imath}_{\rm star}\) & Stellar Inclination Method & \(\Delta i\) & \(\dot{\imath}_{\rm disk}\) & \(\dot{\imath}_{\rm star}\) \\ & (\({}^{\circ}\)) & & (\({}^{\circ}\)) & & (\({}^{\circ}\)) & & \\ \hline
61 Virginis & \(77\pm 4\) & HSO & \(77.5^{+8.7}_{-13.0}\) & \(v\sin i\) & \(2.7^{+13.0}_{-8.9}\) & 1 & 2 \\ AU Mic & \(89.4\pm 0.1\) & GPI & \(87.3^{+1.9}_{-2.8}\) & \(v\sin i\) & \(2.2^{+2.8}_{-1.9}\) & 3 & 2 \\ GJ 581 & [30.0,70.0] & HSO & \(60.0^{+21.0}_{-28.0}\) & \(v\sin i\) & \(-9.0^{+29.0}_{-23.0}\) & 4 & 2 \\ HD 104860 & \(54\pm 7\) & HSO & \(71.0\pm 13.0\) & \(v\sin i\) & \(-11.0^{+14.0}_{-13.0}\) & 5 & 2 \\ HD 10647 & [53.0,90.0] & HSO & \(40.6^{+6.4}_{-5.6}\) & \(v\sin i\) & \(30.0^{+14.0}_{-13.0}\) & 6 & 2 \\ HD 107146 & \(18\pm 2\) & HST & \(14.2^{+2.4}_{-2.3}\) & \(v\sin i\) & \(5.8^{+2.6}_{-2.7}\) & 7 & 2 \\ HD 129590 & \(74.56\pm 0.05\) & SPHERE/VLT & \(84.9^{+5.4}_{-5.4}\) & \(v\sin i\) & \(-10.3^{+5.4}_{-3.6}\) & 8 & 2 \\ HD 138813 & \(29.0\pm 0.3\) & ALMA & \(79.7^{+7.3}_{-11.0}\) & \(v\sin i\) & \(-50.5^{+11.0}_{-7.3}\) & 9 & 2 \\ HD 141943 & \(87\pm 1\) & GPI & \(84.1^{+4.1}_{-6.2}\) & \(v\sin i\) & \(3.7^{+4.1}_{-4.1}\) & 3 & 2 \\ HD 145560 & \(43.9\pm 1.5\) & GPI & \(59.9^{+11.0}_{-7.5}\) & \(v\sin i\) & \(-14.9^{+7.5}_{-11.0}\) & 10 & 2 \\ HD 166 & \(21\pm 24\) & HSO & \(36.1^{+5.4}_{-4.5}\) & \(v\sin i\) & \(1.0^{+18.0}_{-12.0}\) & 5 & 2 \\ HD 181296 & [70.0,90.0] & Gemini South & \(74.0^{+11.0}_{-16.0}\) & \(v\sin i\) & \(6.0^{+17.0}_{-12.0}\) & 11 & 2 \\ HD 191089 & \(59\pm 3\) & HST & \(19.9^{+20.0}_{-3.4}\) & \(v\sin i\) & \(41.3^{+4.2}_{-19.0}\) & 12 & 2 \\ HD 202628 & \(57.4\pm 0.4\) & ALMA & \(51.2^{+5.5}_{-4.6}\) & \(v\sin i\) & \(6.5^{+4.6}_{-5.6}\) & 13 & 2 \\ HD 202917 & \(68.6\pm 1.5\) & HST & \(84.5^{+3.8}_{-5.4}\) & \(v\sin i\) & \(-14.7^{+5.4}_{-3.8}\) & 14 & 2 \\ HD 206893 & \(40\pm 3\) & ALMA & \(29.8^{+6.0}_{-5.6}\) & \(v\sin i\) & \(12.6^{+5.8}_{-6.3}\) & 15 & 2 \\ HD 23484 & [50.0,90.0] & Herschel & \(57.0^{+22.0}_{-26.0}\) & \(v\sin i\) & \(13.0^{+28.0}_{-23.0}\) & 16 & 2 \\ HD 30447 & \(83\pm 6\) & GPI & \(45.2^{+7.6}_{-6.2}\) & \(v\sin i\) & \(42.3^{+7.5}_{-8.1}\) & 3 & 2 \\ HD 35650 & \(89.0\pm 2.5\) & HST & \(80.1^{+6.9}_{-9.4}\) & \(v\sin i\) & \(10.8^{+9.6}_{-6.9}\) & 17 & 2 \\ HD 35841 & \(84.9\pm 0.2\) & GPI & \(79.2^{+7.4}_{-9.7}\) & \(v\sin i\) & \(5.8^{+9.7}_{-7.4}\) & 18 & 2 \\ HD 377 & \(85^{\rm a}\) & HST & \(79.1^{+7.4}_{-8.9}\) & \(v\sin i\) & \(9.9^{+9.3}_{-7.8}\) & 17 & 2 \\ HD 53143 & \(56.23\pm 0.37\) & ALMA & \(67.0^{+16.0}_{-18.0}\) & \(v\sin i\) & \(-10.0^{+18.0}_{-16.0}\) & 19 & 2 \\ HD 61005 & \(85.6\pm 0.1\) & ALMA & \(73.0^{+12.0}_{-14.0}\) & \(v\sin i\) & \(12.0^{+14.0}_{-12.0}\) & 20 & 2 \\ HD 92945 & \(65.4\pm 0.9\) & ALMA & \(65.0^{+18.0}_{-23.0}\) & \(v\sin i\) & \(1.0^{+23.0}_{-18.0}\) & 21 & 2 \\ TWA 25 & \(75\pm 6\) & HST & \(79.0^{+7.6}_{-10.0}\) & \(v\sin i\) & \(1.0^{+11.0}_{-8.4}\) & 17 & 2 \\ TWA 7 & \(22\pm 22\) & HST & \(37.0^{+17.0}_{-11.0}\) & \(v\sin i\) & \(0.0\pm 19.0\) & 17 & 2 \\ Vega & [0.0,40.0] & ALMA & \(6.2\pm 0.4\) & Interferometry & \(14.0^{+13.0}_{-14.0}\) & 22 & 23 \\ \(\beta\) Leonis & \(33\pm 7\) & HSO & \(21.5\pm 5.0\) & Interferometry & \(16.8^{+9.0}_{-5.6}\) & 24 & 25 \\ \(\beta\) Pictoris & \(85.3\pm 0.3\) & GPI & \(87.8\pm 1.6\) & Asteroseismology & \(-2.0^{+1.6}_{-1.5}\) & 3 & 26 \\ \(\epsilon\) Eridani & \(18\pm 13\) & SMA/ATCA & \(69.95^{+5.6}_{-7.6}\) & Photometry + Spectroscopy & \(-42.6^{+11.0}_{-9.5}\) & 27 & 28 \\ \(\tau\) Ceti & \(35\pm 10\) & HSO & \(81.6^{+5.9}_{-8.6}\) & \(v\sin i\) & \(-38.4^{+11.0}_{-8.3}\) & 29 & 2 \\ \hline \end{tabular} \({}^{a}\)No uncertainties reported. Standard deviation of 5\({}^{\circ}\) assumed.
\end{table}
Table 2: Disk and Stellar Inclinations
## Appendix A _Tess_ Light Curves and Measured Rotation Periods
A description of the light curve modeling approach is given in Section 2 while the derived rotation periods and \(1\sigma\) uncertainties are found in Table 1. We additionally calculate Lomb-Scargle periodograms for each light curve and the false alarm probability (FAP) associated with the rotation signal (Zechmeister & Kurster, 2009). Several stars display double dipping, where two opposing star spots create a false signal at \(P/2\)(Basri & Nguyen, 2018); in several instances, including HD 377 and HD 92945, we find that phase dispersion minimization periodograms (Stellingwerf, 1978) better capture the true rotation period and use them in place of Lomb-Scargle periodograms. Figures 4 through 21 show a periodogram, phase-folded light curve, light curve from a single _TESS_ sector, and the rotation period posterior for each object showing quasiperiodic variations in _TESS_ data.
Figure 4: _Top left:_ The Lomb-Scargle periodogram for the AU Mic _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 1 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 5: _Top left:_ The Lomb-Scargle periodogram for the HD 104860 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 14 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 6: _Top left:_ The Lomb-Scargle periodogram for the HD 10647 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 2 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 7: _Top left:_ The Lomb-Scargle periodogram for the HD 129590 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 11 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 8: _Top left:_ The Lomb-Scargle periodogram for the HD 141943 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 12 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 9: _Top left:_ The Lomb-Scargle periodogram for the HD 145560 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 12 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 11: _Top left:_ The Lomb-Scargle periodogram for the HD 202628 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 1 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 10: _Top left:_ The Lomb-Scargle periodogram for the HD 166 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 17 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 12: _Top left:_ The Lomb-Scargle periodogram for the HD 202917 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 1 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 13: _Top left:_ The Lomb-Scargle periodogram for the HD 23484 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 31 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 14: _Top left:_ The Lomb-Scargle periodogram for the HD 35650 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 5 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 15: _Top left:_ The Lomb-Scargle periodogram for the HD 35841 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 5 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 16: _Top left:_ The phase dispersion minimization periodogram for the HD 377 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 5 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 17: _Top left:_ The Lomb-Scargle periodogram for the HD 53143 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 1 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 19: _Top left:_ The phase dispersion minimization periodogram for the HD 92945 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 9 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 18: _Top left:_ The Lomb-Scargle periodogram for the HD 61005 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 7 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 21: _Top left:_ The Lomb-Scargle periodogram for the TWA 7 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 36 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval.
Figure 20: _Top left:_ The Lomb-Scargle periodogram for the TWA 25 _TESS_ PDCSAP light curve is shown in black. The dashed grey line marks the 1% FAP level while the shaded red region denotes the 1\(\sigma\) confidence interval for the rotation period posterior. _Top right:_ The phase-folded light curve using the peak period from the Lomb-Scargle periodogram. _Bottom left:_ The black points show data from _TESS_ Sector 10 while the red line and shaded region mark the mean and 1\(\sigma\) confidence interval for the GP model. _Bottom right:_ The histogram shows the rotation period posterior derived from the GP model while the dash red lines mark the median and 1\(\sigma\) interval. |
2307.08225 | Harnessing Scalable Transactional Stream Processing for Managing Large
Language Models [Vision] | Large Language Models (LLMs) have demonstrated extraordinary performance
across a broad array of applications, from traditional language processing
tasks to interpreting structured sequences like time-series data. Yet, their
effectiveness in fast-paced, online decision-making environments requiring
swift, accurate, and concurrent responses poses a significant challenge. This
paper introduces TStreamLLM, a revolutionary framework integrating
Transactional Stream Processing (TSP) with LLM management to achieve remarkable
scalability and low latency. By harnessing the scalability, consistency, and
fault tolerance inherent in TSP, TStreamLLM aims to manage continuous &
concurrent LLM updates and usages efficiently. We showcase its potential
through practical use cases like real-time patient monitoring and intelligent
traffic management. The exploration of synergies between TSP and LLM management
can stimulate groundbreaking developments in AI and database research. This
paper provides a comprehensive overview of challenges and opportunities in this
emerging field, setting forth a roadmap for future exploration and development. | Shuhao Zhang, Xianzhi Zeng, Yuhao Wu, Zhonghao Yang | 2023-07-17T04:01:02Z | http://arxiv.org/abs/2307.08225v1 | # Harnessing Scalable Transactional Stream Processing for Managing Large Language Models [Vision]
###### Abstract
Large Language Models (LLMs) have demonstrated extraordinary performance across a broad array of applications, from traditional language processing tasks to interpreting structured sequences like time-series data. Yet, their effectiveness in fast-paced, online decision-making environments requiring swift, accurate, and concurrent responses poses a significant challenge. This paper introduces TStreamLLM, a revolutionary framework integrating Transactional Stream Processing (TSP) with LLM management to achieve remarkable scalability and low latency. By harnessing the scalability, consistency, and fault tolerance inherent in TSP, TStreamLLM aims to manage \(continuous\) & \(concurrent\) LLM updates and usages efficiently. We showcase its potential through practical use cases like real-time patient monitoring and intelligent traffic management. The exploration of synergies between TSP and LLM management can stimulate groundbreaking developments in AI and database research. This paper provides a comprehensive overview of challenges and opportunities in this emerging field, setting forth a roadmap for future exploration and development.
## 1 Introduction
Large language models (LLMs) have become increasingly influential, propelling numerous advancements not just in natural language understanding and generation, but also in areas such as time-series analysis, structured sequence interpretation, and artificial intelligence overall [3, 5, 33]. Their unprecedented scale and complexity allow them to excel at zero-shot and few-shot learning tasks [3, 26], opening up diverse applications across a multitude of domains. However, the promising capabilities of LLMs come with their own set of challenges.
_Continuous Model Updates (\(\mathcal{LA}\)):_ The success of LLMs hinges on significant resource consumption and a heavy reliance on the pre-training process [36, 14]. As a result, there exists a knowledge cutoff for LLMs. While the world continually evolves with new concepts, events, and trends [30, 25], LLMs stay static after their pre-training. Therefore, keeping them updated and maintaining their relevance and accuracy pose significant challenges [19].
_Concurrent Model Updates and Usage (\(\mathcal{LB}\)):_ The demand for real-world applications that require _reliable and prompt_ responses amidst intensive _concurrent model updates and usage_ presents another layer of complexity. Addressing the requirement for concurrent model updates and usage is not only critical but also inevitable, as potential conflicts and dependencies among multiple services may arise.
_Optimization and Acceleration (\(\mathcal{LC}\)):_ Various techniques have been developed to accelerating model train and inference, such as mixed precision training [29], distillation [15], pruning [13], and quantization [18]. Additionally, the exploitation of novel hardware architectures [4] can enhance the performance of LLMs without significantly sacrificing their accuracy. However, adapting these methods for real-time operation and ensuring their compatibility with other concurrent services presents a significant challenge.
To address these issues, we introduce a visionary approach in this paper: TStreamLLM. This innovative framework aims to achieve ultra-scalability and low latency in managing concurrent LLM updates and usage. The key concept behind TStreamLLM is the integration of transactional stream processing (TSP) techniques [27] into LLM management. TSP, an emerging data stream processing paradigm, offers real-time adaptation, data consistency, fault tolerance, and fine-grained access control [16]--qualities that make it suitable for managing LLMs under intensive concurrent stream processing scenarios [37].
By leveraging TSP's scalability, fault-tolerance, and streaming semantics, TStreamLLM empowers LLM management to substantially improve upon existing solutions. For instance, it reduces the best achievable long-run latency to a linear function of the single-user-single-run model manipulation overhead. These innovations could expand the potential of LLMs across a multitude of AI applications. Furthermore, the TSP-empowered LLM management system presents the database research community with flexible, adaptive methods for data ingestion, manipulation, and mining.
**In summary**, this paper makes the following contributions: We start by illustrating two practical use cases of LLMs, highlighting the pressing need for a system that can effectively manage continuous model updates, handle concurrent model updates and usage, and optimize and accelerate model operation in a real-time, scalable, and efficient manner (Section 2). Next, we introduce our novel solution to these challenges: the TStreamLLM framework. TStreamLLM integrates TSP techniques into LLM management, offering potential improvements in efficiency, scalability, and adaptability (Section 3). Lastly, we explore the challenges and open research questions in this emerging field (Section 4). Our discussion sets a foundation for future research aimed at developing novel LLM architectures and management strategies leveraging TSP, thereby propelling advancements in AI and database research (Section 5).
## 2 Use Cases
In this section, we delve into two significant real-world applications of TStreamLLM, namely _Real-time Patient Monitoring in Healthcare_ and _Traffic Management in Smart Nation_, showcasing how TStreamLLM effectively tackles the three main challenges of LLM management (\(\mathcal{L}\)_A_, \(\mathcal{L}\)_B_, and \(\mathcal{L}\)_C_).
_Use Case 1: Real-time Patient Monitoring in Healthcare:_ Real-time patient monitoring has gained substantial relevance in the rapidly evolving field of healthcare [21, 22]. A patient monitoring system implemented on TStreamLLM enables the processing of a wide range of data, including electrocardiogram reports for patients under observation and medical condition descriptions from remote patients. By learning and analyzing these input data using the LLM, TStreamLLM generates real-time health monitoring outputs and offers diagnostic assistance to doctors, as depicted in Figure 1.
To stay updated on the latest health condition of patients (\(\mathcal{L}\)_A_), TStreamLLM continuously fine-tunes the LLM to incorporate the most recent health data. By leveraging stream processing, the system efficiently carries out noise removal, feature extraction, and identification of key health indicators on input data. It concurrently updates LLM states (model parameters and metadata) using parallel executors, effectively meeting the real-time operational requirements (\(\mathcal{L}\)_C_).
However, ensuring consistency in the LLM during concurrent model updates and queries poses a notable challenge (\(\mathcal{L}\)_B_) due to the intricate dependencies involved in model access requests. TStreamLLM successfully addresses this challenge by employing transactional concurrency control mechanisms. This allows for real-time querying and seamless access to the dynamically evolving LLM without impeding its ongoing training process, ensuring the efficient provision of diagnostic assistance to doctors.
_Use Case 2: Intelligent Traffic Management in Smart Cities:_ In the context of smart city traffic management, the optimization of city-wide traffic flow and response times necessitates an intelligent solution [31, 6]. However, there are challenges (\(\mathcal{L}\)_A_ and \(\mathcal{L}\)_B_) posed by the dynamic nature of traffic data. These challenges involve maintaining model
Figure 1: TStreamLLM applied in real-time patient monitoring in healthcare.
Figure 2: TStreamLLM’s role in online traffic management within a smart nation framework.
consistency and facilitating continuous learning in the face of data from diverse sources, such as road sensors, traffic cameras, and user-reported incidents.
STreemLLM excels in managing concurrent data streams, ensuring the LLM is consistently updated with real-time traffic conditions (Figure 2). Additionally, it collaborates with manual monitoring to handle complex traffic queries and offer context-aware recommendations (\(\mathcal{L}C\)).
During emergency situations like ambulance requests, TStreamLLM effectively demonstrates its real-time capabilities by promptly notifying the nearest ambulance, identifying the optimal route to the hospital, and simultaneously generating traffic control signals to facilitate the ambulance's movement. In more complex scenarios involving concurrent emergency calls (\(\mathcal{L}B\)), TStreamLIM efficiently learns and generates optimal traffic control strategies. It effectively allocates resources and prevents further damage.
## 3 Harnessing Transactional Stream Processing for LLM Management
This section provides an overview of how TStreamLLM harnesses the power of TSP to manage LLMs effectively. As illustrated in Figure 3, TStreamLLM uniquely integrates TSP techniques into LLM management, marking a pioneering framework that opens up avenues for future research.
TStreamLLM is designed around four critical components: (1) _Stream Processing_ that efficiently processes real-time data streams and user inference requests, (2) _Real-time Adaptation and Learning_ that facilitates dynamic adaptation of the LLM based on incoming data, (3) _Transaction Management_ that guarantees model consistency and efficient update propagation, and (4) _LLM State Management_ that ensures the LLM remains up-to-date, managing the storage of LLM parameters and metadata. These components not only interlink to form the integrated TStreamLIM, but also function independently, offering TStreamLIM remarkable versatility across various scenarios.
### Stream Processing
The Stream Processing component is at the core of TStreamLIM, designed to efficiently handle and process real-time data streams, supporting the optimization and acceleration of LLM under concurrent services (\(\mathcal{L}C\)). As a plethora of data from user interactions, device logs, or sensor readings continuously flow in, this component acts as a dynamic dispatcher. It preprocesses the data, filters out irrelevant content, transforms raw data into a format digestible for LLMs, and performs various aggregations to distill meaningful insights.
The Stream Processing component utilizes advanced techniques to effectively manage high-velocity and high-volume data streams. To optimize the handling of incoming data, data stream compression [40] is implemented, reducing storage and computational demands. Additionally, parallel processing [41] enables simultaneous management of multiple data streams, enabling TStreamLIM to keep up with the constant influx of data.
The Stream Processing component goes beyond its role of handling incoming data streams for model updates and also addresses real-time user inference requests using transactional semantics. It efficiently processes and models user requests as transactions and delivers real-time responses based on the adapting LLM, facilitating seamless interaction between users and the Transaction Management component (Section 3.3).
### Real-time Adaptation and Learning
The Real-time Adaptation and Learning component plays a crucial role in the continuous fine-tuning of LLM (\(\mathcal{L}A\)). It integrates with the Transaction Management component (Section 3.3) to consistently retrieve the latest version of LLM parameters and metadata, and refine these states based on the insights derived from the processed data streams. This continuous learning mechanism allows the LLM to persistently enhance its performance and accuracy, maintaining relevance in the ever-evolving data landscape.
To efficiently perform real-time adaptation and improvement on the model, the Real-time Adaptation and Learning component utilizes concepts from Online Learning (OL), which is a machine learning technique that allows models
Figure 3: Architectural overview of TStreamLIM.
to be incrementally updated as new data arrives without waiting for large batches [1]. OL enables the LLM to adapt to real-time changes in the continuous data stream, making it highly responsive to the shift in data stream patterns with minimal usage of computation resources, and supports rapid deployment in real-time decision-making scenarios.
However, ensuring the consistency of LLM states in the presence of concurrent inferences and model updates presents a significant challenge (\(\pounds B\)). To address this, the Real-time Adaptation and Learning component models upstream state access operations as transactions, each transaction encapsulates a series of model update operations that must be performed jointly as an atomic unit. These transactions are subsequently handed over to the Transaction Management component (Section 3.3) for reliable execution.
### Transaction Management
The Transaction Management component of TStreamLLM plays a crucial role in ensuring data consistency and enabling efficient update propagation within a transactional stream processing framework [27]. It is responsible for guaranteeing the correctness of LLM states in the presence of concurrent model updates and usages (\(\pounds B\)). By incorporating transactional semantics into LLM state access management, TStreamLLM ensures isolation among concurrent transactions, enabling their execution without interference. Furthermore, it ensures the durability of state updates, making them permanent even in the face of unexpected system failures.
To manage the execution of transactional requests received from upstream components (Section 3.1 and 3.2), the Transaction Management component employs various concurrency control techniques, aiming to allow multiple transactions to proceed without locking any shared states. It carefully analyzes and resolves dependencies among state access operations within transactions. Subsequently, it adaptively schedules state access workloads to parallel executors, which then interact with the LLM State Management component for execution.
### LLM State Management
The LLM State Management component manages the storage of shared stateful objects in LLMs, including parameters (e.g., word embeddings, weights, and biases) and metadata (e.g., training history, model hyperparameters). These states are continuously updated through transactions propagated from the Transaction Management component, ensuring that the LLM remains aligned with the latest insights derived from incoming data streams.
Scalability and efficiency are prioritized by the LLM State Management component, which is crucial for handling large language models that can comprise billions of parameters. To achieve this, TStreamLLM employs a distributed storage strategy, where the LLM states are partitioned and distributed across multiple nodes. This approach harnesses the power of parallel computing, enabling the system to effectively manage and update LLM states while enhancing scalability.
Additionally, the LLM State Management component incorporates efficient indexing strategies to facilitate rapid retrieval and updates of model states. Techniques such as hashing and trie-based index structures are employed to expedite access to state objects, particularly in highly concurrent environments. These indexing techniques contribute to improved performance and efficient handling of LLM states within the system.
## 4 Open Challenges and Opportunities
While TStreamLLM demonstrates promising potential, there are still challenges and opportunities for research.
### Scalable Stream Processing
To effectively handle high-velocity data streams and update LLMs with minimal latency under high levels of parallelism and heavy workloads, it is crucial to enhance the scalability of TStreamLLM. This challenge opens several avenues for future research:
_Data Partitioning and Load Balancing:_ Effective data partitioning strategies can evenly distribute language model training data across parallel processing units, resulting in efficient resource utilization and minimized processing bottlenecks [34]. Moreover, designing custom accelerators, GPUs, and multicore processors optimized for parallel processing and stream management can substantially enhance the scalability of stream processing. Future research should also investigate dynamic load balancing mechanisms that can adapt resource allocation in real-time according to fluctuating data rates and computational demands of the language models.
_Domain-Specific Stream Processing:_ Integrating domain-specific knowledge [9, 11] into the stream processing pipeline enhances the efficiency of LLM management. Research can target developing
-bespoke stream processing operators and algorithms tailored to particular applications. Machine learning approaches could inform adaptive query optimization techniques that adjust execution plans based on incoming data stream characteristics, language model requirements, and resource availability. A critical challenge lies in managing the volume of training data processed, stored, and transmitted. Implementing domain-specific data stream compression techniques [40] and approximation algorithms could economize resource consumption by accepting a trade-off between accuracy and reduced processing time.
_Fault Tolerance and System Reliability:_ Maintaining system robustness is vital for TStreamLLM, given the complexity and high volume of data processed by LLMs. Efficient recovery techniques like checkpointing, logging, and rollback mechanisms are essential to minimize disruptions, ensure system availability, and handle transaction failures. Approximate Fault Tolerance (AFT) [17, 35] offers a promising approach, balancing error bounds and backup overheads while bolstering system stability. Future research should explore the potential of emerging hardware technologies and domain-specific fault tolerance strategies to improve system performance and ensure the scalability and reliability of TStreamLLM in managing concurrent LLM updates.
### Real-time Adaptation and Learning
Ensuring the relevance and accuracy of LLMs in the face of dynamic data streams and rapidly changing application requirements necessitates real-time adaptation and learning capabilities in TStreamLLM. Future research can address this challenge by focusing on the following aspects:
_Stream Data Selection:_ In an environment with large-scale, high-velocity data streams, LLMs face the challenge of selecting the most pertinent data for training [43, 24]. Traditional learning scenarios provide pre-defined datasets for incremental or transfer learning [20, 32], but this approach becomes unfeasible in a dynamic data stream environment [28]. Instead, the model needs to make knowledgeable decisions about data selection based on its existing knowledge. This challenge becomes evident during a newsworthy event, where the model is inundated with redundant information from various media outlets and online comments. In such cases, the model must adopt appropriate data selection techniques to balance the training volume with its understanding of the event, all the while maintaining its neutrality and objectivity.
_Continual and Transfer Learning:_ In the realm of continual learning, catastrophic forgetting presents a significant challenge [39, 8], whereas, in transfer or stream learning, the model's adaptability is of greater concern. In TStreamLLM, both these issues co-exist, implying a need for models to possess both forward and backward transfer capabilities. This duality presents new challenges for existing methods. Given the continuous data stream, storing new knowledge becomes difficult for the model. When the model undergoes a training cycle, it can struggle to retain current factual knowledge, a problem exacerbated by random masking mechanisms employed during the training of models like BERT [12]. In an online setting, with continuous data streams and no distinct task or domain boundaries, most offline continual learning methods fall short. Moreover, implementing a core set for replay through methods like gradient computation is particularly challenging for LLMs, leading to potentially high costs.
_Adaptive and Efficient Model Training:_ The traditional static nature of LLMs post-pre-training presents challenges in dynamic real-world scenarios. The TStreamLLM framework emphasizes the need for frequent, accurate model updates with reduced latency. Classic model updating, involving steps like forward propagation, loss computation, back propagation, and parameter updates, can introduce system latency due to its sequential nature. To address this, we suggest predictive and concurrent model training methods. These would include predicting upcoming loss values based on previous ones, enabling continuous updates even before the completion of prior ones. Another promising direction involves preemptive identification and updating of parameters requiring significant changes post-loss computation, aiming to avoid potential conflicts.
### Streaming Transaction Processing
The TStreamLLM framework hinges on the effective transactional management of LLMs in response to high-velocity data streams. The dynamic nature of data streams and the necessity for real-time response pose exciting research challenges in the realm of streaming transaction processing:
_Transactional Model Updates:_ Incorporating concurrent and predictive model training methodologies introduces several challenges, including maintaining ACID properties during concurrent updates, especially with high-velocity
data streams. Concurrent updates can also create potential conflicts and dependencies among services, adding complexity. Therefore, future research should develop efficient conflict detection and resolution strategies specific to LLMs. Despite the challenges, these strategies, transactional guarantees, and conflict resolution mechanisms could significantly enhance model training efficiency and concurrent update management in the TStreamLLM framework, improving its effectiveness and reliability.
_Scalability and Performance Trade-offs:_ As the demand for LLMs in real-time applications grows, the TStreamLLM framework must be capable of processing transactions efficiently under high loads. Future research could investigate strategies for scaling streaming transaction processing capabilities [27] to accommodate growing volumes of data streams. This could involve exploring innovative parallel processing techniques, distributed computing solutions, or the use of emerging hardware technologies to accelerate transaction processing. Furthermore, there may be trade-offs between transaction processing speed, system consistency, and model accuracy. Understanding these trade-offs, and developing strategies to balance these conflicting demands could be another crucial area of exploration.
### LLM State Management
The ability to manage the state of LLMs effectively within the TStreamLLM framework forms a critical component of maintaining updated and consistent LLMs. This state management plays a pivotal role in ensuring the framework's real-time response capabilities. The areas of investigation worth delving into within this domain include:
_State Storage and Version Control:_ Storage efficiency in LLM state management demands the exploration of innovative methods for compression and storage optimization. Techniques such as delta encoding [7] and sparse representations [42] could minimize storage requirements, thus enhancing the scalability of the TStreamLLM framework. Moreover, contemplating the future integration of vector data management systems [38, 10] into the LLM State Management could further optimize storage and retrieval operations, despite the inherent challenges in handling high-dimensional data. In addition, managing different versions of LLM states efficiently in TStreamLLM, while minimizing the overhead of maintaining these versions, is of paramount importance. This calls for efficient versioning and snapshotting techniques enabling access to and querying of previous LLM states, which in turn contributes to the robustness and reliability of the system in various use cases.
_Optimization and Security Assurance:_ LLM state management significantly impacts system performance and resource utilization. Optimizing elements such as memory hierarchies, storage systems, and processing resources can significantly enhance LLM performance and the overall scalability of the TStreamLLM. The balance between resource utilization and system performance should remain a priority. Security and privacy constitute another critical facet of LLM state management, as they prevent unauthorized access and protect the model from potential damage. Future research should concentrate on devising privacy-preserving techniques for data processing and LLM adaptation, such as federated learning [23] and differential privacy [2]. These methods could protect sensitive data while enabling LLMs to learn from various data sources, thereby ensuring the TStreamLLM framework complies with privacy standards without compromising its learning capabilities.
## 5 Conclusion
In this paper, we introduce a novel perspective on merging transactional stream processing with LLMs management, setting forth a promising research trajectory. We outline key challenges and potential solutions in areas such as scalable stream processing, real-time adaptation and learning, streaming transaction processing, and LLM state management. This integration aims to solve challenges related to data selection, continual learning, and efficient model training in a high-velocity data stream environment. We propose new strategies for transactional model updates, emphasizing concurrent and predictive model training to mitigate system latency and conflict resolution issues. We emphasize the necessity to respect ACID properties and tackle potential service conflicts in high-load applications. We also spotlight the importance of fault tolerance and system reliability for the TStreamLLM framework to handle high-volume data processed by LLMs effectively. Our vision presents the possibility of revolutionizing the management of LLMs. By addressing the open challenges we've outlined, we hope to inspire further innovation, leading to the development of robust, efficient, and scalable solutions in this rapidly evolving field. |
2304.06758 | Codes over the non-unital non-commutative ring $E$ using simplicial
complexes | There are exactly two non-commutative rings of size $4$, namely, $E = \langle
a, b ~\vert ~ 2a = 2b = 0, a^2 = a, b^2 = b, ab= a, ba = b\rangle$ and its
opposite ring $F$. These rings are non-unital. A subset $D$ of $E^m$ is defined
with the help of simplicial complexes, and utilized to construct linear
left-$E$-codes $C^L_D=\{(v\cdot d)_{d\in D} : v\in E^m\}$ and right-$E$-codes
$C^R_D=\{(d\cdot v)_{d\in D} : v\in E^m\}$. We study their corresponding binary
codes obtained via a Gray map. The weight distributions of all these codes are
computed. We achieve a couple of infinite families of optimal codes with
respect to the Griesmer bound. Ashikhmin-Barg's condition for minimality of a
linear code is satisfied by most of the binary codes we constructed here. All
the binary codes in this article are few-weight codes, and self-orthogonal
codes under certain mild conditions. This is the first attempt to study the
structure of linear codes over non-unital non-commutative rings using
simplicial complexes. | Vidya Sagar, Ritumoni Sarma | 2023-04-13T18:01:41Z | http://arxiv.org/abs/2304.06758v1 | # Codes over the non-unital non-commutative ring \(E\) using simplicial complexes
###### Abstract
There are exactly two non-commutative rings of size 4, namely, \(E=\langle a,b\ |\ 2a=2b=0,a^{2}=a,b^{2}=b,ab=a,ba=b\rangle\) and its opposite ring \(F\). These rings are non-unital. A subset \(D\) of \(E^{m}\) is defined with the help of simplicial complexes, and utilized to construct linear left-\(E\)-codes \(C_{D}^{L}=\{(v\cdot d)_{d\in D}:v\in E^{m}\}\) and right-\(E\)-codes \(C_{D}^{R}=\{(d\cdot v)_{d\in D}:v\in E^{m}\}\). We study their corresponding binary codes obtained via a Gray map. The weight distributions of all these codes are computed. We achieve a couple of infinite families of optimal codes with respect to the Griesmer bound. Ashikhmin-Barg's condition for minimality of a linear code is satisfied by most of the binary codes we constructed here. All the binary codes in this article are few-weight codes, and self-orthogonal codes under certain mild conditions. This is the first attempt to study the structure of linear codes over non-unital non-commutative rings using simplicial complexes.
_Keywords:_ few-weight code, optimal code, minimal code, non-unital ring, Simplicial complex
_2020 Mathematics Subject Classification:_ Primary 94 B05 \(\cdot\) Secondary 16 L30, 05 E45
## 1 Introduction
The study of codes over the rings of size four, namely, \(\mathbb{Z}_{4}\) (in [18]), \(\mathbb{F}_{4}\) (in [26]), \(\mathbb{F}_{2}\times\mathbb{F}_{2}\) (in [13]), \(\mathbb{F}_{2}+u\mathbb{F}_{2}\) (in [14]), have got huge attention in the area of algebraic coding theory in the past. These are the only commutative unital rings among the 11 rings of size four classified by Fine [15]. Among these 11 rings, 5 of them are commutative non-unital (namely, \(B\), \(C\), \(H\), \(I\), \(J\)) and 2 of them are non-commutative non-unital (namely, \(E\), \(F\)). In 2020, Alahmadi et. al. [4] considered the non-commutative non-unital ring \(E\) (see [15]) in the context of QSD codes
for the first time. They studied a multilevel construction of a quasi self dual (QSD) code as a function of a pair of dual codes, and classified QSD codes of length \(n\leq 6\). Following the work in [4], the authors in [1] studied construction of self-orthogonal codes over \(E\) and classified QSD codes of length \(n\leq 12\). Since then codes over non-unital rings have been studied by many researchers (see [2, 3, 22, 23, 36, 38]).
In this article, we study the linear left-\(E\)-code \(C_{D}^{L}=\{(v\cdot d)_{d\in D}:v\in E^{m}\}\) and right-\(E\)-code \(C_{D}^{R}=\{(d\cdot v)_{d\in D}:v\in E^{m}\}\), where \(D\subseteq E^{m}\) and \(m\) is a positive integer. The authors in [10] first introduced the above construction of \(C_{D}\) in order to generalize the Mattson-Solomon transform for cyclic codes. If the defining set \(D\) is constructed using simplicial complexes, the study of \(C_{D}\) become convenient. In fact, several interesting linear codes have been constructed in the recent past by using simplicial complexes (see [19, 20, 29, 30, 31, 32, 33, 34, 35, 37, 39, 41, 43]). It is expected that for properly chosen finite fields (more generally, rings) and defining sets, we may be able to discover codes with good parameters.
Yansheng et. al. in [40] studied linear codes over \(\mathbb{F}_{4}\) and their binary subfield codes, and obtained the weight distributions of all these codes. They produced two infinite families of optimal linear codes. Motivated by this work, the authors in [28] studied octanary linear codes using simplicial complexes, and obtained minimal and optimal codes. Recently, the authors in [25] generalized the work of [28] for finite fields of characteristic 2 by using LFSR sequences. The weight distribution (see [21]) of a linear code contains crucial information regarding error detecting as well as error correcting capacity of the code, and it allows the computation of the error probability of error detection and correction with respect to some algorithms [24]. Few-weight codes are useful because of their connection with strongly regular graphs, difference sets and finite geometry [7, 9]. The minimum Hamming distance of linear codes are well known for their importance in determining error-correcting capacity. As a result, finding optimal linear codes has become one of the central topics for researchers. In [20], the authors showed how optimal codes can be utilized for the construction of secret sharing schemes with nice access structures following the framework discussed in [42]. Minimal codes are useful to construct the access structure of the secret sharing schemes [11, 27]. This special class of codes is also important as these can be decoded by using the minimum distance decoding rule [5]. Normally, it is difficult to identify all the minimal codewords of a given code even over a finite field of characteristic 2. In view of this, researchers began to investigate minimal codes.
Inspired by the work of [40], a natural endeavour is to explore the construction of linear codes over non-unital rings with the help of simplicial complexes and investigate the structure of these codes. We consider the rings \(E\) and \(F\) according to the notation of Fine [15]. We, in this article, choose a defining set \(D\) with the help of certain simplicial complexes, and study the structure of linear left-\(E\)-codes and right-\(E\)-codes. We obtain the Lee-weight distributions for the codes over \(E\) by using Boolean functions. By considering a Gray map, we obtain certain binary linear codes, and their weight distributions. These binary codes turn out to be
self-orthogonal under certain mild conditions. By choosing the defining set appropriately, we produce two infinite families of optimal codes. Moreover, most of the binary codes obtained here are minimal. We give a few examples that support our results.
The remaining sections of this article are arranged as follows. Preliminaries are presented in the next section. By using simplicial complexes, linear left-\(E\)-codes and their binary Gray images are studied in Section 3. Linear right-\(E\)-codes and their binary Gray images are investigated in Section 4. Section 5 concludes this article.
## 2 Definitions and Preliminaries
Let \(E\) and \(F\) (in the notation of Fine [15]) be the rings given by \(E=\langle a,b\mid 2a=2b=0,a^{2}=a,b^{2}=b,ab=a,ba=b\rangle\) and \(F=\langle a,b\mid 2a=2b=0,a^{2}=a,b^{2}=b,ab=b,ba=a\rangle\). Note that the underlying sets of \(E\) and \(F\) are both \(\{0,a,b,c=a+b\}\). Both \(E\) and \(F\) are non-commutative and non-unital; and \(F\) is the opposite ring of \(E\). The addition and multiplication tables of \(E\) are given as follows:
\begin{tabular}{|c|c|c|c|c|} \hline \(+\) & 0 & a & b & c \\ \hline
0 & 0 & a & b & c \\ \hline a & a & 0 & c & b \\ \hline b & b & c & 0 & a \\ \hline c & c & b & a & 0 \\ \hline \end{tabular}
Consider the following action of \(\mathbb{F}_{2}\) on \(E\): \(e0=0e=0\) and \(e1=1e=e\) for all \(e\in E\). Then every element of \(E\) can be expressed as \(as+ct\) for \(s,t\in\mathbb{F}_{2}\).
**Lemma 2.1**.: _For \(n\in\mathbb{N}\), \(E^{n}=a\mathbb{F}_{2}^{n}+c\mathbb{F}_{2}^{n}\) and the sum is direct._
Suppose \(\mathcal{R}=E\) (or \(F\)). Let \(\Phi:\mathcal{R}\longrightarrow\mathbb{F}_{2}^{2}\) be the _Gray map_ given by
\[\Phi(as+ct)=(t,s+t). \tag{2.1}\]
This extends to a map from \(\mathcal{R}^{m}\) to \((\mathbb{F}_{2}^{2})^{m}\) component-wise for any \(m\in\mathbb{N}\).
**Definition 2.2**.: A _linear left-\(E\)-code_ (respectively, _right-\(E\)-code_) of length \(m\) is a left-\(E\)-submodule (respectively, right-\(E\)-submodule) of \(E^{m}\).
_Remark 2.3_.: Every right-\(E\)-module is a left-\(F\)-module and conversely. Similarly, a left-\(E\)-module and a right-\(F\)-module are same. Therefore, we shall study linear left-\(E\)-codes and right-\(E\)-codes only, and will not write any assertions for codes over the ring \(F\).
Let \(v,w\in\mathbb{F}_{2}^{m}\). Then the _Hamming weight_ of \(v\) denoted by \(wt_{H}(v)\) is the number of non-zero entries in \(v\). The _Hamming distance_ between \(v\) and \(w\) is \(d_{H}(v,w)=wt_{H}(v-w)\).
Let \(x=a\alpha+c\beta\), \(y=a\alpha^{\prime}+c\beta^{\prime}\in E^{m}\), where \(\alpha,\beta,\alpha^{\prime},\beta^{\prime}\in\mathbb{F}_{2}^{m}\). Then the _Lee weight_ of \(x\) is \(wt_{Lee}(x)=wt_{H}(\Phi(x))=wt_{H}(\beta)+wt_{H}(\alpha+\beta)\). The _Lee distance_ between \(x\) and \(y\) is
\(d_{L}(x,y)=wt_{Lee}(x-y)\). Thus, the map \(\Phi\) is an isometry. Note that the image of a linear left-\(E\)-code (respectively, right-\(E\)-code) under the above Gray map is a binary linear code. Suppose \(C\) is a linear \(E\)-code of length \(m\). Let \(A_{i}\) be the cardinality of the set that contains all codewords of \(C\) having Lee weight \(i\), \(0\leq i\leq 2m\). Then the homogeneous polynomial in two variables
\[Lee_{C}(X,Y)=\sum_{c\in C}X^{2m-wt_{Lee}(c)}Y^{wt_{Lee}(c)}\]
is called the _Lee weight enumerator_ of \(C\) and the string \((1,A_{1},\ldots,A_{2m})\) is called the _Lee weight distribution_ of \(C\). In a similar way, we can define Hamming weight enumerator and Hamming weight distribution of a linear code over a finite field. In addition, if the total number of \(i\geq 1\) such that \(A_{i}\neq 0\) is \(l\), then \(C\) is called an \(l\)-_weight linear code_. Every \(1\)-weight linear code is an equidistant linear code. Bonisoli characterized all equidistant linear codes over finite fields in [6].
**Theorem 2.4**.: _[_6_]_(Bonisoli) _Suppose \(C\) is a equidistant linear code over \(\mathbb{F}_{q}\). Then \(C\) is equivalent to the \(r\)-fold replication of a simplex code, possibly with added \(0\)-coordinates._
An \([n,k,d]\)-linear code \(C\) is called _distance optimal_ if there exist no \([n,k,d+1]\)-linear code (see [21]). Next we recall the Griesmer bound.
**Lemma 2.5**.: _[_17_]_ (Griesmer Bound) _If \(C\) is an \([n,k,d]\)-linear code over \(\mathbb{F}_{q}\), then we have_
\[\sum_{i=0}^{k-1}\left\lceil\frac{d}{q^{i}}\right\rceil\leq n, \tag{2.2}\]
_where \(\lceil\cdot\rceil\) denotes the ceiling function._
A linear code is called a _Griesmer code_ if equality holds in Equation (2.2). Note that every Griesmer code is distance optimal, but the converse need not be true.
For \(m\in\mathbb{N}\), we shall write \([m]\) to denote the set \(\{1,2,\ldots,m\}\) and \(w\in\mathbb{F}_{2}^{m}\). Then the set \(\mathrm{Supp}(w)=\{i\in[m]:w_{i}=1\}\) is called the _support_ of \(w\). Note that the Hamming weight of \(w\in\mathbb{F}_{2}^{m}\) is \(wt_{H}(w)=|\mathrm{Supp}(w)|\). For \(v,w\in\mathbb{F}_{2}^{m}\), one says that \(v\)_covers_\(w\) if \(\mathrm{Supp}(w)\subseteq\mathrm{Supp}(v)\). If \(v\) covers \(w\), we write \(w\preceq v\).
Consider the map \(\psi:\mathbb{F}_{2}^{m}\longrightarrow 2^{[m]}\) is defined as \(\psi(w)=\mathrm{Supp}(w)\), where \(2^{[m]}\) denotes the power set of \([m]\). Note that \(\psi\) is a bijective map. Now onwards, we will write \(w\) instead of \(\mathrm{Supp}(w)\) whenever we require.
**Definition 2.6**.: Let \(C\) be a linear code over \(\mathbb{F}_{2}\). An element \(v\in C\setminus\{0\}\) is called _minimal_ if \(w\preceq v\) and \(w\in C\setminus\{0\}\implies w=v\). If each nonzero codeword of \(C\) is minimal then \(C\) is called a _minimal code_.
Now we recall a result from [5] on the minimality of a code over a finite field.
**Lemma 2.7**.: _[_5_]_(Ashikhmin-Barg) _Let \(C\) be a linear code over \(\mathbb{F}_{q}\) with \(wt_{o}\) and \(wt_{\infty}\) as minimum and maximum weights of its non-zero codewords. If \(\frac{wt_{o}}{wt_{\infty}}>\frac{q-1}{q}\), then \(C\) is minimal._
**Definition 2.8**.: A subset \(\Delta\) of \(\mathbb{F}_{2}^{m}\) is called a _simplicial complex_ if \(v\in\Delta,w\in\mathbb{F}_{2}^{m}\) and \(w\preceq v\)\(\implies\)\(w\in\Delta\). An element \(v\in\Delta\) is called a _maximal element of_\(\Delta\) if for \(w\in\Delta\), \(v\preceq w\implies\)\(v=w\).
A simplicial complex can have more than one maximal elements. Let \(M\subseteq[m]\). The simplicial complex generated by \(M\subseteq[m]\) is denoted by \(\Delta_{M}\) and is defined as
\[\Delta_{M}=\{w\in\mathbb{F}_{2}^{m}|\ \mathrm{Supp}(w)\subseteq M\}=\psi^{-1}(2^ {M}). \tag{2.3}\]
Note that \(\psi^{-1}(M)\) is the only maximal element of \(\Delta_{M}\), and \(|\Delta_{M}|=|2^{M}|=2^{|M|}\). Here \(\Delta_{M}\) is a vector space over \(\mathbb{F}_{2}\) of dimension \(|M|\).
Given a subset \(P\) of \(\mathbb{F}_{2}^{m}\), define the polynomial (referred to as an \(m\)-variable generating function, see [8]) \(\mathcal{H}_{P}(y_{1},y_{2},\ldots,y_{m})\) by
\[\mathcal{H}_{P}(y_{1},y_{2},\ldots,y_{m})=\sum_{v\in P}\prod_{i=1}^{m}y_{i}^{v _{i}}\in\mathbb{Z}[y_{1},y_{2},\ldots,y_{m}], \tag{2.4}\]
where \(v=(v_{1},\ldots,v_{m})\) and \(\mathbb{Z}\) denotes the ring of integers.
We recall a lemma from [8].
**Lemma 2.9**.: _[_8_]_ _Suppose \(\Delta\subseteq\mathbb{F}_{2}^{m}\) is a simplicial complex and \(\mathcal{F}\) consists of its maximal elements. Then_
\[\mathcal{H}_{\Delta}(y_{1},y_{2},\ldots,y_{m})=\sum_{\emptyset\neq S\subseteq \mathcal{F}}(-1)^{|S|+1}\prod_{i\in\cap S}(1+y_{i}), \tag{2.5}\]
_where \(\cap S=\bigcap\limits_{F\in S}\mathrm{Supp}(F)\). In particular, we have_
\[|\Delta|=\sum_{\emptyset\neq S\subseteq\mathcal{F}}(-1)^{|S|+1}2^{|\cap S|}.\]
**Example 2.10**.: Consider the simplicial complex
\[\Delta=\{(0,0,0,0),(1,0,0,0),(0,0,1,0),(1,0,1,0),(0,1,0,0),(0,0,0,1),(0,1,0,1)\}.\]
Then \(\mathcal{F}=\{F_{1},F_{2}\}\) where \(F_{1}=(1,0,1,0)\) and \(F_{2}=(0,1,0,1)\). So
\[\mathcal{H}_{\Delta}(y_{1},y_{2},y_{3},y_{4}) =\prod_{i\in\mathrm{Supp}(F_{1})}(1+y_{i})+\prod_{i\in\mathrm{Supp }(F_{2})}(1+y_{i})-1\] \[=1+y_{1}+y_{3}+y_{1}y_{3}+y_{2}+y_{4}+y_{2}y_{4}.\]
and \(|\Delta|=7\).
For \(M\subseteq[m]\), define a Boolean function \(\Psi(\cdot|M):\mathbb{F}_{2}^{m}\longrightarrow\mathbb{F}_{2}\) by
\[\Psi(\alpha|M)=\prod_{i\in M}(1-\alpha_{i})=\begin{cases}1,\ \mathrm{if}\ \ \mathrm{Supp}( \alpha)\cap M=\emptyset;\\ 0,\ \mathrm{if}\ \ \mathrm{Supp}(\alpha)\cap M\neq\emptyset.\end{cases} \tag{2.6}\]
Here we recall a lemma from [29].
**Lemma 2.11**.: _[_29_]_ _Suppose \(M,N\) are subsets of \([m]\). Then_
1. 1. \(|\{v\in\mathbb{F}_{2}^{m}:\Psi(v|M)=1\}|=2^{m-|M|}\)_._ 2. \(|\{v\in\mathbb{F}_{2}^{m}:\Psi(v|M)=0\}|=(2^{|M|}-1)\times 2^{m-|M|}\)_._
2. \(|\{v\in\mathbb{F}_{2}^{m}:\Psi(v|M)=0,\Psi(v|N)=0\}|=(2^{|M|}-1)\times 2^{m-|M|}+(2^{|N|} -1)\times 2^{m-|N|}-(2^{|M\cup N|}-1)\times 2^{m-|M\cup N|}\)_._
3. 1. \(|\{(v,w)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}:v\neq w,\Psi(w|M)=0, \Psi(v+w|M)=0\}|=\{(2^{|M|}-2)\times 2^{m-|M|}+(2^{m-|M|}-1)\}\times(2^{|M|}-1) \times 2^{m-|M|}\)_._ 2. \(|\{(v,w)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}:v\neq w,\Psi(w|M)=0, \Psi(v+w|M)=1\}|=\{(2^{|M|}-1)\times 2^{m-|M|}\}\times(2^{m-|M|}-1)\)_._ 3. \(|\{(v,w)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}:v\neq w,\Psi(w|M)=1, \Psi(v+w|M)=0\}|=\{(2^{|M|}-1)\times 2^{m-|M|}\}\times(2^{m-|M|}-1)\)_._ 4. \(|\{(v,w)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}:v\neq w,\Psi(w|M)=1, \Psi(v+w|M)=1\}|=\{(2^{m-|M|}-1)\}\times(2^{m-|M|}-2)\)_._
Here we recall a result from [32].
**Lemma 2.12**.: _[_32_]_ _Let \(\alpha\in\mathbb{F}_{2}^{m}\) and let \(\Delta_{M}\) be the simplicial complex generated by \(M\subseteq[m]\). Then_
\[\sum_{t\in\Delta_{M}^{\mathrm{c}}}(-1)^{\alpha t}=2^{m}\delta_{0,\alpha}-\sum _{t\in\Delta_{M}}(-1)^{\alpha t},\]
_where \(\delta\) is the Kronecker delta function, and \(\Delta_{M}^{\mathrm{c}}=\mathbb{F}_{2}^{m}\setminus\Delta_{M}\), the complement of \(\Delta_{M}\)._
In the forthcoming sections, we study the algebraic structures of linear left-\(E\)-codes (respectively, right-\(E\)-codes) and their Gray images.
## 3 Linear left-\(E\)-codes using simplicial complexes
For any \(n,m\in\mathbb{N}\), let \(D=\{d_{1}<d_{2}<\cdots<d_{n}\}\) be an ordered multiset, where \(d_{i}\in E^{m}\ \forall\ i\). Let \(m\in\mathbb{N}\) and let \(D_{i}\subseteq\mathbb{F}_{2}^{m},i=1,2\). Assume that \(D=aD_{1}+cD_{2}\subseteq E^{m}\). Define
\[C_{D}^{L}=\{c_{D}^{L}(v)=\big{(}v\cdot d\big{)}_{d\in D}\ |\ v\in E^{m}\}. \tag{3.1}\]
where \(x\cdot y=\sum\limits_{i=1}^{m}x_{i}y_{i}\) for \(x,y\in E^{m}\).
Then \(C_{D}^{L}\) is a linear left-\(E\)-code of length \(|D|\). The ordered set \(D\) is called the _defining set_ of \(C_{D}^{L}\). Throughout this article we consider a defining set to an ordered multiset. Note that on changing the order of \(D\) we will get a code which is permutation equivalent (see [21]) to \(C_{D}^{L}\). Observe that \(c_{D}^{L}:E^{m}\longrightarrow C_{D}^{L}\) defined by \(c_{D}^{L}(v)=\big{(}v\cdot d\big{)}_{d\in D}\) is an epimorphism of left-\(E\)-modules.
### Weight distributions of linear left-\(E\)-codes
Assume that \(x=a\alpha+c\beta\in E^{m}\) and \(d=at_{1}+ct_{2}\in D\), where \(\alpha,\beta\in\mathbb{F}_{2}^{m}\) and \(t_{i}\in D_{i},i=1,2\). Then the Lee weight of \(c_{D}^{L}(x)\) is
\[wt_{Lee}(c_{D}^{L}(x))= wt_{Lee}\big{(}\big{(}\big{(}a\alpha+c\beta\big{)}\cdot\big{(}at_{1} +ct_{2}\big{)}\big{)}_{t_{i}\in D_{i}}\big{)}\] \[= wt_{Lee}\big{(}\big{(}a(\alpha t_{1})+c(\beta t_{1})\big{)}_{t_{ i}\in D_{i}}\big{)}\] \[= wt_{H}\big{(}\big{(}\beta t_{1}\big{)}_{t_{i}\in D_{i}}\big{)}+ wt_{H}\big{(}\big{(}\alpha t_{1}+\beta t_{1}\big{)}_{t_{i}\in D_{i}}\big{)}\]
Now if \(v\in\mathbb{F}_{2}^{m}\), then \(wt_{H}(v)=0\iff v=\mathbf{0}\in\mathbb{F}_{2}^{m}\). Hence,
\[wt_{Lee}(c_{D}^{L}(x)) =|D|-\frac{1}{2}\sum_{t_{1}\in D_{1}}\sum_{t_{2}\in D_{2}}\big{(} 1+(-1)^{\beta t_{1}}\big{)} \tag{3.2}\] \[+|D|-\frac{1}{2}\sum_{t_{1}\in D_{1}}\sum_{t_{2}\in D_{2}}\big{(} 1+(-1)^{(\alpha+\beta)t_{1}}\big{)}\] \[=|D|-\frac{1}{2}\sum_{t_{1}\in D_{1}}(-1)^{\beta t_{1}}\sum_{t_{2 }\in D_{2}}(1)-\frac{1}{2}\sum_{t_{1}\in D_{1}}(-1)^{(\alpha+\beta)t_{1}}\sum _{t_{2}\in D_{2}}(1).\]
For \(Q\subseteq\mathbb{F}_{2}^{m}\) and \(\alpha\in\mathbb{F}_{2}^{m}\), we define
\[\chi_{\alpha}(Q)=\sum_{t\in Q}(-1)^{\alpha t}. \tag{3.3}\]
Note that, for \(\alpha\in\mathbb{F}_{2}^{m}\) and \(\emptyset\neq M\subseteq[m]\), we have
\[\chi_{\alpha}(\Delta_{M}) =\sum_{t\in\Delta_{M}}(-1)^{\alpha t} \tag{3.4}\] \[=\mathcal{H}_{\Delta_{M}}\big{(}(-1)^{\alpha_{1}},(-1)^{\alpha_{2 }},\ldots,(-1)^{\alpha_{m}}\big{)}\] \[=\prod_{i\in M}(1+(-1)^{\alpha_{i}})=\prod_{i\in M}(2-2\alpha_{i}) \text{ (By Lemma \ref{lem:2.1})}\] \[=2^{|M|}\prod_{i\in M}(1-\alpha_{i})=2^{|M|}\Psi(\alpha|M),\]
where \(\Psi(\cdot|M)\) is the Boolean function defined in Equation (2.6).
The following result describes Lee weight distributions of \(C_{D}^{L}\) for various choices of \(D\).
**Theorem 3.1**.: _Suppose that \(m\in\mathbb{N}\) and \(M,N\subseteq[m]\)._
1. _Let_ \(D=a\Delta_{M}+c\Delta_{N}\subseteq E^{m}\)_. Then_ \(C_{D}^{L}\) _is a linear left-_\(E\)_-code of length_ \(|D|=2^{|M|+|N|}\) _and size_ \(2^{2|M|}\)_. The Lee weight distribution of_ \(C_{D}^{L}\) _is displayed in Table_ 1_._
2. _Let_ \(D=a\Delta_{M}^{c}+c\Delta_{N}\subseteq E^{m}\)_. Then_ \(C_{D}^{L}\) _is a linear left-_\(E\)_-code of length_ \(|D|=(2^{m}-2^{|M|})\times 2^{|N|}\) _and size_ \(2^{2m}\)_. The Lee weight distribution of_ \(C_{D}^{L}\) _is displayed in Table_ 2_._
3. _Let_ \(D=a\Delta_{M}+c\Delta_{N}^{c}\subseteq E^{m}\)_. Then_ \(C_{D}^{L}\) _is a linear left-_\(E\)_-code of length_ \(|D|=2^{|M|}\times(2^{m}-2^{|N|})\) _and size_ \(2^{2|M|}\)_. The Lee weight distribution of_ \(C_{D}^{L}\) _is displayed in Table_ 3_._
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \(2^{|M|+|N|}\) & \(2^{2m-2|M|}(2^{|M|}-1)^{2}\) \\ \hline \(2^{|M|+|N|-1}\) & \(2^{2m-2|M|+1}(2^{|M|}-1)\) \\ \hline \(0\) & \(2^{2m-2|M|}\) \\ \hline \end{tabular}
\end{table}
Table 1: Lee weight distribution in Theorem 3.1 (1)
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \(2^{|M|}\times(2^{m}-2^{|N|})\) & \(2^{2m}-2^{2m-|M|+1}+2^{2m-2|M|}\) \\ \hline \(2^{|M|-1}\times(2^{m}-2^{|N|})\) & \(2^{2m-|M|+1}-2^{2m-2|M|+1}\) \\ \hline \(0\) & \(2^{2m-2|M|}\) \\ \hline \end{tabular}
\end{table}
Table 3: Lee weight distribution in Theorem 3.1 (3)
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \(2^{|M|+|N|}\) & \(2^{2m-2|M|}(2^{|M|}-1)^{2}\) \\ \hline \(2^{|M|+|N|-1}\) & \(2^{2m-2|M|+1}(2^{|M|}-1)\) \\ \hline \(0\) & \(2^{2m-2|M|}\) \\ \hline \end{tabular}
\end{table}
Table 1: Lee weight distribution in Theorem 3.1 (1)
4. _Let_ \(D=a\Delta_{M}^{c}+c\Delta_{N}^{c}\subseteq E^{m}\)_. Then_ \(C_{D}^{L}\) _is a linear left-_\(E\)_-code of length_ \(|D|=(2^{m}-2^{|M|})(2^{m}-2^{|N|})\) _and size_ \(2^{2m}\)_. The Lee weight distribution of_ \(C_{D}^{L}\) _is displayed in Table_ 4_._
5. _Let_ \(D=a\Delta_{M}+c\Delta_{N}\) _so that_ \(D^{c}=\big{(}a\Delta_{M}^{c}+c\overline{\varepsilon}_{2}^{m}\big{)}\bigsqcup \big{(}a\Delta_{M}+c\Delta_{N}^{c}\big{)}\)_, where_ \(\bigsqcup\) _denotes disjoint union. Then_ \(C_{D^{c}}^{L}\) _is a linear left-_\(E\)_-code of length_ \(|D^{c}|=2^{2m}-2^{|M|+|N|}\) _and size_ \(2^{2m}\)_. The Lee weight distribution of_ \(C_{D^{c}}^{L}\) _is displayed in Table_ 5_._
Proof.: We discuss the proof of part (4). The other parts can be proved in a similar way.
Let \(x=a\alpha+c\beta\in E^{m}\). By Equation (3.2) and (3.4), we have
\[wt_{Lee}(c_{D}(x))=|D|-\frac{1}{2}(2^{m}-2^{|N|})\big{[}(2^{m} \delta_{0,\beta}-2^{|M|}\Psi(\beta|M))+(2^{m}\delta_{0,\alpha+\beta}-2^{|M|} \Psi(\alpha+\beta|M))\big{]}. \tag{3.5}\]
Now we look at the following cases.
1. If \(\alpha=0,\beta=0\), then \(wt_{Lee}(c_{D}(x))=0\). Here in this case, \(\#\alpha=1,\#\beta=1\). Therefore, \(\#x=1\).
\begin{table}
\begin{tabular}{|c|c|} \hline Lee weight & Frequency \\ \hline \(2^{m}\times(2^{m}-2^{|N|})\) & \(2^{2m-2|M|}-2^{m-|M|+1}+1\) \\ \hline \(2^{m-1}\times(2^{m}-2^{|N|})\) & \(2^{m-|M|+1}-2\) \\ \hline \((2^{m}-2^{|M|})(2^{m}-2^{|N|})\) & \(2^{2m}-2^{2m-|M|+1}+2^{2m-2|M|}\) \\ \hline \((2^{m}-2^{|M|-1})(2^{m}-2^{|N|})\) & \(2^{2m-|M|+1}-2^{2m-2|M|+1}-2^{m+1}+2^{m-|M|+1}\) \\ \hline \((2^{m-1}-2^{|M|-1})(2^{m}-2^{|N|})\) & \(2^{m+1}-2^{m-|M|+1}\) \\ \hline \(0\) & \(1\) \\ \hline \end{tabular}
\end{table}
Table 4: Lee weight distribution in Theorem 3.1 (4)
\begin{table}
\begin{tabular}{|c|c|} \hline Lee weight & Frequency \\ \hline \(2^{2m}\) & \(2^{2m-2|M|}-2^{m-|M|+1}+1\) \\ \hline \(2^{2m-1}\) & \(2^{m-|M|+1}-2\) \\ \hline \(2^{2m}-2^{|M|+|N|}\) & \(2^{2m}-2^{2m-|M|+1}+2^{2m-2|M|}\) \\ \hline \(2^{2m}-2^{|M|+|N|-1}\) & \(2^{2m-|M|+1}-2^{2m-2|M|+1}+2^{m-|M|+1}\) \\ \hline \(2^{2m-1}-2^{|M|+|N|-1}\) & \(2^{2m-1}-2^{2|M|+1}-2^{m+1}+2^{m-|M|+1}\) \\ \hline \(0\) & \(1\) \\ \hline \end{tabular}
\end{table}
Table 5: Lee weight distribution in Theorem 3.1 (5)
2. If \(\alpha\neq 0,\beta=0\), then \[wt_{Lee}(c_{D}(x))=|D|-\frac{1}{2}(2^{m}-2^{|N|})\big{[}(2^{m}-2^{|M|})-2^{|M|} \Psi(\alpha|M)\big{]}.\] * If \(\Psi(\alpha|M)=1\) then \(wt_{Lee}(c_{D}(x))=(2^{m}-2^{|N|})\times 2^{m-1}\). In this case, by using Lemma 2.11, we get \(\#\alpha=(2^{m-|M|}-1)\), \(\#\beta=1\). Therefore, \(\#x=(2^{m-|M|}-1)\). * If \(\Psi(\alpha|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=(2^{|M|}-1)\times 2^{m-|M|}\), \(\#\beta=1\). Therefore, \(\#x=(2^{|M|}-1)\times 2^{m-|M|}\).
3. If \(\alpha=0,\beta\neq 0\), then \[wt_{Lee}(c_{D}(x))=(2^{m}-2^{|M|})(2^{m}-2^{|N|})-\frac{1}{2}(2^{m}-2^{|N|}) \big{[}-2^{|M|+1}\Psi(\beta|M)\big{]}.\] * If \(\Psi(\beta|M)=1\) then \(wt_{Lee}(c_{D}(x))=2^{m}\times(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=(2^{|M|}-1)\times 2^{m-|M|}\). Therefore, \(\#x=(2^{|M|}-1)\times 2^{m-|M|}\).
4. If \(\alpha\neq 0,\beta\neq 0\) and \(\alpha=\beta\) ( \(\implies\alpha+\beta=0\)), then \[wt_{Lee}(c_{D}(x))=(2^{m}-2^{|M|})(2^{m}-2^{|N|})-\frac{1}{2}(2^{m}-2^{|N|}) \big{[}-2^{|M|}\Psi(\beta|M)+(2^{m}-2^{|M|})\big{]}.\] * If \(\Psi(\beta|M)=1\) then \(wt_{Lee}(c_{D}(x))=2^{m-1}\times(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=2^{m}\times(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=2^{m}\times(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=\frac{1}{2}(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=2^{m}\times(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=1\), \(\#\beta=2^{m-|M|}-1\). Therefore, \(\#x=2^{m-|M|}-1\). * If \(\Psi(\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=2^{m}\times(2^{m}-2^{|N|})\).
\(\#\alpha=1\), \(\#\beta=(2^{|M|}-1)\times 2^{m-|M|}\times(2^{m-|M|}-1)\). Therefore, \(\#x=(2^{|M|}-1)\times 2^{m-|M|}\).
5. If \(\alpha\neq 0,\beta\neq 0\) and \(\alpha\neq\beta\) (\(\implies\alpha+\beta\neq 0\)), then \[wt_{Lee}(c_{D}(x))=(2^{m}-2^{|M|})(2^{m}-2^{|N|})-\frac{1}{2}(2^{m}-2^{|N|}) \big{[}-2^{|M|}\Psi(\beta|M)-2^{|M|}\Psi(\alpha+\beta|M)\big{]}.\] * If \(\Psi(\beta|M)=0,\Psi(\alpha+\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=(2^{m}-2^{|M|})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\beta=(2^{|M|}-1)\times 2^{m-|M|}\), \(\#\alpha=(2^{|M|}-2)\times 2^{m-|M|}+(2^{m-|M|}-1)\). Therefore, \(\#x=\big{(}(2^{|M|}-2)\times 2^{m-|M|}+(2^{m-|M|}-1)\big{)}(2^{|M|}-1) \times 2^{m-|M|}\) * If \(\Psi(\beta|M)=0,\Psi(\alpha+\beta|M)=1\) then \(wt_{Lee}(c_{D}(x))=(2^{m}-2^{|M|-1})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=(2^{|M|}-1)\times 2^{m-|M|}\), \(\#\beta=1\times(2^{m-|M|}-1)\). Therefore, \(\#x=(2^{|M|}-1)\times 2^{m-|M|}\times(2^{m-|M|}-1)\). * If \(\Psi(\beta|M)=1,\Psi(\alpha+\beta|M)=0\) then \(wt_{Lee}(c_{D}(x))=(2^{m}-2^{|M|-1})(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=(2^{|M|}-1)\times 2^{m-|M|}\), \(\#\beta=(2^{m-|M|}-1)\). Therefore, \(\#x=(2^{|M|}-1)\times 2^{m-|M|}\times(2^{m-|M|}-1)\). * If \(\Psi(\beta|M)=1,\Psi(\alpha+\beta|M)=1\) then \(wt_{Lee}(c_{D}(x))=2^{m}\times(2^{m}-2^{|N|})\). In this case, by using Lemma 2.11, we get \(\#\alpha=(2^{m-|M|}-1)\), \(\#\beta=(2^{m-|M|}-2)\). Therefore, \(\#x=(2^{m-|M|}-1)\times(2^{m-|M|}-2)\). Based on the above calculations, we obtain Table 4. By using Table 4, \(|\ker(c_{D}^{L})|=|\{v\in E^{m}:v\cdot d=0\ \forall\ d\in D\}|=1\) so that \(c_{D}^{L}\) is an isomorphism and hence \(|C_{D}^{L}|=2^{2m}\). \(\Box\)
### Gray images of linear left-\(E\)-codes
In this subsection, we study the Gray images of linear left-\(E\)-codes in Theorem 3.1.
Now we recall a result (see Theorem 1.4.8\((ii)\) of [21]) for self-orthogonality of binary linear codes.
**Theorem 3.2**.: _[_21_]_ _If the Hamming weight of every non-zero element of a binary linear code \(C\) is a multiple of \(4\), then \(C\) is self-orthogonal._
By using Theorem 3.2, we give a sufficient condition for the binary Gray images \(\Phi(C_{D}^{L})\) to be self-orthogonal for each \(D\) discussed in Theorem 3.1.
**Proposition 3.3**.: _Let \(m\in\mathbb{N}\) and let \(M,N\subseteq[m]\). Assume that \(D\) is as in Theorem 3.1. Then the parameters of the binary Gray image \(\Phi(C_{D}^{L})\) are given by Table 6 for each \(D\). Besides, \(\Phi(C_{D}^{L})\) is a binary self-orthogonal code provided \(|M|+|N|\geq 3\)._
The following examples illustrate Theorem 3.1 and Proposition 3.3.
**Example 3.4**.:
1. Let \(m=5\), \(M=\{1,2,3\},N=\{2,3,4\}\). Then \(C_{D}^{L}\), where \(D\) is as in Theorem 3.1(1), is a linear left-\(E\)-code with Lee weight enumerator \(X^{128}+14X^{96}Y^{32}+49X^{64}Y^{64}\). By Proposition 3.3, \(\Phi(C_{D}^{L})\) is a \([128,6,32]\) binary self-orthogonal code.
2. Let \(m=4\), \(M=\{2,4\},N=\{3\}\). Then \(C_{D}^{L}\), where \(D\) is as in Theorem 3.1(2), is a linear left-\(E\)-code with Lee weight enumerator \(X^{48}+24X^{36}Y^{12}+72X^{20}Y^{28}+144X^{24}Y^{24}+6X^{32}Y^{16}+9X^{16}Y^{32}\). By Proposition 3.3, \(\Phi(C_{D}^{L})\) is a \([48,8,12]\) binary self-orthogonal code.
3. Let \(m=4\), \(M=\{1\},N=\{2,3,4\}\). Then \(C_{D}^{L}\), where \(D\) is as in Theorem 3.1(3), is a linear left-\(E\)-code with Lee weight enumerator \(X^{32}+2X^{24}Y^{8}+X^{16}Y^{16}\). By Proposition 3.3, \(\Phi(C_{D}^{L})\) is a \([1568,10,392]\) binary self-orthogonal code.
5. Let \(m=4\), \(M=\{1\},N=\{3,4\}\). Then \(C_{D}^{L}\), where \(D\) is as in Theorem 3.1(5), is a linear left-\(E\)-code with Lee weight enumerator \(X^{496}+16X^{372}Y^{124}+112X^{244}Y^{252}+64X^{248}Y^{248}+14X^{368}Y^{128}+49 X^{240}Y^{256}\). By Proposition 3.3, \(\Phi(C_{D}^{L})\) is a \([496,8,124]\) binary self-orthogonal code.
\begin{table}
\begin{tabular}{c|c|c} \hline S.N. & \(D\) as in & \([n,k,d]\) \\ \hline
1 & Theorem 3.1(1) & \([2^{|M|+|N|+1},2|M|,2^{|M|+|N|-1}]\) \\ \hline
2 & Theorem 3.1(2) & \([(2^{m}-2^{|M|})2^{|N|+1},2m,(2^{m}-2^{|M|})2^{|N|-1}]\) \\ \hline
3 & Theorem 3.1(3) & \([2^{|M|+1}(2^{m}-2^{|N|}),2|M|,2^{|M|-1}(2^{m}-2^{|N|})]\) \\ \hline
4 & Theorem 3.1(4) & \([(2^{m+1}-2^{|M|+1})(2^{m}-2^{|N|}),2m,(2^{m-1}-2^{|M|-1})(2^{m}-2^{|N|})]\) \\ \hline
5 & Theorem 3.1(5) & \([(2^{2m+1}-2^{|M|+|N|+1}),2m,(2^{2m-1}-2^{|M|+|N|-1})]\) \\ \hline \end{tabular}
\end{table}
Table 6: Parameters of \(\Phi(C_{D}^{L})\) for \(C_{D}^{L}\) as in Theorem 3.1
## 4 Linear right-\(E\)-codes using simplicial complexes
This section studies linear right-\(E\)-codes and their Gray images.
Let \(m\in\mathbb{N}\) and let \(D_{i}\subseteq\mathbb{F}_{2}^{m},i=1,2\). Assume that \(D=aD_{1}+cD_{2}\subseteq E^{m}\). Define
\[C_{D}^{R}=\{c_{D}^{R}(v)=\big{(}d\cdot v\big{)}_{d\in D}\ |\ v\in E^{m}\} \tag{4.1}\]
where \(x\cdot y=\sum\limits_{i=1}^{m}x_{i}y_{i}\) for \(x,y\in E^{m}\).
Then \(C_{D}^{R}\) is a linear right-\(E\)-code of length \(|D|\). The ordered set \(D\) is called the _defining set_ of \(C_{D}^{R}\). Observe that \(c_{D}^{R}:E^{m}\longrightarrow C_{D}^{R}\) defined by \(c_{D}^{R}(v)=\big{(}d\cdot v\big{)}_{d\in D}\) is an epimorphism of right-\(E\)-modules.
### Weight distributions of linear right-\(E\)-codes
Assume that \(x=a\alpha+c\beta\in E^{m}\) and \(d=at_{1}+ct_{2}\in D\), where \(\alpha,\beta\in\mathbb{F}_{2}^{m}\) and \(t_{i}\in D_{i},i=1,2\). Then the Lee weight of \(c_{D}^{R}(x)\) is
\[wt_{Lee}(c_{D}^{R}(x))= wt_{L}\big{(}\big{(}\big{(}at_{1}+ct_{2}\big{)}\cdot\big{(}a \alpha+c\beta\big{)}\big{)}_{t_{i}\in D_{i}}\big{)}\] \[= wt_{Lee}\big{(}\big{(}a(\alpha t_{1})+c(\alpha t_{2})\big{)}_{t_ {i}\in D_{i}}\big{)}\] \[= wt_{H}\big{(}\big{(}\alpha t_{2}\big{)}_{t_{i}\in D_{i}}\big{)} +wt_{H}\big{(}\big{(}\alpha t_{1}+\alpha t_{2}\big{)}_{t_{i}\in D_{i}}\big{)}\]
Now if \(v\in\mathbb{F}_{2}^{m}\), then \(wt_{H}(v)=0\iff v=\mathbf{0}\in\mathbb{F}_{2}^{m}\). Hence,
\[wt_{Lee}(c_{D}^{R}(x)) =|D|-\frac{1}{2}\sum_{t_{1}\in D_{1}}\sum_{t_{2}\in D_{2}}\big{(} 1+(-1)^{\alpha t_{2}}\big{)}\] \[+|D|-\frac{1}{2}\sum_{t_{1}\in D_{1}}\sum_{t_{2}\in D_{2}}\big{(} 1+(-1)^{(\alpha t_{1}+\alpha t_{2})}\big{)} \tag{4.2}\] \[=|D|-\frac{1}{2}\sum_{t_{1}\in D_{1}}(1)\sum_{t_{2}\in D_{2}}(-1) ^{\alpha t_{2}}-\frac{1}{2}\sum_{t_{1}\in D_{1}}(-1)^{\alpha t_{1}}\sum_{t_{2} \in D_{2}}(-1)^{\alpha t_{2}}.\]
Based on the above discussion, we have the following result.
**Theorem 4.1**.: _Suppose that \(m\in\mathbb{N}\) and \(M,N\subseteq[m]\)._
1. _Let_ \(D=a\Delta_{M}+c\Delta_{N}\subseteq E^{m}\)_. Then_ \(C_{D}^{R}\) _is a_ \(2\)_-weight linear right-_\(E\)_-code of length_ \(|D|=2^{|M|+|N|}\) _and size_ \(2^{|M\cup N|}\)_. The Lee weight distribution of_ \(C_{D}^{R}\) _is displayed in Table_ 8_._
2. _Let_ \(D=a\Delta_{M}+c\Delta_{N}^{\mathrm{c}}\subseteq E^{m}\)_. Then_ \(C_{D}^{R}\) _is a_ \(3\)_-weight linear right-_\(E\)_-code of length_ \(|D|=2^{|M|}\times(2^{m}-2^{|N|})\) _and size_ \(2^{m}\)_. The Lee weight distribution of_ \(C_{D}^{R}\) _is displayed in Table_ 9_._
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \(2^{|M|+|N|}\) & \(2^{m}\times(2^{m}-2^{m-|N|})\) \\ \hline \(2^{|M|+|N|-1}\) & \(2^{m}\times(2^{m-|N|}-2^{m-|M\cup N|})\) \\ \hline \(0\) & \(2^{2m-|M\cup N|}\) \\ \hline \end{tabular}
\end{table}
Table 7: Lee weight distribution in Theorem 4.1 (1)
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \((2^{m}-2^{|M|})\times 2^{|N|}\) & \(2^{m}\times(2^{m}-2^{m-|N|})\) \\ \hline \(2^{m+|N|-1}\) & \(2^{m}\times(2^{m-|M\cup N|}-1)\) \\ \hline \((2^{m}-2^{|M|})\times 2^{|N|-1}\) & \(2^{m}\times(2^{m-|N|}-2^{m-|M\cup N|})\) \\ \hline \(0\) & \(2^{m}\) \\ \hline \end{tabular}
\end{table}
Table 8: Lee weight distribution in Theorem 4.1 (2)
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \(2^{m+|M|}\) & \(2^{m}\times(2^{m-|M\cup N|}-1)\) \\ \hline \(2^{|M|}\times(2^{m}-2^{|N|-1})\) & \(2^{m}\times(2^{m-|N|}-2^{m-|M\cup N|})\) \\ \hline \(2^{|M|}\times(2^{m}-2^{|N|})\) & \(2^{m}\times(2^{m}-2^{m-|N|})\) \\ \hline \(0\) & \(2^{m}\) \\ \hline \end{tabular}
\end{table}
Table 9: Lee weight distribution in Theorem 4.1 (3)
_._
4. _Let_ \(D=a\Delta_{M}^{\rm c}+c\Delta_{N}^{\rm c}\subseteq E^{m}\)_. Then_ \(C_{D}^{R}\) _is a_ \(3\)_-weight linear right-_\(E\)_-code of length_ \(|D|=(2^{m}-2^{|M|})(2^{m}-2^{|N|})\) _and size_ \(2^{m}\)_. The Lee weight distribution of_ \(C_{D}^{R}\) _is displayed in Table_ 10_._
5. _Let_ \(D=a\Delta_{M}+c\Delta_{N}^{\rm c}\subseteq E^{m}\) _so that_ \(D^{\rm c}=(a\Delta_{M}^{\rm c}+c\mathbb{F}_{2}^{m})\bigsqcup(a\Delta_{M}+c \Delta_{N}^{\rm c})\) _where_ \(\bigsqcup\) _denotes disjoint union. Then_ \(C_{D}^{R}\) _is a_ \(3\)_-weight linear right-_\(E\)_-code of length_ \(|D|=(2^{2m}-2^{|M|+|N|})\) _and size_ \(2^{m}\)_. The Lee weight distribution of_ \(C_{D}^{R}\) _is displayed in Table_ 11_._
### Gray images of linear right-\(E\)-codes
Now, we study the Gray images of the codes in Theorem 4.1.
By Theorem 3.2, we have several self-orthogonal binary codes.
**Proposition 4.2**.: _Assume that \(C_{D}^{R}\) is as in Theorem 4.1. Then its Gray image \(\Phi(C_{D}^{R})\) is a binary self-orthogonal code provided \(|M|+|N|\geq 3\)._
By using Lemma 2.7 and Lemma 2.2 respectively, we find minimal and optimal linear codes among the Gray images of codes in Theorem 4.1.
**Theorem 4.3**.: _Suppose \(\Phi\) is the map defined in Equation (2.1). Let \(m\in\mathbb{N}\) and \(M,N\subseteq[m]\)._
1. _Let_ \(D=a\Delta_{M}+c\Delta_{N}\subseteq E^{m}\)_. Then_ \(\Phi(C_{D}^{R})\) _is a binary_ \([2^{|M|+|N|+1},|M\cup N|,2^{|M|+|N|-1}]\)_-linear_ \(2\)_-weight code._
2. _Let_ \(D=a\Delta_{M}^{\rm c}+c\Delta_{N}\subseteq E^{m}\)_. Then_ \(\Phi(C_{D}^{R})\) _is a binary_ \([(2^{m}-2^{|M|})2^{|N|+1},m,(2^{m}-2^{|M|})2^{|N|-1}]\)_-linear_ \(3\)_-weight code._
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \((2^{m}-2^{|M|})(2^{m}-2^{|N|-1})\) & \(2^{m}\times(2^{m-|N|}-2^{m-|M\cup N|})\) \\ \hline \(2^{m}(2^{m}-2^{|M|})-2^{m+|N|-1}\) & \(2^{m}\times(2^{m-|M\cup N|}-1)\) \\ \hline \((2^{m}-2^{|M|})\times(2^{m}-2^{|N|})\) & \(2^{m}\times(2^{m}-2^{m-|N|})\) \\ \hline \(0\) & \(2^{m}\) \\ \hline \end{tabular}
\end{table}
Table 10: Lee weight distribution in Theorem 4.1 (4)
\begin{table}
\begin{tabular}{c|c} \hline Lee weight & Frequency \\ \hline \(2^{2m}\) & \(2^{m}\times(2^{m-|M\cup N|}-1)\) \\ \hline \((2^{2m}-2^{|M|+|N|-1})\) & \(2^{m}\times(2^{m-|N|}-2^{m-|M\cup N|})\) \\ \hline \((2^{2m}-2^{|M|+|N|})\) & \(2^{m}\times(2^{m}-2^{m-|N|})\) \\ \hline \(0\) & \(2^{m}\) \\ \hline \end{tabular}
\end{table}
Table 11: Lee weight distribution in Theorem 4.1 (5)
3. _Let_ \(D=a\Delta_{M}+c\Delta_{N}^{\mathrm{c}}\subseteq E^{m}\)_. Then_ \(\Phi(C_{D}^{R})\) _is a binary_ \([2^{|M|+1}(2^{m}-2^{|N|}),m,2^{|M|}(2^{m}-2^{|N|})]\)_-linear_ \(3\)_-weight code. If_ \(|N|\leq m-2\) _then_ \(\Phi(C_{D}^{R})\) _is a minimal code._ 1. _Let_ \(|M|+|N|\leq m-1\) _and_ \(\theta_{3}=2^{|M|+1}-1\)_. If_ \(1\leq\theta_{3}<|M|+|N|+1\) _then_ \(\Phi(C_{D}^{R})\) _is optimal with respect to the Griesmer bound._ 2. _Let_ \(m\leq|M|+|N|\leq 2m-1\) _and_ \(\theta_{4}=2^{|M|+|N|+1-m}(2^{m-|N|}-1)\)_. If_ \(0<\theta_{4}<m\) _then_ \(\Phi(C_{D}^{R})\) _is optimal with respect to the Griesmer bound._
4. _Let_ \(D=a\Delta_{M}+c\Delta_{N}^{\mathrm{c}}\subseteq E^{m}\)_. Then_ \(\Phi(C_{D}^{R})\) _is a binary_ \([2(2^{m}-2^{|M|})(2^{m}-2^{|N|}),m,(2^{m}-2^{|M|}))\)_-linear_ \(3\)_-weight code. Moreover, it is a minimal code._
5. _Let_ \(D=a\Delta_{M}+c\Delta_{N}\subseteq E^{m}\) _so that_ \(D^{\mathrm{c}}=(a\Delta_{M}^{\mathrm{c}}+c\mathbb{F}_{2}^{m})\bigsqcup(a\Delta _{M}+c\Delta_{N}^{\mathrm{c}})\)_. Then_ \(\Phi(C_{D^{\mathrm{c}}}^{R})\) _is a binary_ \([2(2^{2m}-2^{|M|+|N|}),m,(2^{2m}-2^{|M|+|N|})]\)_-linear_ \(3\)_-weight code. If_ \(|M|+|N|\leq 2m-2\) _then_ \(\Phi(C_{D^{\mathrm{c}}}^{R})\) _is a minimal code._
The following examples illustrate Theorem 4.1, Proposition 4.2 and Theorem 4.3.
**Example 4.4**.:
1. Let \(m=5\), \(M=\{1,2,3\},N=\{2,3,4\}\). If \(D\) is as in Theorem 4.1(1), then \(C_{D}^{R}\) is a linear right-\(E\)-code with Lee weight enumerator \(X^{128}+X^{96}Y^{32}+14X^{64}Y^{64}\). By Proposition 4.2 and Theorem 4.3(1), \(\Phi(C_{D}^{R})\) is a \([128,4,32]\) binary self-orthogonal code.
2. Let \(m=5\), \(M=\emptyset,N=\{1,2,3\}\). If \(D\) is as in Theorem 4.1(2), then \(C_{D}^{R}\) is a linear right-\(E\)-code with Lee weight enumerator \(X^{224}+6X^{168}Y^{56}+X^{160}Y^{64}+24X^{112}Y^{112}\). By Proposition 4.2 and Theorem 4.3(2), \(\Phi(C_{D}^{R})\) is a \([224,5,56]\) binary self-orthogonal code.
3. 1. Let \(m=5\), \(M=\emptyset,N=\{1,2,3\}\). If \(D\) is as in Theorem 4.1(3), then \(C_{D}^{R}\) is a linear right-\(E\)-code with Lee weight enumerator \(X^{48}+28X^{24}Y^{24}+3X^{16}Y^{32}\). By Proposition 4.2 and Theorem 4.3(3a), \(\Phi(C_{D}^{R})\) is a \([48,5,24]\) binary minimal and self-orthogonal code. Note that Gray image \(\Phi(C_{D}^{R})\) is optimal according to the Database [16]. 2. Let \(m=9\), \(M=\{1,2\},N=\{2,3,4,5,6,7,8\}\). If \(D\) is as in Theorem 4.1(3), then \(C_{D}^{R}\) is a linear right-\(E\)-code with Lee weight enumerator \(X^{3072}+508X^{1536}Y^{1536}+2X^{1280}Y^{1792}+X^{1024}Y^{2048}\). By Proposition 4.2 and Theorem 4.3(3b), \(\Phi(C_{D}^{R})\) is a \([3072,9,1536]\) binary minimal and self-orthogonal code. Note that Gray image \(\Phi(C_{D}^{R})\) is optimal as there doesn't exist \([3072,9,1537]\)-linear code over \(\mathbb{F}_{2}\).
4. Let \(m=4\), \(M=\{1\},N=\{2,3\}\). If \(D\) is as in Theorem 4.1(4), then \(C_{D}^{R}\) is a linear right-\(E\)-code with Lee weight enumerator \(X^{336}+12X^{168}Y^{168}+X^{144}Y^{192}+2X^{140}Y^{196}\). By Proposition 4.2 and Theorem 4.3(4), \(\Phi(C_{D}^{R})\) is a \([336,4,168]\) binary minimal and self-orthogonal code.
5. Let \(m=4\), \(M=\{1,2,3\},N=\{2,3,4\}\). If \(D\) is as in Theorem 4.1(5), then \(C_{D}^{R}\) is a linear right-\(E\)-code with Lee weight enumerator \(X^{384}+14X^{192}Y^{192}+X^{160}Y^{224}\). By Proposition 4.2 and Theorem 4.3(4), \(\Phi(C_{D}^{R})\) is a \([384,4,192]\) binary minimal and self-orthogonal code.
_Remark 4.5_.: By Theorem 2.4 all the binary 1-weight codes are simplex codes and hence they are distance optimal.
## 5 Conclusion
Linear left-\(E\)-codes and right-\(E\)-codes denoted, respectively, by \(C_{D}^{L}\) and \(C_{D}^{R}\) are studied using simplicial complexes having one maximal element. This is the first attempt to obtain linear codes over a non-unital non-commutative ring using such a construction. We use a Gray map to study the corresponding binary codes for both \(C_{D}^{L}\) and \(C_{D}^{R}\). We obtain their weight distributions; Boolean functions are used in these computations. As a consequence, we achieve two infinite families of optimal codes. Besides, we obtain many families of binary minimal codes. Most of the codes obtained in this article are few-weight codes. All the binary codes obtained here are self-orthogonal under certain mild conditions. For the reader's convenience, we list certain few-weight codes having good parameters in Table 12 obtained in this article.
|
2305.16662 | Simple smooth modules over the superconformal current algebra | In this paper, we classify simple smooth modules over the superconformal
current algebra $\frak g$. More precisely, we first classify simple smooth
modules over the Heisenberg-Clifford algebra, and then prove that any simple
smooth $\frak g$-module is a tensor product of such modules for the super
Virasoro algebra and the Heisenberg-Clifford algebra, or an induced module from
a simple module over some finite-dimensional solvable Lie superalgebras. As a
byproduct, we provide characterizations for both simple highest weight $\frak
g$-modules and simple Whittaker $\frak g$-modules. Additionally, we present
several examples of simple smooth $\frak g$-modules that are not tensor product
of modules over the super Virasoro algebra and the Heisenberg-Clifford algebra. | Dong Liu, Yufeng Pei, Limeng Xia, Kaiming Zhao | 2023-05-26T06:20:24Z | http://arxiv.org/abs/2305.16662v2 | # Simple smooth modules over the superconformal current algebra
###### Abstract.
In this paper, we classify simple smooth modules over the superconformal current algebra \(\mathfrak{g}\). More precisely, we first classify simple smooth modules over the Heisenberg-Clifford algebra, and then prove that any simple smooth \(\mathfrak{g}\)-module is a tensor product of such modules for the super Virasoro algebra and the Heisenberg-Clifford algebra, or an induced module from a simple module over some finite-dimensional solvable Lie superalgebras. As a byproduct, we provide characterizations for both simple highest weight \(\mathfrak{g}\)-modules and simple Whittaker \(\mathfrak{g}\)-modules. Additionally, we present several examples of simple smooth \(\mathfrak{g}\)-modules that are not tensor product of modules over the super Virasoro algebra and the Heisenberg-Clifford algebra.
Mathematics Subject Classification: 17B65, 17B68, 17B70.
###### Contents
* 1 Introduction
* 2 The super Virasoro algebra \(\mathfrak{s}\) and the Heisenberg-Clifford algebra \(\mathfrak{bc}\) are well-known infinite dimensional Lie superalgebras that have significant applications in various fields of mathematics and physics [26]. The superconformal current algebra \(\mathfrak{g}\) is defined as a Lie superalgebra obtained from the semi-direct product of \(\mathfrak{s}\) and \(\mathfrak{bc}\), as described in Definition 2.1. This particular Lie superalgebra corresponds to 2-dimensional quantum field theories with both chiral and superconformal symmetries (see [25],[19]). Additionally, Balinsky-Novikov superalgebras can also be used to realize the superconformal current algebra \(\mathfrak{g}\). It was originally introduced by Balinsky for constructing local translation-invariant Lie superalgebras of vector-valued functions on a line (see [7, 41]). Independently, the superalgebra \(\mathfrak{g}\) was also introduced as an extension of the Beltrami algebra with supersymmetry (refer to [21]). Marcel, Ovsienko, and Roger studied this superalgebra in their research on generalized Sturm-Liouville operators (refer to [35],[36]), while it appeared in Gu's work [20] on integrable geodesic flows in the superextension of the Bott-Virasoro group. Recently, there is a study conducted on the superconformal current algebra within supersymmetric warped conformal field theory [13].
* Smooth modules for a \(\mathbb{Z}\) or \(\frac{1}{2}\mathbb{Z}\)-graded Lie superalgebra are those in which any vector can be annihilated by a sufficiently large positive part of the Lie superalgebra. Smooth modules are generalizations of highest weight and Whittaker modules. The study of smooth modules for infinite-dimensional Lie superalgebras with a \(\mathbb{Z}\) or \(\frac{1}{2}\mathbb{Z}\)-gradation is central to Lie theory because these modules are closely related to corresponding vertex operator superalgebras [29],[30]. However, classifying all smooth modules for a given Lie superalgebra remains a significant challenge.
* Simple smooth modules for the Virasoro algebra and the Neveu-Schwarz algebra were completely determined in [38, 32]. Partial results have been obtained for simple smooth modules of certain Lie (super)algebras in [11, 14, 12, 34, 42, 44]. Among others, simple smooth modules for the Ramond algebra, the twisted Heisenberg-Virasoro algebra, the mirror Heisenberg-Virasoro algebra and the Fermion-Virasoro algebra have been systematically studied.
In this paper, we develop some new methods based on [32, 34, 42, 43] to provide a classification of simple smooth modules for the superconformal current algebra \(\mathfrak{g}\). This seemingly generalization from Lie algebras to Lie supralgebras is indeed nontrivial (overcoming superalgebra difficulties and combining results from the super Virasoro algebra, the Fermion-Virasoro algebra and the twisted Heisenberg-Virasoro algebra). Our main results are presented below.
**Main theorem 1** (Theorem 4.8) _Let \(S\) be a simple smooth module over the superconformal current algebra \(\mathfrak{g}\) with the level \(\ell\neq 0\). Then_
(i) \(S\cong H(z)^{\mathfrak{g}}\) _where \(H\) is a simple smooth module over the Heisenberg-Clifford superalgebra \(\mathfrak{h}\)c and \(z\in\mathbb{C}\), or_
(ii)_\(S\) is an induced \(\mathfrak{g}\)-module from a simple smooth \(\mathfrak{g}^{(0,-n)}\)-module for some \(n\in\mathbb{Z}_{+}\), or_
(iii)_\(S\cong U^{\mathfrak{g}}\otimes H(z)^{\mathfrak{g}}\) where \(U\) is a simple smooth module over the super Virasoro algebra \(\mathfrak{s}\), \(H\) is a simple smooth module over the Heisenberg-Clifford superalgebra \(\mathfrak{h}\)c and \(z\in\mathbb{C}\)._
From **Main theorem 1** we see that simple smooth \(\mathfrak{g}\)-modules under some conditions are tensor products of such modules for the super Virasoro algebra and the Heisenberg-Clifford algebra. Meanwhile, simple smooth modules over the Heisenberg-Clifford algebra \(\mathfrak{h}\)c will be constructed and classified in Section 3 (Theorem 3.5), and simple smooth modules over the super Virasoro algebra \(\mathfrak{s}\) have been constructed and classified in [32] (Theorem 3.8 below). So **Main theorem 1** shows that simple smooth \(\mathfrak{g}\)-modules at nonzero level have been determined.
**Main theorem 2** (Theorem 5.6) _Let \(S\) be a simple smooth module over the superconformal current algebra \(\mathfrak{g}\) of central charge \((c,z)\) at level \(\ell=0\), and the central element \(H_{0}\) acts as a scalar \(d\) with \(d+(n+1)z\neq 0\) for any \(n\in\mathbb{Z}^{*}\). Then \(S\) is isomorphic to a simple module of the form \(\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}^{(0,-q)}}V\), where \(V\) is a simple \(\mathfrak{g}^{(0,-q)}\)-module for some \(q\in\mathbb{N}\)._
**Main theorem 3** (Theorem 6.3 and Theorem 6.5) _Suppose that \(m\in\mathbb{Z}_{+}\), and \(\phi_{m}\) and \(\phi^{\prime}_{m}\) are given in_ Section 6.1.2 _with \(\phi(\mathfrak{c}_{2})=z\)._
(i) _If \(\phi_{m}(\mathfrak{c}_{3})=\ell\neq 0\), then the universal Whittaker \(\mathfrak{g}\)-module \(\widetilde{W}_{\phi_{m}}\cong W^{\mathfrak{g}}_{\phi^{\prime}_{m}}\otimes H(z )^{\mathfrak{g}}\), where \(W_{\phi^{\prime}_{m}}\) is the Whittaker \(\mathfrak{s}\)-module and \(H=U(\mathfrak{h}\mathfrak{c})w_{\phi_{m}}\) is the Whittaker \(\mathfrak{h}\mathfrak{c}\)-module. Moreover \(\widetilde{W}_{\phi_{m}}\) is simple if and only if \(W_{\phi^{\prime}_{m}}\) is a simple \(\mathfrak{s}\)-module._
(ii) _If \(\phi_{m}(\mathfrak{c}_{3})=0\), then \(\widetilde{W}_{\phi_{m}}\) is simple if and only if \(\phi_{m}(H_{m})\neq 0\)._
It is well known that highest weight modules and Whittaker modules are important examples of smooth modules. **Main theorem 3** provides the necessary and sufficient conditions for Whittaker modules to be simple in all cases. Meanwhile by **Main theorems 1, 2** new characterizations for simple highest weight modules and Whittaker modules are obtained in Section 6 (Theorem 6.1 and Theorem 6.4).
These simple smooth modules are actually simple weak modules over the \(N=1\) Heisenberg-Virasoro vertex operator superalgebras \(\mathcal{V}(c,z,\ell)\) (see [2],[3]). Therefore, we are able to classify all weak simple \(\mathcal{V}(c,z,\ell)\)-modules. It is important to note that certain weak modules induced from simple and smooth \(\mathfrak{g}^{(0,-n)}\)-modules do not have the form \(M_{1}\otimes M_{2}\) as weak modules for a tensor product of two vertex operator superalgebras. Our result in this regard is interesting because such modules do not exist in the category of ordinary modules for vertex operator superalgebras.
The paper is organized as follows. In Section 2, we recall some related results that will be used throughout the paper. In Section 3, we obtain all simple smooth modules over the Heisenberg-Clifford superalgebra, and investigate the tensor product of simple modules over the superconformal current algebra, where we provide some general results. In Section 4, we determine all simple smooth modules for the superconformal current algebra at non-zero level (Theorem 4.8). Section 5 determines simple smooth modules at level zero with a mild condition (Theorem 5.6).
In Section 6, we then apply Theorems 4.8 and 5.6 to give characterizations of simple highest weight modules (Theorem 6.1) and simple Whittaker modules over \(\mathfrak{g}\) (Theorem 6.3, 6.4). In the end, we provide a few examples of simple smooth \(\mathfrak{g}\)-modules that are not tensor product modules over super Virasoro modules and Heisenberg-Clifford modules.
Throughout this paper, \(\mathbb{Z}\), \(\mathbb{Z}^{*}\), \(\mathbb{N}\) and \(\mathbb{Z}_{+}\) denote the set of integers, nonzero integers, non-negative and positive integers, respectively. Let \(\mathbb{C}\) and \(\mathbb{C}^{*}\) denote the sets of complex numbers and nonzero complex numbers, respectively. All vector spaces and Lie superalgebras are assumed to be over \(\mathbb{C}\). We denote the universal algebra of a Lie superalgebra \(L\) by \(U(L)\).
## 2. Notations and preliminaries
In this section, we will review the notations and established results associated with the superconformal current algebra.
### Superconformal current algebra
**Definition 2.1**.: _[_25_]_ _The superconformal current algebra \(\mathfrak{g}\) has a basis_
\[\left\{L_{m},H_{m},G_{p},F_{p},\mathbf{c}_{i}\mid i=1,2,3,m\in\mathbb{Z},p\in \mathbb{Z}+\frac{1}{2}\right\}\]
_where \(L_{m},H_{m},\mathbf{c}_{i}\in\mathfrak{g}_{0}\) and \(G_{p},F_{p}\in\mathfrak{g}_{1}\), with the following relations:_
\[[L_{m},L_{n}]=(m-n)L_{n+m}+\delta_{m+n,0}\frac{1}{12}(m^{3}-m) \mathbf{c}_{1},\] \[[L_{m},H_{n}]=-nH_{n+m}+\delta_{m+n,0}(m^{2}+m)\mathbf{c}_{2},\] \[[H_{m},H_{n}]=m\delta_{m+n,0}\mathbf{c}_{3},\] \[[L_{m},G_{p}]=\left(\frac{m}{2}-p\right)G_{p+m},\] \[[G_{p},G_{q}]=2L_{p+q}+\frac{1}{3}\left(p^{2}-\frac{1}{4}\right) \delta_{p+q,0}\mathbf{c}_{1},\] \[[L_{m},F_{p}]=-\left(\frac{m}{2}+p\right)F_{m+p},\] \[[F_{p},F_{q}]=\delta_{p+q,0}\mathbf{c}_{3},\] \[[G_{p},F_{q}]=H_{p+q}+(2p+1)\delta_{p+q,0}\mathbf{c}_{2},\] \[[H_{m},G_{p}]=mF_{m+p},\] \[[H_{m},F_{p}]=0,\ [\mathfrak{g},\mathbf{c}_{i}]=0\]
_for \(m,n\in\mathbb{Z},\ p,q\in\mathbb{Z}+\frac{1}{2},i=1,2,3\)._
Note that \(\mathfrak{g}\) is equipped with a triangular decomposition and \(\frac{1}{2}\mathbb{Z}\)-graded structure: \(\mathfrak{g}=\mathfrak{g}^{+}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}^{-}\), where
\[\mathfrak{g}^{\pm}=\bigoplus_{n\in\mathbb{Z}_{+},r\in\mathbb{N}+ \frac{1}{2}}(\mathbb{C}L_{\pm n}\oplus\mathbb{C}H_{\pm n}\oplus\mathbb{C}G_{ \pm r}\oplus\mathbb{C}F_{\pm r}),\] \[\mathfrak{g}_{0}=\mathbb{C}H_{0}\oplus\mathbb{C}L_{0}\oplus \mathfrak{g}_{i=1}^{3}\mathbb{C}\mathbf{c}_{i}.\]
Moreover,
\[\mathfrak{g}=\bigoplus_{k\in\frac{1}{2}\mathbb{Z}}\mathfrak{s}_{k} \tag{2.1}\]
is \(\frac{1}{2}\mathbb{Z}\)-graded with \(\mathfrak{g}_{i}=\mathbb{C}L_{i}\oplus\mathbb{C}H_{i},\mathfrak{g}_{r}= \mathbb{C}G_{r}\oplus\mathbb{C}F_{r}\) for \(i\in\mathbb{Z}^{*},r\in\mathbb{Z}+\frac{1}{2}\).
It is worth noting that the center of \(\mathfrak{g}\) is \(\mathfrak{z}:=\operatorname{span}_{\mathbb{C}}\{H_{0},\mathbf{c}_{1},\mathbf{ c}_{2},\mathbf{c}_{3}\}\). Furthermore, it is interesting to observe that \(\mathfrak{g}\) contains six important Lie superalgebras:
* the super Virasoro algebra (also called \(N=1\) Neveu-Schwarz algebra) \[\mathfrak{s}=\operatorname{span}_{\mathbb{C}}\{L_{i},G_{i+\frac{1}{2}}, \mathbf{c}_{1}\mid i\in\mathbb{Z}\},\]
* the Heisenberg algebra \[\mathfrak{h}=\operatorname{span}_{\mathbb{C}}\{H_{i},\mathbf{c}_{3}\mid i\in \mathbb{Z}\},\]
* the Fermion superalgebra \[\mathfrak{f}=\operatorname{span}_{\mathbb{C}}\{F_{i+\frac{1}{2}},\mathbf{c }_{3}\mid i\in\mathbb{Z}\},\]
* the Heisenberg-Clifford superalgebra \[\mathfrak{h}\mathfrak{c}=\operatorname{span}_{\mathbb{C}}\{H_{i},F_{i+\frac {1}{2}},\mathbf{c}_{3}\mid i\in\mathbb{Z}\},\]
* the twisted Heisenberg-Virasoro algebra \[\mathfrak{h}\mathfrak{v}=\operatorname{span}_{\mathbb{C}}\{L_{i},H_{i}, \mathbf{c}_{1},\mathbf{c}_{2},\mathbf{c}_{3}\mid i\in\mathbb{Z}\},\]
* the Fermion-Virasoro algebra \[\mathfrak{h}\mathfrak{v}=\operatorname{span}_{\mathbb{C}}\{L_{i},F_{i+\frac{1 }{2}},\mathbf{c}_{1},\mathbf{c}_{3}\mid i\in\mathbb{Z}\}.\]
Set \(\mathfrak{h}\mathfrak{c}^{\pm}=\mathfrak{h}\mathfrak{c}\cap\mathfrak{g}^{\pm}\) and \(\mathfrak{s}^{\pm}=\mathfrak{s}\cap\mathfrak{g}^{\pm}\). For convenience, we define the following subalgebras of \(\mathfrak{g}\). For any \(m\in\mathbb{N},n\in\mathbb{Z}\), set
\[\mathfrak{s}^{(m)}=\sum_{i\in\mathbb{N}}\mathbb{C}L_{m+i}\oplus \sum_{i\in\mathbb{N}}\mathbb{C}G_{m+i+\frac{1}{2}}\oplus\mathbb{C}\mathbf{c}_ {1}, \tag{2.2}\] \[\mathfrak{h}\mathfrak{c}^{(m)}=\sum_{i\in\mathbb{N}}\left( \mathbb{C}H_{m+i}+\mathbb{C}F_{m+i+\frac{1}{2}}\right)\oplus\mathbb{C} \mathbf{c}_{3}\] \[\mathfrak{g}^{(m,n)}=\mathfrak{s}^{(m)}\oplus\mathfrak{h} \mathfrak{c}^{(n)}\oplus\mathbb{C}\mathbf{c}_{2},\] \[\mathfrak{g}^{(m,-\infty)}=\mathfrak{s}^{(m)}\oplus\mathfrak{h} \mathfrak{c}\oplus\mathbb{C}\mathbf{c}_{2}.\]
**Definition 2.2**.: _We define a \(\mathfrak{g}\)-module \(W\) to have central charge \((c,z)\) if \(\mathbf{c}_{1}\) and \(\mathbf{c}_{2}\) act on \(W\) as complex scalars \(c\) and \(z\), respectively. Similarly, we say that a \(\mathfrak{g}\)-module \(W\) has_ **level**_\(\ell\), if \(\mathbf{c}_{3}\) acts on it as the complex scalar \(\ell\)._
### Some concepts about smooth modules
In this subsection, we introduce some concepts about smooth modules.
**Definition 2.3**.: _Let \(L=\oplus_{i\in\frac{1}{2}\mathbb{Z}}L_{i}\) be an arbitrary \(\frac{1}{2}\mathbb{Z}\)-graded Lie superalgebra. An \(L\)-module \(V\) is called the smooth module if for any \(v\in V\) there exists \(n\in\frac{1}{2}\mathbb{N}\) such that \(L_{i}v=0\), for \(i>n\). The category of smooth modules over \(L\) will be denoted as \(\mathcal{R}_{L}\)._
**Definition 2.4**.: _Let \(\mathfrak{a}\) be a subspace of a Lie superalgebra \(L\), and \(V\) be an \(L\)-module. We denote_
\[\operatorname{Ann}_{V}(\mathfrak{a})=\{v\in V\mid\text{$av=0$}\}.\]
**Definition 2.5**.: _Let \(L\) be a Lie superalgebra, \(V\) an \(L\)-module and \(x\in L\)._
1. _If for any_ \(v\in V\) _there exists_ \(n\in\mathbb{Z}_{+}\) _such that_ \(x^{n}v=0\)_, then we say that the action of_ \(x\) _on_ \(V\) _is locally nilpotent._
2. _If for any_ \(v\in V\) _we have_ \(\dim\left(\sum_{n\in\mathbb{N}}\mathbb{C}x^{n}v\right)<+\infty\)_, then the action of_ \(x\) _on_ \(V\) _is said to be locally finite._
3. _The action of_ \(L\) _on_ \(V\) _is said to be locally nilpotent if for any_ \(v\in V\) _there exists an_ \(n\in\mathbb{Z}_{+}\) _(depending on_ \(v\)_) such that_ \(x_{1}x_{2}\cdots x_{n}v=0\) _for any_ \(x_{1},x_{2},\cdots,x_{n}\in L\)_._
4. _The action of_ \(L\) _on_ \(V\) _is said to be locally finite if for any_ \(v\in V\)_,_ \(\dim U(L)v<\infty\)_._
**Definition 2.6**.: _A module \(M\) over an associative (or Lie) superalgebra \(A\) is called strictly simple if it is a simple module over the algebra \(A\) (forgetting the \(\mathbb{Z}_{2}\)-gradation)._
### Induced modules
Denote by \(\mathbb{M}\) the set of all infinite vectors of the form \(\mathbf{i}:=(\ldots,i_{2},i_{1})\) with entries in \(\mathbb{N}\), satisfying the condition that the number of nonzero entries is finite, and \(\mathbb{M}_{1}:=\{\mathbf{i}\in\mathbb{M}\mid\mathbf{i}_{k}=0,1,\ \forall k\in \mathbb{Z}_{+}\}\).
Let \(\mathbf{0}=(\ldots,0,0)\in\mathbb{M}\), and for \(i\in\mathbb{Z}_{+}\) let \(\epsilon_{i}\) denote the element \((\ldots,0,1,0,\ldots,0)\in\mathbb{M}\), where \(1\) is in the \(i\)'th position from right. For any nonzero \(\mathbf{i}\in\mathbb{M}\), let \(\hat{i}\) be the smallest integer \(p\) such that \(i_{p}\neq 0\) and define \(\mathbf{i}^{\prime}=\mathbf{i}-\epsilon_{\hat{i}}\). Let \(\hat{\hat{i}}\) be the largest integer \(q\) such that \(i_{q}\neq 0\) and define \(\mathbf{i}^{\prime\prime}=\mathbf{i}-\epsilon_{\hat{i}}\).
For \(\mathbf{i},\mathbf{j}\in\mathbb{M},\mathbf{k},\mathbf{l}\in\mathbb{M}_{1}\), we denote
\[\ell(\mathbf{i})=\sum_{j\geq 1}i_{j},\]
\[L^{\mathbf{i}}H^{\hat{\mathbf{i}}}G^{\mathbf{k}}F^{\mathbf{l}}=\ldots L_{-2}^ {i_{2}}L_{-1}^{i_{1}}\ldots H_{-2}^{j_{2}}H_{-1}^{j_{1}}\ldots G_{-2+\frac{1}{ 2}}^{k_{2}}G_{-1+\frac{1}{2}}^{k_{1}}\ldots F_{-2+\frac{1}{2}}^{l_{2}}F_{-1+ \frac{1}{2}}^{l_{1}}\in U(\mathfrak{g}_{-})\]
and
\[\operatorname{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=\sum_{n\in \mathbb{Z}_{+}}i_{n}\cdot n+\sum_{n\in\mathbb{Z}_{+}}j_{n}\cdot n+\sum_{n\in \mathbb{Z}_{+}}k_{n}\cdot(n-\frac{1}{2})+\sum_{n\in\mathbb{Z}_{+}}l_{n}\cdot( n-\frac{1}{2}), \tag{2.3}\]
which is called the length of \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\) (or the length of \(L^{\mathbf{i}}H^{\hat{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{l}}\)).
\[\operatorname{w}(\mathbf{i},\mathbf{k})=\sum_{n\in\mathbb{Z}_{+}}i_{n}\cdot n +\sum_{n\in\mathbb{Z}_{+}}k_{n}\cdot(n-\frac{1}{2}), \tag{2.4}\]
which is called the length of \((\mathbf{i},\mathbf{k})\) (or the length of \(L^{\mathbf{i}}G^{\mathbf{k}}\) or the length of \(H^{\hat{\mathbf{i}}}F^{\mathbf{k}}\)).
Denote by \(<\) the _lexicographical total order_ on \(\mathbb{M}\), defined as follows: for any \(\mathbf{i},\mathbf{j}\in\mathbb{M}\), set
\[\mathbf{i}<\mathbf{j}\ \Leftrightarrow\ \text{there exists $r\in\mathbb{Z}_{+}$ such that $i_{r}<j_{r}$ and $i_{s}=j_{s},\ \forall\ s>r$.}\]
Denote by \(\prec\) the _reverse lexicographical total order_ on \(\mathbb{M}\), defined as follows: for any \(\mathbf{i},\mathbf{j}\in\mathbb{M}\), set
\[\mathbf{i}\prec\mathbf{j}\ \Leftrightarrow\ \text{there exists $r\in\mathbb{Z}_{+}$ such that $i_{r}<j_{r}$ and $i_{s}=j_{s},\ \forall\ 1\leq s<r$.}\]
Now we introduce two total orders "\(<\)" and "\(\prec\)" on \(\mathbb{M}\times\mathbb{M}_{1}\):
**Definition 2.7**.: _Denoted by \(<\): \((\mathbf{j},\mathbf{l})<(\mathbf{j}_{1},\mathbf{l}_{1})\) if and only if one of the following conditions is satisfied:_
1. \(\mathbf{j}<\mathbf{j}_{1}\)_;_
2. \(\mathbf{j}=\mathbf{j}_{1}\) _and_ \(\mathbf{l}<\mathbf{l}_{1}\) _for all_ \(\mathbf{j},\mathbf{j}_{1}\in\mathbb{M},\mathbf{l},\mathbf{l}_{1}\in\mathbb{M}_ {1}\)_._
**Definition 2.8**.: _Denoted by \(<\): \((\mathbf{i},\mathbf{k})<(\mathbf{i}_{1},\mathbf{k}_{1})\) if and only if one of the following conditions is satisfied:_
1. \(\mathrm{w}(\mathbf{i},\mathbf{k})<\mathrm{w}(\mathbf{i}_{1},\mathbf{k}_{1})\)_;_
2. \(\mathrm{w}(\mathbf{i},\mathbf{k})=\mathrm{w}(\mathbf{i}_{1},\mathbf{k}_{1})\) _and_ \(\mathbf{k}\prec\mathbf{k}_{1}\)_;_
3. \(\mathrm{w}(\mathbf{i},\mathbf{k})=\mathrm{w}(\mathbf{i}_{1},\mathbf{k}_{1})\)_,_ \(\mathbf{k}=\mathbf{k}_{1}\)_,_ \(\ell(\mathbf{i})<\ell(\mathbf{i}_{1})\)_;_
4. \(\mathrm{w}(\mathbf{i},\mathbf{k})=\mathrm{w}(\mathbf{i}_{1},\mathbf{k}_{1})\)_,_ \(\mathbf{k}=\mathbf{k}_{1}\)_,_ \(\ell(\mathbf{i})=\ell(\mathbf{i}_{1})\) _and_ \(\mathbf{i}\prec\mathbf{i}_{1}\)_, for all_ \(\mathbf{i},\mathbf{i}_{1}\in\mathbb{M},\mathbf{k},\mathbf{k}_{1}\in\mathbb{M}_ {1}\)_._
**Definition 2.9**.: _Now we can induce a principal total order on \(\mathbb{M}^{2}\times\mathbb{M}_{1}^{2}\), still denoted by \(<\): \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\prec(\mathbf{i}_{1},\mathbf{j}_{1 },\mathbf{k}_{1},\mathbf{l}_{1})\) if and only if one of the following conditions is satisfied:_
1. \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})<\mathrm{w}(\mathbf{i}_ {1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})\)_;_
2. \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=\mathrm{w}(\mathbf{i} _{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})\) _and_ \(\mathbf{k}\prec\mathbf{k}_{1}\)_;_
3. \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=\mathrm{w}(\mathbf{i} _{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})\) _and_ \(\mathbf{k}=\mathbf{k}_{1}\) _and_ \(\mathbf{i}\prec\mathbf{i}_{1}\)_;_
4. \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=\mathrm{w}(\mathbf{i} _{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})\)_,_ \((\mathbf{i},\mathbf{k})=(\mathbf{i}_{1},\mathbf{k}_{1})\) _and_ \((\mathbf{j},\mathbf{l})<(\mathbf{j}_{1},\mathbf{l}_{1})\)_._
For \(q\in\mathbb{N}\) or \(q=\infty\), let \(V\) be a \(\mathfrak{g}^{(0,-q)}\)-module. According to the PBW Theorem, every element of \(\mathrm{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\) can be uniquely written in the following form
\[\sum_{\mathbf{i},\mathbf{j}\in\mathbb{M},\mathbf{k},\mathbf{l}\in\mathbb{M}_ {1}}L^{\mathbf{i}}H^{\mathbf{j}}G^{\mathbf{k}}F^{\mathbf{l}}v_{\mathbf{i}, \mathbf{j},\mathbf{k},\mathbf{l}}, \tag{2.5}\]
where all \(v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}}\in V\) and only finitely many of them are nonzero.
**Definition 2.10**.: _For any \(v\in\mathrm{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\) as in (2.5), we denote by \(\mathrm{supp}(v)\) the set of all \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\mathbb{M}^{2}\times\mathbb{ M}_{1}^{2}\) such that \(v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}}\neq 0\). For a nonzero \(v\in\mathrm{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\), we write \(\deg(v)\) the maximal element (with respect to the principal total order on \(\mathbb{M}^{2}\times\mathbb{M}_{1}^{2}\)) in \(\mathrm{supp}(v)\), called the degree of \(v\)._
For later use, we also define \(\mathrm{supp}_{\mathfrak{g}}(v)\) to be the set of all \((\mathbf{i},\mathbf{k})\in\mathbb{M}\times\mathbb{M}_{1}\) with \(v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}}\neq 0\), \(\mathrm{supp}_{\mathfrak{b}}(v)\) to be the set of all \((\mathbf{j},\mathbf{l})\in\mathbb{M}\times\mathbb{M}_{1}\) with \(v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}}\neq 0\), \(\deg_{\mathfrak{g}}(v)\) to be the maximal element (with respect to the principal total order \(<\) on \(\mathbb{M}\times\mathbb{M}_{1}\)) in \(\mathrm{supp}(v)\), \(\deg_{\mathfrak{b}\mathfrak{b}}(v)\) to be the maximal element (with respect to the principal total order \(<\) on \(\mathbb{M}\times\mathbb{M}_{1}\)) in \(\mathrm{supp}_{\mathfrak{b}\mathfrak{b}}(v)\), and denote by \(|v|_{\mathfrak{s}}=\mathrm{w}(\deg_{\mathfrak{s}}(v))\), \(|v|_{\mathfrak{b}\mathfrak{b}}=\mathrm{w}(\deg_{\mathfrak{b}\mathfrak{b}}(v))\). Note that here and later we make the convention that \(\deg(v)\), \(\deg_{\mathfrak{s}}(v)\) and \(\deg_{\mathfrak{b}\mathfrak{b}}(v)\) only for \(v\neq 0\). Now we give a lemma for later convenience.
**Lemma 2.11**.: _For any \(k,l\in\mathbb{Z}_{+},k\geq l\), \(x\in U(\mathfrak{g}^{-}\oplus\mathfrak{g}^{0})_{-k}\) and \(y\in\mathfrak{g}_{l}\), we have \([y,x]\in U(\mathfrak{g}^{-}\oplus\mathfrak{g}^{0})_{-k+l}+U(\mathfrak{g}^{-} \oplus\mathfrak{g}^{0})_{\mathfrak{g}_{+}}\), where the grading of \(U(\mathfrak{g}^{-}\oplus\mathfrak{g}^{0})_{-k}\) is according to the adjoint action of \(L_{0}\)._
Proof.: It follows by direct calculations.
## 3. Simple smooth modules over the Heisenberg-Clifford algebra
In this section we shall consider all simple smooth \(\mathfrak{h}\)\(\subset\)-modules with nonzero level.
### Tensor product of simple modules
We need the following result on simple tensor product modules.
**Lemma 3.1**.: _[_43_, Lemma 2.2]_ _Let \(A,A^{\prime}\) be unital associative superalgebras, and \(M,M^{\prime}\) be \(A,A^{\prime}\) modules, respectively. If \(A^{\prime}\) has a countable basis and \(M^{\prime}\) is strictly simple, then_
(1)_\(M\otimes M^{\prime}\) is a simple \(A\otimes A^{\prime}\)-module if and only if \(M\) is a simple \(A\)-module;_
(2) _if \(V\) is a simple \(A\otimes A^{\prime}\)-module containing a strictly simple \(\mathbb{C}\otimes A^{\prime}\)-submodule \(M^{\prime}\), then \(V\cong M\otimes M^{\prime}\) for some simple \(A\)-module \(M\)._
**Lemma 3.2**.: _Let \(W\) be a module over a Lie superalgebra \(L\), \(S\) be a subalgebra of \(L\), and \(B\) be an \(S\)-module. Then the \(L\)-module homomorphism \(\varphi:\operatorname{Ind}_{S}^{L}(B\otimes W)\to\operatorname{Ind}_{S}^{L}(B )\otimes W\) induced from the inclusion map \(B\otimes W\to\operatorname{Ind}_{S}^{L}(B)\otimes W\) is an \(L\)-module isomorphism._
Proof.: It is essentially the same as that of [34, Lemma 8] although we have modules over Lie superalgebras here.
Let \(\mathfrak{g}=\mathfrak{a}\ltimes\mathfrak{b}\) be a Lie superalgebra where \(\mathfrak{a}\) is a subalgebra of \(\mathfrak{g}\) and \(\mathfrak{b}\) is an ideal of \(\mathfrak{g}\). Let \(M\) be a \(\mathfrak{g}\)-module with a \(\mathfrak{b}\)-submodule \(H\) so that the \(\mathfrak{b}\)-submodule structure on \(H\) can be extended to a \(\mathfrak{g}\)-module structure on \(H\). We denote this \(\mathfrak{g}\)-module by \(H^{\mathfrak{g}}\). For any \(\mathfrak{a}\)-module \(U\), we can make it into a \(\mathfrak{g}\)-module by \(\mathfrak{b}U=0\). We denote this \(\mathfrak{g}\)-module by \(U^{\mathfrak{g}}\). Then by Lemma 3.1 we have
**Corollary 3.3**.: _Let \(\mathfrak{g}=\mathfrak{a}\ltimes\mathfrak{b}\) be a countable dimensional Lie superalgebra. Let \(M\) be a simple \(\mathfrak{g}\)-module with a strictly simple \(\mathfrak{b}\)-submodule \(H\) so that an \(H^{\mathfrak{g}}\) exists. Then \(M\cong U^{\mathfrak{g}}\otimes H^{\mathfrak{g}}\) as \(\mathfrak{g}\)-modules for some simple \(\mathfrak{a}\)-module \(U\)._
For any \(z\in\mathbb{C}\), \(H\in\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\) with the action of \(\mathbf{c}_{3}\) as a nonzero scalar \(\ell\), similar to (4.2), (4.3), (4.4a), (4.4b) in [25] where they only considered highest weight modules, here we can also give a \(\mathfrak{g}\)-module structure on \(H\) (denoted by \(H(z)^{\mathfrak{g}}\)) via the following map
\[L_{n}\mapsto\bar{L}_{n}=\frac{1}{2\ell}\sum_{k\in\mathbb{Z}}:H_{k}H_{-k+n}:- \frac{(n+1)z}{\ell}H_{n}-\frac{1}{2\ell}\sum_{r\in\mathbb{Z}+\frac{1}{2}}(r+ \frac{1}{2}):F_{r}F_{-r+n}:,\]
\[G_{r}\mapsto\bar{G}_{r}=\frac{1}{\ell}\sum_{k\in\mathbb{Z}}H_{k}F_{-k+r}- \frac{(2r+1)z}{\ell}F_{r},\ H_{n}\mapsto H_{n},\ F_{r}\mapsto F_{r},\]
\[\mathbf{c}_{1}\mapsto\frac{3}{2}-\frac{12z^{2}}{\ell},\ \mathbf{c}_{2}\mapsto z,\ \mathbf{c}_{3}\mapsto\ell \tag{3.1}\]
for all \(n\in\mathbb{Z},r\in\mathbb{Z}+\frac{1}{2}\), where the normal order is defined as
\[:H_{i}H_{j}:=\left\{\begin{array}{ll}H_{i}H_{j},&\text{ if }i<j,\\ H_{j}H_{i},&\text{ otherwise.}\end{array}\right.\]
\[:F_{r}F_{s}:=\left\{\begin{array}{ll}F_{r}F_{s},&\text{if $r<s$,}\\ -F_{s}F_{r},&\text{otherwise.}\end{array}\right.\]
Applying the above corollary to \(\mathfrak{g}=\mathfrak{s}\ltimes\mathfrak{h}\mathfrak{c}\), we have the following results.
**Corollary 3.4**.: _Let \(V\) be a simple smooth \(\mathfrak{g}\)-module with central charge \((c,z)\) at nonzero level. Then \(V\cong U^{\mathfrak{g}}\otimes H(z)^{\mathfrak{g}}\) as a \(\mathfrak{g}\)-module for some simple \(\mathfrak{s}\)-module \(U\in\mathcal{R}_{s}\) and some simple module \(H\in\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\) if and only if \(V\) contains a simple \(\mathfrak{h}\mathfrak{c}\)-submodule \(H\in\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\)._
Proof.: (\(\Leftarrow\)). This follows from Corollary 3.3 and the fact that any simple smooth \(\mathfrak{h}\mathfrak{c}\)-module \(H\) is strictly simple (see Section 3 of [44], also see Proposition 3.5 below).
(\(\Rightarrow\)). This follows from the fact that \(u\otimes H(z)^{\mathfrak{g}}\) is a simple \(\mathfrak{h}\mathfrak{c}\)-submodule of \(U^{\mathfrak{g}}\otimes H(z)^{\mathfrak{g}}\) for any nonzero \(u\in U\).
### Simple modules in \(\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\)
For any \(m\in\mathbb{Z}_{+}\), we define
\[\mathfrak{h}\mathfrak{c}_{[m]}=\text{span}\{H_{i},F_{j+\frac{1}{2}},\mathbf{c }_{3}|-m\leq i\leq m,-m\leq j\leq m-1\}.\]
**Theorem 3.5**.: _Let \(B\), \(B^{\prime}\) be simple modules over \(\mathfrak{h}\mathfrak{c}_{[m]}\) for some \(m\in\mathbb{Z}_{+}\) with nonzero action of \(\mathbf{c}_{3}\)._
(a) _The \(\mathfrak{h}\mathfrak{c}\)-module \(\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(m)}}^{\mathfrak{h}\mathfrak{c} }B\) is simple, where \(B\) is regarded as \(\mathfrak{h}\mathfrak{c}^{(-m)}\)-module by \(H_{i}B=0=F_{i+\frac{1}{2}}B,\forall i>m\). Moreover, all nontrivial simple modules in \(\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\) can be obtained in this way._
(b) _As \(\mathfrak{h}\mathfrak{c}\)-modules, \(\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(-m)}}^{\mathfrak{h}\mathfrak{c} }B\cong\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(-m)}}^{\mathfrak{h} \mathfrak{c}}B^{\prime}\) if and only if \(B\cong B^{\prime}\) as \(\mathfrak{h}\mathfrak{c}_{[m]}\)-modules._
(c) _Let \(V\) be a simple smooth \(\mathfrak{h}\mathfrak{c}\)-module with nonzero action of \(\mathbf{c}_{3}\), then \(V\cong H\otimes K\) for some simple smooth \(\mathfrak{h}\mathfrak{c}\)-module \(H\) and some simple smooth \(\mathfrak{f}\)-module \(K\), where the actions are as follows:_
\[x(u\otimes w)=\begin{cases}xu\otimes w&\text{if $x\in\mathfrak{h}$,}\\ u\otimes xw&\text{if $x\in\mathfrak{f}$.}\end{cases}\]
Proof.: We define the new Lie superalgebra \(\mathfrak{f}^{\prime}=\{F_{r},\mathbf{c}_{3}^{\prime}|r\in\mathbb{Z}\}\) with brackets:
\[[F_{m},F_{n}]=\delta_{m+n,0}\mathbf{c}_{3}^{\prime},\]
and the subalgebras \(\mathfrak{f}^{\prime}_{[m]}=\{F_{r},\mathbf{c}_{3}^{\prime}|-m+\frac{1}{2} \leq r\leq m-\frac{1}{2}\}\), \(\mathfrak{h}_{[m]}=\text{span}\{H_{i},\mathbf{c}_{3}|-m\leq i\leq m\}\). Then
\[\mathfrak{h}\mathfrak{c}_{[m]}=(\mathfrak{h}_{[m]}\oplus\mathfrak{f}^{\prime }_{[m]})/\langle\mathbf{c}_{3}-\mathbf{c}_{3}^{\prime}\rangle.\]
Now \(B,B^{\prime}\) can be considered as modules over the superalgebra \((\mathfrak{h}_{[m]}\oplus\mathfrak{f}^{\prime}_{[m]})\). From Lemma 3.8 in [44], we know that there is a strictly simple \(\mathfrak{f}^{\prime}_{[m]}\)-module \(B_{2}\) contained in \(B\). From Corollary 3.3, we know that there is a simple \(\mathfrak{h}_{[m]}\)-module \(B_{1}\) such that \(B=B_{1}\otimes B_{2}\) as module over the superalgebra \((\mathfrak{h}_{[m]}\oplus\mathfrak{f}^{\prime}_{[m]})\).
(a) Using PBW Theorem and nonzero action of \(\mathbf{c}_{3}\), we can easily prove that the \(\mathfrak{h}\mathfrak{c}\)-module \(\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}_{[m]}}^{\mathfrak{h}\mathfrak{c} }B=\operatorname{Ind}_{\mathfrak{h}_{[m]}}^{\mathfrak{h}}B_{1}\otimes \operatorname{Ind}_{\mathfrak{f}^{\prime}_{[m]}}^{\mathfrak{h}}B_{2}\) is simple.
Now suppose that \(V\in\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\) is simple and nontrivial. From Lemma 3.9 in [44], we know that there is a simple \(\mathfrak{f}^{\prime}\)-submodule \(W_{2}\) in \(V\). From Corollary 3.3, we know that there is a simple \(\mathfrak{h}\)-module \(W_{1}\) such that \(V=W_{1}\otimes W_{2}\) as module over the superalgebra \((\mathfrak{h}\oplus\mathfrak{f}^{\prime})\). Combining this with Lemma 3.6 in [44] and Proposition 9 in [34] we obtain the result.
(b) Noting that \(B\) and \(B^{\prime}\) are the socles of \(\mathfrak{h}\mathfrak{c}^{(-m)}\)-modules \(\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(-m)}}^{\mathfrak{h}\mathfrak{c}}B\) and \(\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(-m)}}^{\mathfrak{h}\mathfrak{c}} B^{\prime}\) respectively, we see that, if \(\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(-m)}}^{\mathfrak{h}\mathfrak{c} }B\cong\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(-m)}}^{\mathfrak{h} \mathfrak{c}^{(-m)}}B^{\prime}\) as \(\mathfrak{h}\mathfrak{c}^{(-m)}\)-modules, hence \(B\cong B^{\prime}\) as \(\mathfrak{h}\mathfrak{c}_{[m]}\)-modules. The converse is trivial.
(c) Note that \(\mathfrak{h}\mathfrak{c}=(\mathfrak{h}\oplus\mathfrak{f}^{\prime})/\langle \mathbf{c}_{3}-\mathbf{c}_{3}^{\prime}\rangle\). Then \(V\) is naturally considered as a \((\mathfrak{h}\oplus\mathfrak{f}^{\prime})\)-module. By Lemma 3.9 in [44], there is a strictly simple smooth \(\mathfrak{f}^{\prime}\)-submodule \(K\) in \(V\). Using Corollary 3.3, we obtain the statement.
Note that, simple smooth \(\mathfrak{h}\)-modules were classified in [34], while simple smooth \(\mathfrak{f}\)-modules were classified in [44]. Combining with the above proposition, we have all simple smooth modules over the Heisenberg-Clifford superalgebra \(\mathfrak{h}\mathfrak{c}\).
**Example 3.6**.: _Take \(m=1\), make \(\mathbb{C}v\) into a module over \(\mathfrak{b}=\mathbb{C}[H_{0},H_{-1},F_{-\frac{1}{2}}\}+\mathbb{C}\mathbf{c}_{3}\) by \(H_{0}v=dv,H_{-1}v=av,F_{-\frac{1}{2}}v=0,\mathbf{c}_{3}v=\ell v\) for \(a,d,\ell\in\mathbb{C}\) with \(\ell\neq 0\). We have the simple \(\mathfrak{h}\mathfrak{c}_{[1]}\)-module \(B=\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}_{[1]}}^{\mathfrak{h}\mathfrak{c }_{[1]}}\mathbb{C}v\). Then we obtain the simple \(\mathfrak{h}\mathfrak{c}\)-module \(\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(-1)}}^{\mathfrak{h}\mathfrak{c }^{(-1)}}B\). In this manner similar to Sect.3.1 in [9], one can construct a lot of simple nonweight modules in \(\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\)._
Now we introduce a result on Whittaker modules over \(\mathfrak{h}\mathfrak{c}\). Suppose that \(\phi:\mathfrak{h}\mathfrak{c}^{(0)}\to\mathbb{C}\) is a homomorphism of Lie superalgebras. It follows that \(\phi(F_{r})=0\) for all \(r>0\). Then \(\mathbb{C}w_{\phi}\) becomes a one-dimensional \(\mathfrak{h}\mathfrak{c}^{(0)}\)-module defined by \(xw_{\phi}=\phi(x)w_{\phi}\) for all \(x\in\mathfrak{h}\mathfrak{c}^{(0)}\). The induced \(\mathfrak{h}\mathfrak{c}\)-module \(W_{\phi}=\operatorname{Ind}_{\mathfrak{h}\mathfrak{c}^{(0)}}^{\mathfrak{h} \mathfrak{c}}\mathbb{C}w_{\phi}\) is called a Whittaker module with respect to \(\phi\). Note that the \(\mathfrak{h}\mathfrak{c}\)-module \(W_{\phi}\) is not necessarily a smooth module.
**Lemma 3.7**.: _The \(\mathfrak{h}\mathfrak{c}\)-module \(W_{\phi}\) is simple if and only if \(\phi(\mathbf{c}_{3})\neq 0\)._
Proof.: It follows by direct calculations.
### Simple modules in \(\mathcal{R}_{\mathfrak{s}}\)
We first recall from [32] the classification for simple smooth \(\mathfrak{s}\)-modules.
**Theorem 3.8**.: _[_32_]_ _Any simple smooth \(\mathfrak{s}\)-module is a highest weight module, or isomorphic to \(\operatorname{Ind}_{\mathfrak{g}^{(0)}}^{\mathfrak{s}}V\) for a simple \(\mathfrak{s}^{(0)}\)-module \(V\) such that for some \(k\in\mathbb{Z}_{+}\),_
(i)_ \(L_{k}\) acts injectively on \(V\);_
(ii)_ \(L_{i}V=0\) for all \(i>k\)_(_in this case \(G_{i-\frac{1}{2}}V=0\) for all \(i>k\))._
For later convenience, now we establish some basic results about the Verma modules over the super Virasoro algebra.
For the Verma module \(M_{\mathfrak{s}}(c,h)\) over the super Virasoro algebra \(\mathfrak{s}\), it is well-known from [6] that there exist two homogeneous elements \(P_{1},P_{2}\in U(\mathfrak{s}^{-})\mathfrak{s}^{-}\) such that \(U(\mathfrak{s}^{-})P_{1}w_{1}+U(\mathfrak{s}^{-})P_{2}w_{1}\) is the unique maximal proper \(\mathfrak{s}\)-submodule of \(M_{\mathfrak{s}}(c,h)\), where \(P_{1},P_{2}\) are allowed to be zero and \(w_{1}\) is the highest weight vector in \(M_{\mathfrak{s}}(c,h)\).
**Lemma 3.9**.: _Let \(k=0,-1\). Suppose \(M\) is an \(\mathfrak{s}^{(k)}\)-module on which \(L_{0}\) and \(\mathbf{c}_{1}\) act as multiplication by given scalars \(h\) and \(c\) respectively. Then there exists a unique maximal submodule \(N\) of \(\operatorname{Ind}_{\mathfrak{s}^{(k)}}^{\mathfrak{s}}M\) with \(N\cap M=0\). More precisely, \(N\) is generated by \(P_{1}M\) and \(P_{2}M\), i.e., \(N=U(\mathfrak{s}^{-})(P_{1}M+P_{2}M)\)
Proof.: We will follow the proof in [42, Lemma 4.8]. Note that \(L_{0}\) acts diagonalizably on \(\operatorname{Ind}_{g^{(k)}}^{\mathfrak{s}}M\) and its submodules, and
\[M=\{u\in\operatorname{Ind}_{g^{(k)}}^{\mathfrak{s}}M\mid L_{0}u=\lambda u\},\]
i.e., \(M\) is the highest weight space of \(\operatorname{Ind}_{g^{(k)}}^{\mathfrak{s}}M\). Let \(N\) be the sum of all \(\mathfrak{s}\)-submodules of \(\operatorname{Ind}_{g^{(k)}}^{\mathfrak{s}}M\) which intersect with \(M\) trivially. Then \(N\) is the desired unique maximal \(\mathfrak{s}\)-submodule of \(\operatorname{Ind}_{g^{(k)}}^{\mathfrak{s}}M\) with \(N\cap M=0\).
Let \(N^{\prime}\) be the \(\mathfrak{s}\)-submodule generated by \(P_{1}M\) and \(P_{2}M\), i.e., \(N^{\prime}=U(\mathfrak{s}^{-})(P_{1}M+P_{2}M)\). Then \(N^{\prime}\cap M=0\). Hence, \(N^{\prime}\subseteq N\). Suppose there exists a proper submodule \(U\) of \(\operatorname{Ind}_{g^{(k)}}^{\mathfrak{s}}M\) such that \(U\subset N\) and \(U\not\subset N^{\prime}\). Choose a nonzero homogeneous \(v=\sum_{i=1}^{r}u_{i}v_{i}\in U\setminus N^{\prime}\), where \(u_{i}\in U(\mathfrak{s}^{-})\) and \(v_{1},...v_{r}\in M\) are linearly independent. Note that all \(u_{i}\) have the same weight. Then some \(u_{i}v_{i}\notin N^{\prime}\), say \(u_{1}v_{1}\notin N^{\prime}\). There is a homogeneous \(u\in U(\mathfrak{s})\) such that \(uu_{1}v_{1}=v_{1}\). Noting that all \(uu_{i}\) has weight \(0\) and each \(U(\mathfrak{s})v_{i}\) is a highest weight \(\mathfrak{s}\)-module, so \(uu_{i}v_{i}\in\mathbb{C}v_{i}\). Thus \(uv\in U\cap M\subset N\cap M=0\), which is impossible. This implies that \(N\subseteq N^{\prime}\). Hence, \(N=N^{\prime}\), as desired.
**Lemma 3.10**.: _Let \(M\) be an \(\mathfrak{s}^{(0)}\)-module on which \(\mathfrak{s}^{+}\) acts trivially and \(\mathbf{c_{1}}\) acts as multiplication by ascalar \(c\). If any finitely generated \(\mathbb{C}[L_{0}]\)-submodule of \(M\) is a free \(\mathbb{C}[L_{0}]\)-module, then any nonzero \(\mathfrak{s}\)-submodule of \(\operatorname{Ind}_{g^{(0)}}^{\mathfrak{s}}M\) intersects with \(M\) non-trivially._
Proof.: We will follow the proof in [42, Lemma 4.10]. Let \(V\) be a nonzero submodule of \(\operatorname{Ind}_{g^{(0)}}^{\mathfrak{s}}M\). Take a nonzero \(u\in V\) and \(u\in V\backslash M\). Write \(u=\sum_{i=1}^{n}a_{i}u_{i}\) where \(a_{i}\in U(\mathfrak{s}^{-}\oplus\mathbb{C}L_{0})\), \(u_{i}\in M\). Since \(M_{1}:=\sum_{1\leq i\leq n}\mathbb{C}[L_{0}]u_{i}\) (an \(\mathfrak{s}^{(0)}\)-submodule of \(M\) ) is a finitely generated \(\mathbb{C}[L_{0}]\)-module, we see that \(M_{1}\) is a free module over \(\mathbb{C}[L_{0}]\) by the assumption. Without loss of generality, we may assume that \(M_{1}=\oplus_{1\leq i\leq n}\mathbb{C}[L_{0}]u_{i}\) with basis \(u_{1},\cdots,u_{n}\) over \(\mathbb{C}[L_{0}]\). Note that each \(a_{i}\) can be expressed as a sum of eigenvectors of \(\operatorname{ad}L_{0}\) for \(1\leq i\leq n\). Assume that \(a_{1}\) has a maximal eigenvalue among all \(a_{i}\) for \(1\leq i\leq n\). Then \(a_{1}u_{1}\notin M\). For any \(\lambda\in\mathbb{C}\), let \(M_{1}(\lambda)\) be the \(\mathbb{C}[L_{0}]\)-submodule of \(M_{1}\) generated by \(u_{2},u_{3},\cdots,u_{n},L_{0}u_{1}-\lambda u_{1}\). Then \(M_{1}/M_{1}(\lambda)\) is a one-dimensional \(\mathfrak{s}^{(0)}\)-module with \(L_{0}(u_{1}+M_{1}(\lambda))=\lambda u_{1}+M_{1}(\lambda)\). By the Verma module theory for the super Virasoro algebra, we know that there exists \(0\neq\lambda_{0}\in\mathbb{C}\) such that the corresponding Verma module \(U=\operatorname{Ind}_{g^{(0)}}^{\mathfrak{s}}(M_{1}/M_{1}(\lambda_{0}))\) is irreducible (see [6, 23]). We know that \(u=a_{1}u_{1}\neq 0\) in \(U\). Hence we can find a homogeneous \(w\in U(\mathfrak{s}^{+})\) such that \(wa_{1}u_{1}=f_{1}(L_{0})u_{1}\) in \(\operatorname{Ind}_{g^{(0)}}^{\mathfrak{s}}M\), where \(0\neq f_{1}(L_{0})\in\mathbb{C}[L_{0}]\). So \(wu=\sum_{i=1}^{n}wa_{i}u_{i}=\sum_{i=1}^{n}f_{i}(L_{0})u_{i}\) for \(f_{i}(L_{0})\in\mathbb{C}[L_{0}]\), \(1\leq i\leq n\). Therefore, \(0\neq wu\in V\cap M_{1}\subset V\cap M\), as desired.
## 4. Simple smooth \(\mathfrak{g}\)-modules with nonzero level
In this section we will determine all simple smooth \(\mathfrak{g}\)-modules \(M\) of nonzero level.
For a given simple smooth \(\mathfrak{g}\)-module \(M\) with level \(\ell\neq 0\) and central charge \((c,z)\), From [44, Lemma 3.9], we know that \(\operatorname{Ann}_{M}(\mathfrak{f}^{+})\neq 0\). Then there is an \(n\in\mathbb{N}\) such that
\[M(n)=\operatorname{Ann}_{M}(\operatorname{span}_{\mathbb{C}}\{H_{i}|i\geq n\} )\cap\operatorname{Ann}_{M}(\mathfrak{f}^{+})\neq 0.\]
Let \(n_{M}=\min\{n\in\mathbb{Z}:M(n)\neq 0\}\), and
\[\begin{split} M_{0}&=\operatorname{Ann}_{M}( \operatorname{span}_{\mathbb{C}}\{H_{i},F_{i-\frac{1}{2}}|i\geq n_{M}\}),\text{ if }n_{M}>0,\\ M_{0}&=\operatorname{Ann}_{M}(\operatorname{span}_{ \mathbb{C}}\{H_{i},F_{i+\frac{1}{2}}|i\geq 0\}),\text{ if }n_{M}=0.\end{split}\]
**Lemma 4.1**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\). Then the following statements hold._
* \(n_{M}\in\mathbb{N}\)_, and_ \(H_{n-1}\) _acts injectively on_ \(M_{0}\)_._
* \(M_{0}\) _is a nonzero_ \(\mathfrak{g}^{(0,-(n_{M}-1))}\)_-module, and is invariant under the action of the operators_ \(\bar{L}_{i},\bar{G}_{i+\frac{1}{2}}\) _defined in (_3.1_) for_ \(i\in\mathbb{N}\)_._
Proof.: (i) Assume that \(n_{M}<0\). Take any nonzero \(v\in M_{0}\), we then have
\[v=\frac{1}{\ell}[H_{1},H_{-1}]v=0,\]
a contradiction. Hence, \(n_{M}\in\mathbb{N}\). The definition of \(n_{M}\) means that \(H_{n_{M}-1}\) acts injectively on \(M_{0}\).
(ii) It is obvious that \(M_{0}\neq 0\) by definition. For any \(w\in M_{0}\), \(i,k\in\mathbb{N}\), it is clear that \(L_{i}w,G_{i+\frac{1}{2}}w,H_{i-(n_{M}-1)}w,F_{i-(n_{M}-1)+\frac{1}{2}}w\in M_{0}\). So \(M_{0}\) is a nonzero \(\mathfrak{g}^{(0,-(n_{M}-1))}\)-module.
For \(i,n\in\mathbb{N}\), \(w\in M_{0}\), noticing \(n_{M}\geq 0\) by (i), it follows from (3.1) that
\[\begin{split}& H_{i+n_{M}}\bar{L}_{n}w=\bar{L}_{n}H_{i+n_{M}}w+(i+n_ {M})H_{n+i+n_{M}}w=0,\\ & F_{i+n_{M}\pm\frac{1}{2}}\bar{L}_{n}w=\bar{L}_{n}F_{i+n_{M}\pm \frac{1}{2}}w+(\frac{n}{2}+i+n_{M}\pm\frac{1}{2})F_{n+i+n_{M}\pm\frac{1}{2}} w=0,\end{split}\]
where sign " + " corresponds to \(n_{M}=0\). This implies that \(\bar{L}_{n}w\in M_{0}\) for \(n\in\mathbb{N}\). Similarly, \(\bar{G}_{n+\frac{1}{2}}w\in M_{0}\) for \(n\in\mathbb{N}\). That is, \(M_{0}\) is invariant under the action of the operators \(\bar{L}_{n},\bar{G}_{n+\frac{1}{2}}\) for \(n\in\mathbb{N}\).
**Proposition 4.2**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\) and central charge \((c,z)\). If \(n_{M}=0,1\), then \(M\cong U^{\mathfrak{g}}\otimes H(z)^{\mathfrak{g}}\) as \(\mathfrak{g}\)-modules for some simple modules \(U\in\mathcal{R}_{\mathfrak{s}}\) and \(H\in\mathcal{R}_{\mathfrak{h}\mathfrak{c}}\)._
Proof.: Since \(n_{M}=0,1\), we take any nonzero \(v\in M_{0}\). Then \(\mathbb{C}v\) is an \(\mathfrak{h}^{(0)}\)-module (the action of \(H_{0}\) is a scalar multiplication since it is in the center of \(\mathfrak{g}\)). Let \(H=U(\mathfrak{h}c)v\), the \(\mathfrak{h}c\)-submodule of \(M\) generated by \(v\). By some direct calculations we see that \(H\) is a simple \(\mathfrak{h}c\)-module. Then the desired assertion follows directly from Corollary 3.4.
Next we assume that \(n_{M}\geq 2\).
We define the operators \(L^{\prime}_{n}=L_{n}-\bar{L}_{n}\) and \(G^{\prime}_{n+\frac{1}{2}}=G_{n+\frac{1}{2}}-\bar{G}_{n+\frac{1}{2}}\) on \(M\) for \(n\in\mathbb{Z}\). Since \(M\) is a smooth \(\mathfrak{g}\)-module, then \(L^{\prime}_{n}\) is well-defined for any \(n\in\mathbb{Z}\).
Since
\[\begin{split}&[L_{n},H_{k}]=[\bar{L}_{n},H_{k}]=-kH_{n+k}+\delta_ {n+k,0}(n^{2}+n)\mathbf{c}_{2},\\ &[G_{n+\frac{1}{2}},H_{k}]=[\bar{G}_{n+\frac{1}{2}},H_{k}]=-F_{n+ k+\frac{1}{2}},\\ &[L_{n},F_{r}]=[\bar{L}_{n},F_{r}]=-(\frac{n}{2}+r)F_{n+r},\\ &[G_{p},F_{r}]=[\bar{G}_{p},F_{r}]=H_{p+q}+(2p+1)\delta_{p+q,0} \mathbf{c}_{2},\end{split} \tag{4.1}\]
we have
\[\begin{split}&[\bar{L}_{m},L_{n}]=[\bar{L}_{m},\bar{L}_{n}],\ [\bar{L}_{m},G_{p}]=[\bar{L}_{m},\bar{G}_{p}],\\ &[L_{m},\bar{G}_{p}]=[\bar{L}_{m},\bar{G}_{p}],\ [\bar{G}_{p},G_{q}]=[\bar{G}_{p},\bar{G}_{q}].\end{split} \tag{4.2}\]
By (3.1) and (4.2), we have
\[\begin{split}&[L^{\prime}_{m},L^{\prime}_{n}]=(m-n)L^{\prime}_{m+ n}+\frac{m^{3}-m}{12}\delta_{m+n,0}\mathbf{c}^{\prime}_{1},\\ &[G^{\prime}_{p},G^{\prime}_{q}]=2L^{\prime}_{p+q}+\frac{1}{3} \left(p^{2}-\frac{1}{4}\right)\delta_{p+q,0}\mathbf{c}^{\prime}_{1},\\ &[L^{\prime}_{m},G^{\prime}_{p}]=\left(\frac{m}{2}-p\right)G^{ \prime}_{m+p},\ [L^{\prime}_{m},\mathbf{c}^{\prime}_{1}]=0,\ \forall m,n\in\mathbb{Z},\ p,q\in\mathbb{Z}+\frac{1}{2},\end{split} \tag{4.3}\]
where \(\mathbf{c}^{\prime}_{1}=\mathbf{c}_{1}-(\frac{3}{2}-\frac{12z^{2}}{\ell}) \mathrm{id}_{M}\). So the algebra
\[\mathfrak{s}^{\prime}=\bigoplus_{n\in\mathbb{Z}}\left(\mathbb{C}L^{\prime}_{n} +\mathbb{C}G^{\prime}_{n+\frac{1}{2}}\right)\oplus\mathbb{C}\mathbf{c}^{\prime} _{1}\]
is isomorphic to the super Virasoro algebra \(\mathfrak{s}\). By (4.1), we have
\[[L^{\prime}_{n},H_{k}]=[G^{\prime}_{n+\frac{1}{2}},H_{k}]=0\]
and
\[[L^{\prime}_{n},F_{k+\frac{1}{2}}]=[G^{\prime}_{n+\frac{1}{2}},F_{k+\frac{1}{2 }}]=0,\forall n,k\in\mathbb{Z}\]
and hence \([\mathfrak{s}^{\prime},\mathfrak{bc}+\mathbb{C}\mathbf{c}_{2}]=0\). So the algebra
\[\mathfrak{g}^{\prime}=\mathfrak{s}^{\prime}\oplus(\mathfrak{bc}+\mathbb{C} \mathbf{c}_{2}) \tag{4.4}\]
is a direct sum of two ideals, and \(M=U(\mathfrak{g})v=U(\mathfrak{g}^{\prime})v\) for any \(v\in M\setminus\{0\}\). For any \(n\in\mathbb{Z}\), let
\[Y_{n}=\bigcap_{p\geq n}\mathrm{Ann}_{M_{0}}(\mathrm{span}_{\mathbb{C}}\{L^{ \prime}_{p},G^{\prime}_{p-\frac{1}{2}\tau(p)}\}),r_{M}=\min\{n\in\mathbb{Z}:Y_{ n}\neq 0\},K_{0}=Y_{r_{M}},\]
where \(\tau(p)=1\) if \(p\geq 1\), \(\tau(p)=-1\) if \(p\leq 0\).
Noting that \(M\) is a smooth \(\mathfrak{g}\)-module (also smooth \(\mathfrak{g}^{\prime}\)-module), we know that \(r_{M}<+\infty\). If \(Y_{n}\neq 0\) for all \(n\in\mathbb{Z}\), we define \(r_{M}=-\infty\) (see Lemma 4.3 (i) below). Denote by \(K=U(\mathfrak{bc})K_{0}\).
**Lemma 4.3**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\) and central charge \((c,z)\). Then the following statements hold._
* \(r_{M}\geq-1\) _or_ \(r_{M}=-\infty\)_._
* _If_ \(r_{M}\geq-1\)_, then_ \(K_{0}\) _is a_ \(\mathfrak{g}^{(0,-(n_{M}-1))}\)_-module and_ \(H_{n_{M}-1}\) _acts injectively on_ \(K_{0}\)_._
* \(K\) _is a_ \(\mathfrak{g}^{(0,-\infty)}\)_-module and_ \(K(z)^{\mathfrak{g}}\) _has a_ \(\mathfrak{g}\)_-module structure by (_3.1_)._
* \(K_{0}\) _and_ \(K\) _are invariant under the actions of_ \(L_{n},G_{n+\frac{1}{2}}\) _and_ \(L^{\prime}_{n},G^{\prime}_{n+\frac{1}{2}}\) _for_ \(n\in\mathbb{N}\)_._
* _If_ \(r_{M}\geq 2\)_, then_ \(L^{\prime}_{r_{M}-1}\) _acts injectively on_ \(K_{0}\) _and_ \(K\)_._
Proof.: (i) If \(Y_{-2}\neq 0\), then \(L^{\prime}_{p}K_{0}=0,p\geq-2\). We deduce that \(\mathfrak{s}^{\prime}K_{0}=0\) and hence \(r_{M}=-\infty\). If \(Y_{-2}=0\), then \(r_{M}\geq-1\).
(ii) For any \(0\neq v\in K_{0}\) and \(x\in\mathfrak{g}^{(0,-(n_{M}-1))}\), it follows from Lemma 4.1 (ii) that \(xv\in M_{0}\). We first show that \(L^{\prime}_{p}xv=0,p\geq r_{M}\). Indeed, \(L^{\prime}_{p}H_{k}v=H_{k}L^{\prime}_{p}v=0\) and \(L^{\prime}_{p}F_{k+\frac{1}{2}}v=F_{k+\frac{1}{2}}L^{\prime}_{p}v=0\) by (4.4) for any \(k\geq-(n_{M}-1)\). Moreover, it follows from (4.2) and (4.3) that
\[L^{\prime}_{p}L_{n}v=L_{n}L^{\prime}_{p}v+[L^{\prime}_{p},L_{n}]v=(n-p)L^{ \prime}_{p+n}v=0,\forall n\in\mathbb{N},\]
\[L^{\prime}_{p}G_{n+\frac{1}{2}}v=G_{n+\frac{1}{2}}L^{\prime}_{p}v+[L^{\prime}_ {p},G_{n-\frac{1}{2}}]v=(\frac{p}{2}-n-\frac{1}{2})G^{\prime}_{p+n+\frac{1}{2} }v=0,\forall n\in\mathbb{N}.\]
Hence, \(L^{\prime}_{p}xv=0,p\geq r_{M}\). Similarly \(G^{\prime}_{p-\frac{1}{2}\tau(r_{M})}xv=0,p\geq r_{M}\). That is, \(xv\in K_{0}\), as desired.
Since \(0\neq K_{0}\subseteq M_{0}\), we see that \(H_{n_{M}-1}\) acts injectively on \(K_{0}\) by Lemma 4.1 (i).
(iii) follows from (ii).
(iv) The statement that \(K_{0}\) is invariant under the actions of \(L_{n},G_{n+\frac{1}{2}}\) for \(n\in\mathbb{N}\) follows from the definition of \(K_{0}\) and the computations:
\[L^{\prime}_{p}L_{n}K_{0}=G^{\prime}_{p-\frac{1}{2}\tau(p)}L_{n}K_{0}=0,\ \ L^{\prime}_{p}G_{n+\frac{1}{2}}K_{0}=G^{\prime}_{p-\frac{1}{2}\tau(p)}G_{n+ \frac{1}{2}}K_{0}=0,\forall n\geq 0,p\geq r_{M},\]
\[H_{k+n_{M}}L_{n}K_{0}=F_{k+n_{M}\pm 1/2}L_{n}K_{0}=0,\ \ H_{k+n_{M}}G_{n+\frac{1}{2}}K_{0}=F_ {k+n_{M}\pm 1/2}G_{n+\frac{1}{2}}K_{0}=0,\forall n\geq 0,p\geq r_{M}.\]
Similarly, \(K_{0}\) is invariant under the actions of \(L^{\prime}_{n},G^{\prime}_{n+\frac{1}{2}}\) for \(n\in\mathbb{N}\).
Using these just established results on \(K_{0}\), the definition of \(K\) and \(K_{0}\), and the fact that \([\mathfrak{s}^{\prime},\mathfrak{b}\mathfrak{c}]=0\), we can verify that \(L^{\prime}_{n}K\subset K,G^{\prime}_{n+\frac{1}{2}}K\subset K\). The statement that \(K\) is invariant under the actions of \(L_{n},G_{n+\frac{1}{2}}\) follows from the fact that \([\mathfrak{s},\mathfrak{b}\mathfrak{c}]=\mathfrak{b}\mathfrak{c}\).
(v) If there exists \(v\in K_{0}\) such that \(L^{\prime}_{ru-1}v=0\), then \(u=G^{\prime}_{r_{M}-\frac{1}{2}}v\neq 0\) by the definition of \(r_{M}\). However \(G^{\prime}_{r_{M}-\frac{3}{2}}u=2L^{\prime}_{2r_{M}-3}v=0\) since \(r_{M}\geq 2\). We see that \(L^{\prime}_{r_{M}-1}u=\frac{1}{2}G^{\prime}_{r_{M}+\frac{1}{2}}G^{\prime}_{r_{M }-\frac{1}{2}}u=0\) which contradicts the definition of \(r_{M}\) (note that \(u\in M_{0}\)). Then \(L^{\prime}_{r_{M}-1}\) acts injectively on \(K_{0}\), and then on \(K\) since \([\mathfrak{s}^{\prime},\mathfrak{b}\mathfrak{c}]=0\).
**Proposition 4.4**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\) and central charge \((c,z)\). If \(r_{M}=-\infty\), then \(M=K(z)^{\mathfrak{g}}\). Hence \(c=\frac{3}{2}-\frac{12z^{2}}{\ell}\) and \(K\) is a simple \(\mathfrak{b}\mathfrak{c}\)-module._
Proof.: Since \(r_{M}=-\infty\), we see that \(\mathfrak{s}^{\prime}K_{0}=0\). This together with (4.3) implies that \(c_{1}=\frac{3}{2}-\frac{12z^{2}}{\ell}\). Noting that \([\mathfrak{s}^{\prime},\mathfrak{b}\mathfrak{c}+\mathbb{C}\mathfrak{c}_{2}]=0\), we further obtain that \(\mathfrak{s}^{\prime}K=0\), that is, \(L_{n}v=\bar{L}_{n}v,G_{n+\frac{1}{2}}v=\bar{G}_{n+\frac{1}{2}}v\in K\) for any \(v\in K\) and \(n\in\mathbb{Z}\). Hence \(K(z)^{\mathfrak{g}}\) is a \(\mathfrak{g}\)-submodule of \(M\), yielding that \(M=K(z)^{\mathfrak{g}}\). In particular, \(K\) is a simple \(\mathfrak{b}\mathfrak{c}\)-module.
**Proposition 4.5**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\) and central charge \((c,z)\). If \(r_{M}\geq 2\) and \(n_{M}\geq 2\), then \(K_{0}\) is a simple \(\mathfrak{g}^{(0,-(n_{M}-1))}\)-module and \(M\cong\mathrm{Ind}^{\mathfrak{g}}_{\mathfrak{g}^{(0,-(n_{M}-1))}}K_{0}\)._
Proof.: We first show that \(\mathrm{Ind}^{\mathfrak{g}^{(0,-\infty)}}_{\mathfrak{g}^{(0,-(n_{M}-1))}}K_{0}\cong K\) as \(\mathfrak{g}^{(0,-\infty)}\) modules. For that, let
\[\phi:\ \mathrm{Ind}^{\mathfrak{g}^{(0,-\infty)}}_{\mathfrak{g}^{(0,-(n_{M }-1))}}K_{0} \longrightarrow K\] \[\sum_{\mathbf{j}\in\mathbb{M},\mathbf{j}\in\mathbb{M},\mathbf{j} \in\mathbb{M}^{1}}H^{\mathbf{j}}F^{\mathbf{l}}\otimes v_{\mathbf{j}\mathbf{j}}\] \[\mapsto \sum_{\mathbf{j}\in\mathbb{M},\mathbf{j}\in\mathbb{M}^{1}}H^{ \mathbf{j}}F^{\mathbf{l}}v_{\mathbf{j}\mathbf{j}},\]
where \(v_{\mathbf{j}\mathbf{i}}\in K_{0}\setminus\{0\}\), \(H^{\mathbf{j}}F^{\mathbf{i}}=\cdots H^{j_{2}}_{-2(n_{M}-1)}H^{j_{1}}_{-1-(n_{M}-1)} \cdots F^{j_{2}}_{-\frac{3}{2}-(n_{M}-1)}F^{l_{1}}_{-\frac{1}{2}-(n_{M}-1)}\in U (\mathfrak{b}\mathfrak{c})\). Then \(\phi\) is a \(\mathfrak{g}^{(0,-\infty)}\)-module epimorphism and \(\phi|_{K_{0}}\) is one-to-one.
**Claim**. Any nonzero submodule \(V\) of \(\operatorname{Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}^{(0,-\infty)}} K_{0}\) does not intersect with \(K_{0}\) trivially.
Assume \(V\cap K_{0}=0\). Let \(v=\sum_{\mathbf{j}\in\mathbb{M},\mathbf{j}\in\mathbb{M}_{1}}H^{\mathbf{j}}F^{ \mathbf{i}}\otimes v_{\mathbf{j}\mathbf{i}}\in V\backslash K_{0}\) with minimal \(\deg(v):=(\underline{\mathbf{j}},\underline{\mathbf{l}})\). Then \((\mathbf{0},\mathbf{0})\prec(\underline{\mathbf{j}},\underline{\mathbf{l}})\).
Let \(p=\min\{s:i_{s}\neq 0\}\). Since \(H_{p+n_{M}-1}v_{\mathbf{j}\mathbf{i}}=0\), we have \(H_{p+n_{M}-1}H^{\mathbf{j}}F^{\mathbf{i}}\otimes v_{\mathbf{j}\mathbf{i}}=[H_{ p+n_{M}-1},H^{\mathbf{j}}]F^{\mathbf{i}}v_{\mathbf{j}\mathbf{i}}\). If \(j_{p}=0\) then \(H_{p+n_{M}-1}H^{\mathbf{j}}F^{\mathbf{i}}\otimes v_{\mathbf{j}\mathbf{i}}=0\), and if \(j_{p}\neq 0\), noticing the level \(\ell\neq 0\), then \([H_{p+n_{M}-1},h^{\mathbf{j}}F^{\mathbf{i}}]=\lambda H^{\mathrm{i}-\epsilon_{p }}F^{\mathbf{i}}\) for some \(\lambda\in\mathbb{C}^{*}\) and hence
\[\deg([H_{p+n_{M}-1},H^{\mathbf{j}}]F^{\mathbf{i}}v_{\mathbf{j}\mathbf{i}})=( \mathbf{j}-\epsilon_{p},\mathbf{l})\preceq(\underline{\mathbf{j}}-\epsilon_{p },\underline{\mathbf{l}}),\]
where the equality holds if and only if \(\underline{\mathbf{j}}=\mathbf{j}\). Hence \(\deg(H_{p+n_{M}-1}v)=(\underline{\mathbf{j}}-\epsilon_{p},\underline{\mathbf{ l}})\prec(\underline{\mathbf{j}},\underline{\mathbf{l}})\) and \(H_{p+n_{M}-1}v\in V\), contrary to the choice of \(v\).
Similarly, set \(p=\min\{s:j_{s}\neq 0\}\), we can also get a contradiction. So the claim holds.
By the claim, as \(\mathfrak{g}\)-modules, we have
\[\operatorname{Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}}K_{0}\cong \operatorname{Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}}(\operatorname {Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}^{(0,-\infty)}}K_{0})\cong \operatorname{Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}}K.\]
By the way, it is clear that \(\operatorname{Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}}K\cong \operatorname{Ind}_{\mathfrak{g}^{(0)}}^{\mathfrak{g}^{\prime}}K\) as vector spaces. Moreover, we have the following \(\mathfrak{g}\)-module epimorphism
\[\pi:\operatorname{Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}}K= \operatorname{Ind}_{\mathfrak{g}^{(0)}}^{s^{\prime}}K \rightarrow M,\] \[\sum_{\mathbf{i}\in\mathbb{M},\mathbf{k}\in\mathbb{M}_{1}}L^{ \prime}G^{\prime\mathbf{k}}\otimes v_{\mathbf{i}\mathbf{,k}} \mapsto \sum_{\mathbf{i}\in\mathbb{M},\mathbf{k}\in\mathbb{M}_{1}}L^{ \prime}G^{\prime\mathbf{k}}v_{\mathbf{i}\mathbf{,k}},\]
where \(L^{\prime i}=\cdots(L^{\prime}_{-2})^{i_{2}}(L^{\prime}_{-1})^{i_{1}}\) and \(G^{\prime\mathbf{k}}=\cdots(G^{\prime}_{-\frac{1}{2}})^{k_{2}}(G^{\prime}_{- \frac{1}{2}})^{k_{1}}\). We see that \(\pi\) is also an \(\mathfrak{s}^{\prime}\)-module epimorphism.
By Lemma 4.3 (v), we see that \(L^{\prime}_{r_{M}-1}\) act injectively on \(K\). By the proof of Theorem 3.1 in [32] we know that any nonzero \(\mathfrak{s}^{\prime}\)-submodule of \(\operatorname{Ind}_{\mathfrak{s}^{\prime(0)}}^{\mathfrak{s}^{\prime}}K\) contain nonzero vectors of \(K\). Note that \(\pi|_{K}\) is one-to-one, we see that the image of any nonzero \(\mathfrak{g}\)-submodule (and hence \(\mathfrak{s}^{\prime}\)-submodule) of \(\operatorname{Ind}_{\mathfrak{g}^{(0,-\infty)}}^{\mathfrak{g}}K\) must be a nonzero \(\mathfrak{g}\)-submodule of \(M\) and hence be the whole module \(M\), which forces that the kernel of \(\pi\) must be \(0\). Therefore, \(\pi\) is an isomorphism. Since \(M\) is simple, we see \(K_{0}\) is a simple \(\mathfrak{g}^{(0,-(n_{M}-1))}\)-module.
**Lemma 4.6**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\). If \(r_{M}=1\), then \(L^{\prime}_{0}\) has an eigenvector in \(K\)._
Proof.: Lemma 4.3 (iv) means that \(K\) is a \(\mathfrak{g}^{\prime(0,-\infty)}\)-module. Assume that any finitely generated \(\mathbb{C}[L^{\prime}_{0}]\)-submodule of \(K\) is a free \(\mathbb{C}[L^{\prime}_{0}]\)-module. By Lemma 3.10 we see that the following \(\mathfrak{g}^{\prime}\)-module homomorphism
\[\varphi:\operatorname{Ind}_{\mathfrak{g}^{\prime(0,-\infty)}}^{ \mathfrak{s}^{\prime}}K=\operatorname{Ind}_{\mathfrak{s}^{\prime(0)}}^{\mathfrak{ s}^{\prime}}K \longrightarrow M,\] \[x\otimes u \mapsto xu,x\in U(\mathfrak{s}^{\prime}),u\in K.\]
is an isomorphism. So \(M=\operatorname{Ind}_{s^{\prime(0)}}^{s^{\prime}}K\), and consequently, \(K\) is a simple \(\mathfrak{g}^{\prime(0,-\infty)}\)-module. Since \(r_{M}=1\) and \(s^{\prime+}K=0\), \(K\) can be seen as a simple module over the Lie superalgebra \(\mathfrak{h}\varheartsuit\varheartsuit\varheartsuit\varheartsuit\varheartsuit\varheuit \varcal L^{\prime}_{0}\), where \(\varheartsuit\varhecal L^{\prime}_{0}\) lies in the center of the Lie superalgebra. Schur's lemma tells us that \(L^{\prime}_{0}\) acts as a scalar on \(K\), a contradiction. So this case will not occur.
Therefore, there exists some finitely generated \(\mathbb{C}[L^{\prime}_{0}]\)-submodule \(W\) of \(K\) that is not a free \(\mathbb{C}[L^{\prime}_{0}]\)-module. Since \(\mathbb{C}[L^{\prime}_{0}]\) is a principal ideal domain, by the structure theorem of finitely generated modules over a principal ideal domain, there exists a monic polynomial \(f(L^{\prime}_{0})\in\mathbb{C}[L^{\prime}_{0}]\) with minimal positive degree and nonzero element \(u\in W\) such that \(f(L^{\prime}_{0})u=0\). Write \(f(L^{\prime}_{0})=\Pi_{1\leq i\leq s}(L^{\prime}_{0}-\lambda_{i})\), \(\lambda_{1},\cdots,\lambda_{s}\in\mathbb{C}\). Denote \(w:=\prod_{i=1}^{s-1}(L^{\prime}_{0}-\lambda_{i})u\neq 0\), we see \((L^{\prime}_{0}-\lambda_{s})w=0\) where we make convention that \(w=u\) if \(s=1\). Then \(w\) is a desired eigenvector of \(L^{\prime}_{0}\).
**Proposition 4.7**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\) and central charge \((c,z)\). If \(r_{M}=0,\pm 1\), then \(K\) is a simple \(\mathfrak{h}\varheartsuit\varheartsuit\)-module and \(M\cong U^{\mathfrak{g}}\otimes K(z)^{\mathfrak{g}}\) for some simple module \(U\in\mathcal{R}_{\mathfrak{s}}\)._
Proof.: If \(r_{M}=1\), then by Lemma 4.6 we know that there exists \(0\neq u\in K\) such that \(L^{\prime}_{0}u=\lambda u\) for some \(\lambda\neq 0\); if \(r_{M}=0,-1\), then \(L^{\prime}_{0}K=0\). In summary, for all the three cases, \(L^{\prime}_{0}\) has an eigenvector in \(K\). Since \(M\) is a simple \(\mathfrak{g}^{\prime}\)-module, Schur's lemma implies that \(\mathbf{c}^{\prime}_{1},\mathbf{c}_{2},\mathbf{c}_{3}\) act as scalars on \(M\). So \(M\) is a weight \(\mathfrak{g}^{\prime}\)-module, and \(K\) is a weight module for \(\mathfrak{g}^{\prime(s_{M},-\infty)}\), where \(s_{M}=r_{M}-\delta_{r_{M},1}\). Take a weight vector \(u_{0}\in K\) with \(L^{\prime}_{0}u_{0}=\lambda_{0}u_{0}\) for some \(\lambda_{0}\in\mathbb{C}\).
Set \(K^{\prime}=U(\mathfrak{h}\varheartsuit)u_{0}\), which is an \(\mathfrak{h}\varheartsuit\)-submodule of \(K\). Now we define the \(\mathfrak{g}^{\prime}\)-module \(K^{\prime\mathfrak{g}^{\prime}}\) with trivial action of \(\mathfrak{s}^{\prime}\). Let \(\mathbb{C}v_{0}\) be the one-dimensional \(\mathfrak{g}^{\prime(s_{M},-\infty)}\)-module defined by
\[L^{\prime}_{0}v_{0} =\lambda_{0}v_{0},\ \mathbf{c}^{\prime}_{1}v_{0}=(c-\frac{3}{2}+ \frac{12z^{2}}{\ell})v_{0},\] \[L^{\prime}_{n}v_{0} =G^{\prime}_{m+\frac{1}{2}}v_{0}=H_{k}v_{0}=F_{k+\frac{1}{2}}v_{0 }=\mathbf{c}_{2}v_{0}=\mathbf{c}_{3}v_{0}=0,\ \ 0\neq n\geq r_{M},m\geq s_{M},k\in\mathbb{Z}.\]
Then \(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{g}^{\prime}}\) is a \(\mathfrak{g}^{\prime(s_{M},-\infty)}\)-module with central charge \((c_{1}-\frac{3}{2}+\frac{12z^{2}}{\ell},z)\) and level \(\ell\). There is a \(\mathfrak{g}^{\prime(s_{M},-\infty)}\)-module homomorphism
\[\varphi_{K^{\prime}}:\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{g} ^{\prime}} \longrightarrow M,\] \[v_{0}\otimes u \mapsto u,\forall u\in K^{\prime},\]
which is injective and can be extended to be the following \(\mathfrak{g}^{\prime}\)-module epimomorphism
\[\varphi:\operatorname{Ind}_{\mathfrak{g}^{\prime(s_{M},-\infty)}}^{\mathfrak{g }^{\prime}}(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{g}^{\prime}}) \longrightarrow M,\] \[x(v_{0}\otimes u) \mapsto xu,x\in U(\mathfrak{g}^{\prime}),u\in K^{\prime}.\]
By Lemma 3.2 we know that
\[\operatorname{Ind}_{\mathfrak{g}^{\prime(s_{M},-\infty)}}^{\mathfrak{g}^{ \prime}}(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{g}^{\prime}})\cong( \operatorname{Ind}_{\mathfrak{g}^{\prime(s_{M},-\infty)}}^{\mathfrak{g}^{ \prime}}\mathbb{C}v_{0})\otimes K^{\prime\mathfrak{g}^{\prime}}=(\operatorname{ Ind}_{\mathfrak{g}^{\prime(s_{M})}}^{\mathfrak{g}^{\prime}}\mathbb{C}v_{0})^{ \mathfrak{g}^{\prime}}\otimes K^{\prime\mathfrak{g}^{\prime}}.\]
Then we have the following \(\mathfrak{g}^{\prime}\)-module epimorphism
\[\varphi^{\prime}:(\operatorname{Ind}_{\mathfrak{g}^{\prime(s_{M} )}}^{\mathfrak{g}^{\prime}}\mathbb{C}v_{0})^{\mathfrak{g}^{\prime}}\otimes K^{ \prime\mathfrak{g}^{\prime}} \longrightarrow M,\] \[xv_{0}\otimes u \mapsto xu,x\in U(\mathfrak{s}^{\prime}),u\in K^{\prime}.\]
Note that \((\operatorname{Ind}_{\mathfrak{g}^{\prime(s_{M})}}^{\mathfrak{g}^{\prime}} \mathbb{C}v_{0})^{\mathfrak{g}^{\prime}}\otimes K^{\prime\mathfrak{g}^{\prime}} \cong\operatorname{Ind}_{\mathfrak{g}^{\prime(s_{M})}}^{\mathfrak{g}^{\prime}}( \mathbb{C}v_{0}\otimes K^{\prime\mathfrak{g}^{\prime}})\) as \(\mathfrak{s}^{\prime}\)-modules, and \(\varphi^{\prime}\) is also an \(\mathfrak{s}^{\prime}\)-module epimorphism, \(\varphi^{\prime}|_{\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{g}^{\prime}}}\) is one-to-one, and \((\operatorname{Ind}_{\mathfrak{g}^{\prime(s_{M})}}^{\mathfrak{g}^{\prime}} \mathbb{C}v_{0})^{\mathfrak{g}^{\prime}}\otimes K^{\prime\mathfrak{g}^{ \prime}}\) is a highest weight \(\mathfrak{s}^{\prime}\)-module.
Let \(V=\operatorname{Ind}_{\mathfrak{s}^{\prime}(M)}^{\mathfrak{s}^{\prime}}\mathbb{C}v_{0}\) and \(\mathfrak{R}=\operatorname{Ker}(\varphi^{\prime})\). Since \(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}}=\{u\in V^{\mathfrak{ s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}\mid L^{\prime}_{0}u=\lambda_{0}u\}\), we have
\[(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}})\cap\mathfrak{R}=0. \tag{4.5}\]
Let \(\mathfrak{R}^{\prime}\) be the sum of all \(\mathfrak{s}^{\prime}\)-submodules \(W\) of \(V^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}\) with \(W\cap(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}})=0\), that is, the unique maximal (weight) \(\mathfrak{s}^{\prime}\)-submodule of \(V^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}\) with trivial intersection with \((\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}})\). It is obvious that \(\mathfrak{R}\subseteq\mathfrak{R}^{\prime}\) by (4.5).
Next we further show that \(\mathfrak{R}=\mathfrak{R}^{\prime}\). For that, take any \(\mathfrak{s}^{\prime}\)- submodule \(W\) of \(V^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}\) such that \(W\cap(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}})=0\). Then for any weight vector \(w=\sum_{\mathbf{k}\in\mathbb{M}}L^{\mathbf{d}}G^{\prime\mathbf{k}}v_{0}\otimes u _{\mathbf{l},\mathbf{k}}\in W\), where \(u_{\mathbf{l},\mathbf{k}}\in K^{\prime\mathfrak{s}^{\prime}}\),
\[L^{\mathbf{d}}=\cdots(L^{\prime}_{-2})^{l_{2}}(L^{\prime}_{-1})^{l_{1}}\text{ and }G^{\prime\mathbf{k}}=\cdots(G^{\prime}_{-\frac{3}{2}})^{k_{2}}(G^{\prime}_{-\frac{1}{2 }})^{k_{1}}\text{ if }r_{M}=1,0,\text{ or}\]
\[L^{\mathbf{d}}=\cdots(L^{\prime}_{-2})^{l_{2}}\text{ and }G^{\prime\mathbf{k}}= \cdots(G^{\prime}_{-\frac{3}{2}})^{k_{2}}\text{ if }r_{M}=-1,\]
and all \(\operatorname{w}(\mathbf{l},\mathbf{k})\geq\frac{1}{2}\) are equal. Note that \(H_{k}w=\sum_{\mathbf{k}\in\mathbb{M},\mathbf{k}\in\mathbb{M}_{1}}L^{\mathbf{d }}G^{\prime\mathbf{k}}v_{0}\otimes H_{k}u_{\mathbf{l},\mathbf{k}}\) either equals to \(0\) or has the same weight as \(w\) under the action of \(L^{\prime}_{0}\). So \(U(\mathfrak{g}^{\prime})W\cap(\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{s}^{ \prime}})=0\), i.e., \(U(\mathfrak{g}^{\prime})W\subset\mathfrak{R}^{\prime}\). Moreover, we have \(U(\mathfrak{g}^{\prime})\mathfrak{R}^{\prime}\cap(\mathbb{C}v_{0}\otimes K^{ \prime\mathfrak{s}^{\prime}})=0\). The maximality of \(\mathfrak{R}^{\prime}\) forces that \(\mathfrak{R}^{\prime}=U(\mathfrak{g}^{\prime})\mathfrak{R}^{\prime}\) is a proper \(\mathfrak{g}^{\prime}\)-submodule of \(V^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}\). Since \(\mathfrak{R}\) is a maximal proper \(\mathfrak{g}^{\prime}\)-submodule of \(V^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}\), it follows that \(\mathfrak{R}^{\prime}\subseteq\mathfrak{R}\). Thus \(\mathfrak{R}=\mathfrak{R}^{\prime}\).
Now \(\mathfrak{R}\) is just the unique maximal (weight) \(\mathfrak{s}^{\prime}\)-submodule of \(V^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}\) with trivial intersection with \((\mathbb{C}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}})\). By Lemma 3.9 we know that \(\mathfrak{R}\) is generated by \(\mathbb{C}P_{1}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}}\) and \(\mathbb{C}P_{2}v_{0}\otimes K^{\prime\mathfrak{s}^{\prime}}\). Let \(V^{\prime}\) be the maximal \(\mathfrak{s}^{\prime}\)-submodule of \(V\) generated by \(P_{1}v_{0}\) and \(P_{2}v_{0}\), then \(\mathfrak{R}=V^{\prime\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{ \prime}}\). Therefore,
\[M\cong(V^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}})/(V^{ \prime\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}})\cong(V/V^{ \prime})^{\mathfrak{s}^{\prime}}\otimes K^{\prime\mathfrak{s}^{\prime}}, \tag{4.6}\]
which forces that \(K^{\prime\mathfrak{s}^{\prime}}\) is a simple \(\mathfrak{g}^{\prime}\)-module and hence a simple \(\mathfrak{h}\)-module. So \(K^{\prime}\) is a simple \(\mathfrak{h}\)-module. By Corollary 3.4 we know there exists a simple \(\mathfrak{s}\)-module \(U\in\mathcal{R}_{\mathfrak{s}}\) such that \(M\cong U^{\mathfrak{s}}\otimes K^{\prime}(z)^{\mathfrak{s}}\). From this isomorphism and some computations we see that \(K_{0}\subseteq v_{0}\otimes K^{\prime}(z)^{\mathfrak{s}}\), where \(v_{0}\) is a highest weight vector. So \(K=K^{\prime}\).
We are now in a position to present the following main result on classifications of simple smooth \(\mathfrak{g}\)-modules with nonzero level.
**Theorem 4.8**.: _Let \(M\) be a simple smooth \(\mathfrak{g}\)-module with level \(\ell\neq 0\) and central charge \((c,z)\). The invariants \(n_{M},r_{M}\) of \(M\), \(K_{0},K\) are defined as before. Then_
\[M\cong\begin{cases}K(z)^{\mathfrak{s}},&\text{ if }r_{M}=-\infty,\\ U^{\mathfrak{s}}\otimes H(z)^{\mathfrak{s}},&\text{ if }r_{M}=0,\pm 1\text{ or }n_{M}=0,1,\\ \operatorname{Ind}_{\mathfrak{g}^{(0,-(n_{M}-1))}}^{\mathfrak{s}}K_{0},& \text{ otherwise,}\end{cases}\]
_for some simple \(U\in\mathcal{R}_{\mathfrak{s}},H\in\mathcal{R}_{\mathfrak{h}\mathfrak{h}}\)._
Proof.: The assertion follows directly from Proposition 4.2, Proposition 4.4, Proposition 4.5 and Proposition 4.7.
## 5. Simple smooth g-modules at level zero
In this section we will determine simple smooth g-module \(M\) of level zero with a mild condition.
### Constructing simple smooth \(\mathfrak{g}\)-modules
For any \(q\in\mathbb{N},c,d,z\in\mathbb{C}\), let \(V\) be a simple g\({}^{(0,-q)}\)-module with the actions of \(H_{0},\mathbf{c}_{1},\mathbf{c}_{2},\mathbf{c}_{3}\) as scalars \(d,c,z,l\), respectively. In this section we mainly consider the corresponding induced module \(\operatorname{Ind}_{q^{(0,-q)}}^{\mathfrak{g}}V\). It is well known that \(\operatorname{Ind}_{q^{(0,-q)}}^{\mathfrak{g}}V\) becomes the Verma module if \(q=0\) and \(V\) is a one dimensional \(\mathfrak{g}^{(0,0)}\)-module; \(\operatorname{Ind}_{q^{(0,-q)}}^{\mathfrak{g}}V\) becomes a Whittaker module if \(q=0\) and \(V=\operatorname{Ind}_{q^{(k,0)}}^{q^{(0,0)}}W\), where \(W\) is a finite dimensional simple \(\mathfrak{g}^{(k,0)}\)-module for some \(k\in\mathbb{Z}_{+}\) (see Section 6).
**Theorem 5.1**.: _Let \(c,z,d\in\mathbb{C},q\in\mathbb{N}\), and \(V\) be a simple \(\mathfrak{g}^{(0,-q)}\)-module with central charge \((c,z)\), level 0, and the action of \(H_{0}\) as a scalar \(d\). Assume that there exists \(t\in\mathbb{Z}_{+}\) satisfying the following two conditions:_
1. _the action of_ \(H_{t}\) _on_ \(V\) _is injective;_
2. \(H_{i}V=0\) _for all_ \(i>t\) _and_ \(L_{j}V=0\) _for all_ \(j>t+q\)_._
_Then_
1. \(G_{j-\frac{1}{2}}V=0\) _for all_ \(j>t+q\)_, and_ \(F_{i-\frac{1}{2}}V=0\) _for all_ \(i>t\)_,_
2. _the induced module_ \(\operatorname{Ind}_{q^{(0,-q)}}^{\mathfrak{g}}V\) _is a simple_ \(\mathfrak{g}\)_-module._
Proof.: (i) By (b), for \(j\geq t+q\) we have \(G_{j+\frac{1}{2}}^{2}V=L_{2j+1}V=0\). If \(G_{j+\frac{1}{2}}V=0\) we are done. Otherwise, \(W=G_{j+\frac{1}{2}}V\) is a proper subspace of \(V\). For \(r\in\mathbb{Z}_{+}\), we have
\[F_{i+\frac{1}{2}}W\subseteq W,\ \ H_{i}W\subseteq W,\forall i\geq-q,\]
\[G_{r-\frac{1}{2}}W=G_{r-\frac{1}{2}}G_{j+\frac{1}{2}}V=L_{r+j}V-G_{j+\frac{1} {2}}G_{r-\frac{1}{2}}V\subset G_{j+\frac{1}{2}}V=W,\]
\[2L_{r}W=[G_{r-\frac{1}{2}},G_{\frac{1}{2}}]G_{j+\frac{1}{2}}V=G_{r-\frac{1}{2 }}G_{\frac{1}{2}}G_{j+\frac{1}{2}}V+G_{\frac{1}{2}}G_{r-\frac{1}{2}}G_{j+\frac {1}{2}}V\subset W.\]
It follows that \(W\) is a proper \(\mathfrak{g}^{(0,-q)}\)-submodule of \(V\). Then \(W=G_{j+\frac{1}{2}}V=0\) for \(j\geq t+q\) since \(V\) is simple.
For any \(i>t\), \(iF_{i+\frac{1}{2}}V=[H_{i},G_{\frac{1}{2}}]V=0\). Note that \(F_{i+\frac{1}{2}}^{2}V=0\). If \(F_{i+\frac{1}{2}}V=0\) we are done. Otherwise \(U=F_{i+\frac{1}{2}}V\) is a proper subspace of \(V\). We can easily verify that
\[F_{i+\frac{1}{2}}U\subseteq U,\ \ H_{i}U\subseteq U,\forall i\geq-q,\]
\[G_{r+\frac{1}{2}}U\subseteq U,\ \ L_{r}U\subset U,\forall r\geq 0.\]
So \(U\) is a \(\mathfrak{g}^{(0,-q)}\)-submodule of \(V\). Thus \(U=F_{i+\frac{1}{2}}V=0\).
(ii) Suppose that \(W\) is a nonzero proper \(\mathfrak{g}\)-submodule of \(\operatorname{Ind}_{q^{(0,-q)}}^{\mathfrak{g}}V\). Now we are going to deduce some contradictions. Let \(v\in W\setminus\{0\}\) such that \(\deg(v)=(\underline{\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}}, \underline{\mathbf{l}})\) is minimal. Write
\[v=\sum_{(\mathbf{i},\underline{\mathbf{j}},\underline{\mathbf{k}},\underline {\mathbf{l}})\in\operatorname{supp}(v)}L^{i}H^{\underline{\mathbf{j}}}G^{ \mathbf{k}}F^{\mathbf{l}}v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}}, \tag{5.1}\]
where all \(v_{i,j,k,l}\in V\) and only finitely many of them are nonzero. Note that \(\mathbf{j}=\mathbf{0}\) or \(\hat{j}>q\), and \(\mathbf{l}=\mathbf{0}\) or \(\hat{l}>q\) for any \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\mathrm{supp}(v)\).
**Claim 1.**\(\underline{\mathbf{k}}=\mathbf{0}\).
If \(\underline{\mathbf{k}}\neq\mathbf{0}\), then \(\underline{\hat{k}}>0\). We will show that \(\deg(F_{\underline{\hat{k}}+t-\frac{1}{2}}v)=(\underline{\mathbf{i}}, \underline{\mathbf{j}},\underline{\mathbf{k}}^{\prime},\underline{\mathbf{l}})\). It suffices to consider those \(v_{i,j,k,l}\) with
\[L^{\underline{\mathbf{i}}}H^{\underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{ i}}v_{i,j,k,l}\neq 0.\]
For any \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\mathrm{supp}(v)\), since \(F_{\underline{\hat{k}}+t-\frac{1}{2}}v_{i,j,k,l}=0\) we see that
\[F_{\underline{\hat{k}}+t-\frac{1}{2}}L^{\underline{\mathbf{i}}}H^{\underline {\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k,l}=[F_{\underline{\hat{k}}+ t-\frac{1}{2}},L^{\underline{\mathbf{i}}}]H^{\underline{\mathbf{j}}}G^{\mathbf{k}}F^{ \mathbf{i}}v_{i,j,k,l}\pm L^{\underline{\mathbf{i}}}H^{\underline{\mathbf{j}}}[ F_{\underline{\hat{k}}+t-\frac{1}{2}},G^{\mathbf{k}}]F^{\mathbf{i}}v_{i,j,k,l}.\]
Since \(\mathrm{w}([F_{\underline{\hat{k}}+t-\frac{1}{2}},L^{\underline{\mathbf{i}}} ]H^{\underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k,l})\leq\mathrm{ w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})-\underline{\hat{k}}\) (we have used the property of \((\underline{\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}}, \underline{\mathbf{l}})\)) and \(\mathrm{w}([L^{\underline{\mathbf{i}}}H^{\underline{\mathbf{j}}}[F_{ \underline{\hat{k}}+t-\frac{1}{2}},G^{\mathbf{k}}]F^{\mathbf{i}}v_{i,j,k,l}) \leq\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})-\underline{\hat {k}}+\frac{1}{2}\) by (a), (b) and (i), we deduce that
\[\mathrm{w}(F_{\underline{\hat{k}}+t-\frac{1}{2}}L^{\underline{\mathbf{i}}}H^{ \underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k,l})\leq\mathrm{w}( \mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})-\underline{\hat{k}}+\frac{1}{2}. \tag{5.2}\]
For any \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\mathrm{supp}(v)\) with \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})<\mathrm{w}(\underline {\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}},\underline{ \mathbf{l}})\), by (5.2) we see that
\[\mathrm{w}(F_{\underline{\hat{k}}+t-\frac{1}{2}}L^{\underline{\mathbf{i}}}H^{ \underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k,l})\leq\mathrm{w} (\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})-\underline{\hat{k}}+\frac{1}{2} <\mathrm{w}(\underline{\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{ k}},\underline{\mathbf{l}})-\underline{\hat{k}}+\frac{1}{2}.\]
So
\[\deg F_{\underline{\hat{k}}+t-\frac{1}{2}}L^{\underline{\mathbf{i}}}H^{ \underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k,l}<(\underline{ \mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}}^{\prime},\underline {\mathbf{l}}).\]
Now we suppose that \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=\mathrm{w}(\underline {\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}},\underline{ \mathbf{l}})\) and \(\mathbf{k}<\underline{\mathbf{k}}\) and denote
\[\deg(F_{\underline{\hat{k}}+t-\frac{1}{2}}L^{\underline{\mathbf{i}}}H^{ \underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k,l})=(\mathbf{i} _{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})\in\mathbb{M}\times\mathbb{ M}_{1}.\]
If \(\hat{k}>\underline{\hat{k}}\), then \(\mathrm{w}(\mathbf{i}_{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})< \mathrm{w}(\underline{\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}} ^{\prime},\underline{\mathbf{l}})\) and
\[\deg(F_{\underline{\hat{k}}+t-\frac{1}{2}}L^{\underline{\mathbf{i}}}H^{ \underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k,l})=(\mathbf{i} _{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})\prec(\underline{\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}}^{\prime},\underline{\mathbf{l}}).\]
If \(\hat{k}=\underline{\hat{k}}\), then \(\mathrm{w}(\mathbf{i}_{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})= \mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k}^{\prime},\mathbf{l})\). Since \(\mathbf{k}^{\prime}\prec\underline{\mathbf{k}}^{\prime}\), we also have \((\mathbf{i}_{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1})\prec(\underline{ \mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}}^{\prime},\underline{ \mathbf{l}})\).
If \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=\mathrm{w}(\underline {\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}},\underline{\mathbf{l}})\) and \(\mathbf{k}=\underline{\mathbf{k}}\), it is easy to see that
\[\deg(L^{\underline{\mathbf{i}}}H^{\underline{\mathbf{j}}}[F_{\underline{\hat{k}}+ t-\frac{1}{2}},G^{\mathbf{k}}]F^{\mathbf{i}}v_{i,j,k,\mathbf{l}})=(\mathbf{i}, \mathbf{j},\mathbf{k}^{\prime},\mathbf{l})\preceq(\underline{\mathbf{i}}, \underline{\mathbf{j}},\underline{\mathbf{k}}^{\prime},\underline{\mathbf{l}}),\] \[\deg([F_{\underline{\hat{k}}+t-\frac{1}{2}},L^{\underline{ \mathbf{i}}}]H^{\underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_{i,j,k, \mathbf{l}})<(\mathbf{i},\mathbf{j},\mathbf{k}^{\prime},\mathbf{l})\]
where the equality holds if and only if \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=(\underline{\mathbf{i}},\underline{ \mathbf{j}},\underline{\mathbf{k}},\underline{\mathbf{l}})\).
So \(F_{\underline{\hat{k}}+t-\frac{1}{2}}v\in W\) and \(\deg(F_{\underline{\hat{k}}+t-\frac{1}{2}}v)=(\underline{\mathbf{i}}, \underline{\mathbf{j}},\underline{\mathbf{k}}^{\prime},\underline{\mathbf{l}})\), which contradicts the choice of \(v\). Consequently, Claim 1 holds.
**Claim 2.**\(\underline{\mathbf{i}}=\mathbf{0}\).
If \(\underline{\mathbf{i}}\neq\mathbf{0}\), then \(\underline{\hat{i}}>0\). We will show that \(\deg(H_{\underline{\mathbf{i}}+t}v)=(\underline{\mathbf{i}}^{\prime}, \underline{\mathbf{j}},\underline{\mathbf{0}},\underline{\mathbf{l}})\). It suffices to consider those \(v_{i,j,k,l}\) with
\[L^{\underline{\mathbf{i}}}H^{\underline{\mathbf{j}}}G^{\mathbf{k}}F^{\mathbf{i}}v_ {i,j,k,l}\neq 0.\]
For any \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\mathrm{supp}(v)\) with \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})<\mathrm{w}(\mathbf{j}, \mathbf{j},\mathbf{0},\underline{\mathbf{l}})\),
\[H_{\underline{i}+t}L^{\mathrm{i}}H^{\mathrm{j}}G^{\mathrm{k}}F^{\mathrm{l}}v_{ \mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}}=[H_{\underline{i}+t},L^{\mathrm{ i}}]H^{\mathrm{j}}G^{\mathrm{k}}F^{\mathrm{l}}v_{\mathbf{i},\mathbf{j},\mathbf{k}, \mathbf{l}}\pm L^{\mathrm{i}}H^{\mathrm{j}}[H_{\underline{i}+t},G^{\mathrm{k}} ]F^{\mathrm{l}}v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}}.\]
From (a), (b), and (i), we see that
\[\mathrm{w}(L^{\mathrm{i}}H^{\mathrm{j}}[H_{\underline{i}+t},G^{\mathrm{k}}]F^{ \mathrm{l}}v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}})\leq\mathrm{w}( \mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})-\underline{\hat{\hat{\imath}}} -\frac{1}{2}\text{ and }\mathrm{w}([H_{\underline{i}+t},L^{\mathrm{i}}]H^{\mathrm{j}}G^{ \mathrm{k}}F^{\mathrm{l}})\leq\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k}, \mathbf{l})-\underline{\hat{\hat{\imath}}}.\]
Then \(\mathrm{w}(H_{\underline{i}+t}L^{\mathrm{i}}H^{\mathrm{j}}G^{\mathrm{k}}F^{ \mathrm{l}}v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}})\leq\mathrm{w}( \mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})-\underline{\hat{\imath}}<\mathrm{ w}(\mathbf{\hat{\imath}}^{\prime},\underline{\mathbf{j}},\mathbf{0},\underline{\mathbf{l}})\). So
\[\deg(H_{\underline{i}+t}L^{\mathrm{i}}H^{\mathrm{j}}G^{\mathrm{k}}F^{\mathrm{l} }v_{\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l}})<(\mathbf{\hat{\imath}}^{ \prime},\underline{\mathbf{j}},\mathbf{0},\underline{\mathbf{l}}).\]
Now we suppose that \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{0},\mathbf{l})=\mathrm{w}(\underline {\mathbf{i}},\underline{\mathbf{j}},\mathbf{0},\underline{\mathbf{l}})\) and \((\mathbf{i},\mathbf{j},\mathbf{0},\mathbf{l})\prec(\underline{\mathbf{i}}, \underline{\mathbf{j}},\mathbf{0},\underline{\mathbf{l}})\). Then \(\hat{\imath}\geq\underline{\hat{\imath}}\). It is clear that
\[\deg([H_{\underline{i}+t},L^{\mathrm{i}}]H^{\mathrm{j}}F^{\mathrm{l}}v_{ \mathbf{i},\mathbf{j},\mathbf{i}})\leq(\mathbf{\hat{\imath}}^{\prime}, \underline{\mathbf{j}},\mathbf{0},\underline{\mathbf{l}}),\]
where the equality holds if and only if \((\mathbf{i},\mathbf{j},\mathbf{0},\mathbf{l})=(\underline{\mathbf{i}}, \underline{\mathbf{j}},\mathbf{0},\underline{\mathbf{l}})\).
So \(H_{\underline{i}+t}v\in W\) and \(\deg(H_{\underline{i}+t}v)=(\mathbf{\hat{\imath}}^{\prime},\underline{ \mathbf{j}},\underline{\mathbf{k}},\underline{\mathbf{l}})\), which contradicts the choice of \(v\). Consequently, Claim 2 holds.
From Claims 1 and 2 we know that \(\mathbf{i}=\mathbf{k}=\mathbf{0}\) for \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\mathrm{supp}(v)\) with \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})=\mathrm{w}(\underline {\mathbf{j}},\underline{\mathbf{j}},\underline{\mathbf{k}},\underline{ \mathbf{l}})\).
Set \((\mathbf{i}_{1},\mathbf{j}_{1},\mathbf{k}_{1},\mathbf{l}_{1}):=\max\left\{( \mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\mathrm{supp}(v)\mid\mathrm{w} (\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})<\mathrm{w}(\mathbf{0},\underline {\mathbf{j}},\mathbf{0},\underline{\mathbf{l}})\right\}\). Using the arguments in Claims 1 and 2, we deduce that \(\deg F_{\hat{k}_{1}+t-\frac{1}{2}}v=(\mathbf{i}_{1},\mathbf{j}_{1},\mathbf{k}_{1 }^{\prime},\mathbf{l}_{1})\) if \(\mathbf{k}_{1}\neq 0\); \(\deg H_{\hat{\imath}_{1}+t}v=(\mathbf{i}_{1}^{\prime},\mathbf{j}_{1},\mathbf{k }_{1},\mathbf{l}_{1})\) if \(\mathbf{k}_{1}=\mathbf{0}\) and \(\mathbf{i}_{1}\neq 0\).
Repeating these arguments we can take
\[v=\sum H^{\mathrm{j}}F^{\mathrm{l}}v_{\mathbf{j},\mathbf{l}}\in W\setminus\{0\} \tag{5.3}\]
with \(\deg_{\mathfrak{k}}(v)=(\underline{\mathbf{j}},\underline{\mathbf{l}})\). If \(\hat{\hat{\jmath}}>0\), then \(\deg L_{\hat{\jmath}+t}v=(\underline{\mathbf{j}}^{\prime\prime},\underline{ \mathbf{l}})\).
Now we can take \(v=\sum F^{\mathrm{l}}v_{\mathbf{l}}\in W\setminus\{0\}\) with \(\deg(v)=\underline{\mathbf{l}}\), if \(\hat{\hat{\hat{\hat{\hat{\imath}}}}}>0\), then \(\deg G_{\hat{\jmath}+t-\frac{1}{2}}v=\underline{\mathbf{l}}^{\prime\prime}\).
So such \(W\) does not exist, which implies the simplicity of \(\mathrm{Ind}_{\mathfrak{g}^{(0-q)}}^{\mathfrak{g}}V\).
Now we consider the case of \(t=0\) for Theorem 5.1.
**Theorem 5.2**.: _Let \(c,z,d\in\mathbb{C},q\in\mathbb{N}\), and \(V\) be a simple \(\mathfrak{g}^{(0,-q)}\)-module with central charge \((c,z)\), level 0, and the action of \(H_{0}\) as a scalar \(d\). Assume that \(V\) satisfies the following conditions:_
1. \(d+(n+1)z\neq 0\) _for any_ \(n\in\mathbb{Z}^{*}\)_;_
2. \(L_{j}V=0\) _for all_ \(j>q\) _and_ \(H_{i}V=0\) _for all_ \(i>0\)_._
_Then_
1. \(G_{j-\frac{1}{2}}V=0\) _for all_ \(j>q\)_, and_ \(F_{i-\frac{1}{2}}V=0\) _for all_ \(i>0\)_,_
2. _the induced module_ \(\mathrm{Ind}_{\mathfrak{g}^{(0-q)}}^{\mathfrak{g}}V\) _is a simple_ \(\mathfrak{g}\)_-module._
Proof.: (i) The proof is the same as that of (i) in Theorem 5.1.
(ii) It is enough to show that any nonzero proper submodule of \(\operatorname{Ind}_{\mathfrak{g}^{(0,-0)}}^{\mathfrak{g}}V\) has a nonzero intersection with \(V\) (we point out that the proof for this statement will not use the simplicity on \(V\)). To the contrary, suppose that \(W\) is a nonzero proper \(\mathfrak{g}\)-submodule of \(\operatorname{Ind}_{\mathfrak{g}^{(0,-0)}}^{\mathfrak{g}}V\) with \(W\cap V=0\). Now we are going to deduce some contradictions.
Choose \(v\in W\) such that \(\deg_{\mathfrak{s}}(v)\) minimal in all nonzero elements of \(W\). Then
\[v=\sum_{\begin{subarray}{c}(\mathbf{i},\mathbf{k})\in\operatorname{supp}_{ \mathfrak{s}}(v)\\ \operatorname{w}(\mathbf{i},\mathbf{k})=|v|_{\mathfrak{s}}\end{subarray}}L^{ \mathbf{i}}G^{\mathbf{k}}u_{\mathbf{i},\mathbf{k}}+u, \tag{5.4}\]
where \(|u|_{\mathfrak{s}}<|v|_{\mathfrak{s}}\) and \(u_{\mathbf{i},\mathbf{k}}\in\operatorname{Ind}_{\mathfrak{b}^{c(0)}}^{ \mathfrak{b}^{c}}V\). Note that \(\mathfrak{b}^{c}u_{\mathbf{i},\mathbf{k}}=0\)
Set \(\deg_{\mathfrak{s}}v=(\underline{\mathbf{i}},\underline{\mathbf{k}})\). If \(\underline{\mathbf{k}}\neq\mathbf{0}\), acting \(F_{\underline{k}-\frac{1}{2}}\) on (5.4), we get
\[F_{\underline{k}-\frac{1}{2}}v=v_{1}+v_{2}+F_{\underline{k}-\frac{1}{2}}u, \tag{5.5}\]
where
\[v_{1}=\sum_{\begin{subarray}{c}(\mathbf{i},\mathbf{k})\in \operatorname{supp}_{\mathfrak{s}}(v)\\ \operatorname{w}(\mathbf{i},\mathbf{k})=|v|_{\mathfrak{s}}\end{subarray}}L^{ \mathbf{i}}[F_{\underline{k}-\frac{1}{2}},G^{\mathbf{k}}]u_{\mathbf{i}, \mathbf{k}},\ \deg_{\mathfrak{s}}(v_{1})=(\underline{\mathbf{i}},\mathbf{k}_{1}) \tag{5.6}\] \[v_{2}=\sum_{\begin{subarray}{c}(\mathbf{i},\mathbf{k})\in \operatorname{supp}_{\mathfrak{s}}(v)\\ \operatorname{w}(\mathbf{i},\mathbf{k})=|v|_{\mathfrak{s}}\end{subarray}}[F_{ \underline{k}-\frac{1}{2}},L^{\mathbf{i}}]G^{\mathbf{k}}u_{\mathbf{i}, \mathbf{k}},\deg_{\mathfrak{s}}(v_{2})=(\mathbf{i}_{1},\underline{\mathbf{k}})\text { if }v_{2}\neq 0. \tag{5.7}\]
Noticing that \(\mathbf{i}_{1}\prec\underline{\mathbf{i}}\), \(\mathbf{k}_{1}\prec\underline{\mathbf{k}}\) and \(|F_{\underline{k}-\frac{1}{2}}u|<|v|_{\mathfrak{s}}-\underline{\hat{k}}+\frac{ 1}{2}\) or \(F_{\underline{k}-\frac{1}{2}}u=0\) by Lemma 2.11. So \(F_{\underline{k}-\frac{1}{2}}v\neq 0\) and \(\deg_{\mathfrak{s}}(F_{\underline{k}-\frac{1}{2}}v)<\deg_{\mathfrak{s}}(v)\). It contradicts to the choice of \(v\).
So we can suppose that \(\underline{\mathbf{k}}=\mathbf{0},\underline{\mathbf{i}}\neq 0\) and then
\[v=\sum_{\begin{subarray}{c}(\mathbf{i},\mathbf{0})\in\operatorname{supp}_{ \mathfrak{s}}(v)\\ \operatorname{w}(\mathbf{i},\mathbf{0})=|v|_{\mathfrak{s}}\end{subarray}}L^{ \mathbf{i}}u_{\mathbf{i},\mathbf{0}}+u, \tag{5.8}\]
where \(|u|_{\mathfrak{s}}<|v|_{\mathfrak{s}}\).
For \(\mathbf{i}=(\cdots,i_{2},i_{1})\in\mathbb{M}\), denote by \((L^{\mathbf{i}})^{*}:=H_{1}^{i_{1}}H_{2}^{i_{2}}\cdots\). By action of \((L^{\underline{\mathbf{i}}})^{*}\) on (5.8), we get \((L^{\underline{\mathbf{i}}})^{*}v=au_{\underline{\mathbf{i}},\mathbf{0}}\) for some \(a\in\mathbb{C}^{*}\) since \((L^{\mathbf{i}})^{*}\cdot L^{\mathbf{i}}u_{\mathbf{i},\mathbf{0}}=0\) if \((\mathbf{i},\mathbf{0})\prec(\underline{\mathbf{i}},\mathbf{0})\) (see [45]), and \((L^{\underline{\mathbf{i}}})^{*}u=0\) for \(u\) in (5.8) by Lemma 2.11. This is a contradiction to the choice of \(v\).
So we can suppose that
\[v=\sum_{(\mathbf{j},\mathbf{l})\in\operatorname{supp}_{\mathfrak{b}^{c}}(v)}H ^{\mathbf{j}}F^{\mathbf{i}}u_{\mathbf{j},\mathbf{l}}, \tag{5.9}\]
with a minimal \(\deg_{\mathfrak{b}^{c}}(v)\) in all nonzero \(v\in W\), where \(u_{\mathbf{j},\mathbf{l}}\in V\). Note that \(\mathbf{j}=\mathbf{0}\) or \(\hat{j}>q\), and \(\mathbf{l}=\mathbf{0}\) or \(\hat{l}>q\) for any \((\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\in\operatorname{supp}(v)\).
Set \(m_{h}:=\max\{\hat{j}\mid(\mathbf{j},\mathbf{l})\in\operatorname{supp}_{ \mathfrak{b}^{c}}(v)\}\) and \(m_{f}:=\max\{\hat{l}\mid(\mathbf{j},\mathbf{l})\in\operatorname{supp}_{ \mathfrak{b}^{c}}(v)\}\). They can be \(0\) but at least one is positive. Clearly, \(m_{h},m_{f}>q\).
If \(m_{h}\geq m_{f}\), then by action of \(L_{m_{h}}\) on (5.9) and the condition that \(d+(n+1)z\neq 0\) for any \(n\in\mathbb{Z}^{*}\), we get a contradiction to the choice of \(v\).
If \(m_{h}<m_{f}\), then by action of \(G_{m_{f}-\frac{1}{2}}\) on (5.9) and the condition that \(d+(n+1)z\neq 0\) for any \(n\in\mathbb{Z}^{*}\), we get a similar contradiction.
**Remark 5.3**.: _In Theorem 5.1 (resp. Theorem 5.2), the actions of \(L_{q+i},H_{i},G_{q+i-\frac{1}{2}},F_{i-\frac{1}{2}}\) on \(\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\) for all \(i>t\) (resp. \(i>0\)) are locally nilpotent. It follows that \(\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\) is a simple smooth \(\mathfrak{g}\)-module of central charge \((c,z)\) and of level \(0\)._
_Especially, if \(q=0\) in Theorem 5.2, then \(L_{0}\) acts on \(V\) is not free. So \(V\) is one-dimensional, and \(\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\) is a highest weight module._
### Characterizing simple smooth \(\mathfrak{g}\)-modules
In this subsection, we give a characterization of simple smooth \(\mathfrak{g}\)-modules \(S\) of central charge \((c,z)\) at level zero. Moreover, we can suppose that \(H_{0}\) acts on \(S\) as a scalar \(d\).
**Proposition 5.4**.: _Let \(S\) be a simple \(\mathfrak{g}\)-module with \(d+(n+1)z\neq 0\) for any \(n\in\mathbb{Z}^{*}\). Then the following statements are equivalent:_
1. _There exists_ \(t\in\mathbb{Z}_{+}\) _such that the actions of_ \(L_{i},H_{i},G_{i-\frac{1}{2}},F_{i-\frac{1}{2}}\) _for all_ \(i\geq t\) _on_ \(S\) _are locally finite._
2. _There exists_ \(t\in\mathbb{Z}_{+}\) _such that the actions of_ \(L_{i},H_{i},G_{i-\frac{1}{2}},F_{i-\frac{1}{2}}\) _for all_ \(i\geq t\) _on_ \(S\) _are locally nilpotent._
3. _There exist_ \(c,z\in\mathbb{C}\)_,_ \(q\in\mathbb{N}\) _and a simple_ \(\mathfrak{g}^{(0,-q)}\)_-module_ \(V\) _such that both conditions_ \((a)\) _and_ \((b)\) _in Theorem_ 5.1 _or Theorem_ 5.2 _are satisfied and_ \(S\cong\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\)_._
Proof.: First we prove \((1)\Rightarrow(3)\). Suppose that \(S\) is a simple \(\mathfrak{g}\)-module and there exists \(t\in\mathbb{Z}_{+}\) such that the actions of \(L_{i},H_{i},G_{i-\frac{1}{2}},F_{i-\frac{1}{2}},i\geq t\) are locally finite. Thus we can find nonzero \(w\in S\) such that \(L_{t}w=\lambda w\) for some \(\lambda\in\tilde{\mathbb{C}}\).
Similar to the proof of [12, Theorem 6] on Pages 81-83, (where the simplicity of the \(\mathfrak{h}\)-module \(S\) assumed there was not used), we obtain that,
\[V^{\prime}=\{v\in S\ |\ L_{i}v=H_{i}v=0,\ \text{for all}\ i>m\}\neq\{0\}\]
for some \(m\in\mathbb{Z}_{+}\).
Using this just established result we can easily obtain that
\[L_{n+i}V^{\prime}=G_{n+i-\frac{1}{2}}V^{\prime}=F_{n+i-\frac{1}{2}}V^{\prime}=0 \tag{5.10}\]
for some \(n>t\) and any \(i>0\).
For any \(r,k\in\mathbb{Z}\), we consider the following vector space
\[N_{r,k}=\{v\in S\ |\ L_{i}v=G_{i-\frac{1}{2}}v=H_{j}v=F_{j-\frac{1}{2}}v=0, \quad\text{for all}\ i>r,j>k\}.\]
Clearly \(N_{r,k}\neq 0\) for sufficiently large \(r,k\in\mathbb{Z}_{+}\). Moreover \(N_{r,k}=0\) if \(k<0\) since \(H_{0}v\neq 0\) for any \(v\in S\). Thus we can find a smallest nonnegative integer, saying \(s\), with \(V:=N_{r,s}\neq 0\) for some \(r\geq s\). Denote \(q=r-s\geq 0\) and \(V=N_{s+q,s}\) For any \(i>s+q,j>s\) and \(p\geq-q\), it follows from
\(i+p>s\) that
\[L_{i}(H_{p}v)=-pH_{i+p}v=0,\ H_{j}(H_{p}v)=0,\] \[G_{i-\frac{1}{2}}(H_{p}v)=F_{i+p-\frac{1}{2}}v=0,\ F_{j-\frac{1}{2 }}(H_{p}v)=0\]
for any \(v\in V\), respectively. Clearly, \(H_{p}v\in V\) for all \(p\geq-q\). Similarly, we can also obtain \(L_{k}v,G_{k-\frac{1}{2}}v,F_{k-q+\frac{1}{2}}v\in V\) for all \(k\in\mathbb{N}\). Therefore, \(V\) is a \(\mathfrak{g}^{(0,-q)}\)-module.
**Case 1.**\(s\geq 1\)**. By the definition of \(V\), we can obtain that the action of \(H_{s}\) on \(V\) is injective by Theorem 5.1. Since \(S\) is simple and generated by \(V\), there exists a canonical surjective map
\[\pi:\operatorname{Ind}_{c,d,z}V\to S,\quad\pi(1\otimes v)=v,\quad\forall v\in V.\]
Next we only need to show that \(\pi\) is also injective, that is to say, \(\pi\) as the canonical map is bijective. Let \(N=\ker(\pi)\). Obviously, \(N\cap V=0\). If \(N\neq 0\), we can choose a nonzero vector \(v\in N\setminus V\) such that \(\deg(v)=(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\) is minimal possible. Note that \(N\) is a \(\mathfrak{g}\)-submodule of \(\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\). By the claim in the proof of Theorem 5.1 we can create a new vector \(u\in N\) with \(\deg(u)\prec(\mathbf{i},\mathbf{j},\mathbf{k},\mathbf{l})\), which is a contradiction (where we did not use the assumption on the simplicity of \(V\)). This forces \(K=0\), that is, \(S\cong\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\) in Theorem 5.1. Then \(V\) is a simple \(\mathfrak{g}^{(0,-q)}\)-module.
**Case 2.**\(s=0\) and \(r\geq 0\,(q=r)\). Similar to the argument above for \(s\geq 1\) and using the proof of Theorem 5.2 and the assumption that \(d+(n+1)z\neq 0\) for any \(n\in\mathbb{Z}^{*}\), we can deduce that \(S\cong\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\) and \(V\) is a simple \(\mathfrak{g}^{(0,-q)}\)-module.
Moreover, \((3)\Rightarrow(2)\) and \((2)\Rightarrow(1)\) are clear. This completes the proof.
**Lemma 5.5**.: _Let \(V\) be a simple smooth \(\mathfrak{g}\)-module. Then there exists \(t\in\mathbb{Z}_{+}\) such that the actions of \(L_{i},H_{i},G_{i-\frac{1}{2}},F_{i-\frac{1}{2}}\) for all \(i\geq t\) on \(V\) are locally nilpotent._
Proof.: It is clear (also see Lemma 4.2 in [32]).
From Theorem 5.4 and Lemma 5.5, we are in a position to state one of our main result.
**Theorem 5.6**.: _Let \(S\) be a simple smooth \(\mathfrak{g}\)-module of central charge \((c,z)\) at level \(0\), with \(H_{0}\) acting as a scalar \(d\) with \(d+(n+1)z\neq 0\) for any \(n\in\mathbb{Z}^{*}\). Then \(S\) is isomorphic to a simple module of the form \(\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{\mathfrak{g}}V\), where \(V\) is a simple \(\mathfrak{g}^{(0,-q)}\)-module for some \(q\in\mathbb{N}\) in Theorem 5.1 or Theorem 5.2._
## 6. Applications and examples
In this section, we utilize Theorems 4.8 and 5.6 to provide a characterization of simple highest weight \(\mathfrak{g}\)-modules and simple Whittaker \(\mathfrak{g}\)-modules. Additionally, we present several examples of simple smooth \(\mathfrak{g}\)-modules that are also weak (simple) \(\mathcal{V}(c,z,\ell)\)-modules.
### Smooth modules at nonzero level
We first character simple highest weight modules and Whittaker modules over \(\mathfrak{g}\) at nonzero level.
#### 6.1.1. Highest weight modules
For \(h\), \(d,c,z\in\mathbb{C}\), let \(\mathbb{C}v\) be one-dimensional \(\mathfrak{g}^{(0,0)}\)-module defined by \(L_{0}v=hv,H_{0}v=dv,\mathbf{c}_{1}v=cv,\mathbf{c}_{2}v=zv,\mathbf{c}_{3}v=\ell v\) and \(\mathfrak{g}^{+}\) acts trivially on \(v\). The Verma module for \(\mathfrak{g}\) can be defined by \(\operatorname{Ind}_{\mathfrak{g}^{(0,0)}}^{\mathfrak{g}}\mathbb{C}v\).
**Theorem 6.1**.: _Let \(S\) be a \(\mathfrak{g}\)-module (not necessarily weight) on which every element in the algebra \(\mathfrak{g}^{+}\) acts locally nilpotently. Then the following statements hold._
* _The module_ \(S\) _contains a nonzero vector_ \(v\) _such that_ \(\mathfrak{g}^{+}v=0\)_._
* _If_ \(S\) _is simple at nonzero level, then_ \(S\) _is a highest weight module._
Proof.: (i) It follows from [37, Theorem 1] that there exists a nonzero vector \(v\in S\) such that \(L_{i}v=H_{i}v=0\) for any \(i\in\mathbb{Z}_{+}\).
If \(F_{\frac{1}{2}}v=0\), applying \(L_{i}\) we see that \(F_{i-\frac{1}{2}}v=0\)\(i\in\mathbb{Z}_{+}\), i.e., \(L_{i}v=H_{i}v=F_{i-\frac{1}{2}}v=0\) for any \(i\in\mathbb{Z}_{+}\).
If \(F_{\frac{1}{2}}v\neq 0\), then we have \(L_{1}^{i}F_{\frac{1}{2}}v=(-1)^{i}F_{i+\frac{1}{2}}v\) for any \(i\in\mathbb{Z}_{+}\). As \(L_{1}\) acts locally nilpotently on \(S\), it follows that there exists some \(n\in\mathbb{Z}_{+}\) such that \(F_{j+\frac{1}{2}}v=0\) for \(j>n\). Replace \(v\) by \(F_{n+\frac{1}{2}}v\) if \(F_{n+\frac{1}{2}}v\neq 0\), we can get \(L_{i}v=H_{i}v=0,F_{n+\frac{1}{2}}v=0\) for any \(i\in\mathbb{Z}_{+}\). Repeating the above steps, we also find a nonzero vector \(v\in S\) such that
\[L_{i}v=H_{i}v=F_{i-\frac{1}{2}}v=0,\ \forall i\in\mathbb{Z}_{+}. \tag{6.1}\]
If \(G_{\frac{3}{2}}v=0\), repeatedly applying \(L_{1}\) we see that \(G_{i+\frac{1}{2}}v=0\)\(i\in\mathbb{Z}_{+}\), i.e., \(L_{i}v=H_{i}v=F_{i-\frac{1}{2}}v=G_{i+\frac{1}{2}}v=0\) for any \(i\in\mathbb{Z}_{+}\).
If \(G_{\frac{3}{2}}v\neq 0\), then we have \(L_{1}^{i}G_{\frac{3}{2}}v=(-1)^{i}i!G_{i+\frac{3}{2}}v\) for any \(i\in\mathbb{Z}_{+}\). As \(L_{1}\) acts locally nilpotently on \(S\), it follows that there exists some \(n\in\mathbb{Z}_{+}\) such that \(G_{j+\frac{1}{2}}v=0\) for \(j>n\). Replace \(v\) by \(G_{n+\frac{1}{2}}v\) if \(G_{n+\frac{1}{2}}v\neq 0\), we can get \(L_{i}v=H_{i}v=F_{i-\frac{1}{2}}v=0,G_{n+\frac{1}{2}}v=0\) for any \(i\in\mathbb{Z}_{+}\). Repeating the above steps, we also find a nonzero \(v\in S\) such that \(G_{i+\frac{1}{2}}v=L_{i}v=H_{i}v=F_{i-\frac{1}{2}}v=0\) for any \(i\in\mathbb{Z}_{+}\).
If \(G_{\frac{1}{2}}v=0\), we can see that \(L_{i}v=H_{i}v=F_{i-\frac{1}{2}}v=G_{i+\frac{1}{2}}v=0\) for any \(i\in\mathbb{Z}_{+}\). Statement (i) holds.
If \(G_{\frac{1}{2}}v\neq 0\), let \(u=G_{\frac{1}{2}}v\). we can verify that that \(L_{i}u=H_{i}u=F_{i-\frac{1}{2}}u=G_{i+\frac{1}{2}}u=0\) for any \(i\in\mathbb{Z}_{+}\). Statement (i) holds also in this case.
So there exists a nonzero vector \(v\in S\) such that \(F_{i-\frac{1}{2}}u=0\), \(G_{i-\frac{1}{2}}u=0,L_{i}u=0,H_{i}u=0\) for all \(i\in\mathbb{Z}_{+}\).
(ii) By (i), we know that \(S\) is a simple smooth \(\mathfrak{g}\)-module with \(n_{S}=0\) and \(r_{S}\leq 1\). From Theorem 5.4 and Theorem 4.8 we know that \(S\cong U^{\mathfrak{g}}\otimes H(z)^{\mathfrak{g}}\) as \(\mathfrak{g}\)-modules for some simple modules \(H\in\mathcal{R}_{\mathfrak{h}c},z\in\mathbb{C}\) and \(U\in\mathcal{R}_{*}\). Moreover, \(H=\operatorname{Ind}_{\mathfrak{h}c^{(0)}}^{\mathbb{h}c}(\mathbb{C}v)\) is a simple highest weight module over \(\mathfrak{g}\). Note that every element in the algebra \(\mathfrak{s}^{(1)}\) acts locally nilpotently on \(\mathbb{C}v\otimes U\) by the assumption. This implies that the same property also holds on \(U\). From [32, Theorem 4.3] we know that \(U\) is a simple highest weight \(\mathfrak{s}\)-module. This completes the proof.
As a direct consequence of Theorem 6.1, we have
**Corollary 6.2**.: _Let \(S\) be a simple smooth \(\mathfrak{g}\)-module at nonzero level with \(r_{S}\leq 1\) and \(n_{S}=0\). Then \(S\) is a highest weight module._
Proof.: The assumption that \(r_{S}\leq 1\) and \(n_{S}=0\) implies that there exists a nonzero vector \(v\in M\) such that \(\mathfrak{g}^{+}v=0\). Then \(M=U(\mathfrak{g}^{-}+\mathfrak{g}_{0})v\). It follows that each element in \(\mathfrak{g}^{+}\) acts locally nilpotently on \(M\). Consequently, the desired assertion follows directly from Theorem 6.1.
#### 6.1.2. Whittaker modules
Now we will focus on the Whittaker modules over \(\mathfrak{g}\) at nonzero level.
For \(m\in\mathbb{Z}_{+}\), from [32], we know that any simple finite-dimensional \(\mathfrak{g}^{(m,0)}\)-module is one dimensional. Let \(\varphi_{m}\) be a Lie superalgebra homomorphism \(\mathfrak{g}^{(m,0)}\to\mathbb{C}\). Then
\[\varphi_{k}(L_{2m+1+j})=\varphi_{m}(H_{m+1+j})=\varphi_{m}(G_{m+j+\frac{1}{2}} )=\varphi_{m}(F_{j+\frac{1}{2}})=0,\ \forall j\in\mathbb{N}.\]
Let \(\mathbb{C}w\) be one-dimensional \(\mathfrak{g}^{(m,0)}\)-module with \(xw=\varphi_{m}(x)w\) for all \(x\in\mathfrak{g}^{(m,0)}\) and \(\varphi(\mathbf{c}_{1})=c,\varphi(\mathbf{c}_{2})=z\), and \(\varphi(\mathbf{c}_{3})=\ell\) for some \(c,z,\ell\in\mathbb{C}\). The induced \(\mathfrak{g}\)-module
\[\widetilde{W}_{\phi_{m}}=\mathrm{Ind}_{\mathfrak{g}^{(m,0)}}^{\mathfrak{g}} \mathbb{C}w_{\phi_{m}} \tag{6.2}\]
will be called the universal Whittaker module with respect to \(\phi_{m}\). And any nonzero quotient of \(\widetilde{W}_{\phi_{m}}\) will be called a Whittaker module with respect to \(\phi_{m}\).
For the above \(\phi_{m}\) with \(\phi_{m}(\mathbf{c}_{3})=\ell\neq 0\), we define a new Lie superalgebra homomorphism \(\phi_{m}^{\prime}:\mathfrak{s}^{(m)}\to\mathbb{C}\) as follows.
\[\phi_{m}^{\prime}(\mathbf{c}_{1}) = \phi_{m}(\mathbf{c}_{1})-\frac{3}{2}+12\frac{\phi(\mathbf{c}_{2}) ^{2}}{\phi(\mathbf{c}_{3})}, \tag{6.3}\] \[\phi_{m}^{\prime}(L_{k}) = 0,\ \forall k\geq 2m+1,\] \[\phi_{m}^{\prime}(L_{n}) = \phi_{m}(L_{n})-\frac{1}{2\ell}\sum_{k=0}^{m}\phi(H_{k})\phi(H_{-k +n})+\frac{(n+1)c_{2}}{\ell}\phi(H_{n})\] \[+\frac{1}{2\ell}\sum_{k=0}^{m}(k+1)\phi(F_{k+\frac{1}{2}})\phi(F_ {-k-\frac{1}{2}+n}),\forall n=m,m+1,\dots,2m,\] (6.4) \[\phi_{m}^{\prime}(G_{k+m+\frac{1}{2}}) = 0,\forall k\in\mathbb{N}. \tag{6.5}\]
Then we have the universal Whittaker \(\mathfrak{s}\)-module \(W_{\phi_{m}^{\prime}}:=\mathrm{Ind}_{\mathfrak{s}^{(m)}}^{\mathfrak{s}} \mathbb{C}w_{\phi_{m}^{\prime}}\), where \(x\cdot w_{\phi_{m}^{\prime}}=\phi_{m}^{\prime}(x)w_{\phi_{m}^{\prime}},\forall x \in\mathfrak{s}^{(m)}\).
**Theorem 6.3**.: _Suppose that \(m\in\mathbb{Z}_{+}\), and \(\phi_{m}\) and \(\phi_{m}^{\prime}\) are given above with \(\phi_{m}(\mathbf{c}_{3})=\ell\neq 0\), and \(\phi_{m}(\mathbf{c}_{1})=c,\phi_{m}(\mathbf{c}_{2})=z\in\mathbb{C}\). Let \(H=U(\mathfrak{h}\mathfrak{c})w_{\phi_{m}}\) in \(W_{\phi_{m}}\). Then we have_
1. \(\widetilde{W}_{\phi_{m}}\cong W_{\phi_{m}^{\prime}}^{\mathfrak{g}}\otimes H(z) ^{\mathfrak{g}}\)_. Consequently, each simple Whittaker module with respect to_ \(\phi_{m}\) _is isomorphic to_ \(T^{\mathfrak{g}}\otimes H(z)^{\mathfrak{g}}\) _for a simple quotient_ \(T\) _of_ \(W_{\phi_{m}^{\prime}}\)_._
2. _The_ \(\mathfrak{g}\)_-module_ \(\widetilde{W}_{\phi_{m}}\) _is simple if and only if_ \(W_{\phi_{m}^{\prime}}\) _is a simple_ \(\mathfrak{s}\)_-module. Consequently,_ \(\widetilde{W}_{\phi_{m}}\) _is simple if and only if_ \((\phi_{m}^{\prime}(L_{2m-1}),\phi_{m}^{\prime}(L_{2m}))\neq(0,0)\)_, i.e.,_ \[2\phi_{m}(L_{2m})\phi_{m}(\mathbf{c}_{3})+\phi_{m}(H_{m})^{2}\neq 0,\ \text{or}\] \[\phi_{m}(L_{2m-1})\phi_{m}(\mathbf{c}_{3})+\phi_{m}(H_{m})\phi_{m}(H_{m-1 })\neq 0\] _when_ \(m\geq 2\)_._ \(\widetilde{W}_{\phi_{1}}\) _is simple if and only if_ \((\phi_{1}^{\prime}(L_{1}),\phi_{1}^{\prime}(L_{2}))\neq(0,0)\)_, i.e.,_ \[2\phi_{1}(L_{2})\phi_{1}(\mathbf{c}_{3})+\phi_{1}(H_{1})^{2}\neq 0,\ \text{or}\] \[\phi_{1}(L_{1})\phi_{m}(\mathbf{c}_{3})+\phi(H_{0})\phi_{1}(H_{1})+2 \phi_{1}(\mathbf{c}_{2})\phi_{1}(H_{1})\neq 0\]
_when \(m=1\)._
3. _Let_ \(T_{1},T_{2}\) _be simple quotients of_ \(W_{\phi_{m}^{\prime}}\)_. Then_ \(T_{1}^{\mathfrak{g}}\otimes H(z)^{\mathfrak{a}}\cong T_{2}^{\mathfrak{g}} \otimes H(z)^{\mathfrak{a}}\) _if and only if_ \(T_{1}\cong T_{2}\)_._
Proof.: From Lemma 3.7, we know that \(H\) is a simple \(\mathfrak{h}\)-module.
(1) From simple computations we see that
\[H\cong\mathrm{Ind}_{\mathfrak{g}^{(m,-\infty)}}^{(m,-\infty)}\mathbb{C}w_{ \phi_{m}}\cong\mathbb{C}w_{\phi_{m}^{\prime}}\otimes H(z)^{\mathfrak{g}^{(m,- \infty)}}\]
as \(\mathfrak{g}^{(m,-\infty)}\)-modules, where the action of \(\mathfrak{g}^{(m,-\infty)}\) on \(\mathbb{C}w_{\phi_{m}^{\prime}}\) is given by \((\mathfrak{h}\circ+\mathbb{C}\mathbf{c}_{2})w_{\phi_{m}^{\prime}}=0,xw_{\phi_ {m}^{\prime}}=\phi_{m}^{\prime}(x)w_{\phi_{m}^{\prime}}\) for all \(x\in\mathfrak{s}^{(m)}\). Therefore from Lemma 3.2 and (3.1), we have
\[\widetilde{W}_{\phi_{m}} \cong \mathrm{Ind}_{L}^{\mathfrak{g}}(\mathrm{Ind}_{\mathfrak{g}^{(m,0 )}}^{a}\mathbb{C}w_{\phi_{m}})\cong\mathrm{Ind}_{\mathfrak{g}^{(m,-\infty)}} ^{a}(\mathbb{C}w_{\phi_{m}^{\prime}}\otimes H(z)^{\mathfrak{g}^{(m,-\infty)}})\] \[\cong \mathrm{Ind}_{\mathfrak{g}^{(m,-\infty)}}^{\mathfrak{g}}\mathbb{C }w_{\phi_{m}^{\prime}}\otimes H(z)^{\mathfrak{g}}\cong W_{\phi_{m}^{\prime}}^ {\mathfrak{g}}\otimes H(z)^{\mathfrak{g}}.\]
Parts (2) and (3) follow from (5.4), [32, Proposition 5.5] and some easy computations.
The following result characterizes simple Whittaker \(\mathfrak{g}\)-modules.
**Theorem 6.4**.: _Let \(M\) be a \(\mathfrak{g}\)-module at nonzero level (not necessarily weight) on which \(\mathfrak{g}^{+}\) acts locally finitely. Then the following statements hold._
1. _The module_ \(M\) _contains a nonzero vector_ \(v\) _such that_ \(\mathfrak{g}^{+}v\subseteq\mathrm{span}_{\mathbb{C}}\{v,G_{\frac{1}{2}}v\}\)_._
2. _If_ \(M\) _is simple, then_ \(M\) _is a Whittaker module or a highest weight module._
Proof.: From [42, Theorem 5.9 (i)], there exists \(v\in M\) such that \(\mathfrak{g}_{0}^{+}v\subset\mathbb{C}v\). It is clear that \(G_{k+\frac{1}{2}}G_{l+\frac{1}{2}}v\in\mathbb{C}v\) and \(F_{k+\frac{1}{2}}F_{l+\frac{1}{2}}v=0\) for any \(k,l\in\mathbb{N}\). Moreover \(U(\mathfrak{g}^{+})v\) is finite dimensional. Then the theorem follows from [31, Proposition 3.3] and [32, Theorem 4.3].
### Smooth modules of level zero
In this subsection, we give some examples of simple smooth modules of level zero.
#### 6.2.1. Highest weight modules
If \(\ell=0\) in 6.1.1, then by Theorem 5.2 (also see [3]), the Verma module \(\mathrm{Ind}_{\mathfrak{g}^{(0,0)}}^{\mathfrak{g}}\mathbb{C}v\) is irreducible if \(d+(n+1)z\neq 0\) for any \(n\in\mathbb{Z}^{*}\).
#### 6.2.2. Whittaker modules
If \(\phi_{m}(\mathbf{c}_{3})=0\) in 6.1.2, then by Theorem 5.2, we also have
**Theorem 6.5**.: _For \(m\in\mathbb{Z}_{+}\) and \(\phi_{m}(\mathbf{c}_{3})=0\), the Whittaker module \(\widetilde{W}_{\phi_{m}}\) is simple if and only if \(\varphi_{m}(H_{m})\neq 0\)._
Proof.: Let
\[V_{\phi_{m}}=\mathrm{Ind}_{\mathfrak{g}^{(m,0)}}^{\mathfrak{g}^{(0,0)}}\mathbb{ C}w.\]
Similar to the proof of Theorem 5.2, it is tedious but straightforward to check that \(V_{\varphi_{m}}\) is a simple \(\mathfrak{g}^{(0,0)}\)-module if \(\varphi_{m}(H_{m})\neq 0\). From Theorem 5.1, we obtain the corresponding induced \(\mathfrak{g}\)-module \(\mathrm{Ind}_{\mathfrak{g}^{(0,0)}}^{\mathfrak{g}}V_{\varphi_{k}}\) is simple if \(\varphi_{m}(H_{m})\neq 0\). Clearly \(\mathrm{Ind}_{\mathfrak{g}^{(0,0)}}^{\mathfrak{g}}V_{\varphi_{k}}\) is exactly the Whittaker module \(W(\varphi_{m},c,z)\). Moreover if \(\varphi(H_{m})=0\), then the Whittaker module \(\widetilde{W}_{\phi_{m}}\) has a proper submodule generated by \(F_{-\frac{1}{2}}w\).
### Smooth modules that are not tensor product modules
For characterizing simple \(\mathfrak{g}\)-module which are not tensor product modules, we need the following lemma.
**Lemma 6.6**.: _Let \(M=U^{\mathfrak{g}}\otimes V^{\mathfrak{g}}\) be a simple smooth \(\mathfrak{g}\)-module with \(n_{M}>1\) and nonzero level, where \(U\in\mathcal{R}_{\mathfrak{g}}\) and \(V\in\mathcal{R}_{\mathrm{bc}}\) are simple. Set \(V_{0}=\mathrm{Ann}_{V}(\mathfrak{h}\mathfrak{c}^{(n_{M})})\) and \(M_{0}=\mathrm{Ann}_{M}(\mathfrak{h}\mathfrak{c}^{(n_{M})})\). Then \(V_{0}\) is a simple \(\mathfrak{g}^{(0,-n_{M}+1)}\)-module, and \(M_{0}=U\otimes V_{0}\). Hence \(M_{0}\) contains a simple \(\mathfrak{h}\mathfrak{c}^{(-n_{M}+1)}\)-submodule._
Proof.: For any nonzero \(u\in U\), \(u\otimes V_{0}\) is a simple \(\mathfrak{h}\mathfrak{c}^{(-n_{M}+1)}\)-submodule of \(M_{0}\).
Lemma 6.6 means that if \(M\in\mathcal{R}_{\mathfrak{g}}\) is not a tensor product module, then \(M_{0}\) contains no simple \(\mathfrak{h}\mathfrak{c}^{(-n_{M}+1)}\)-submodule.
Here we will consider the case \(n_{M}=2\). Let \(\mathfrak{b}=\mathbb{C}h+\mathbb{C}e\) be the \(2\)-dimensional solvable Lie algebra with basis \(h\), \(e\) and subject to Lie bracket \([h,e]=e\). The following concrete example using [33, Example 13] and [42, Example 7.4] tells us how to construct induced smooth \(\mathfrak{g}\)-modules from a \(\mathbb{C}[e]\)-torsion-free simple \(\mathfrak{b}\)-module.
**Example 6.7**.: _Let \(W=(t-1)^{-1}\mathbb{C}[t,t^{-1}]\oplus(t-1)^{-1}\mathbb{C}[t,t^{-1}]\theta\) be the associative superalgebra with \(t,t^{-1}\in W_{\emptyset}\), \(\theta\in W_{\mathbb{I}}\). Note that \(\theta^{2}=0\). From [33, Example 13] we know that \(W\) is a direct sum of two isomorphic simple \(\mathfrak{b}\)-module whose structure is given by_
\[h\cdot f(t,\theta)=t\frac{d}{dt}(f(t,\theta))+\frac{f(t,\theta)}{t^{2}(t-1)}, \ e\cdot f(t,\theta)=tf(t,\theta),\forall f(t,\theta)\in W.\]
_For \(c,z,z^{\prime}\in\mathbb{C},\ell\in\mathbb{C}^{*}\), we also can make \(W\) into a \(\mathfrak{g}^{(0,0)}\)-module by_
\[L_{0}\cdot f(t,\theta)=h\cdot f(t,\theta),\,H_{1}\cdot f(t,\theta )=e\cdot f(t,\theta),\] \[H_{0}\cdot f(t,\theta)=z^{\prime}f(t,\theta),H_{1+i}\cdot f(t, \theta)=L_{i}\cdot f(t,\theta)=0,\ \ i\in\mathbb{Z}_{+},\] \[F_{\frac{1}{2}}\cdot f(t,\theta)=\theta tf(t,\theta),G_{\frac{1} {2}}f(t,\theta)=\frac{\partial}{\partial\theta}f(t,\theta),\] \[F_{\frac{1}{2}+i}\cdot f(t,\theta)=G_{i+\frac{1}{2}}\cdot f(t, \theta)=0,\ \ i\in\mathbb{Z}_{+},\] \[\mathbf{c}_{1}\cdot f(t,\theta)=cf(t,\theta),\,\mathbf{c}_{2} \cdot f(t,\theta)=zf(t,\theta),\,\mathbf{c}_{3}\cdot f(t,\theta)=\ell f(t, \theta),\]
_where \(f(t,\theta)\in W\). Then \(W\) is a simple \(\mathfrak{g}^{(0,0)}\)-module. Clearly, the action of \(H_{1}\) on \(W\) implies that \(W\) contains no simple \(\mathfrak{h}\mathfrak{c}^{(0)}\)-module. Then \(M_{0}=\mathrm{Ind}_{\mathfrak{g}^{(0,0)}}^{\mathfrak{g}^{(0,-1)}}W\) is a simple \(\mathfrak{g}^{(0,-1)}\)-module and contains no simple \(\mathfrak{h}\mathfrak{c}^{(-1)}\)-module. Let \(M=\mathrm{Ind}_{\mathfrak{g}^{(0,-1)}}^{\mathfrak{g}^{(0,-1)}}M_{0}\). It is easy to see \(n_{M}=2\) and \(r_{M}=3\). The proof of Proposition 4.5 implies that \(M\) is a simple smooth \(\mathfrak{g}\)-module. Lemma 6.6 means that \(M\) is not a tensor product \(\mathfrak{g}\)-module._
**Remark 6.8**.: _According to Theorem 4.8, if \(n_{M}\) equals 0 or 1, then simple smooth \(\mathfrak{g}\)-modules must be tensor product modules. However, Example 6.7 demonstrates that for any \(n_{M}>1\), there exist simple smooth \(\mathfrak{g}\)-modules that are not tensor product modules. On the other hand, Theorem 6.3 states that the smooth module \(\widetilde{W}_{\phi_{m}}\) is a tensor product of smooth modules over \(\mathfrak{h}\mathfrak{c}\) and \(\mathfrak{s}\), even when \(n_{\widetilde{W}_{\phi_{m}}}>1\) and \(r_{\widetilde{W}_{\phi_{m}}}>1\)._
**Acknowledgments**
The authors gratefully acknowledge the partial financial support from the NNSF (11971315, 12071405) and NSERC (311907-2020). Part of the research in this paper was carried out during K.Z. visited Shanghai University in the period of time Mar.19-May 22, 2023.
|
2307.00679 | Holomorphic motions, natural families of entire maps, and
multiplier-like objects for wandering domains | Structural stability of holomorphic functions has been the subject of much
research in the last fifty years. Due to various technicalities, however, most
of that work has focused on so-called finite-type functions (functions whose
set of singular values has finite cardinality). Recent developments in the
field go beyond this setting. In this paper we extend Eremenko and Lyubich's
result on natural families of entire maps to the case where the set of singular
values is not the entire complex plane, showing under this assumption that the
set $M_f$ of entire functions quasiconformally equivalent to $f$ admits the
structure of a complex manifold (of possibly infinite dimension). Moreover, we
will consider functions with wandering domains -- another hot topic of research
in complex dynamics. Given an entire function $f$ with a simply connected
wandering domain $U$, we construct an analogue of the multiplier of a periodic
orbit, called a distortion sequence, and show that, under some hypotheses, the
distortion sequence moves analytically as $f$ moves within appropriate
parameter families. | Gustavo R. Ferreira, Sebastian van Strien | 2023-07-02T22:49:16Z | http://arxiv.org/abs/2307.00679v3 | # Parameter spaces and distortion sequences of entire functions with wandering domains
###### Abstract
The structural stability of holomorphic functions has been the subject of much research in the last fifty years. Due to various technicalities, however, most of that work has focused on so-called finite-type functions, implying that functions with wandering domains - another hot topic of research in complex dynamics - have (for the most part) not been addressed in this context. Given an entire function \(f\) with a simply connected wandering domain \(U\), we construct an object called a distortion sequence that, under some hypotheses, moves analytically as \(f\) moves within appropriate parameter families. In order to "ground" our discussion, we consider - given an entire function \(f\) - the set \(M_{f}\) of entire functions quasiconformally equivalent to \(f\). Generalising earlier results for the finite-type case, we show that if \(f\) has a discrete set of singular values then \(M_{f}\) admits the structure of a complex manifold (of possibly infinite dimension).
## 1 Introduction
We consider the iteration of holomorphic functions \(f:\mathbb{C}\to\mathbb{C}\). As first shown by Fatou [16, 17] and Julia [22], the complex plane is partitioned into an open set of "regular" dynamics, the _Fatou set_ (denoted \(F(f)\)), and a closed set of "chaotic" dynamics, the _Julia set_ (denoted \(J(f)\)) - see, for instance, [4] or [7] for introductions to the subject. Both the Fatou and Julia sets are completely invariant under \(f\), meaning that a connected component \(U\) of the Fatou set - called a _Fatou component_ - is mapped by \(f^{n}\) into another Fatou component, denoted \(U_{n}\). If there exist \(m>n\geq 0\) such that \(U_{m}=U_{n}\), \(U\) is said to be a _(pre-)periodic_ Fatou component.
The internal dynamics of such Fatou components is, for the most part, fully understood (see, for instance, [7, Theorem 6]): a \(p\)-periodic Fatou component \(U\) exhibits one of five types of internal dynamics, each with a clear topological model. In three of these five types (attracting, parabolic, and Siegel), the closure of the Fatou component contains a _periodic point_\(z_{0}\) whose _multiplier_\((f^{p})^{\prime}(z_{0})\) controls the dynamics of \(U\) (the other two types, Baker domains and Herman rings, are not associated to periodic points). A Fatou component that is not pre-periodic is said to be a _wandering domain_. Wandering domains, by definition, cannot have periodic orbits in their closures, and so understanding their internal dynamics is a much more delicate endeavour - one that has been carried out in [6, 18]. Based on this understanding of the different internal behaviours of wandering domains, this paper aims to chip away at the question: how do these different internal dynamics co-exist (if they do) inside parameter families?
If \(f\) is part of a holomorphic family \((f_{\lambda})_{\lambda\in M}\) (see Section 3 for a definition), where \(M\) is complex manifold of possibly infinite dimension, then understanding how periodic points and their multipliers change with \(\lambda\) is an important part of understanding how the dynamics change within \((f_{\lambda})_{\lambda\in M}\). If the multiplier of the \(p\)-periodic point \(z_{0}\) is not \(1\), then by the implicit function theorem the equation \(f_{\lambda}^{p}(z)=z\) has a solution \((\lambda,z(\lambda))\) in some neighbourhood \(\Omega\subset M\times\mathbb{C}\) of \((\lambda_{0},z_{0})\), and the function \(\lambda\mapsto z(\lambda)\) is holomorphic. In other words, we can locally "track" the periodic point \(z_{0}\) as \(\lambda\) changes. This means in particular that the function \(\lambda\mapsto(f_{\lambda}^{p})^{\prime}(z(\lambda))\), the _multiplier map_, is holomorphic in a neighbourhood of \(\lambda_{0}\). It has been studied in great detail in the case of rational functions, and a common theme is that the multipliers of attracting periodic orbits can be used to parameterise the parameter families the functions belong to - see, for instance, [13, 34, 30, 25]. This motivates the questions that drive this paper - which, if \(f=f_{\lambda_{0}}\in(f_{\lambda})_{\lambda\in M}\) is an entire function with a simply connected wandering domain \(U\), can be phrased as:
1. Is there some "multiplier-like" object associated to \(U\) and moving holomorphically with \(\lambda\)?
2. Can we use this object to parameterise \((f_{\lambda})_{\lambda\in M}\)?
If we are assuming that \(f\) is \(J\)-stable (see Section 3 for a definition) in the family \((f_{\lambda})_{\lambda\in M}\) and \(H(\lambda,z)\) is the holomorphic motion of its Julia set, an immediate consequence is that \(f_{\lambda}\) also has a simply connected wandering domain \(U_{\lambda}=H(\lambda,U)\) for \(\lambda\) close enough to \(\lambda_{0}\). This is where we introduce our analogue of the multiplier map: the _distortion map_ (see Section 4 for a definition). We show that, under appropriate assumptions, it is holomorphic in the appropriate domain (see Section 3 for relevant definitions, and Subsection 2.3 for an exposition on conjugations on Banach analytic manifolds):
**Theorem 1.1**.: _Let \(f\) be an entire function with a simply connected wandering domain \(U\), and let \(p\in U\). Let \(\alpha\) be a distortion sequence for \(f\) at \(p\). Assume that \(f=f_{\lambda_{0}}\) is \(J\)-stable in some holomorphic family \((f_{\lambda})_{\lambda\in M}\), where \(M\) is a Banach analytic manifold. Denoting the holomorphic motion of \(J(f)\) by \(H\), assume that \(H(\lambda,f^{n}(p))=f_{\lambda}^{n}(H(\lambda,p))\) for all \(n\in\mathbb{N}\) and \(\lambda\in M\). Then, there exists a holomorphic map \(\mathrm{A}:M\times\overline{M}\to\ell^{\infty}\), called the distortion map (of \(\alpha\) over \(M\)), such that:_
* _For every_ \(\lambda\in M\)_,_ \(\mathrm{A}(\lambda,\overline{\lambda})\) _is a distortion sequence for_ \(f_{\lambda}\) _at_ \(H(\lambda,p)\)_._
* \(\mathrm{A}(\lambda_{0},\overline{\lambda_{0}})=\alpha\)_._
Combining Theorem 1.1 with Benini _et al._'s classification of simply connected wandering domains (see Theorem A in 2 for a statement) has the following consequence - which, for periodic domains, comes from the multiplier map and the classification of periodic Fatou components. This highlights how the distortion map is a "wandering" counterpart to the multiplier map.
**Corollary 1.1**.: _Let \(f\), \(U\), \(M\), and \(\alpha\) be as in Theorem 1.1. If \(\|\alpha\|_{\infty}<1\), then there exists a neighbourhood \(\Lambda\subset M\) of \(\lambda_{0}\) such that, for \(\lambda\in\Lambda\), the wandering domain \(U_{\lambda}\) of \(f_{\lambda}\) is contracting._
This is an answer to question (1); let us now turn to question (2). We show that, for certain kinds of functions with certain kinds of wandering domains, it is possible to "reconstruct" the distortion map by perturbing the distortion sequence. In order to understand what kind of functions we are referring to, we turn to a method originally described by Herman.
Let \(g:\mathbb{C}^{*}\to\mathbb{C}^{*}\) be a holomorphic function with a simply connected, forward invariant Fatou component \(V\). Herman described how to find an entire function \(f:\mathbb{C}\to\mathbb{C}\) such that
\(\exp\circ f=g\circ\exp\), i.e. \(f\) lifts \(g\), and any component of \(\exp^{-1}(V)\) is a wandering domain of \(f\). This motivates the following definition:
**Definition**.: Let \(f\) be an entire function with a simply connected wandering domain \(U\). We say that \(U\) is an _attracting_ (resp. _parabolic_, _Siegel_) _Herman-type_ wandering domain if there exists a holomorphic function \(g:\mathbb{C}^{*}\to\mathbb{C}^{*}\) such that \(g\circ\exp=\exp\circ f\) and \(V=\exp(U)\) is an attracting domain (resp. parabolic domain, Siegel disc) of \(g\).
**Remark**.: It follows from Definition 1 that, if \(f\) has a Herman-type wandering domain, then \(f\) satisfies \(f(z+2\pi)=f(z)+2\pi i\cdot n\) for some \(n\in\mathbb{Z}^{*}\). This, in turn, happens if and only if \(f(z)=nz+h(e^{z})\) where \(h\) is an entire function (see, for instance, [23, Proposition 7]. Thus, functions with Herman-type wandering domains are relatively simple to construct or identify.
By considering the Teichmuller space of functions with Herman-type wandering domains, Fagella and Henriksen [15] showed that such functions are structurally stable in a natural family parametrised by an abstract infinite-dimensional manifold (its Teichmuller space). Here, we give an explicit construction of such a manifold, and show that the associated distortion map is non-constant.
**Theorem 1.2**.: _Let \(f\) be an entire function with an attracting Herman-type wandering domain, and let \(M=\{\lambda\in\ell^{\infty}:\|\lambda\|_{\infty}<1\}\). Then, there exists a natural family \((f_{\lambda})_{\lambda\in M}\) such that \(f\) is \(J\)-stable in \((f_{\lambda})_{\lambda\in M}\) and the distortion map \(\Lambda:M\times\overline{M}\to\ell^{\infty}\) is non-constant._
Finally, we turn to general parameter spaces of entire functions. When considering parameter families of holomorphic functions, one tends to consider _natural_ families, defined in Section 3. This means that, in a sense, the largest possible parameter space containing an entire function \(f\) is the set
\[M_{f}:=\{g=\psi\circ f\circ\varphi^{-1}:g\in E,\psi,\varphi\in QC(\mathbb{C}, \mathbb{C})\},\]
where \(E\) denotes the topological vector space of entire functions (armed with the compact-open topology) and \(QC(\mathbb{C},\mathbb{C})\) denotes the space of quasiconformal homeomorphisms of \(\mathbb{C}\). It is known that, if \(f\) has finitely many singular values (critical values, asymptotic values, and accumulation points thereof), then \(M_{f}\) is a complex manifold of dimension \(q+2\), where \(q\) is the number of singular values (see, for instance, [14, Section 2] and [20, Theorem 3.1]). Our last theorem extends this result to functions with infinitely many singular values, as long as the latter remain discrete.
**Theorem 1.3** (Universal natural family).: _Let \(f\) be an entire function with a discrete set of singular values, and assume that \(f\) has at least two singular values. Then, there exists a Banach analytic manifold \(T_{f}\) and a covering map \(\Phi:T_{f}\to M_{f}\subset E\) satisfying the following properties:_
1. \(\Phi\) _is continuous_1 _and the function_ \(T_{f}\times\mathbb{C}\ni(\lambda,z)\mapsto\Phi_{\lambda}(z)\in\mathbb{C}\) _is analytic._ Footnote 1: Continuity here is meant considering \(M_{f}\subset E\) with the topology of locally uniform convergence.
2. _For any other natural family_ \((f_{\lambda})_{\lambda\in M}\) _containing_ \(f\)_, there exists a holomorphic function_ \(\phi:M\to T_{f}\) _such that_ \(f_{\lambda}=\Phi_{\phi(\lambda)}\)_. In other words,_ \(\phi\) _lifts the natural inclusion_ \(M\ni\lambda\mapsto f_{\lambda}\in E\)_._
_In particular, \(M_{f}\) admits a complex structure that turns \(\Phi\) into a holomorphic map. Furthermore, \(M_{f}\) is finite-dimensional if and only if \(f\) has finitely many singular values._
**Remark**.: If \(f\) has exactly \(q<+\infty\) singular values, then the manifold \(T_{f}\) has dimension \(q+2\), as should be expected. See also [35, Proposition A.1] for a version of this result for real entire functions of finite type.
The existence of \(T_{f}\) helps us understand more general parameter spaces of entire functions - though not necessarily in a straightforward manner. For instance, if \(f\) has infinitely many singular values, then one can show that the manifold \(T_{f}\) is modelled on a non-separable Banach space (see, for instance, [19], or the proof of [21, Theorem 6.6.1]), and is therefore non-separable. The Frechet space \(E\), however, is separable; consequently, the topologies of \(M_{f}\) as an analytic manifold and as a subset of \(E\) do not coincide - in other words, \(M_{f}\) is not an embedded submanifold of \(E\) - unless \(f\) has finitely many singular values. On a more positive note, the proof of Theorem 1.3 gives us the following corollary in the spirit of [35, Corollary 5.4]. For more powerful results on conjugacy classes (which are subsets of \(M_{f}\)), see also [26, Theorem 4.11] and [11, Theorem B].
**Corollary 1.2**.: _For any \(f\in E\), the equivalence class \(M_{f}\) is connected, and in fact path-connected, in \(E\)._
The outline of the paper is as follows. In Section 2, we go through necessary concepts and results on the internal dynamics of wandering domains, holomorphic functions in Banach spaces, Banach analytic manifolds and conjugations, and quasiconformal maps and Teichmuller theory. Then, in Section 3, we introduce existing results and concepts about parameter families of holomorphic functions and prove Theorem 1.3. Section 4 introduces the notion of distortion sequences and proves Theorem 1.1, and finally in Section 5 we prove Theorem 1.2. We notice that the results in Sections 4 and 5 do not depend on the proof of Theorem 1.3; we only prove it first in order to keep similar sections together.
Acknowledgements. The first author thanks Nuria Fagella for many illuminating discussions, and acknowledges financial support from the LMS Early Career Fellowship ECF-2022-16. We thank Lasse Rempe for pointing out a mistake in a previous version of Theorem 1.3.
## 2 Preliminaries
### Internal dynamics of simply connected wandering domains
Let \(f\) be an entire function with a simply connected wandering domain \(U\). For any pair of distinct points \(z\) and \(w\) in \(U\), the sequence \((d_{U_{n}}(f^{n}(z),f^{n}(w)))\) of hyperbolic distances (see [5] for an introduction to the hyperbolic metric of plane domains and its properties) is, by the Schwarz-Pick Lemma [5, Theorem 6.4], decreasing, and hence has a limit \(c(z,w)\) which is either zero or positive. It was shown by Benini _et al._ in [6] that this property is (mostly) independent of \(z\) and \(w\), and so is the question whether this limit is reached. More specifically:
**Theorem A**.: _Let \(f\) be an entire function with a simply connected wandering domain \(U\). Let \(Z_{0}\in U\), and let \(E=\{(z,w)\in U\times U:f^{k}(z)=f^{k}(w)\text{ for some }k\in\mathbb{N}\}\). Then, exactly one of the following holds._
1. \(d_{U_{n}}\left(f^{n}(z),f^{n}(w)\right)\to 0\) _for all_ \(z,w\in U\)_, and we say that_ \(U\) _is_ contracting_._
2. \(d_{U_{n}}\left(f^{n}(z),f^{n}(w)\right)\to c(z,w)>0\) _and_ \(d_{U_{n}}\left(f^{n}(z),f^{n}(w)\right)\neq c(z,w)\) _for all_ \((z,w)\in U\times U\setminus E\)_, and we say that_ \(U\) _is_ semi-contracting_._
3. _There exists_ \(N\in\mathbb{N}\) _such that, for all_ \(n\geq N\)_,_ \(d_{U_{n}}\left(f^{n}(z),f^{n}(w)\right)=c(z,w)>0\) _for all_ \((z,w)\in U\times U\setminus E\)_, and we say that_ \(U\) _is_ eventually isometric_._
_Furthermore, for any \(z\in U\) and \(n\in\mathbb{N}\), let_
\[\alpha_{n}(z):=\|Df(f^{n-1}(z))\|_{U_{n-1}}^{U_{n}}\]
_denote the hyperbolic distortion of \(f\) at \(f^{n-1}(z)\). Then:_
* \(U\) _is contracting if and only if_ \(\sum_{n\geq 1}(1-\alpha_{n}(z))=+\infty\)_._
* \(U\) _is eventually isometric if and only if_ \(\alpha_{n}(z)=1\) _for all sufficiently large_ \(n\)_._
The sequence \((\alpha_{n}(z))_{n\in\mathbb{N}}\) is a distortion sequence for \(f\) at \(z\) (see Section 4) with the property of being positive - in particular, the dynamics of \(U\) are intimately related to its distortion sequences.
### Holomorphic functions on Banach spaces
The theory of holomorphic functions on Banach spaces and manifolds is rich, and shows both similarities and differences to complex analysis on \(\mathbb{C}\) (or even \(\mathbb{C}^{n}\)); we refer the reader to [32] for an overview of the subject, and particularly for proofs of the results given here. The first and most important result we will state is Hartogs' separate analyticity1 theorem [32, Theorem 36.1]:
Footnote 1: Unless otherwise specified, we always use analyticity to mean complex analyticity. As such, we use the words analytic and holomorphic interchangeably.
**Lemma 2.1** (Hartogs Theorem).: _Let \(X\), \(Y\), and \(Z\) be complex Banach spaces, and let \(U\subset X\) and \(V\subset Y\) be open sets. Then, a function \(f:U\times V\to Z\) is holomorphic if and only if, for every \(x_{0}\in U\) and \(y_{0}\in V\), the functions \(y\mapsto f(x_{0},y)\) and \(x\mapsto f(x,y_{0})\) are holomorphic._
Next, we have a very useful condition for analyticity of functions into \(\ell^{\infty}\); see [32, Exercise 8.H].
**Lemma 2.2**.: _Let \(X\) be a complex Banach space, let \(U\subset X\) be open, and let \(f_{n}:U\to\mathbb{C}\) be holomorphic functions. Suppose that, for each compact subset \(K\subset U\), there exists \(M_{K}>0\) such that \(\sup_{z\in K}|f_{n}(z)|<M_{K}\) for all \(n\in\mathbb{N}\). Then, the function \(F:U\to\ell^{\infty}\) given by_
\[F(z)=\left(f_{n}(z)\right)_{n\in\mathbb{N}}\]
_is holomorphic._
Since analyticity is a local property, it follows immediately that all the results discussed in this section are also valid for analytic functions between Banach analytic manifolds.
### Banach analytic manifolds and conjugations
A Banach analytic manifold is, in a sense, the "obvious" way to define an infinite-dimensional complex manifold. More precisely, it is a topological space \(M\) endowed with an open cover \(\{U_{\alpha}\subset M\}_{\alpha\in A}\) and homeomorphisms \(\varphi_{\alpha}:U_{\alpha}\to\varphi_{\alpha}(U_{\alpha})\subset V\), where \(V\) is a complex Banach space, and such that the transition maps \(\varphi_{\beta}\circ\varphi_{\alpha}^{-1}:\varphi_{\alpha}(U_{\alpha}\cap U_{ \beta})\to\varphi_{\beta}(U_{\alpha}\cap U_{\beta})\) are biholomorphic for any \(\alpha,\beta\in A\). The functions \(\varphi_{\alpha}\) are called the _charts_ of \(M\).
For some \(v=(z_{1},\ldots,z_{n})\in\mathbb{C}^{n}\), the meaning of \(\overline{v}\) is relatively immediate: it is \((\overline{z_{1}},\ldots,\overline{z_{n}})\). In a more abstract Banach analytic manifold, however, the meaning of conjugation becomes less clear. To understand it, we must first consider what it means to conjugate elements of complex Banach spaces.
If \(V\) is a complex Banach space, we define a _conjugation_ to be an anti-linear map \(\sigma:V\to V\) such that \(\sigma\circ\sigma(v)=v\) for all \(v\in V\). It is not obvious that every complex Banach space \(V\) has a conjugation; however, one can always be constructed by dividing \(V\) into "real" and "imaginary" parts1. More specifically, take a Hamel basis for \(V\), and consider its \(\mathbb{R}\)-span \(V_{\mathbb{R}}\). It is easy to verify that \(V=V_{\mathbb{R}}\oplus iV_{\mathbb{R}}\); thus, any \(v\in V\) can be written as \(v=x+iy\), where \(x\) and \(y\) are in \(V_{\mathbb{R}}\), and a well-defined conjugation is given by \(\sigma(x+iy)=x-iy\). Taking different Hamel bases gives us different conjugations on \(V\) (for instance, we can start with any \(z\in\mathbb{C}\) with \(|z|=1\) and obtain a conjugation on \(\mathbb{C}\) by reflecting across the \(\mathbb{R}\)-span of \(z\)).
Footnote 1: This is called a _real structure_ on \(V\).
On a Banach analytic manifold \(M\), we conjugate by changing the complex structure of \(M\). In other words, if \(\{\psi_{\alpha}:U_{\alpha}\subset M\to V\}_{\alpha\in A}\) are charts for \(M\), then we define \(\overline{M}\) as the manifold with charts \(\{\sigma\circ\psi_{\alpha}:U_{\alpha}\to V\}_{\alpha\in A}\), where \(\sigma:V\to V\) is a conjugation. Notice that the identity is an anti-holomorphic map from \(M\) to \(\overline{M}\).
### Quasiconformal maps and Teichmuller theory
As usual, our most important tool will be the measurable Riemann mapping theorem of Ahfolrs and Bers [2]; see [21, Theorem 4.6.1 and Proposition 4.7.5] and [12, Theorem 4.4.1] for the version given here. Here and throughout, for a quasiregular map \(\varphi\), we denote by \(\varphi^{*}\mu\) the pullback of the almost complex structure \(\mu\) by \(\varphi\), and by \(\mu_{0}\) the standard almost complex structure.
**Lemma 2.3**.:
1. _Let_ \(\mu\in L^{\infty}(\mathbb{C})\) _be such that_ \(\|\mu\|_{\infty}<1\)_. Then, there exists a quasiconformal map_ \(\varphi:\mathbb{C}\to\mathbb{C}\) _such that_ \(\varphi\) _has Beltrami coefficient_ \(\mu\)_, i.e.,_ \(\varphi^{*}\mu_{0}=\mu\)_. Furthermore,_ \(\varphi\) _is unique up to postcomposition with an automorphism of_ \(\mathbb{C}\)_._
2. _Let_ \(\varphi^{\mu}\) _denote the quasiconformal homeomorphism of_ \(\mathbb{C}\) _such that_ \(\varphi^{*}\mu_{0}=\mu\) _normalised by_ \(\varphi^{\mu}(0)=0\)_,_ \(\varphi^{\mu}(1)=1\)_, and_ \(\varphi^{\mu}(\infty)=\infty\)_. Then, the map_ \(L^{\infty}(\mathbb{C})\mapsto QC(\mathbb{C},\mathbb{C})\) _given by_ \(\mu\mapsto\varphi^{\mu}\) _is analytic, in the sense that it is continuous relative to the compact-open topology and for each_ \(z\in\mathbb{C}\) _the map_ \(\mu\mapsto\varphi^{\mu}(z)\) _is analytic._
_Item (1) holds with \(\mathbb{C}\) replace by \(\mathbb{D}\). Item (2) holds with \(\mathbb{C}\) replaced by \(\mathbb{D}\) and analytic replaced by real-analytic._
The analytic dependence of mappings on Beltrami coefficients has something of a converse [21, Lemma 4.8.15]:
**Lemma 2.4**.: _Let \(M\) be a Banach analytic manifold, \(U\) an open subset of \(\mathbb{C}\), and \(F:M\times U\to\mathbb{C}\) a continuous map. Let \(h_{t}(z):=H(t,z)\). Suppose that \(h_{t}\) is quasiconformal for all \(t\in M\), and that for every \(z\in U\) the map \(t\mapsto h_{t}(z)\) is analytic. Then, the Beltrami coefficient_
\[\mu_{t}:=(h_{t})^{*}\mu_{0}\]
_is analytic as a map \(M\mapsto L^{\infty}(\mathbb{C})\)._
Families of functions \(h_{t}\) as in Lemma 2.4 are in fact very popular in complex dynamics, as formalised in the following definition:
**Definition**.: Let \(X\subset\mathbb{C}\). A _holomorphic motion_ of \(X\) over a Banach analytic manifold \(M\) with basepoint \(\lambda_{0}\in M\) is a mapping \(H:M\times X\to\mathbb{C}\) such that:
1. For each \(x\in X\), the map \(\lambda\mapsto H(\lambda,x)\) is analytic;
2. For each \(\lambda\in M\), the map \(x\mapsto H(\lambda,x)\) is injective; and
3. \(H(\lambda_{0},x)=x\).
This definition was introduced by Mane, Sad, and Sullivan in [27], who called them "analytic families of injections". A major property of holomorphic motions is the following (see [21, Theorem 5.2.3]):
**Lemma 2.5** (\(\lambda\)-Lemma).: _Let \(M\) be a Banach analytic manifold, and let \(X\subset\mathbb{C}\). If \(H:M\times\mathbb{C}\to\mathbb{C}\) is a holomorphic motion, then for every \(\lambda\in M\) the map \(X\ni x\mapsto H(\lambda,x)\in\mathbb{C}\) is quasiconformal._
Much more can be said: a holomorphic motion of \(X\) can be extended to a holomorphic motion of \(\overline{X}\), and sometimes even to a holomorphic motion of \(\mathbb{C}\); see, for instance, [8], [31], or [21, Section 5.2].
Given a closed set \(E\subset\mathbb{C}\), an important question is to consider the family of all possible holomorphic motions of \(E\). It turns out that the answer to this question is deep, and requires a foray into Teichmuller theory. Let us start with some definitions (properly tailored for our context):
**Definition**.: Let \(S\subset\mathbb{C}\) be open, and let \(\psi:S\to\psi(S)\subset\mathbb{C}\) and \(\varphi:S\to\varphi(S)\subset\mathbb{C}\) be quasiconformal maps. We say that \(\psi\) and \(\varphi\) are equivalent (or _Teichmuller equivalent_) if there exists a conformal map \(h:\psi(S)\to\varphi(S)\) such that the quasiconformal map \(\eta=\varphi^{-1}\circ h\circ\psi:S\to S\) is isotopic to the identity relative to \(\partial S\). In other words, \(\eta\) extends continuously to \(\partial S\) and there exists a continuous map \(h:[0,1]\times\overline{S}\to\overline{S}\) such that \(h(0,z)=z\) for all \(z\in S\), \(h(1,z)=\eta(z)\) for all \(z\in S\), and \(h(t,z)=z\) for all \(t\in[0,1]\) and \(z\in\partial S\).
**Definition**.: Let \(S\) be a plane domain. Then, its _Teichmuller space_\(\mathcal{T}(S)\) is the set of all possible quasiconformal maps \(\varphi:S\to\varphi(S)\subset\widehat{\mathbb{C}}\) modulo Teichmuller equivalence.
The fact that \(\mathcal{T}(S)\) is a complex manifold for any plane domain \(S\) (or, more generally, for any Riemann surface; see e.g. [21, Theorem 6.5.1]) is a powerful result with many applications, and so is the fact that this idea can be generalised to _unions_ of mutually disjoint plane domains (see [28, Section 5.3] or [31, Section 5] for details on how to do this).
Now, let \(E\subset\widehat{\mathbb{C}}\) be closed, and assume that \(\{0,1,\infty\}\subset E\). If \(\psi,\varphi:\mathbb{C}\to\mathbb{C}\) are quasiconformal maps fixing \(0\), \(1\), and \(\infty\), we say (in the spirit of the previous definitions) that \(\psi\) and \(\varphi\) are _\(E\)-equivalent_ if \(\psi\circ\varphi^{-1}\) is isotopic to the identity relative to \(E\) (in particular, \(\varphi|_{E}=\psi|_{E}\)). We define the Teichmuller space of \(E\), denoted \(T(E)\), to be the set of \(E\)-equivalence classes of quasiconformal homeomorphisms of \(\widehat{\mathbb{C}}\) fixing \(0\), \(1\), and \(\infty\). We will see that the space \(T(E)\) holds the answer to our problem of describing all possible holomorphic motions of \(E\); let us take a closer look at it.
For \(X\subset\mathbb{C}\), let \(M(X)\) denote the unit ball of \(L^{\infty}(\mathbb{C})\). It follows from the Ahlfors-Bers theorem that the map \(P:M(\mathbb{C})\to T(E)\) assigning to \(\mu\) the \(E\)-equivalence class of its unique normalised integrating map \(\varphi^{\mu}\) is a well-defined function, but this by itself does not tell us much. Far more useful is the following characterisation of \(T(E)\):
**Lemma 2.6** ([31], Corollaries 5.3, 6.1, and 6.2).: _For any closed set \(E\subset\widehat{\mathbb{C}}\) containing \(0\), \(1\), and \(\infty\), the following hold._
1. \(T(E)\simeq\mathcal{T}(\widehat{\mathbb{C}}\setminus E)\times M(E)\)_._
2. _There exists_ \(\pi:M(\widehat{\mathbb{C}}\setminus E)\to\mathcal{T}(\widehat{\mathbb{C}} \setminus E)\) _such that the map_ \(P_{E}:M(\mathbb{C})\to\mathcal{T}(\widehat{\mathbb{C}}\setminus E)\times M(E)\) _given by_ \(P_{E}(\mu)=(\pi(\mu),\mu|_{E})\) _is a split submersion._
3. _The function_ \(P_{E}\) _satisfies_ \(P_{E}(\mu)=P_{E}(\nu)\) _if and only if_ \(P(\mu)=P(\nu)\)_._
_In particular, \(T(E)\) admits a complex structure that makes \(P_{E}\) into a holomorphic split submersion._
The final answer to our previous question was given by Mitra:
**Lemma 2.7**.: _For any closed set \(E\subset\widehat{\mathbb{C}}\) containing \(0\), \(1\), and \(\infty\), there exists a universal holomorphic motion \(\Psi_{E}:T(E)\times E\to\widehat{\mathbb{C}}\) with the following property: if \(H:M\times E\to\widehat{\mathbb{C}}\) is any other holomorphic motion of \(E\) over a Banach analytic manifold \(M\), then there exists a unique holomorphic map \(F:M\to T(E)\) such that \(\Psi_{E}(F(m),z)=H(m,z)\) for all \((m,z)\in M\times E\)._
## 3 Parameter families of entire functions
### Holomorphic and natural families
In this subsection, we give an overview of the main concepts used to discuss parameter families of holomorphic functions. Since the functions themselves are holomorphic, we can ask for parameter dependence to be holomorphic as well, bringing us to the concept of a holomorphic family. By Hartogs' Theorem, this means that we can define define a holomorphic family of entire functions \((f_{\lambda})_{\lambda\in M}\), where \(M\) is a Banach analytic manifold, as a holomorphic function \(F:M\times\mathbb{C}\to\mathbb{C}\) with \(F(\lambda,z)=f_{\lambda}(z)\). Since we are concerned with the quasiconformal equivalence class \(M_{f}\) of a given entire function \(f\), it also makes sense to consider a different type of parameter family:
**Definition**.: Let \(M\) be a Banach analytic manifold. A _natural family_ of entire functions over \(M\) is a family \((f_{\lambda})_{\lambda\in M}\) such that \(f_{\lambda}=\psi_{\lambda}\circ f\circ\varphi_{\lambda}^{-1}\), where \(f=f_{\lambda_{0}}\) is an entire function and \(\psi_{\lambda}\) and \(\varphi_{\lambda}\) are quasiconformal homeomorphisms of \(\mathbb{C}\) depending holomorphically on \(\lambda\in M\).
It is easy to show (see the proof of Theorem 1.3) that a natural family \((f_{\lambda})_{\lambda\in M}\) is also a holomorphic family, with the additional feature that the singular values of \(f_{\lambda}\)_and_ their pre-images move holomorphically. Astorg, Benini, and Fagella showed [3, Theorem 2.6] that the converse also holds locally for finite-type maps: every holomorphic family of finite-type maps for which singular values and their pre-images move holomorphically can be locally expressed as a natural family.
We can now discuss structural stability in natural families of entire functions. The "standard" definition is the following:
**Definition**.: Let \(f=f_{\lambda_{0}}\) be an entire function in the natural family \((f_{\lambda})_{\lambda\in M}\), where \(M\) is a Banach analytic manifold. We say that \(f\) is _\(J\)-stable in \((f_{\lambda})_{\lambda\in M}\)_ if there exists a neighbourhood \(\Lambda\subset M\) of \(\lambda_{0}\) and a holomorphic motion of \(J(f)\) over \(\Lambda\) such that:
* For each \(\lambda\in\Lambda\), \(H(\lambda,J(f))=J(f_{\lambda})\).
* \(H(\lambda,f(z))=f_{\lambda}(H(\lambda,z))\) for every \((\lambda,z)\in\Lambda\times J(f)\).
In other words, \(H(\lambda,\cdot)\) conjugates \(f|_{J(f)}\) to \(f_{\lambda}|_{J(f_{\lambda})}\).
### Universal natural families
In this subsection, we prove Theorem 1.3. Denote by \(M(X)\), \(X\subset\mathbb{C}\), the unit ball in the Banach space \(L^{\infty}(X)\subset L^{\infty}(\mathbb{C})\). Also, denote by \(S(f)\) the set of singular values of \(f\). We start with a much easier result:
**Lemma 3.1**.: _For any entire function \(f\in E\), there exists a function \(\tilde{\Phi}:M(\mathbb{C})\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\to M_{f}\) such that \(\tilde{\Phi}\) is continuous, surjective, and the mapping \(M(\mathbb{C})\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\times\mathbb{C}\ni( \lambda,z)\mapsto\tilde{\Phi}_{\lambda}(z)\in\mathbb{C}\) is analytic._
Proof.: Fix a function \(f\in E\), and fix \(z_{0}\in f^{-1}(0)\) and \(z_{1}\in f^{-1}(1)\). Let \(\lambda=(\mu,a_{1},b_{1},a_{2},b_{2})\). We start by applying Lemma 2.3 to \(\mu\), obtaining a quasiconformal map \(\psi^{\mu}\) fixing \(0\) and \(1\). If we let \(\psi_{\lambda}(z)=a\psi^{\mu}(z)+b\), then \(\psi_{\lambda}\) ranges over all quasiconformal maps with Beltrami coefficient \(\mu\) and is analytic in \(\lambda\) by Lemmas 2.3 and 2.1.
Next, let \(\mu_{\lambda}=(\psi_{\lambda}\circ f)^{*}\mu_{0}=f^{*}\mu\). It can be shown (see, for instance, [9, p. 17]) that
\[\mu_{\lambda}(z)=\mu\left(f(z)\right)\frac{\overline{f^{\prime}(z)}}{f^{ \prime}(z)},\]
meaning that the map \(\lambda\mapsto\mu_{\lambda}(z)\) is still analytic for every \(z\in\mathbb{C}\). It is time to apply Lemma 2.3 again to obtain a quasiconformal map \(\varphi^{\mu_{\lambda}}\) with Beltrami coefficient \(\mu_{\lambda}\) and fixing \(z_{0}\) and \(z_{1}\). Defining \(\varphi_{\lambda}(z)=a_{2}\varphi^{\mu_{\lambda}}(z)+b_{2}\), this once again varies over all quasiconformal maps with Beltrami coefficient \(\mu_{\lambda}\) as \(a_{2}\in\mathbb{C}^{*}\) and \(b_{2}\in\mathbb{C}\) vary.
Defining the function \(g(z)=\psi_{\lambda}\circ f\circ(\varphi_{\lambda})^{-1}(z)\), it follows from the construction that \(g\) is entire, and we see by an argument originally by Buff and Cheritat [10, p. 21] that the map \(\lambda\mapsto g(z)\) is holomorphic in \(\lambda\) for any fixed \(z\in\mathbb{C}\). The continuity of \(\lambda\mapsto g\) follows from Lemma 2.3, and it is analytic as a map \(M(\mathbb{C})\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\times\mathbb{C}\to \mathbb{C}\) by Hartogs' Theorem. Clearly, the function \(\tilde{\Phi}\) is also surjective.
The domain \(M(\mathbb{C})\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\) is too big for our purposes - especially because \(M(\mathbb{C})\) is oboxiously big. From now on, we assume that \(0\) and \(1\) are singular values of \(f\) (which can always be achieved by conjugating \(f\)). It turns out, then, that we can push \(\tilde{\Phi}\) down to a smaller space:
**Lemma 3.2**.: _Let \(\tilde{\Phi}\) be as given by Lemma 3.1, let \(F=S(f)\), and let \(\mathcal{P}_{F}:M(\mathbb{C})\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\to T( F)\times(\mathbb{C}^{*}\times\mathbb{C})\) be given by \(\mathcal{P}(\mu,a_{1},b_{1},a_{2},b_{2})=(P_{F}(\mu),a_{1},b_{1},a_{2},b_{2})\). Then, if \(\lambda_{1},\lambda_{2}\in M(\mathbb{C})\times(\mathbb{C}^{*}\times\mathbb{C})\) are such that \(\mathcal{P}_{F}(\lambda_{1})=\mathcal{P}_{F}(\lambda_{2})\), we have \(\tilde{\Phi}(\lambda_{1})=\tilde{\Phi}(\lambda_{2})\)._
Proof.: Let \(\lambda_{i}=(\mu_{i},a_{i,1},b_{i,1},a_{i,2},b_{i,2})\), \(i=0,1\). It is clear from the definition of \(\mathcal{P}\) that we have \(a_{0,j}=a_{1,j}\) and \(b_{0,j}=b_{1,j}\) for \(j=1,2\), so that we must focus on the role of \(\mu_{i}\). The proof now is essentially the same as that of [14, Lemma 2] with minor modifications. More specifically, let \(g_{0}=\tilde{\Phi}(\lambda_{0})=\psi_{0}\circ f\circ(\varphi_{0})^{-1}\) and \(g_{1}=\tilde{\Phi}(\lambda_{1})=\psi_{1}\circ f\circ(\varphi_{1})^{-1}\). Then, one has from the definition of \(T(F)\) and Lemma 2.7 that \(\psi_{0}\) and \(\psi_{1}\) are \(F\)-equivalent, meaning that there exists an isotopy \(\tilde{\psi}_{t}:\mathbb{C}\to\mathbb{C}\), \(t\in[0,1]\), such that \(\tilde{\psi}_{0}=\psi_{0}\), \(\tilde{\psi}_{1}=\psi_{1}\), and \(\psi_{t}|_{F}\) is the identity for every \(t\in[0,1]\). Since \(S(f)\) has Lebesgue measure zero, one can apply the covering homotopy theorem to find an isotopy \(\tilde{\varphi}_{t}:\mathbb{C}\setminus f^{-1}(S(f))\to\mathbb{C}\setminus g_{ 0}^{-1}\circ\psi_{0}(S(f))\), \(t\in[0,1]\), such that \(\tilde{\varphi}_{0}=\varphi_{0}\), and \(g_{0}\circ\tilde{\varphi}_{t}(z)=\tilde{\psi}_{t}\circ f(z)\) for all \(z\in\mathbb{C}\setminus f^{-1}(S(f))\) and \(t\in[0,1]\). Setting \(t=1\), we obtain
\[g_{0}\circ\tilde{\varphi}_{1}(z)=\psi_{1}\circ f(z)=g_{1}\circ\varphi_{1}(z),z \in\mathbb{C}\setminus f^{-1}(S(f)),\]
where the last equality follows from the definition of \(g_{1}\). We see that \(g_{0}=g_{1}\circ(\tilde{\varphi}_{1}\circ\varphi_{1}^{-1})\). To proceed, notice that the isotopy \(\tilde{\psi}_{t}\) fixes \(0\) and \(1\), meaning that its lift \(\tilde{\varphi}_{t}\) is constant on \(f^{-1}(0)\) and \(f^{-1}(1)\) (which exact permutation of these sets is enacted by \(\tilde{\varphi}_{t}\) does not depend on \(t\), and is uniquely determined by \(a_{0,2}\) and \(b_{0,2}\)). Furthermore, since \(g_{0}\) and \(g_{1}\) are entire, the composition \(\tilde{\varphi}_{1}\circ\varphi_{1}^{-1}\) is conformal on the full measure set \(\mathbb{C}\setminus f^{-1}(F)\), and hence conformal on \(\mathbb{C}\) by Weyl's lemma. It follows that \(\tilde{\varphi}_{1}\) and \(\varphi_{1}\) differ by an automorphism of \(\mathbb{C}\), and because \(a_{0,2}=a_{1,2}\) and \(b_{0,2}=b_{1,2}\) we have that this automorphism is the identity. We are done.
Lemma 3.2 shows that the projection \(\mathcal{P}_{F}\) pushes \(\tilde{\Phi}\) down to a well-defined function \(\Phi\colon T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\to M_{f}\), which inherits all the properties of \(\tilde{\Phi}\). The last step, then, is to show that, \(\Phi\) is locally invertible.
**Lemma 3.3**.: _Let \(f\), \(E\), and \(\Phi\colon T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\) be as above. Then, for any point \(\lambda^{*}=(\mathcal{P}_{F}(\mu^{*}),a_{1}^{*},b_{1}^{*},a_{2}^{*},b_{2}^{*}) \in T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\), there exists a neighbourhood1\(U\subset M_{f}\) of \(g^{*}=\Phi(\lambda^{*})\) and a function \(\sigma:U\to T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\) such that:_
Footnote 1: We’re considering \(M_{f}\) with the topology induced by \(\Phi\).
1. \(\sigma(g^{*})=\lambda^{*}\)_;_
2. \(\sigma\) _is a right inverse to_ \(\Phi\)_, i.e.,_ \(\Phi\circ\sigma(g)=g\) _for any_ \(g\in U\)_; and_
3. _If_ \(M\) _is a Banach analytic manifold and_ \(\psi_{\lambda}\) _and_ \(\varphi_{\lambda}\) _are quasiconformal maps depending holomorphically on_ \(\lambda\in M\) _such that_ \(g(\lambda)=\psi_{\lambda}\circ f\circ(\varphi_{\lambda})^{-1}\) _is entire and belongs to_ \(U\)_, then the function_ \(M\to T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\) _given by_ \(\lambda\mapsto\sigma(g(\lambda))\) _is holomorphic._
Proof.: Write \(\psi_{\lambda^{*}}\) and \(\varphi_{\lambda^{*}}\) as in the proof of Lemma 3.1, so that \(g^{*}=\psi_{\lambda^{*}}\circ f\circ\varphi_{\lambda^{*}}^{-1}\). Let \(s_{1}=\psi_{\lambda^{*}}(0)\) and \(s_{2}=\psi_{\lambda^{*}}(1)\); it is clear that \(s_{1}\) and \(s_{2}\) are singular values of \(g^{*}\), and furthermore that \(s_{1}=b_{1}^{*}\) and \(s_{2}=a_{1}^{*}+b_{2}^{*}\). In a neighbourhood of \(g^{*}\), one can "track" these singular values, obtaining an injective function \(g\mapsto(s_{1}(g),s_{2}(g))\) (see, for instance, [24, Lemma 1]). It follows that one obtains an injective function \(g\mapsto(a_{1}(g),b_{1}(g))\). After post-composing \(g\) with an affine map that sends \(s_{1}(g)\) to \(0\) and \(s_{2}(g)\) to \(1\), we obtain \(\mathcal{P}_{F}(\mu(g))\) as the element of \(T(F)\) "closest" to \(\mathcal{P}_{F}(\mu^{*})\) with the correct \(F\)-equivalence class (i.e., \(\eta(S(f)=S(g)\) for any \(\eta\) in the equivalence class \(\mathcal{P}_{F}(\mu(g))\)). Finally, for \(a_{2}\) and \(b_{2}\), we can "track" the corresponding pre-image of \(s_{1}(g)\) and \(s_{2}(g)\) in neighbourhoods of \(w_{0}=a_{2}^{*}z_{0}+b_{2}^{*}\) and \(w_{1}=a_{2}^{*}z_{1}+b_{2}^{*}\). In other words, \(a_{2}(g)\) and \(b_{2}(g)\) are the solutions of
\[\begin{cases}g(a_{2}z_{0}+b_{2})=s_{1}(g)\\ g(a_{2}z_{1}+b_{2})=s_{2}(g)\end{cases}\]
found in neighbourhoods of \(a_{2}^{*}\) and \(b_{2}^{*}\), respectively. Thus, we obtain
\[\sigma(g)=(\mathcal{P}_{F}(\mu(g)),a_{1}(g),b_{1}(g),a_{2}(g),b_{2}(g)),\]
and it is clear that is satisfies property (1). Property (2) follows from Lemma 3.2.
Finally, property (3) follows from Lemma 2.7, the hypothesis that \(\psi_{\lambda}(z)\) and \(\varphi_{\lambda}(z)\) are holomorphic functions of \(\lambda\) for each fixed \(z\in\mathbb{C}\), and Hartogs' Theorem.
Lemma 3.3 shows that pushing forward the topology of \(T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\) turns \(\Phi:T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\) into a covering map. Property (3), in particular, implies that "pushing forward" the complex structure of \(T(F)\times(\mathbb{C}^{*}\times\mathbb{C})^{2}\) through \(\Phi\) induces a well-defined complex structure on \(M_{f}\), turning \(\Phi\) into a holomorphic covering map. This completes the proof of Theorem 1.3.
## 4 Distortion sequences for wandering domains
In this section, we define distortion sequences for entire functions with simply connected wandering domains and prove Theorem 1.1.
**Definition**.: Let \(f\) be an entire function with a simply connected wandering domain \(U\). Take \(z_{0}\in U\), and let \(z_{n}:=f^{n}(z_{0})\). For \(n\geq 0\), let \(\psi_{n}\colon\mathbb{D}\to U_{n}\) be Riemann maps with \(\psi_{n}(0)=z_{n}\). For \(n\in\mathbb{N}\), define \(g_{n}\colon\mathbb{D}\to\mathbb{D}\) by \(g_{n}(z)=\psi_{n}^{-1}\circ f\circ\psi_{n-1}(z)\), so that \(g_{n}(0)=0\). The sequence \((\alpha_{n}(f,z_{0}))_{n\in\mathbb{N}}\) given by
\[\alpha_{n}(f,z_{0}):=g_{n}^{\prime}(0)\]
is called a _distortion sequence of \(f\) at \(z_{0}\)_.
Notice that the distortion sequence of \(f\) at \(z_{0}\) is not unique, but any two distortion sequences \((\alpha_{n})_{n\in\mathbb{N}}\) and \((\beta_{n})_{n\in\mathbb{N}}\) at \(z_{0}\) satisfy \(|\alpha_{n}|=|\beta_{n}|\) for all \(n\in\mathbb{N}\). By Theorem A, a distortion sequence of \(f\) at \(z_{0}\in U\) is closely related to the internal dynamics of \(U\).
As discussed in Section 1, if \(f\) is \(J\)-stable in some natural family \((f_{\lambda})_{\lambda\in M}\) with holomorphic motion \(H\), then \(f_{\lambda}\) has a simply connected wandering domain \(U_{\lambda}=H(\lambda,U)\), and \(U_{\lambda,n}=f_{\lambda}^{n}(U_{\lambda})=H(\lambda,U_{n})\). To understand the distortion sequences of \(f_{\lambda}\) at \(H(\lambda,z_{0})\), we must understand the Riemann maps \(\zeta_{\lambda,n}\) of \(U_{\lambda,n}\). We have:
**Lemma 4.1**.: _Let \(\Omega\subsetneq\mathbb{C}\) be a simply connected domain, and let \(H\colon M\times\Omega\to\mathbb{C}\) be a holomorphic motion of \(\Omega\) over some Banach analytic manifold \(M\) with basepoint \(\lambda_{0}\in M\). Let \(p\in\Omega\), and let \(\zeta\colon\mathbb{D}\to\Omega\) be a Riemann map with \(\zeta(0)=p\). Then, there exists a holomorphic motion \(\eta\colon M\times\overline{M}\times\mathbb{D}\to\Omega^{\prime}_{\lambda, \gamma}:=\eta(\lambda,\gamma)(\mathbb{D})\subset\mathbb{C}\) such that:_
1. _For all_ \((\lambda,\gamma)\in M\times\overline{M}\)_,_ \(\eta(\lambda,\gamma)(0)=0\)_;_
2. _For each_ \(\lambda\in M\)_,_ \(\eta(\lambda,\overline{\lambda})(\mathbb{D})=\mathbb{D}\)_; and_
3. _For all_ \(z\in\mathbb{D}\)_,_ \(\eta(\lambda_{0},\overline{\lambda_{0}})(z)=z\)_._
_Furthermore, for \((\lambda,\gamma)\in M\times\overline{M}\), let \(Z(\lambda,\gamma)\colon\Omega^{\prime}_{\lambda,\gamma}\to\mathbb{C}\) be given by \(Z(\lambda,\gamma)=H(\lambda,\zeta\circ\eta(\lambda,\gamma)^{-1}(z))\). Then:_
1. _For every_ \((\lambda,\gamma)\in M\times\overline{M}\)_,_ \(Z(\lambda,\gamma)\colon\Omega^{\prime}_{\lambda,\gamma}\to\Omega_{\lambda}:=H (\lambda,\Omega)\) _is a biholomorphism;_
2. _For each_ \(\lambda\in M\)_, the function_ \(\zeta_{\lambda}(z):=Z(\lambda,\overline{\lambda})(z)\) _is a Riemann map of_ \(\Omega_{\lambda}\)_;_
3. _For all_ \((\lambda,\gamma)\in M\times\overline{M}\)_,_ \(Z(\lambda,\gamma)(0)=H(\lambda,p)\)_;_
4. _For all_ \(z\in\mathbb{D}\)_,_ \(Z(\lambda,\overline{\lambda_{0}})(z)=\zeta(z)\)_; and_
5. _For all_ \((\lambda^{*},\gamma^{*},z)\in M\times\overline{M}\times\Omega^{\prime}_{ \lambda^{*},\gamma^{*}}\) _and any neighbourhood_ \(\Lambda\subset M\times\overline{M}\) _of_ \((\lambda^{*},\gamma^{*})\) _such that_ \(z\in\Omega^{\prime}_{\lambda,\gamma}\) _for_ \((\lambda,\gamma)\in\Lambda\)_, the map_ \(\Lambda\ni(\lambda,\gamma)\mapsto Z(\lambda,\gamma)(z)\in\mathbb{C}\) _is holomorphic._
Proof.: Let \(h_{\lambda}(z):=H(\lambda,z)\), so that \(h_{\lambda}\) is quasiconformal by Lemma 2.5. We start by defining a Beltrami coefficient \(\mu_{\lambda}\colon\mathbb{D}\to\mathbb{D}\) as \(\mu_{\lambda}:=(h_{\lambda}\circ\zeta)^{*}\mu_{0}\). It follows from Lemma 2.4 that \(\mu_{\lambda}(z)\) is analytic in \(\lambda\) for each \(z\in\mathbb{D}\), but we cannot integrate it directly to obtain an integrating map depending analytically on \(\lambda\).
To circumvent this problem, consider that the space \(L^{\infty}(\mathbb{C})\) can be split as \(L^{\infty}(\mathbb{C})=L^{\infty}(\mathbb{D})\oplus L^{\infty}(\mathbb{C} \setminus\overline{\mathbb{D}})\). Lemma 2.3 can therefore be restated as saying that the map
\[L^{\infty}(\mathbb{D})\times L^{\infty}(\mathbb{C}\setminus\overline{\mathbb{ D}})\times\mathbb{C}\ni(\mu,\nu,z)\mapsto\varphi^{\mu+\nu}(z)\in\mathbb{C}\]
is holomorphic. If, furthermore, we take \(\nu=\tau^{*}\mu\), where \(\tau(z)=1/\bar{z}\), then the corresponding integrating map \(\varphi^{\mu+\nu}\) is symmetric with respect to reflection across the unit circle, and therefore \(\varphi^{\mu+\nu}(\mathbb{D})=\mathbb{D}\) (see [9, Exercise 1.4.1] or [2, Lemma 14]). Its restriction to the unit disc is therefore a quasiconformal self-map of the unit disc, fixing the origin (and one) and integrating \(\mu\).
Applying this idea to our situation, we let \(\mu=\mu_{\lambda}=(h_{\lambda}\circ\zeta)^{*}\mu_{0}\) and \(\nu=\nu_{\gamma}=\tau^{*}(h_{\gamma}\circ\zeta)^{*}\mu_{0}\), where \((\lambda,\gamma)\in M\times\overline{M}\). We obtain the map \(\eta\colon M\times\overline{M}\times\mathbb{D}\to\mathbb{C}\) given by
\[\eta(\lambda,\gamma)(z):=\varphi^{\mu_{\lambda}+\nu_{\gamma}}(z);\]
it follows from the definition that \(\mu_{\lambda}\) and \(\nu_{\gamma}\) are holomorphic in \(\lambda\) and \(\gamma\) (respectively), so that we conclude from Lemma 2.3 that \(\eta\) is a holomorphic motion. Properties (1), (2), and (3) are readily established from the definition of \(\varphi^{\mu_{\lambda}+\nu_{\gamma}}\).
If we define \(Z(\lambda,\gamma)\colon\Omega^{\prime}_{\lambda,\gamma}\to\mathbb{C}\) as proposed, property (i) follows from the fact that - by construction - it preserves the standard almost complex structure \(\mu_{0}\) (see Weyl's lemma, e.g. [9, Theorem 1.14]). Properties (ii), (iii), and (iv) now follow from (2), (1), and (3), respectively. Finally, property (v) follows by the usual argument of Buff and Cheritat [10, p. 21].
**Remark**.: The question of how Riemann maps relate to holomorphic motions of the domain is a deep one; other results on the topic can be found in [33, 36, 37].
We are ready to prove Theorem 1.1.
Proof of Theorem 1.1.: We begin by taking Riemann maps \(\zeta_{n}\colon\mathbb{D}\to U_{n}\), \(n\geq 0\), with base-point \(p_{n}=f^{n}(p)\), where \(p\in U\) is such that (by hypothesis) \(H(\lambda,\cdot)\) conjugates the \(f\)-orbit of \(p\) to the \(f_{\lambda}\)-orbit of \(H(\lambda,p)\). Applying Lemma 4.1, we obtain holomorphic motions \(\eta_{n}\colon M\times\overline{M}\times\mathbb{D}\to\mathbb{C}\), \(n\geq 0\), such that \(\zeta_{\lambda,n}(z):=H\left(\lambda,\zeta_{n}\circ\eta_{n}(\lambda,\overline{ \lambda})^{-1}(z)\right)\) are Riemann maps of \(U_{\lambda,n}=f_{\lambda}^{n}(U_{\lambda})=H(\lambda,U_{n})\) normalised so that \(\zeta_{\lambda,n}(0)=H(\lambda,p_{n})\). The functions \(g_{\lambda,n}\colon\mathbb{D}\to\mathbb{D}\) defined by
\[g_{\lambda,n}(z):=\zeta_{\lambda,n}^{-1}\circ f_{\lambda}\circ\zeta_{\lambda, n-1}(z)\]
form a sequence of inner functions that is conjugate to \((f_{\lambda}|_{U_{\lambda,n}})_{n\in\mathbb{N}}\), but they are not holomorphic in \(\lambda\). They can, however, be understood as the restriction to \((\lambda,\overline{\lambda})\in M\times\overline{M}\) of the more general functions
\[G(\lambda,\gamma)(z):=Z_{n}(\lambda,\gamma)^{-1}\circ f_{\lambda}\circ Z_{n-1 }(\lambda,\gamma)(z),\]
where \(Z_{n}(\lambda,\gamma)\colon\eta_{n}(\lambda,\gamma)(\mathbb{D})\to\mathbb{C}\) are also given by Lemma 4.1. These functions are holomorphic in \(\lambda,\gamma\), and \(z\); it follows that the maps \(\alpha_{n}\colon M\times\overline{M}\to\mathbb{C}\) given by
\[\alpha_{n}(\lambda,\gamma):=G_{n}(\lambda,\gamma)^{\prime}(0)\]
are holomorphic. Furthermore, since the quasiconformal maps \((\eta_{n}(\lambda,\gamma))_{n\in\mathbb{N}}\) have dilatation uniformly bounded in terms of \(\lambda\) and \(\gamma\), it follows that the sequence \((\alpha_{n}(\lambda,\gamma))_{n\in\mathbb{N}}\) is uniformly bounded on compact subsets of \(M\times\overline{M}\) by Cauchy's integral formula. Hence, the function \(\mathrm{A}\colon M\times\overline{M}\to\ell^{\infty}\) given by
\[\mathrm{A}(\lambda,\gamma):=\left(\alpha_{n}(\lambda,\gamma)\right)_{n\in \mathbb{N}}\]
is holomorphic by Lemma 2.2. Finally, it follows from the construction of \(\mathrm{A}\) that \(\mathrm{A}(\lambda,\overline{\lambda})=(g^{\prime}_{\lambda,n}(0))_{n\in \mathbb{N}}\), completing the proof.
Corollary 1.1 follows immediately from Theorem 1.1 by noting that \(\|\mathrm{A}(\lambda,\overline{\lambda})\|_{\infty}\) varies continuously with \(\lambda\in M\), and thus if \(\|\alpha\|_{\infty}<1\) then there exists a neighbourhood \(\Lambda\subset M\) of \(\lambda_{0}\) for which \(\|\mathrm{A}(\lambda,\overline{\lambda})\|_{\infty}<1\), which implies that the corresponding wandering domain \(U_{\lambda}\) is contracting by Theorem \(\mathrm{A}\).
## 5 Perturbing Herman-type wandering domains
Theorem 1.2 follows immediately from the more precise statement.
**Theorem 5.1**.: _Let \(f\colon\mathbb{C}\to\mathbb{C}\) be a transcendental entire function with an attracting Herman-type wandering domain \(U\), and let \(M=\{\lambda=(\lambda_{1},\lambda_{2},\ldots)\in\ell^{\infty}:\|\lambda\|_{ \infty}<1\}\). Then, there exist \(z_{0}\in U\) and a natural family \((f_{\lambda})_{\lambda\in M}\) such that:_
1. \(f_{0}=f\)
_._
2. \(f\) _is_ \(J\)_-stable in_ \((f_{\lambda})_{\lambda\in M}\)_;_
3. _If_ \(\alpha\) _is a distortion sequence of_ \(f\) _at_ \(z_{0}\) _and_ \(\alpha_{n}(\lambda)\) _denotes the_ \(n\)_-th entry of the distortion map_ \(\mathrm{A}(\lambda,\overline{\lambda})\) _of_ \(\alpha\) _over_ \(M\)_, then_ \[\frac{d\alpha_{n}}{d\lambda_{n}}(0)\neq 0.\]
The proof of Theorem 5.1 occupies the rest of this section. We will use a quasiconformal surgery parametrised by the proposed Banach analytic manifold \(M\).
Let \(z_{0}\in U\) denote the lift in \(U\) of the attracting fixed point \(w_{0}\) of \(h\colon\mathbb{C}^{*}\to\mathbb{C}^{*}\), where \(\exp\circ f=h\circ\exp\), and let \(z_{n}:=f^{n}(z_{0})\). Denoting the immediate basin of attraction of \(w_{0}\) by \(V\), let \(\zeta\colon\mathbb{D}\to V\) be a Riemann map with \(\zeta(0)=w_{0}\) and \(\arg\zeta^{\prime}(0)=\arg w_{0}\), and let \(\exp_{n}^{-1}\) denote the branch of the logarithm on \(V\) mapping \(w_{0}\) to \(z_{n}\) for \(n\geq 0\). We see that \(\zeta_{n}:=\exp_{n}^{-1}\circ\zeta\) is a Riemann map of \(U_{n}\) satisfying \(\zeta_{n}(0)=z_{n}\) and \(\zeta^{\prime}_{n}(0)>0\). It follows that we have an inner function \(g\colon\mathbb{D}\to\mathbb{D}\) such that, for all \(n\in\mathbb{N}\),
\[g(z)=\zeta_{n}^{-1}\circ f\circ\zeta_{n-1}(z),\]
with \(g(0)=0\) and \(\alpha:=g^{\prime}(0)=h^{\prime}(w_{0})\). In particular, \(\alpha=(\alpha,\alpha,\ldots)_{n\in\mathbb{N}}\) is a distortion sequence for \(f\) at \(z_{0}\) (this is the only mention we will make of the distortion sequence \(\alpha\), as opposed to the multiplier \(\alpha\). We hope this will not cause confusion).
By Koenig's linearisation theorem [29, Theorem 8.2], there exist a neighbourhood \(\Delta\subset\mathbb{D}\) of the origin and a biholomorphism \(\psi\colon\Delta\to\mathbb{D}_{L}:=\{z:|z|<L\}\) such that
\[\psi\circ g(z)=\alpha\cdot\psi(z)\text{ for }z\in\Delta\]
and, furthermore, \(\psi^{\prime}(0)=1\).
Now, we wish to use the linearised coordinates \(w=\psi(z)\) to substitute the action \(w\mapsto\alpha w\) of \(g\) for the action \(w\mapsto\tilde{\alpha}_{n}w\), where \(\tilde{\alpha}_{n}:=\alpha+\rho\lambda_{n}\) and \(\rho:=|\alpha|/2\cdot\min\{|\alpha,1-\alpha\}\). This will require the following kind of quasiconformal interpolation:
**Lemma 5.1**.: _Let \(R>0\), let \(\alpha\in\mathbb{D}^{*}\), let \(r=|\alpha|R\), and let \(\rho=|\alpha|/2\cdot\min\{|\alpha|,1-|\alpha|\}\). Then, the map \(\varphi\colon\mathbb{D}\to\{z:r<|z|<R\}\to\mathbb{C}\) given by_
\[\varphi(\lambda,te^{i\theta})=\left(\frac{R-t}{R-r}\tilde{\alpha}+\frac{t-r}{ R-r}\alpha\right)te^{i\theta},\]
_where \(\tilde{\alpha}=\alpha+\rho\lambda\) and \(\lambda\in\mathbb{D}\), satisfies the following properties._
1. _For every_ \(\lambda\in\mathbb{D}\)_,_ \(\varphi(\lambda,\cdot)\) _interpolates between_ \(re^{i\theta}\mapsto\tilde{\alpha}re^{i\theta}\) _and_ \(Re^{i\theta}\mapsto\alpha Re^{i\theta}\)_._
2. _For every_ \(\lambda\in\mathbb{D}\)_, the map_ \(\varphi_{\lambda}:=\varphi(\lambda,\cdot)\) _is a quasiconformal homeomorphism._
3. _For every fixed_ \(z\)_, the map_ \(\mathbb{D}\ni\lambda\mapsto\mu(\lambda):=\varphi_{\lambda}^{*}\mu_{0}(z)\) _is analytic and_ \(|\mu(\lambda)|\leq|\lambda|\)_._
Proof.: Property (1) is clear by substituting \(t=r\) and \(t=R\) into the definition of \(\varphi\). To prove (2), we start by noting that \(\varphi_{\lambda}\) maps the circle of radius \(t=r+s(R-r)\), \(0\leq s\leq 1\), homeomorphically onto the circle of radius \(\sigma(s)=|\alpha+(1-s)\rho\lambda|(r+s(R-r))\). Because \(\rho\) is small compared to \(\alpha\), \(\sigma(s)\) is injective, and hence \(\varphi_{\lambda}\) is an orientation-preserving diffeomorphism between \(\{z:r<|z|<R\}\) and \(\{z:\tilde{\alpha}r<|z|<\alpha R\}\). It follows that it is quasiconformal, since it extends smoothly to the closure (see [9, Remark 1.6(b)]). Finally, it is clear that, for each \(z=te^{i\theta}\), the map \(\mathbb{D}\ni\lambda\mapsto\varphi_{\lambda}(z)\) is analytic, and thus by Lemma 2.4 so is its Beltrami coefficient. Property (3) now follows from the Schwarz lemma [32, Theorem 7.19].
We want to apply Lemma 5.1 to our situation: we choose \(R\in(0,L)\) (we will see further ahead, in Claim 5.1, that \(R\) must be "small"), and interpolate between \(w\mapsto\tilde{\alpha}_{n}w=(\alpha+\rho\lambda_{n})w\) on \(\{w\colon|w|=|\alpha|R\}\) and \(w\mapsto\alpha w\) on \(\{w\colon|w|=R\}\), obtaining the quasiconformal maps \(\varphi_{n}(w)=\varphi(\lambda_{n},w)\). Also by Lemma 5.1, the Beltrami coefficients \(\varphi_{n}^{\star}\mu_{0}\) satisfy \(\|\varphi_{n}^{\star}\mu_{0}\|_{\infty}\leq|\lambda_{n}|\leq|\lambda|_{\infty}\).
Define now for \(n\in\mathbb{N}\) the sets
\[\Delta_{n}:=\zeta_{n-1}\circ\psi^{-1}\left(\{w\colon|w|\leq|\alpha|R\}\right) \text{ and }A_{n}:=\zeta_{n-1}\circ\psi^{-1}\left(\{w\colon|\alpha|R<|w|<R\}\right)\]
and let \(g_{\lambda}\colon\mathbb{C}\to\mathbb{C}\) be the quasiregular map given by
\[g_{\lambda}(z):=\begin{cases}\zeta_{n}\circ\psi^{-1}\left(\tilde{\alpha}_{n} \cdot\psi\circ\zeta_{n-1}^{-1}(z)\right),&z\in\Delta_{n},\\ \zeta_{n}\circ\psi^{-1}\left(\varphi_{n}\circ\psi\circ\zeta_{n-1}^{-1}(z) \right),&z\in A_{n},\\ f(z)&\text{elsewhere}.\end{cases}\]
It is important to notice that, because the internal dynamics of \(U\) are essentially the dynamics of \(h\colon V\to V\), the domains \(A_{n}\) are "fundamental domains" for the grand orbit relation (see [27, pp. 194-195]) of \(f\). This means that the orbit of any point in \(\mathbb{C}\) intersects \(A_{n}\) for at most one value of \(n\in\mathbb{N}\), and so the Beltrami coefficient \(\mu_{\lambda}\) given by
\[\mu_{\lambda}(z):=\begin{cases}g_{\lambda}^{\star}\mu_{0}(z),&z\in A_{n},\\ (f^{j})^{\star}g_{\lambda}^{\star}\mu_{0}(z),&z\in f^{-j}(A_{n}),\\ \mu_{0}(z),&\text{elsewhere}\end{cases}\]
is \(g_{\lambda}\)-invariant, i.e. \(g_{\lambda}^{\star}\mu_{\lambda}=\mu_{\lambda}\), and satisfies \(\|\mu_{\lambda}\|_{\infty}\leq\|\lambda\|_{\infty}\). Furthermore - and this is why our construction was so strict - the map \(M\ni\lambda\mapsto\mu_{\lambda}\in\mathbb{D}\) is analytic for every \(z\in\mathbb{C}\). It follows from Lemma 2.3 that there exists a quasiconformal map \(\varphi_{\lambda}\colon\mathbb{C}\to\mathbb{C}\) fixing \(0\) and \(z_{0}\) varying analytically with \(\lambda\in M\) and such that
\[f_{\lambda}:=\varphi_{\lambda}\circ g_{\lambda}\circ\varphi_{\lambda}^{-1}\]
is an entire function.
Now, the fact that both \(\varphi_{\lambda}\) and \(g_{\lambda}\) depend analytically on \(\lambda\) does not imply that the same is true of \(f_{\lambda}\). Define the function
\[\psi_{\lambda}(z):=\begin{cases}g_{\lambda}\circ f^{-1}(z),&z\in\Delta_{n} \cup A_{n},\\ z,&\text{elsewhere};\end{cases}\]
it is immediate that \(\psi_{\lambda}\) is analytic in \(\lambda\) and that \(g_{\lambda}=\psi_{\lambda}\circ f\), so that
\[f_{\lambda}=\varphi_{\lambda}\circ\psi_{\lambda}\circ f\circ\varphi_{ \lambda}^{-1}.\]
Hence, \((f_{\lambda})_{\lambda\in M}\) defines a natural family with basepoint \(f_{0}=f\) and therefore moves analytically with \(\lambda\) (see Theorem 1.3).
Finally, it is clear that \(\varphi_{\lambda}\) conjugates \(f|_{J(f)}\) to \(f_{\lambda}|_{J(f_{\lambda})}\) by construction, and that \(H(\lambda,z):=\varphi_{\lambda}(z)\) is a holomorphic motion over \(M\), implying that \(f\) is \(J\)-stable in \((f_{\lambda})_{\lambda\in M}\). We must now show that the distortion sequence of \(f_{\lambda}\) at \(z_{\lambda}:=\varphi_{\lambda}(z_{0})\) is not constant with \(\lambda\), concluding the proof of Theorem 5.1.
To this end, we start by obtaining Riemann maps \(\zeta_{\lambda,n}\colon\mathbb{D}\to U_{\lambda,n}\) with adequate properties. We proceed as in the proof of Theorem 1.1, obtaining integrating maps \(h_{n}\colon\mathbb{D}\to\mathbb{D}\) of \(\mu_{\lambda,n}:=(\varphi_{\lambda}\circ\zeta_{n})^{\star}\mu_{0}\) (notice that \(h_{n}\) depends on \(\lambda_{m}\), \(m\geq n+1\), although that dependence is not made explicit). By construction, the functions
\[\zeta_{\lambda,n}(z)=\varphi_{\lambda}\circ\zeta_{n}\circ(h_{n})^{-1}(z)\]
are Riemann maps of \(U_{\lambda,n}\), and we can define, for \(n\in\mathbb{N}\),
\[g_{n}(\lambda,z):=\zeta_{\lambda,n}^{-1}\circ f_{\lambda}\circ\zeta_{\lambda,n-1} (z).\]
It is clear that the functions \(g_{n}\) are holomorphic, move real-analytically with \(\lambda\in M\) and are conjugated to \(f_{\lambda}|_{U_{\lambda,n}}\) by the Riemann maps \(\zeta_{\lambda,n}\). In particular, \((g_{n}^{\prime}(\lambda,0))_{n\in\mathbb{N}}\) is a distortion sequence for \(f_{\lambda}\) at \(z_{\lambda}\).
It also follows that the integrating maps \(h_{n}\) conjugate \(g_{n}\) to the quasiregular maps
\[\tilde{g}_{n}(\lambda,z)=\begin{cases}\psi^{-1}\left(\tilde{\alpha}_{n}\cdot \psi(z)\right),&z\in\psi^{-1}\left(\{w\colon|w|\leq|\alpha|R\}\right),\\ \psi^{-1}\left(\varphi_{n}\circ\psi(z)\right),&z\in\psi^{-1}\left(\{w\colon| \alpha|R<|w|\leq R\}\right).\end{cases}\]
In particular, the Beltrami coefficients \(\mu_{\lambda,n}\) are zero in the disc \(\{z\colon|z|\leq|\alpha|R/4\}\), and by the chain rule we have
\[g_{n}^{\prime}(\lambda,0)=\frac{h_{n}^{\prime}(0)}{h_{n-1}^{\prime}(0)} \tilde{\alpha}_{n},\]
where the derivatives are taken with respect to \(z\). Differentiating with respect to \(\lambda_{n}\), and recalling that \(h_{n-1}\) is the identity at \(\lambda=0\) and that \(h_{n}\) does not depend on \(\lambda_{n}\), we arrive at
\[\frac{d}{d\lambda_{n}}g_{n}^{\prime}(0,0)=\rho-|\alpha|\frac{d}{d\lambda_{n}}h _{n-1}^{\prime}(0). \tag{1}\]
It is clear that we must estimate the derivative on the right-hand side, which will require more insight into the construction of \(h_{n-1}\). We recall some elements of Ahlfors and Bers' original work on the parameter dependence of integrating maps (see [2]).
To this end, let \(\mu\in M(\mathbb{D})\) be a Beltrami coefficient with integrating map \(h:\mathbb{D}\to\mathbb{D}\) fixing \(0\) "and \(1\)", and assume that there exists \(r\in(0,1)\) such that \(\mu\) is zero in the disc \(\{z\colon|z|<r\}\). Consider the Beltrami coefficient
\[\hat{\mu}(z):=\begin{cases}\mu(z),&z\in\mathbb{D},\\ \tau^{*}\mu(z),z\in\mathbb{C}\setminus\overline{\mathbb{D}};\end{cases}\]
recall that \(\tau(z)=1/\bar{z}\). This Beltrami coefficient is symmetric around the unit circle, and conformal in neighbourhoods of both zero and infinity. It follows from [1, Theorem 1] that there exists an integrating map \(F^{\hat{\mu}}\) of \(\hat{\mu}\) that fixes the origin and is asymptotic to the identity, i.e.,
\[F^{\hat{\mu}}(z)=a_{0}+z+O(|z|^{-1}).\]
The map \(\tilde{F}^{\hat{\mu}}(z)=F^{\hat{\mu}}(z)/F^{\hat{\mu}}(1)\) is another integrating map of \(\hat{\mu}\), and fixes \(0\), \(1\), and infinity. Since the Beltrami coefficient \(\hat{\mu}\) is symmetric, it follows from uniqueness of the integrating map that \(\tilde{F}^{\hat{\mu}}\) is symmetric around the unit circle (i.e., satisfies \(\overline{\tilde{F}}^{\hat{\mu}}(z)=1/\tilde{F}^{\hat{\mu}}(1/\bar{z})\)), and hence its restriction to the unit disc is an integrating map of \(\mu\). By uniqueness, we have \(h\equiv\tilde{F}^{\hat{\mu}}|_{\mathbb{D}}\). Furthermore, by symmetry of \(\tilde{F}^{\hat{\mu}}\),
\[h^{\prime}(0)=\lim_{z\to 0}\frac{\tilde{F}^{\hat{\mu}}(z)}{z}=\lim_{z\to 0} \frac{z\overline{F^{\hat{\mu}}(1)}}{z(1+z\overline{a_{0}}+z\overline{O(|z|)}}= \overline{F^{\hat{\mu}}(1)}.\]
Thus, in order to understand how \(h^{\prime}(0)\) moves with \(\mu\), we must understand how \(F^{\hat{\mu}}(1)\) moves with \(\hat{\mu}\).
Given \(\vartheta\in L^{p}(\mathbb{C})\), \(p>2\), let us introduce the integral operators
\[P\vartheta(s)=-\frac{1}{\pi}\iint\vartheta(z)\left(\frac{1}{z-s}-\frac{1}{z} \right)\,|dz|^{2}\]
\[T\vartheta(s)=-\frac{1}{\pi}P.V.\iint\frac{\vartheta(z)}{(z-s)^{2}}\,|dz|^{2},\]
where integrals are taken over the whole plane. It was shown by Ahlfors and Bers (see [1, Chapter V]) that
\[F^{\hat{\mu}}(z)=z+P[\hat{\mu}\cdot(\vartheta+1)](z), \tag{2}\]
where
\[\vartheta=T(\hat{\mu}\cdot\vartheta)+T\hat{\mu}=T\hat{\mu}+(T\hat{\mu})^{2}+\cdots,\]
and \(p>2\) is chosen so that \(kC_{p}<1\), where \(\|\hat{\mu}\|_{\infty}\leq k<1\) and \(C_{p}\) is the \(L^{p}(\mathbb{C})\) norm of \(T\) (the fact that \(T\) extends as a linear operator to \(L^{p}(\mathbb{C})\) and that \(C_{p}\) is finite was shown earlier by Calderon and Zygmund). To tie it all together, we have:
**Lemma 5.2**.: _Let \(K\subset\mathbb{C}\) be a compact set, and let_
\[\mathcal{B}_{K}:=\{\mu\in L^{\infty}(\mathbb{C})\colon\|\mu\|_{\infty}<1, \operatorname{supp}(\mu)\subset K\}.\]
_Consider the map \(\mathcal{B}_{K}\ni\mu\mapsto F^{\mu}(1)\in\mathbb{C}\). Then, its (complex) Frechet differential at the origin is given by_
\[D[F^{0}(1)](\nu)=P\nu(1).\]
Proof.: Since \(F^{0}(z)=z\), we have by (2)
\[\lim_{\|\nu\|_{\infty}\to 0}\frac{|F^{\nu}(1)-F^{0}(1)-P\nu(1)|}{\|\nu\|_{ \infty}}=\lim_{\|\nu\|_{\infty}\to 0}\frac{|P[\nu(\vartheta_{\nu}+1)](1)-P \nu(1)|}{\|\nu\|_{\infty}},\]
where \(\vartheta_{\nu}\) satisfies \(\vartheta_{\nu}=T(\nu\vartheta_{\nu})+T\nu\). By definition, the proof is complete if we can show that the limit above is zero; we can assume, henceforth, that \(\|\nu\|_{\infty}<1/4\). First, by linearity of \(P\), we have \(P[\nu(\vartheta_{\nu}+1)](1)=P(\nu\vartheta_{\nu}+\nu)(1)=P(\nu\vartheta_{ \nu})(1)+P\nu(1)\), and so
\[\lim_{\|\nu\|_{\infty}\to 0}\frac{|F^{\nu}(1)-F^{0}(1)-P\nu(1)|}{\|\nu\|_{ \infty}}=\lim_{\|\nu\|_{\infty}\to 0}\frac{|P(\nu\vartheta_{\nu})(1)|}{\|\nu\|_{ \infty}}.\]
Now, by Holder's inequality, we obtain
\[|P(\nu\vartheta_{\nu})(1)|\leq\frac{1}{\pi}\|\nu\vartheta_{\nu}\|_{p}\left\| \frac{1}{z(z-1)}\right\|_{q},\]
where \(p>2\) is fixed and such that \(C_{p}<2\) and \(q=1/(1-1/p)\). Thus,
\[\lim_{\|\nu\|_{\infty}\to 0}\frac{|F^{\nu}(1)-F^{0}(1)-P\nu(1)|}{\|\nu\|_{ \infty}}\leq\frac{1}{\pi}\left\|\frac{1}{z(z-1)}\right\|_{q}\lim_{\|\nu\|_{ \infty}\to 0}\frac{\|\nu\vartheta_{\nu}\|_{p}}{\|\nu\|_{\infty}}.\]
Next, we note that \(\|\nu\vartheta_{\nu}\|_{p}\leq\|\nu\|_{\infty}\|\vartheta_{\nu}\|_{p}\). It also follows from the definition of \(\vartheta_{\nu}\) (see, for instance, [1, p. 55]) that
\[\|\vartheta_{\nu}\|_{p}\leq\frac{C_{p}}{1-C_{p}/4}\|\nu\|_{p};\]
we are left with
\[\lim_{\|\nu\|_{\infty}\to 0}\frac{|F^{\nu}(1)-F^{0}(1)-P\nu(1)|}{\|\nu\|_{ \infty}}\leq\frac{1}{\pi}\left\|\frac{1}{z(z-1)}\right\|_{q}\frac{C_{p}}{1-C_ {p}/4}\lim_{\|\nu\|_{\infty}\to 0}\|\nu\|_{p}.\]
Since \(\operatorname{supp}(\nu)\subset K\) by definition, we have \(\|\nu\|_{p}\leq|K|^{1/p}\|\nu\|_{\infty}\), where \(|K|\) denotes the Lebesgue area of \(K\). We are done.
As the culmination of the preceding discussion, Lemma 5.2 allows us to apply the chain rule to (1) and write the following equation:
\[\frac{d}{d\lambda_{n}}g_{n}^{\prime}(0,0)=\rho-|\alpha|P\left(\frac{d\hat{\mu}_{ \lambda,n-1}}{d\lambda_{n}}\right)(1), \tag{3}\]
where \(\hat{\mu}_{\lambda,n}=\mu_{\lambda,n}+\tau^{*}\mu_{\lambda,n}\). Calculating the right-hand side is a Herculean task, which we will carry out in Appendix A. We summarise the main steps in the following two claims.
**Claim 5.1**.: _Let \(\varphi_{n}(z):=\varphi(\lambda_{n},z)\) be the interpolating maps given by Lemma 5.1, and let \(\nu_{n-1}:=\varphi_{n}^{*}\mu_{0}\) denote their Beltrami coefficients. Then, if \(R>0\) is chosen sufficiently small,_
\[P\left(\frac{d\hat{\mu}_{\lambda,n-1}}{d\lambda_{n}}\right)(1)\approx P\left( \frac{\partial\nu_{n-1}}{\partial\lambda_{n}}+\frac{\partial(\tau^{*}\nu_{n-1 })}{\partial\overline{\lambda_{n}}}\right)(1),\]
_where \(\partial/\partial\lambda_{n}\) and \(\partial/\partial\overline{\lambda_{n}}\) denote Wirtinger derivatives._
**Claim 5.2**.: _With the notation above,_
\[P\left(\frac{\partial\nu_{n-1}}{\partial\lambda_{n}}+\frac{\partial(\tau^{*} \nu_{n-1})}{\partial\overline{\lambda_{n}}}\right)(1)=\frac{\rho}{|\alpha|}( -1+2i).\]
Combining Claims 5.1 and 5.2 with (3), we arrive at
\[\frac{d}{d\lambda_{n}}g_{n}^{\prime}(0,0)\approx 2\rho(1+i)\neq 0,\]
completing the proof of Theorem 5.1.
|
2310.03967 | Sub-token ViT Embedding via Stochastic Resonance Transformers | Vision Transformer (ViT) architectures represent images as collections of
high-dimensional vectorized tokens, each corresponding to a rectangular
non-overlapping patch. This representation trades spatial granularity for
embedding dimensionality, and results in semantically rich but spatially
coarsely quantized feature maps. In order to retrieve spatial details
beneficial to fine-grained inference tasks we propose a training-free method
inspired by "stochastic resonance". Specifically, we perform sub-token spatial
transformations to the input data, and aggregate the resulting ViT features
after applying the inverse transformation. The resulting "Stochastic Resonance
Transformer" (SRT) retains the rich semantic information of the original
representation, but grounds it on a finer-scale spatial domain, partly
mitigating the coarse effect of spatial tokenization. SRT is applicable across
any layer of any ViT architecture, consistently boosting performance on several
tasks including segmentation, classification, depth estimation, and others by
up to 14.9% without the need for any fine-tuning. | Dong Lao, Yangchao Wu, Tian Yu Liu, Alex Wong, Stefano Soatto | 2023-10-06T01:53:27Z | http://arxiv.org/abs/2310.03967v2 | # Sub-token Vit Embedding via Stochastic Resonance Transformers
###### Abstract
We discover the presence of quantization artifacts in Vision Transformers (ViTs), which arise due to the image tokenization inherent in these architectures. These artifacts result in coarsely quantized features, which negatively impact performance, especially on dense prediction tasks. We present a zero-shot method to improve how pre-trained ViTs handle spatial quantization. We propose to ensemble the features obtained from perturbing the input via sub-token spatial translations, inspired by Stochastic Resonance, a method traditionally applied to signal processing. We term our method "Stochastic Resonance Transformer" (SRT), which we show can effectively super-resolve features of pre-trained ViTs, capturing more of the local fine-grained structures that might otherwise be neglected as a result of tokenization. SRT can be applied at any layer, on any task, and does not require any fine-tuning. The advantage of the former is evident when applied to monocular depth prediction, where we show that ensembling model outputs are detrimental while applying SRT on intermediate ViT features outperforms the baseline models by an average of \(4.7\%\) and \(14.9\%\) on the RMSE and RMSE_log metrics across three different architectures. On semi-supervised video object segmentation, SRT also improves over the baseline models uniformly across all metrics, and by an average of \(2.4\%\) in F&J score. We further show that these quantization artifacts can be attenuated to some extent via self-distillation. On the unsupervised salient region segmentation, SRT improves upon the base model by an average of 2.1% on the maxF metric. Finally, despite operating purely on pixel-level features, SRT generalizes to non-dense prediction tasks such as image retrieval and object discovery, yielding consistent improvements of up to \(2.6\%\) and \(1.0\%\) respectively.
## 1 Introduction
The Transformer architecture, that takes quantized or "tokenized" inputs, seems ill fit for vision tasks, since images do not have a natural discretization scale: The same object can disappear within a pixel or fill the entire image plane depending on its distance from the camera. Yet Vision Transformers (ViTs) have been shown effective especially in semantic inference tasks, so we focus on simple methods to use pre-trained ViT models while addressing some of the shortcomings of a fixed spatial quantization of the input tokens.
The standard remedy for quantization artifacts is anti-aliasing. In one-dimensional signals such as audio, anti-aliasing refers to averaging nearby samples, or equivalently translated versions of the signal. For images, in addition to quantization of the translation group reflected in the size of the pixels, there is also the scale (semi-)group, reflected in the size of projection of objects onto the image plane. Various network architectures comprise spatial average pooling, which is translational anti-aliasing, and the notion of domain-size pooling has been introduced in local features by (Dong and Soatto, 2015). While traditionally antialiasing is performed via convolution with a fixed kernel, Stochastic Resonance simply perturbs the data with respect to an artificial distribution and then averages the results. Stochastic resonance can be thought of as a way of performing data augmentation, or adaptive quantization. This simplistic approach is well suited to pre-trained transformers since it only requires acting on inputs and outputs without modifying (or even knowing) the weights of
the model. Stochastic Resonance is used to resolve coarsely quantized signals beyond the native resolution of the sensor and has found wide applicability in cochlear implants. We apply the same mechanism to the spatial dimension, by perturbing the input signal to a ViT, which results in highly variable outcomes for the embeddings. Such outcomes are aggregated statistically to first-order (mean or median) and second-order (dispersion) to yield a sub-token embedding along with a measure of confidence or consistency, which have broad potential applications. For instance, the median outcome can be used as a seed for unsupervised object segmentation (along with motion masks), and the dispersion can be used as a weight for an adaptive regularizer.
We call the resulting methods "Stochastic Resonance Transformer" although we do not modify the transformer itself. Instead, we can leverage ViTs, pre-trained on large datasets, such as CLIP and DINO, to improve their handling of spatial quantization. This may help attenuate some of the biases of these datasets, for instance the object-centric nature of DINO, which biases the representation towards centered objects that occupy a large portion of the visual field. Stochastic Resonance Transformer can be used to super-resolve and ensemble the feature maps in ViT, outputting features that reveal some of the local fine-grained image structure. This can be done at any ViT layer, on any task, without altering network architecture or pre-trained network weights. We can optionally distill the fine-grained features back to the original ViT scale, where we notice performance increase at equal inference time and cost. Our contributions are listed as follows:
* We introduce a novel technique, namely the Stochastic Resonance Transformer (SRT), that super-resolves ViT embeddings without additional training or modifications to the ViT's forward pass.
* SRT yields a versatile visualization tool that can be applied to any layer of any pre-trained ViT model, offering valuable insights into ViT model characteristics.
* The enhanced embeddings from SRT can be seamlessly integrated into any task that utilizes ViT as a feature extractor, thereby serving as a test-time augmentation and ensemble method.
* We showcase the effectiveness of SRT by consistent improvement on a range of diverse vision tasks. Notably, it demonstrates significant enhancements on dense prediction tasks, of up to 14.9% on depth prediction, 2.4% on video object segmentation, and 2.1% on salient region segmentation.
* We provide an efficient implementation SRT, including parallelization and recursive aggregation, which reduces computational and memory requirements.
Figure 1: **High-resolution ViT features Computed by Stochastic Resonance.** Stochastic Resonance enables super-resolving tokenized ViT features during inference without the need for additional training or modifying ViT forward pass. Features visualized via Principal Component Analysis are helpful in analyzing different pre-trained ViT models: CLIP (Radford et al., 2021) captures major image components. Interestingly, although Supervised (Dosovitskiy et al., 2020) and DINO (Caron et al., 2021) are trained by different pipelines and training loss, they prioritize similar regions. This may be due to they are trained on the same dataset and thus capture similar inductive bias. In contrast, SAM (Kirillov et al., 2023) and MAE (He et al., 2022) capture local features over high-level semantics. Stochastic Resonance not only serves as a powerful visualization tool but also enhances model performance across multiple downstream tasks, as demonstrated in Sect. 3.
## 2 Method
In this section, we first describe the conceptual framework and pipeline of SRT in Sect. 2.1, then we formalize SRT in Sect. 2.2, and present details for efficient implementation in Sect 2.3
### Super-resolving ViT embeddings by Stochastic Resonance.
Given an image \(x\) with \(N\times M\) resolution, a Vision Transformer (ViT) divides it into tokens, where each token represents a \(n\times m\) rectangular patch. While tokens can technically overlap, practical ViT models often use non-overlapping tokens for efficiency due to the quadratic complexity of transformers with respect to the number of tokens. Consequently, in a certain layer of ViT, this approach yields a feature map with dimensions \(\frac{N}{n}\times\frac{M}{m}\times C\), where \(C\) is the size of the feature vector determined by architecture, downsampled from the original image and subsequently losing sub-token spatial information.
Given a trained ViT model, we aim to obtain features in a higher resolution that preserves the spatial information on a pixel level, ideally matching with the original image input. Fig. 2 illustrates our proposed pipeline of SRT. To achieve super-resolution of the features, we introduce random sub-token perturbation to the input, i.e. transforming the coordinates of the input and resampling onto a new image plane, and extract embeddings from the resulting perturbed image. We then upsample the resulting low-resolution embeddings back to the original image resolution \(N\times M\) and apply an inverse of the perturbation to the spatial coordinates of the embeddings, and through an inverse warp, align it with the original input image.
By repeating this process on different sub-token perturbations for \(t\) times, we generate a collection of embeddings, denoted by \(N\times M\times C\times t\), that are spatially aligned to the input frame of reference. We can then compute statistics, e.g. mean or median, along the \(t\) dimension. Consequently, we obtain a feature field \(N\times M\times C\), with the same spatial resolution as the original image. As showcased in Fig. 1, the embeddings are "super-resolved" to sub-token resolution. This process is similar to Stochastic Resonance, where introducing white noise to the input signal super-resolves a signal beyond the native resolution. These embeddings offer promising downstream applications, as discussed in more detail in Sect. 3.
For any task that utilizes ViT as a feature extractor, we can take an additional step by applying average pooling to again tokenize this high-resolution feature, to map it to \(\frac{N}{n}\times\frac{M}{m}\times C\). It's important to note that this feature differs from the one obtained from one single forward pass of ViT, as it is an aggregate of multiple perturbed inputs. This process can be viewed as test-time augmentation and ensemble. Since this feature is compatible with the original ViT architecture, it can be seamlessly integrated into any model at any layer, regardless of pre-training, without requiring additional learned modules or altering the forward pass. Such a pipeline improves performance on diverse computer vision tasks, as validated by Sect. 3. Next, we formalize the aforementioned pipeline.
Figure 2: **Schematic for Stochastic Resonance Transformer.** Stochastic Resonance Transformer (SRT) applies controlled perturbations via translations to input images, extracting features through Vision Transformers (ViTs). These features are then upsampled to higher resolution and aligned using the inverse of the applied perturbations. Statistical aggregation, including mean and median, along the perturbation dimension, produces fine-grained feature representations. These features find utility in visualization and can also be seamlessly integrated back into the network for enhanced performance in downstream tasks.
### Formalization
\(x\in\mathbb{R}^{N\times M\times K}\) is a \(K\)-channel signal (_e.g._, \(K=3\) for a color image.) Let \(\pi:\mathbb{R}^{N\times M}\rightarrow\mathbb{R}^{n\times m};x\mapsto x\) a projection (subsampling, \(n\ll N,m\ll M\)), with the corresponding inverse (interpolation) map \(\pi^{-1}:\mathbb{R}^{n\times m}\rightarrow\mathbb{R}^{N\times M};x\mapsto x\) be piecewise constant. This is a trivial form of subsampling and interpolation with a constant kernel.
Now, let \(\phi:\mathbb{R}^{NMK}\rightarrow\mathbb{R}^{nmC}\) a trained model with \(C\) channels of feature maps, typically \(C\gg K\). Finally, let \(T:\mathbb{R}^{N\times M}\rightarrow\mathbb{R}^{N\times M};x\mapsto Tx\) a compact and invertible transformation, for instance, e padded shift by a number of pixels smaller than \((N-n)/n\times(M-m)/m\). We consider uniform random padded shifts (translation) and consider the following measurement process:
\[y_{t}=\phi(T_{t}x) \tag{1}\]
for all random transformations \(T_{t}\). We wish to super-resolve1 the output of \(\phi\) from \(n\times m\) to \(N\times M\). We do so iteratively by averaging (or by a trainable linear transformation \(K_{t}\)) with respect to the innovation process:
Footnote 1: We call this process _immersion_ since each point \(x\) maps to \(z=\phi(x)\) but \(z\neq T^{-1}\phi(Tx)\). In other words, \(x\) is mapped injectively but not bijectively, since there are as many (vector)-values as sampled value of \(T\).
\[\epsilon_{t}=\underbrace{\pi\left(T_{t}^{-1}\pi^{-1}y_{t}\right)}_{\hat{y}_{t }}-K_{t}\phi(x) \tag{2}\]
now the super-resolved features which we call \(\hat{x}_{t}\) are obtained by an observer architecture, which implements a closed-loop dynamical system of the form:
\[\begin{cases}\hat{x}_{t+1}=\hat{x}_{t}+T_{t}^{-1}\pi^{-1}y_{t}\quad\hat{x}_{0} =0;\\ y_{t}=\phi(T_{t}x)\end{cases} \tag{3}\]
This is just a super-resolved moving average, whereby the variance of \(\hat{x}\) will decrease to a steady state (by Central Limit Theorem), following the practice of stochastic resonance. It is a mixture of upsampling/interpolation and inference-time data augmentation, or ensembling.
### Efficient Implementation
In theory, there is no limitation on the types of sub-token transformations that can be employed. We opt for a straightforward approach by applying translations (with padding) and this practice demonstrates effective results. We sample translations at the pixel level, avoiding the need for sub-pixel interpolation, which could introduce unwanted artifacts.
For a ViT utilizing token sizes of \(m\times n\), we impose a constraint on the maximum magnitude of translation, limiting it to \(\frac{m}{2}\times\frac{n}{2}\). This constraint allows the model to explore all possible token selections within the image. It is worth noting that excessive translation can be counterproductive when applied to downstream vision tasks, as it can result in information loss at the image boundaries. A detailed discussion can be found in Sect3.2, where we study the relation between perturbation level and model performance.
Furthermore, our framework facilitates computational acceleration through batching. Each ViT forward pass on different augmented images operates independently, enabling parallelization. Nevertheless, in practical vision tasks, GPU memory constraints may pose challenges. This is particularly true after upsampling the embeddings, as each \(M\times N\times C\) tensor consumes a significant amount of memory, and there could be numerous such tensors. To address this issue, we employ recursive mean computation to iteratively aggregate information. In the cases where the sole objective is to obtain the embeddings after average pooling, we can simplify the pipeline by even bypassing the upsampling step. Instead, we explicitly hard-code the average pooling of SRT in a recursive manner, by exploiting the property of translation as one averaged token can be explicitly computed by 4 neighboring tokens2. This approach enhances inference speed by a factor of 25 compared to a naive implementation of SRT and substantially reduces GPU memory usage. With ViT-16/S architecture, on DAVIS-2017 (Pont-Tuset et al., 2017) our implementation of SRT runs at 2.2 seconds per image on a Nvidia 1080Ti GPU using a perturbation level of 3 pixels, which is 14 times slower than running a single forward pass, despite SRT requires 49 independent ViT forward passes. To further speed up, one may optionally fine-tune the ViT model by distilling utilizing SRT, so that the inference time and cost remain, as demonstrated in Sect. 3.2.
Footnote 2: Our implementation will be publicly available upon paper publication.
## 3 Experiments
In this section, we first showcase the Stochastic Resonance Transformer (SRT) as a powerful visualization tool. We use it to visualize fine-grained ViT embeddings from well-established ViT models, illustrating SRT's capacity to provide insights into the properties of ViTs. We then extend the application of SRT to various downstream tasks that utilize ViT as a backbone, employing SRT in a zero-shot manner without fine-tuning the network during inference. This evaluation validates SRT's ability to enhance ViT embeddings for diverse vision tasks, and these enhanced embeddings seamlessly integrate with the original tasks, eliminating the need for any additional training. We test on three dense prediction tasks: semi-supervised video object segmentation, monocular depth estimation, and unsupervised saliency detection. To address concerns about potential bias towards dense prediction, we conduct a sanity check on image retrieval and unsupervised object detection. Remarkably, SRT consistently improves performance across all five tasks.
### Dense ViT Feature Visualization
SRT demonstrates significant promise in visualizing features of ViT models. It achieves this by enhancing feature resolution through the aggregation of perturbed inputs, thus without necessitating modifications to the ViT's forward pass, in contrast to using overlapping tokens (Amir et al., 2021) that impose substantial GPU memory demands. Notably, all visualizations in this paper are computed by a standard consumer laptop.
In Fig. 1, we present visualizations of the final layer features from five popular ViT models, all employing the ViT-B/16 architecture. We employ SRT with a turbulence level of 7 pixels to traverse non-overlapping augmented tokens extensively. The resultant high-dimensional features then go through Principal Component Analysis (PCA), with the top three components mapped to RGB channels to facilitate effective visualization. Despite sharing the same architecture, the five models exhibit distinct characteristics owing to variations in their pre-training supervision. For instance, CLIP (Radford et al., 2021) is trained through contrastive visual-language pre-training and captures major image components in the displayed examples. The Supervised model (Dosovitskiy et al., 2020) is trained for ImageNet classification, while DINO (Caron et al., 2021) undergoes contrastive learning. Interestingly, despite their diverse training regimes, both models prioritize similar image regions, potentially due to their shared dataset and resulting common inductive bias. In contrast, SAM (Kirillov et al., 2023) is trained on massive segmentation masks without semantic labels or object-centric priors, and MAE (He et al., 2022) is trained through inpainting of randomly masked image regions. Both methods emphasize local image features over high-level semantics. Our versatile visualization tool provides valuable insights into the characteristics of ViT models, offering substantial potential for practical applications.
### Semi-supervised Video Object Segmentation
We apply the Stochastic Resonance Transformer (SRT) to evaluate its performance using the DAVIS-2017 video instance segmentation benchmark (Pont-Tuset et al., 2017). We adhere to the experimen
Figure 3: **Relative improvement on DAVIS-2017 dataset vs different noise levels. There exists an inherent trade-off between perturbation level and performance gain. Smaller perturbation ranges result in weaker improvements from the baseline model due to lower input diversity, while larger perturbations are susceptible to greater information loss. 3 pixels is found to be the optimal perturbation level on both ViT-S/16 and ViT-B/16.**
tal methodology established in (Jabri et al., 2020), which employs a "semi-supervised" video object segmentation approach. Provided with the initial annotation of the objects of interest in the first frame, this method subsequently propagates the segmentation between consecutive frames. Notably, the method utilizes the last layer feature of the Vision Transformer (ViT) to guide this segmentation propagation process. Consequently, the quality of the ViT features directly impacts the final segmentation results. For optimal outcomes, these features must possess discriminative and semantically meaningful characteristics to effectively support this segmentation task.
In our study, we evaluate various Vision Transformer (ViT) models pre-trained using the DINO (Caron et al., 2021) contrastive scheme. We adopt three different architectures, specifically ViT-S/16, ViT-B/16, and ViT-S/8, each varying in their spatial patch size (16x16 pixels and 8x8 pixels). Our results, presented in Tab. 1, indicate that, on average, the Stochastic Resonance Transformer (SRT) enhances the original baseline models by a relative 2.4% in terms of the F&J score. The most significant improvement is observed with ViT-S/16, where the improvement reaches 4.1%. Importantly, these enhancements are achieved without any modifications to the model or pre-trained weights. However, we address a potential criticism of our approach, which could be seen as trivial test-time augmentation combined with feature-level ensemble. To counter this concern, we perform a heuristic by naively augmenting images by color jitter and performing feature-level ensemble, and we find that this method is, in fact, detrimental to performance. We also investigate whether inference costs induced by SRT can potentially be mitigated via distillation. Toward this goal, we attempt to learn the ensembled SRT representations using the following self-distillation objective:
\[\min_{w}\sum_{x\in\mathcal{D}}||\phi_{w}(x)-SRT(x,w_{0})||, \tag{4}\]
where \(\phi\) and (\(w_{0}\)) \(w\) are the ViT and its (original) parameters, and \(x\) the image in the target dataset. Our preliminary results on DINO-ViT/16 improve from the baseline by \(1.3\%\) after the self-distillation step. Note that Eq. equation 4 is agnostic to the task and requires no label, rendering distillation by SRT a fine-tuning scheme that adapts pre-trained ViT features to new target datasets. We leave the investigation of this to future work.
Fig. 3 illustrates the relative improvement across different perturbation levels of SRT applied to ViT-S/16 and ViT-B/16. While higher perturbation levels offer greater input diversity, they are also susceptible to information loss. We anticipate a trade-off between perturbation level and performance gain and empirically identify a perturbation level of 3 pixels as the optimal point for both.
### Monocular Depth Prediction
We extend the application of SRT to monocular depth estimation, a task that leverages ViT features from multiple ViT layers, in contrast to video object segmentation which primarily utilizes the last layer features. This choice of task highlights the versatility of SRT, showcasing its seamless compatibility with various ViT layers and architectures. Specifically, we evaluate three ViT architectures: ViT-S/14, ViT-B/14, and ViT-L/14, each equipped with two prediction heads (linear and
\begin{table}
\begin{tabular}{l|c|c|c|c|c} Method & F\&J-Mean & J-Mean & J-Recall & F-Mean & F-Recall \\ \hline DINO-ViT-S/16 & 0.617 & 0.602 & 0.740 & 0.634 & 0.764 \\ + SRT & **0.642** & **0.632** & **0.783** & **0.653** & **0.819** \\ Distill by SRT & 0.625 & 0.609 & 0.745 & 0.642 & 0.780 \\ \hline DINO-ViT-B/16 & 0.622 & 0.608 & 0.748 & 0.637 & 0.760 \\ + SRT & **0.630** & **0.623** & **0.766** & **0.637** & **0.795** \\ \hline DINO-ViT-S/8 & 0.706 & 0.675 & 0.815 & 0.737 & 0.846 \\ + SRT & **0.720** & **0.688** & **0.827** & **0.752** & **0.868** \\ \hline \hline Naive ensemble* & 0.477 & 0.455 & 0.468 & 0.500 & 0.542 \\ \end{tabular}
\end{table}
Table 1: **Results on DAVIS-2017 video object segmentation. Applying SRT improves over the baseline models uniformly over all metrics, as measured across 3 variants of ViTs trained using the DINO (Caron et al., 2021) contrastive learning objective. SRT yields significant improvements even for ViT-S/8 trained with finer patch sizes (8x8). SRT also uniformly outperforms naively augmenting images and averaging features at inference time, as denoted by \(*\). One may optionally fine-tune the original ViT model by distilling by SRT, which increases performance while inference time and cost remains.**
DPT (Ranftl et al., 2021)). We adopt the experimental settings provided by DINOV2, which offers pre-trained backbones and corresponding prediction heads. Our assessment utilizes the NYU-V2 dataset (Nathan Silberman and Fergus, 2012).
Tab. 2 presents the results, demonstrating consistent improvements over baseline methods. The most significant enhancements are observed in the RMSE and RMSE.log metrics, where we achieve relative improvements of 4.7% and 14.9% with linear heads, and 3.6% and 11.0% with DPT heads, respectively. Notably, these metrics are sensitive to outliers, highlighting the effectiveness of our approach in mitigating instability in ViT features and enhancing robustness. For completeness, we compare our method with output-space ensemble (marked as "OE"), which employs the perturbations as SRT, and aggregates the model output instead of intermediate features. We find no significant improvements, and in some cases, this method is even detrimental. This underscores the robustness of SRT's feature ensemble scheme.
### Unsupervised Salient Region Segmentation
We employ SRT in conjunction with TokenCut (Wang et al., 2022) for unsupervised salient region segmentation tasks. TokenCut is a graph-based approach that applies the Normalized Cut algorithm to partition ViT features into two distinct clusters, representing the salient foreground and the background. The key challenge here is to ensure that the features are not only discriminative across clusters but also consistent within clusters. We execute TokenCut without any post-processing, such as Conditional Random Fields (CRF), to assess the raw quality of ViT embeddings.
Our method is evaluated on three datasets: ECSSD (Shi et al., 2015), DUTS (Wang et al., 2017), and DUT-OMRON (Yang et al., 2013). Due to computational limitations, we exclusively use the ViT-S/16 architecture, as TokenCut demands substantial memory resources when applied to a larger number of ViT tokens. The results are in Tab. 3. Across all three datasets, we consistently observe improvements, with an average increase in the maxF metric of 2.1%. Notably, this improvement is constrained by the model architecture, as TokenCut operates at the coarse segmentation level of ViT tokens. Given that SRT has the capability to provide finer-grained features (directly applying TokenCut at this level is computationally impractical due to its \(O(n^{2})\) complexity, where \(n\) is the number of tokens), we anticipate that future research will develop feasible methods to leverage SRT's high-resolution embeddings effectively.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} Backbone & Head & Method & RMSE & RMSE.log & AbsRel & SqRel & a1 & a2 & a3 \\ \hline \hline \multirow{4}{*}{DINOV2-ViT-B/14} & \multirow{4}{*}{Linear} & Baseline & 0.396 & 0.135 & 0.100 & 0.061 & 0.903 & 0.983 & 0.996 \\ & & +OE & 0.376 & 0.121 & 0.093 & 0.059 & 0.918 & 0.984 & 0.997 \\ & & +SRT & **0.349** & **0.108** & **0.087** & **0.052** & **0.930** & **0.990** & **0.998** \\ \cline{2-10} & \multirow{4}{*}{DPT} & Baseline & 0.323 & 0.109 & 0.074 & 0.044 & 0.941 & 0.987 & 0.996 \\ & & +OE & 0.314 & 0.101 & **0.073** & **0.043** & 0.944 & 0.988 & **0.997** \\ & & +SRT & **0.305** & **0.096** & **0.073** & **0.043** & **0.945** & **0.989** & **0.997** \\ \hline \hline \multirow{4}{*}{DINOV2-ViT-S/14} & \multirow{4}{*}{Linear} & Baseline & 0.471 & 0.162 & 0.125 & **0.084** & 0.853 & 0.972 & 0.994 \\ & & +OE & 0.486 & 0.153 & 0.126 & 0.095 & 0.858 & 0.974 & 0.994 \\ & & +SRT & **0.457** & **0.140** & **0.118** & 0.085 & **0.876** & **0.980** & **0.996** \\ \cline{2-10} & \multirow{4}{*}{DPT} & Baseline & 0.336 & 0.114 & 0.080 & **0.048** & 0.933 & 0.986 & 0.996 \\ & & +OE & 0.347 & 0.114 & 0.080 & 0.053 & 0.932 & 0.985 & 0.996 \\ & & +SRT & **0.334** & **0.104** & 0.080 & 0.051 & **0.935** & **0.988** & 0.996 \\ \hline \hline \multirow{4}{*}{DINOV2-ViT-L/14} & \multirow{4}{*}{DPT} & Baseline & 0.373 & 0.127 & 0.093 & 0.054 & 0.916 & 0.985 & 0.996 \\ & & +OE & 0.401 & 0.131 & 0.097 & 0.062 & 0.908 & 0.982 & 0.996 \\ \cline{1-1} & & +SRT & **0.365** & **0.113** & **0.090** & **0.053** & **0.924** & **0.989** & **0.998** \\ \cline{1-1} \cline{2-10} & \multirow{4}{*}{DPT} & Baseline & 0.311 & 0.105 & **0.070** & 0.042 & 0.946 & 0.988 & **0.997** \\ \cline{1-1} & & +OE & 0.317 & 0.103 & 0.072 & 0.044 & 0.942 & 0.987 & 0.996 \\ \cline{1-1} & & +SRT & **0.297** & **0.092** & **0.070** & **0.041** & **0.947** & **0.991** & **0.997** \\ \end{tabular}
\end{table}
Table 2: **Results on NYU-V2 depth prediction. Our method can be extended without modification to improve intermediate features to yield improved performance on the downstream depth prediction tasks. While ensembling of outputs (OE) can often be detrimental to performance, applying SRT on the features from pre-trained backbones (inputs to prediction heads) can improve performance over baselines by \(4.7\%\) and \(14.9\%\) on RMSE and RMSE.log, using the linear prediction head and by \(3.6\%\) and \(11.0\%\) using the DPT head.**
### Sanity Check: Image Retrieval and Unsupervised Object Detection
Incorporating SRT into vision tasks involves updating ViT features based on fine-tuned super-resolved features. However, questions remain regarding whether the observed enhancements in dense prediction tasks are solely due to increased awareness of semantic boundaries in images and whether this method extends to non-dense prediction tasks. To address these concerns, we conducted a sanity check using image retrieval and unsupervised object detection tasks.
For image retrieval, we applied a nearest-neighbor protocol following DINO, using the Oxford image retrieval datasets (Radenovic et al., 2018). Notably, our base model's pre-training poses a substantial domain gap to the target datasets. Although image retrieval primarily requires distinctive image-level features (rather than pixel-level), aiming to match images to queries at a higher level, SRT exhibited effective adaptation, resulting in a notable 2.6% relative improvement in accuracy.
Regarding unsupervised object detection, we utilized TokenCut and the VOC07 dataset (Everingham et al., 2010). Unsupervised object detection focuses on region-level discriminative features, utilizing bounding boxes instead of segmentation masks for object delineation. Despite this, we observed a 1.0% relative improvement in the detection rate, reaffirming that SRT does not compromise the information within the original ViT embeddings. These results serve as a critical validation of SRT's capacity to enhance ViT features without distorting their original information.
## 4 Discussion
### Related Work
**Stochastic Resonance** was proposed by Benzi et al. (1981) and first applied in climate dynamics (Benzi et al., 1982) and later in signal processing (Wellens et al., 2003; Kosko & Mitaim, 2001; Chen et al., 2007) and acoustics (Shu-Yao et al., 2016; Wang et al., 2014). It is used to super-resolve a signal beyond the native resolution of the sensor by adding white noise. We use the same principle to adapt generic ViT image features for dense prediction downstream tasks. By randomly translating the images, (i.e. introducing noise in the spatial dimension), we are able to super-resolve ViT image features to be smoother and better suited for dense prediction tasks. We leave extensions to other groups or semi-groups of transformations (_e.g._, scale or domain size) to future work.
**Test-time data augmentation** involves aggregating model predictions from augmented test input to a final prediction. Applying such a technique increases the robustness of predictions (Prakash et al., 2018; Song et al., 2017; Cohen et al., 2019) and prediction accuracy (Krizhevsky et al., 2012; Szegedy et al., 2015; Simonyan & Zisserman, 2014; Jin et al., 2018; Matsunaga et al., 2017) in a variety of tasks. It can also used to estimate the uncertainty of the model (Matsunaga et al., 2017; Smith & Gal, 2018; Ayhan & Berens, 2022; Wang et al., 2019). Different transformations are
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c|c|c} Task & Metric & Baseline & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \multirow{2}{*}{Image Retrieval} & Medium & 34.6 & 34.8 & 35.1 & 35.2 & 35.3 & 35.3 & **35.5** \\ & Hard & 13.0 & 13.1 & 13.2 & 13.1 & 13.2 & **13.2** & 13.1 \\ \hline Object Discovery & Detection Rate & 68.7 & 68.9 & 68.9 & 69.2 & **69.4** & 69.3 & 69.2 \\ \end{tabular}
\end{table}
Table 4: **Results on Image Retrieval and Object Discovery. Despite operating purely on pixel-level features, SRT generalizes to non-dense prediction tasks operating on higher-level region/image features to yield equal or better performance compared to the standard inference baseline. On the Oxford image retrieval task, SRT on the DINO-ViT-16 model yields up to \(2.6\%\) relative improvement from the baseline model. On the unsupervised object detection task, SRT improves the detection rate by up to \(1.0\%\).**
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} Datasets & \multicolumn{2}{c|}{ECSSD} & \multicolumn{2}{c|}{DUTS} & \multicolumn{2}{c|}{DUTS-OMRON} \\ \hline \hline Feature Extractor & maxF & IoU & Acc. & maxF & IoU & Acc. & maxF & IoU & Acc. \\ \hline DINO ViT-S/16 & 80.3 & 71.2 & 91.8 & 67.2 & 57.6 & 90.3 & 60.0 & 53.3 & **88.0** \\ +SRT & **82.4** & **71.7** & **92.1** & **68.8** & **58.3** & **90.6** & **60.8** & **53.7** & **88.0** \\ \hline \end{tabular}
\end{table}
Table 3: **Results on unsupervised salient region segmentation. Despite architectural constraints, our method yields consistent improvement on all three datasets, with an average increase of 2.1% in the maxF metric.**
used to target different potential tasks: Pang et al. (2019) linearly combines the testing input and a randomly sampled clean image to generate classification prediction. Isensee et al. (2018) performs flipping and rotation to the test input image to generate 64 different inputs and finally aggregates the outputs to perform medical image segmentation. Krizhevsky et al. (2012) crops the images into smaller patches and ensemble the results for classification. Self-ensembling (Bousselham et al., 2021) is also closely related to our work. Bousselham et al. (2021) leverages multi-scale features fed into multiple independent decoders to create an ensemble within a single model. Liu et al. (2018) ensembles outputs from networks augmented with random noise layers to improve model robustness. SRT aggregates information via adding spatial translations as noise and can be considered a general case of test-time augmentation, where ensembling is performed at the feature level at intermediate layers of a ViT, instead of the output level, which is novel.
**Knowledge distillation** aims to transfer the knowledge from stronger teacher models to weaker student models to improve their performance. Hinton et al. (2015) trains a student model to mimic the soft output distribution of the teacher model. Romero et al. (2014) extends this idea to distill the intermediate features learned by the teacher models. We consider a form of self-distillation (Zhang et al., 2019), in which the student itself is used as the teacher to improve learned representations.
**Dense ViT feature extractor.** Our work is closely related to (Amir et al., 2021),which employs ViT for generating dense visual descriptors. To extract these fine-grained features, (Amir et al., 2021) reduce the stride allowing for overlapping tokens and performing a single forward pass with ViT. In SRT, instead of a single pass, we conduct multiple passes using perturbed inputs. This modification reduces the computational complexity from quadratic to linear.
### Conclusion, Limitation, and Future Work
Averaging sub-token perturbations functions as a versatile visualization tool for ViT embeddings and offers test-time augmentation and ensemble capabilities usable with any ViT architecture. We found the technique especially useful for dense prediction tasks, where even coarse-scale obejts have fine-scale occlusion boundaries that must be resolved.
In contrast to most test-time augmentation and ensemble methods that operate at the output level, and require task-specific designs, our method can be applied to any layer within any architecture and any task that utilizes ViT as a feature extractor, eliminating the need for modifications to the forward pass, such as increasing token numbers, which would significantly increase GPU usage. Instead, SRT accomplishes this by conducting multiple inferences, avoiding the quadratic complexity related to the number of ViT tokens, and enabling super-resolution of ViT embeddings to match the input resolution. This approach is amenable to parallelization through batching, ensuring computational efficiency. Furthermore, the method allows ensembling without memory-intensive resizing of all embeddings to full resolution, which can be executed recursively, as described in Sect. 2.3. Practical implementations demonstrate efficient execution on even laptop GPUs.
With all that said, Stochastic Resonant Transformers have several limitations. The basic embodiment increases inference cost and latency, as each perturbed image necessitates a ViT forward pass. To address this, one viable approach is knowledge distillation, which involves fine-tuning the network to mimic the feature-level output of SRT. We illustrate this process using the DAVIS-2017 training dataset with DINO-ViT-S/16, achieving improved results (F&J-score \(0.617\Rightarrow 0.625\)) without the use of labels or operations on the validation set. This establishes a label-free, task-free transductive fine-tuning scheme that adapts pre-trained ViT features to new target datasets. Future directions may involve refining the distillation process on different layers and exploring the integration of Stochastic Resonance directly into ViT architectures.
Additionally, our findings underscores the segmentation capabilities of ViT embeddings, aligning with recent claims in the field (Caron et al., 2021; Yu et al., 2023). Super-resolved features exhibit sharp, fine-grained semantically relevant boundaries. Furthermore, our method leverages the convexity properties (Park and Kim, 2022) of ViT embeddings, enabling convex combinations (average pooling as a special case) during inference, resulting in improvements across various tasks. It is worth noting that Stochastic Resonance is not limited to ViT architectures nor to spatial quantization. It can be applied to architectures like CNN as well as to other forms of quantization, such as sale or domain size. However, our emphasis in this paper is on ViTs that mostly use non-overlapping tokens, making them particularly suited to our approach. |
2306.03059 | Influence of the finite transverse size of the accelerating region on
the relativistic feedback | Terrestrial gamma-ray flashes (TGFs) are commonly associated with
relativistic runaway electron avalanches (RREAs). However, research shows that
a single RREA cannot generate observable TGF fluxes. In an attempt to settle
this issue the relativistic feedback mechanism was suggested by Joseph Dwyer.
The Monte Carlo simulations and analytical descriptions of this type of
feedback assume that acceleration region has a large size in a plane
perpendicular to the direction of the electric field. Therefore these studies
do not take into account transverse diffusion of RREAs starting points and the
finite transverse size of the accelerating region. Electrons created by the
feedback outside this region can not be accelerated by the electric field and
form an avalanche, which may lead to a decrease in the total number of new
avalanches and an increase in the requirements for self-sustaining RREA
production by the feedback. In this article the transverse propagation of
avalanches starting points was described using a modified two-dimensional
diffusion equation. A correction to the criterion for self-sustaining
production of RREAs was obtained. Monte Carlo simulation was also performed to
calculate the correction for the feedback coefficient. | Alexander Sedelnikov, Egor Stadnichuk, Eduard Kim, Oraz Anuaruly, Daria Zemlianskaya | 2023-06-05T17:30:47Z | http://arxiv.org/abs/2306.03059v1 | # Influence of the finite transverse size of the accelerating region on the relativistic feedback
###### Abstract
Terrestrial gamma-ray flashes (TGFs) are commonly associated with relativistic runaway electron avalanches (RREAs). However, research shows that a single RREA cannot generate observable TGF fluxes. In an attempt to settle this issue the relativistic feedback mechanism was suggested by Joseph Dwyer. The Monte Carlo simulations and analytical descriptions of this type of feedback assume that acceleration region has a large size in a plane perpendicular to the direction of the electric field. Therefore these studies do not take into account transverse diffusion of RREAs starting points and the finite transverse size of the accelerating region. Electrons created by the feedback outside this region can not be accelerated by the electric field and form an avalanche, which may lead to a decrease in the total number of new avalanches and an increase in the requirements for self-sustaining RREA production by the feedback. In this article the transverse propagation of avalanches starting points was described using a modified two-dimensional diffusion equation. A correction to the criterion for self-sustaining production of RREAs was obtained. Monte Carlo simulation was also performed to calculate the correction for the feedback coefficient.
## I Keypoints
* The influence of a finite transverse size of the accelerating region on RREAs dynamics was analytically considered.
* Taking diffusion into account does not make a significant contribution to the feedback coefficient when transverse size of accelerating region much larger than its longitudinal one.
* For a transverse size comparable to the longitudinal one, diffusion leads to a significant decrease in the feedback coefficient and a reduction in the number of avalanches in new generations.
## II Introduction
One of the unsolved problems in atmospheric physics is the construction of a model of Terrestrial Gamma-ray Flashes (TGFs). This phenomenon was first discovered in 1994 by the Compton Gamma Ray Observatory [1] and was observed by other space gamma-ray observatories such as Fermi [2], which were created for observing gamma radiation from astrophysical sources. It has been established that avalanches of relativistic runaway electron avalanches (RREAs) accelerated by an electric field in thunderclouds might be the sources of these flashes [3].
The force acting on relativistic electrons from the accelerating field may exceed losses in interactions with air molecules [4]. Such electrons are called runaway electrons. They produce new runaway electrons, leading to the formation of an avalanche [5; 6]. The dynamics of avalanches is significantly influenced by feedback mechanisms studied by Joseph Dwyer [7]. As a result of feedback, the number of electrons is growing and new avalanches can be created.
There are positron and gamma feedback mechanisms. Positron feedback can be described as follows. An avalanche of runaway electrons radiates gamma-rays. These gamma-rays are generated by electron-positron pairs and positrons begin to propagate in the direction opposite to the direction of the electron avalanche. Then the positrons ionize the air at the beginning of the region, which leads to the formation of new RREA. The gamma feedback mechanism is based on the fact that radiated gamma rays are scattered backward and then, at the beginning of the accelerating region, generate new runaway electrons via Compton scattering or the photoelectric effect. Number of avalanches in new generation divided by number of avalanches in previous generation is called feedback coefficient. In other words, it is a probability that the RREA reproduces itself through relativistic feedback. If this coefficient is greater than one, avalanches multiplication becomes self-sustainable. This regime is called infinite feedback because if the electric field strength is constant, the process will never end, and the number of relativistic particles will be unlimited. It is extremely important to understand the conditions under which this regime
occurs, because exactly infinite feedback greatly increases the number of runaway electrons, and therefore, can be used to describe the high flux of photons in TGF. [8; 9].
For relatively low electric field strength, positron feedback dominates over gamma-ray feedback [10], which motivates to study the positron feedback mechanism in the first place. The criterion of infinite positron feedback was derived in the paper [11]. This work does not take into account diffusion of RREAs and the finite transverse size of the accelerating region. RREAs of new generations resulting from feedback may be created outside the acceleration region, which may lead to a decrease in the number of new avalanches and an increase in the requirements for self-sustaining RREA production by the feedback.
Balloon measurements showed that there are regions in the thundercloud where electric field exceeds threshold field (the minimum field required for the formation of runaway electrons) [12; 13]. However, in view of the peculiarities of balloon measurements, it is difficult to draw conclusions about the transverse size of the overthreshold regions, which may affect infinite feedback. Therefore, it is necessary to estimate the dependence of the criterion of the infinite feedback regime on transverse size. To settle this issue in this article, an correction to the feedback coefficient was derived. Furthermore, using Geant4 simulation, its value was estimated.
## III Avalanches diffusion
To describe the diffusion of RREAs via relativistic feedback, a simple diffusion equation can be used with an additional term responsible for the multiplication of avalanches. We will consider an accelerating region with a uniform electric field directed along the z-axis. The avalanche coordinate will be associated with the coordinate of the primary electron from which it was formed, and for simplicity only the two-dimensional distribution of the avalanche will be considered, without taking into account the z coordinate of the beginning of the avalanche. Therefore, the equation describing the concentration of avalanches is
\[\frac{\partial n_{a}}{\partial t}-\triangledown\cdot(D\cdot\triangledown n_{a })-\frac{n_{a}}{\tau^{*}}=n_{s} \tag{1}\]
where \(n_{a}\) is a two-dimensional distribution of the RREA starting points. \(n_{s}\) is a source function, and \(\tau^{*}=\frac{\tau}{ln\Gamma}\). The last term describes an increase in the number of avalanches due to the feedback factor \(\Gamma\) during the time of formation of a new generation \(\tau\). For only one initial avalanche in the center of coordinate system \(n_{s}(x,y,t)=\delta(x)\delta(y)\delta(t)\) the solution of equation (1) will be Green's function, which in the polar coordinate system is:
\[n_{a}=\frac{1}{4\pi tD}\Gamma^{\frac{t}{D}}\exp\left(-\frac{r^{2}}{4Dt}\right) \Theta(t) \tag{2}\]
This solution describes the distribution of avalanches in an accelerating region that has no boundaries in the transverse plane. Otherwise, the solution must satisfy additional boundary conditions. Since electrons born outside the accelerating region do not cause new avalanche creation, it would be logical to consider the concentration of avalanches at the edge equal to zero. Therefore the dynamics of avalanches in acceleration region with finite transverse size might be described with the following equation with boundary and initial conditions
\[\begin{cases}\frac{\partial n_{a}}{\partial t}-\triangledown\cdot(D\cdot \triangledown n_{a})-\frac{n_{a}}{\tau^{*}}=0\\ n_{a}(t,r)|_{r=R}=0\\ n_{a}(t,r)|_{t=0}=n_{I}\end{cases} \tag{3}\]
\(n_{I}\) denotes the initial distribution of avalanches. The solution can be found in the following form:
\[n_{a}=\sum_{k=1}^{\infty}T_{k}(t)X_{k}(r) \tag{4}\]
Solving the problem on the coordinate-dependent part we can obtain that \(X_{k}(r)=J_{0}(\frac{\mu_{k}}{R})\), where \(J_{0}\) is the Bessel function and \(\mu_{k}\) its zero.
Substitution a series into the equation (3) gives the equation for the time-dependent component
\[T_{k}(t)+T_{k}(t)\left(\frac{\mu_{k}^{2}D}{R^{2}}-\frac{1}{\tau^{*}}\right)=0 \tag{5}\]
The initial value of \(T_{k}(t)\) can be found from the initial conditions on the avalanches distribution and the orthogonality of the Bessel functions:
\[T_{k}(0)=\frac{\int_{0}^{R}n_{I}(r)J_{0}(\frac{\mu_{k}r}{R})rdr}{\int_{0}^{R} J_{0}^{2}(\frac{\mu_{k}r}{R})rdr} \tag{6}\]
Let \(A_{k}=T_{k}(0)\int_{0}^{R}J_{0}(\frac{\mu_{k}r}{R})dr\). Then the total number of avalanches in the acceleration region at time \(t\) is
\[N(t)=\sum_{k=1}^{\infty}A_{k}\cdot\exp\left(-\left(\frac{\mu_{k}}{R}\right)^{ 2}Dt\right)\exp\left(\frac{t}{\tau*}\right) \tag{7}\]
Assuming that \(t=i\cdot\tau\), which corresponds to the time when \(i\) generations of avalanches were born, the number of avalanches in \(i\) generation is
\[N_{i}=\sum_{k=1}^{\infty}A_{k}\cdot\Gamma^{i}\alpha_{k}^{i} \tag{8}\]
where \(\alpha_{k}=\exp(-\left(\frac{\mu_{k}}{R}\right)^{2}D\tau)\)
Correction to the criterion
The finite transverse size may lead to a decrease in the number of new avalanches and an increase in the requirements for self-sustaining RREA production by feedback. The obtained equation (8) gives us the opportunity to find a correction to the criterion of self-sustaining RREA production. If \(\Gamma\cdot\alpha_{1}\) is at least slightly more than 1 then other terms in the series decrease over time. Therefore it is enough to consider only the first term
\[N_{i}\approx A_{1}\cdot\Gamma^{i}\alpha_{1}^{i}=A_{1}\cdot\Gamma_{d}^{i} \tag{9}\]
where \(\Gamma_{d}=\Gamma\cdot e^{-\left(\frac{2.405}{R}\right)^{2}D\tau}\).
Thus, the criterion of self-sustaining production with correction is
\[\Gamma_{d}\geq 1 \tag{10}\]
## V Geant4 simulation
The Monte-Carlo simulation was performed via Geant4 to obtain value of the diffusion coefficient, which describes how strongly RREA starting points propagate in the transverse direction due to relativistic feedback. In this tool a physical list can be chosen to determine the processes that need to be taken into account. In this work G4EmStandardPhysics option4 physics list was chosen, which contains all necessary processes, including Compton scattering, photoelectric effect and pair production for energies characteristic for RREA processes [14; 15]. The simulation took place in a cylindrical volume, which was filled with air with a density corresponding to an altitude of 10 km above sea level, \(\rho=0.41kg/m^{3}\). For the longitudinal size of the area and the field strength, the following values were chosen: \(E=300kV/m\) and \(L=445.7\) m. Such parameters according to [11] provide a feedback coefficient \(\Gamma=1\). The easiest way is to get the diffusion coefficient from a distribution (2). Therefore, a sufficiently large radius of the cylinder \(R=2000m\) was chosen. With such a radius, boundary conditions could be neglected, since avalanches launched from the center of the cylinder will not create new generations beyond the edge of the accelerating region.
The analytical consideration of diffusion does not take into account the coordinate of the beginning of the avalanche along the z-axis and describes only the two - dimensional distribution of avalanches in the transverse plane. Therefore, in order to correctly estimate the diffusion coefficient, it is necessary to consider an average avalanche. The simplest way to do this in simulation is to get the avalanche distribution and then launch avalanches with this distribution from the center of the cylinder. This is equivalent to launching average avalanches from the center. Thus, simulation was divided into four steps.
The first two steps are needed to obtain the distribution of the avalanches along the z-axis. First, the seed electrons were launched at the beginning of the electric field region. These electrons form RREAs, which radiate gamma-rays via bremsstrahlung. The energy, position, and momentum of the positrons generated by these gamma rays were recorded. After that, in the second step of the simulation, recorded positrons were launched. Electrons generated by these positrons were recorded. Thus, the obtained distribution is shown in Figure 1. It is worth noting that the form of the distribution is consistent with the distribution obtained analytically in the paper [11].
In the third step obtained electrons were launched with recorded \(z\) coordinate and \(x=0\), \(y=0\). Finally, the fourth step consisted of launching recorded positrons from the third step. Electrons generated by these positrons were recorded. The transverse distribution of these electrons is shown in Figure 2. It differs from the second-generation avalanche distribution by a constant factor \(p_{e}\) - the probability that an electron turns around and runs away. This distribution was fitted according to formula (2). The value of the diffusion coefficient multiplied by the time between generations was obtained from the fit: \(D\tau\approx 836\ m^{2}\). This gives us the opportunity to evaluate the correction to the criterion of infinite feedback, therefore, the first four alpha coefficients are shown in Figure 3. As it was mentioned in section 4, \(\alpha_{1}\) in the first term of the series corresponds to the correction to the feedback coefficient \(\Gamma\). Moreover, all other alpha coefficients decrease with the growth of their number.
## VI
Fig. 1: RREA starting point distribution, which was obtained from electrons distribution along z-axis after second step of Monte-Carlo simulation. These electrons were created via feedback mechanism. It can be seen that shape of distribution is similar to analytically obtained one. [11].
## VI Discussion
The expressions obtained in Sections 3 and 4 for the feedback coefficient allow us to calculate the field strength required for the occurrence of the infinite feedback regime. Moreover, these expressions allow us to search for the minimum transverse size of the accelerating region. It can be calculated from the corrected feedback coefficient \(\Gamma_{d}\) and the rate of the TGF signal growth. This can be used, for example, to determine the size of regions with a uniform electric field. However, the presence of the diffusion coefficient and the average time \(\tau\) in these expressions complicates the application of the expressions obtained. These parameters depend on the electric field strength and the length of the accelerating region and therefore must be calculated or measured. An attempt to solve this problem was made in Section 5, where the diffusion coefficient multiplied by time \(\tau\) was evaluated by Monte Carlo simulation. Thus, we can use the obtained formulas only for a cell with a uniform electric field \(E=300\) kV/m and a longitudinal size \(L=445.7\) m.
The parameters of the accelerating region used in the simulation impose strong restrictions on the use of the coefficients obtained. However, as was done in [11], it can be assumed that the diffusion coefficient is determined only by the path length of the photon. This length, according to [16], almost does not depend on the strength of the electric field. The average time between two avalanche generations is determined by the average velocity of electrons and positrons. In the work [17] it was shown that the average velocity changes slightly with the electric field strength, but remains close to 0.89c. Therefore, in the first assumption the value of \(D\tau\) does not depend on the field strength. Using this assumption, for an accelerating region with a length of 445.7 m, it is possible to obtain the dependence of the intensity, at which infinite feedback occurs, on the transverse size (Figure 4). For the transverse size comparable with longitudinal one (\(R\approx 223\) m) \(E=300.2\) kV/m. However,
Figure 4: The dependence of the electric field strength at which an infinite feedback occurs on the transverse size of accelerating region (red line). The bold black line indicates the field strength necessary for positive feedback without taking into account the correction to the criterion. For the transverse size comparable with longitudinal one (\(R\approx 223\) m) \(E=300.2\) kV/m.
Figure 3: Dependence of the \(\alpha_{i}\) from transverse size of accelerating region \(R\) for the first four members of series (8). \(\alpha_{i}\) corresponds to correction factor to the feedback coefficient for each member of the series, which describes the number of avalanches. As it was said in Section 5, \(\alpha_{1}\) gives the greatest contribution to the sum of the series. For the transverse size comparable with longitudinal one (\(R\approx 223\) m) \(\alpha_{1}=0.91\), \(\alpha_{2}=0.60\)
Figure 2: The logarithm of the transverse density of electrons recorded after fourth step of Monte-Carlo simulation. These electrons were created via feedback mechanism and they are the starting points of secondary avalanches. Diamonds - simulation result. Solid line - fit according to formula (2). It can be seen from the Figure that the simulation results are in good agreement with the analytical model.
this result was obtained only for an area with a certain longitudinal size L. Therefore, study of the influence of the size of accelerating region on transversal diffusion is of great interest. Moreover, it is important to note that our model can only be applicable when the transverse size of the region is larger than the transverse size of one avalanche. Otherwise, the feedback can be significantly affected by the diffusion of electrons in an avalanche, which was studied in [18].
## VII Conclusion
The main purpose of this work was to describe the transverse propagation of avalanches as a result of feedback. This was done using a two-dimensional diffusion equation. From the solution of the equation, a correction to the minimal conditions for the self-sustaining feedback was obtained.
It was shown that the effect of the limited transverse size of the accelerating region on the feedback coefficient is small if the transverse size of the region is much larger than the longitudinal one. It becomes necessary to take diffusion into account when the transverse size becomes smaller than the longitudinal one. In this case, the correction to the electric field required for infinite feedback becomes extremely significant. This result was obtained for the accelerating region with a longitudinal size \(L=445.7\) m.
The aim of further research will be to analytically or via Monte-Carlo simulation obtain the dependence of the diffusion coefficient and the average time of new generation formation on the electric field strength and the longitudinal length of the accelerating region.
|
2310.17912 | Restoring the Broken Covenant Between Compilers and Deep Learning
Accelerators | Deep learning accelerators address the computational demands of Deep Neural
Networks (DNNs), departing from the traditional Von Neumann execution model.
They leverage specialized hardware to align with the application domain's
structure. Compilers for these accelerators face distinct challenges compared
to those for general-purpose processors. These challenges include exposing and
managing more micro-architectural features, handling software-managed scratch
pads for on-chip storage, explicitly managing data movement, and matching DNN
layers with varying hardware capabilities. These complexities necessitate a new
approach to compiler design, as traditional compilers mainly focused on
generating fine-grained instruction sequences while abstracting
micro-architecture details. This paper introduces the Architecture Covenant
Graph (ACG), an abstract representation of an architectural structure's
components and their programmable capabilities. By enabling the compiler to
work with the ACG, it allows for adaptable compilation workflows when making
changes to accelerator design, reducing the need for a complete compiler
redevelopment. Codelets, which express DNN operation functionality and evolve
into execution mappings on the ACG, are key to this process. The Covenant
compiler efficiently targets diverse deep learning accelerators, achieving
93.8% performance compared to state-of-the-art, hand-tuned DNN layer
implementations when compiling 14 DNN layers from various models on two
different architectures. | Sean Kinzer, Soroush Ghodrati, Rohan Mahapatra, Byung Hoon Ahn, Edwin Mascarenhas, Xiaolong Li, Janarbek Matai, Liang Zhang, Hadi Esmaeilzadeh | 2023-10-27T06:14:45Z | http://arxiv.org/abs/2310.17912v1 | # Restoring the Broken Covenant
###### Abstract.
Deep Learning has taken the IT industry by a storm and it is set to penetrate various disciplines and markets from healthcare [1] and social networking [2] to gaming [3] and entertainment [4]. However, its success is predicated upon the availability of responsive execution platforms as DNNs require massive computations [5, 6]. In fact, they have become the driving use-case for the development and adoption of domain-specific accelerators [7, 8]. These new architectures require state-of-the-art and highly optimized compilers prior to even delivering the expected performance and efficiency gains.
Four challenges make compilers for these designs different than ones targeting conventional general-purpose processors. First, these architectures no longer adhere [9, 10, 11] to the long-held abstraction of fine-grained Instruction-Set Architectures (ISAs) and Von Neumann model [12]. Therefore, more micro-architectural features and components need to be exposed, considered, and controlled by the compiler. For instance, an accelerator compute block typically exposes coarser-grained operations than an ALU that performs an individual addition instruction (e.g., a systolic array performs a whole matrix operation). Second, the on-chip storage is no longer a limited set of registers backed by a hardware-managed cache, it is usually several software-managed scratch pads with various access semantics. Third, the interconnection for on-chip data movement and off-chip loads/stores needs to be handled explicitly by compiler, with the appropriate granularity (e.g., tile size). Finally, the compiler needs to match the rather coarse-grained operations (layers) of a DNN to the varying granularity of computation and storage, supported by the hardware.
To address these challenges, one option is to take a software-centric approach [13, 14] by restricting architectures to a standardized ISA that makes the compiler reusable. However, this approach limits the architectural innovations, offering orders of magnitude benefits through novel, specialized execution semantics. Another option is to take a hardware-centric approach [15, 10] that demands re-implementing new compiler stacks and optimization infrastructure for each accelerator.
Alternatively, this paper takes on these challenges and sets out to simultaneously enable the reuse of the compiler while reducing constraints on the architecture. To achieve these conflicting objectives, we propose a compilation framework that integrates a novel architecture abstraction, dubbed the Architecture Covenant Graph (ACG), in its workflow. Traditional ISAs focus on what fine-grained instructions an architecture can perform, which typically operate with a register file and an opaque caching system. In contrast, ACG is defined to capture accelerator structure as a graph consisting of compute units, on-chip/off-chip memory components, and interconnect; each of which contains operational capabilities as attributes.
To leverage this abstraction, we also devise the Codelets construct which is combined with the ACG to enable our Covenant compiler to target varying types of DNN accelerators. While the ACG abstracts the architecture, the Codelets represent the DNN operations and are gradually transformed into accelerator execution schedules by the Covenant compiler. Each Codelet represents DNN layers as sequences of operation on input variables to produce output variables. During compilation, Codelets are transformed into schedules by mapping operands to ACG memory locations, and assigning operations to ACG compute nodes capable of execution. Once operands and operations are mapped to ACG nodes, the dependence between operations and their operands is translated to explicit data transfer operations over the ACG interconnect.
While a number of inspiring works have achieved multi-target compilation and scheduling support [14, 15, 16], the requirements for efficiently scheduling and generating code for new targets can be prohibitive. For scheduling to new targets, frameworks such as TVM [14] use flexible, target-agnostic scheduling directives to optimize DNN kernels, but each DNN operator schedule requires hand-tuning by architectural experts. As an alternative to manually scheduling, FlexTensor [17] and then Ansor [18] proposed novel search
algorithms capable of identifying optimal schedules using stochastic search and performance measurements, but are inflexible to scheduling on new and unique architectures. Our approach provides the opportunity to adapt these scheduling techniques to new targets and further prune the space of transformations by coalescing architectural characteristics into the schedule. For code generation, both TVM as well as Glow (Glow, 2017) intentionally exclude architectural details because they rely on LLVM (Glow, 2017) as a backend, which is not designed for accelerators. Instead, we provide a malleable technique for code generation which is particularly important for architectures ordinarily using intrinsics which cause powerful instructions to be treated as black boxes by compilers. To support additional accelerators as compilation targets, these frameworks require creation of custom compiler backends and hand-tuned schedule templates.
_Our Covenant compiler is intended for an orthogonal purpose: **automatically** scheduling and generating code for accelerators without a unified, LLVM-like backend by integrating an architecture abstraction into the compiler. This is one of the main contributions of the work, in addition to the **ACG** and **Codelet** constructs which enable Covenant to target varying deep learning accelerators._
To demonstrate the flexibility of the Covenant compiler, we implement ACGs for Qualcomm(r) Hexagon(tm) Vector eXtensions (HVX) 1(Hayay, 2017) and an open-source DNN accelerator (Hay, 2017). For both architectures, we compile 14 different DNN layers across a combination of transformer networks, neural recommender systems, and convolutional neural networks and measure their performance. When targeting HVX, our automated approach achieves 93.8% of the performance of TVM's hand-scheduled templates that rely on manually constructed intrinsic. Compared to manually-implemented DNN layers in Qualcomm Technologies' nnlib which include hand-written assembly kernels, we achieve 31.3% improved performance. Besides HVX, we target an open-source DNN accelerator (Hay, 2017) that shows the flexibility of the Covenant compiler to target an entirely different architecture. The Covenant compiler achieves 182\(\times\) performance improvement using the DNN accelerator compared to a CPU baseline. Finally, we illustrate the feasibility of implementing optimizations using the Covenant compiler by combining different optimization passes and achieve 128.6\(\times\) speedup compared to unoptimized code on HVX. These results show the flexibility of the Covenant compiler for automating scheduling and code generation for accelerators while maintaining high-performance by integrating architecture characteristics through the ACG and Codelets.
Footnote 1: Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries.
## 2. The Missing Link: An Abstraction
for Micro-Architecture Specification
General-purpose processors are based on the von Neumann model of computing, which is a sequential fine-grained instruction execution model. Hence, compilation for these processors is made possible by exposing the Instruction Set Architecture (ISA), through which the micro-architecture is completely abstracted away. However, rapidly emerging DNN accelerators tend to use other models of computing, such as systolic in the case of Google's TPU (Gool, 2017; Gool, 2017) and dataflow in the case of Microsoft's Brainwave (Gool, 2017; Gool, 2017). These DNN accelerators typically consist of one or more arrays of Processing Elements, that can only perform simple arithmetic operations in parallel, as shown by the example in Figure 0(b). Typically these PEs are connected to one another as well as on-chip memory through software-managed interconnection and memory hierarchy. As such, compilation for these novel architectures requires exposing more of the microarchitectural details. In contrast, general-purpose processors use a pipeline to enable a number of ALUs to carry out instructions, as illustrated in Figure 0(a). They are also connected to the memory through a hardware-managed cache. The fundamental differences in the compute model and the organization of the architecture and microarchitecture between DNN accelerators and general-purpose processor clearly demonstrates the need for a new abstraction for compilation. However, exposing every detail makes compiler design an adhoc practice for each specific microarchitecture that is not reusable. Instead, DNN accelerator abstractions are required to enable a reusable compilation
Figure 1. Comparison of microarchitectures for general purpose processors and DNN accelerators.
workflow for different types of DNN accelerator microarchitecture. The following section details such an abstraction, called the Architecture Covenant Graph (ACG).
### Architecture Covenant Graphs
We describe ACG and it's design rationale by using a running example of a generic DNN accelerator microarchitecture and the corresponding ACG in Figure 2. Figure 1(a) visualizes the microarchitecture for an example DNN accelerator, including its off-chip memory and software-managed, on-chip memory in purple, programmable interconnection in green, and three functional units with unique capabilities in yellow.
To capture the data movement properties on DNN accelerator microarchitectures such as these with programmable interconnection and different types of functional unit for mapping operations to, the ACG is modeled as directed graph as shown in Figure 1(b). Each ACG is comprised of vertices representing _programmable memory_ and compute components, and unidirectional or bidirectional edges connecting each component. The edges represent the programmable interconnection between the on-chip/off-chip memory and compute components. Edge direction is required for enabling a reusable compilation workflow, as it informs the scheduler of valid paths for moving data, such as DRAM to Global Scratchpad, and Global Scratchpad to one of the functional units in Figure 1(a). In this example, for each of the interconnections, data can be read and written to and from each of the functional units (Scalar Unit, Vector Unit, Matrix Unit) and the Global Scratchpad, as well as between DRAM and the Global Scratchpad. In some other cases multiple on-chip scratchpads are used for different purposes, with some scratchpads being restricted to sending data to functional units and unable to receive data, in which case the edge between them would be unidirectional. This is unlike traditional memory and caches in general-purpose processors, which are passive and generally do not execute instructions to send or receive data. Instead, the processor core is the active party that loads or stores data to these passive structures. In contrast, the compiler for a DNN accelerator often needs to generate instructions for memory components since they are active elements. Figure 1(a) also includes three separate programmable functional units capable of executing separate operations in parallel: a Matrix Unit, a Vector Unit, and a Scalar Unit. By using a directed graph, the compiler is capable of identifying opportunities for parallelizing operations across multiple functional units by selecting graph nodes which support the operation and have a common memory node predecessor.
However, scheduling the data movement also requires validation that the size of data being transferred is able to fit on the intermediate storage nodes such as Global Scratchpad in Figure 1(a) because there is no hardware-controlled data caching mechanisms. To distinguish between the attributes necessary for computation versus memory, ACG uses compute nodes shown in yellow and memory nodes shown in purple, each of which have distinct sets of attributes for informing the compilation process. In addition, lower-level architecture components shown in gray in Figure 1(a) such as the Controller for sending control signals to other components and Operation Schedule Memory for storing operations are not included in the ACG. _With the primary goal being machine code generation, the ACG excludes components such as these and other low-level details because they are not programmable, and do not provide relevant information to the compiler._
Lastly, the unique properties across different DNN accelerator microarchitectures and even across their functional units binds them closely to the binary codes necessary for execution. As an example, the Matrix Unit in Figure 1(a) uses dataflow execution to perform matrix multiplication, only requiring data availability from the scratchpad to execute instead of relying on an explicit matrix multiplication binary code. In addition, making data available may require a sequence of binary codes for separately sending each input data to the functional unit rather than a single, dedicated code. Therefore, the ACG specifies binary code for a DNN accelerator as mnemonics without trying them to a specific computation model or set of execution semantics. This allows the code generation implementation to be reused across different architectures by because sequences of mnemonics can be defined for a finite set of operations which are delineated by the ACG nodes and edges.
Below, the specification used for mnemonics is detailed, in addition to the different attributes of compute nodes, memory nodes, and edges, included in the ACG.
#### 2.1.1. Memory
Software-controlled memory suchasGlobal Scratchpad in Figure 3 allows the compiler greater control
Figure 2. Example DNN accelerator architecture and its ACG.
over data reuse, but also require explicit mnemonics for operations such as off-chip data transfers. To ensure valid memory accesses during execution, the access semantics and capacity of the memory needs to be known to the compiler so that memory request addresses are properly aligned. As shown in Figure 3, each memory node includes attributes defining their access semantics, such as the data_width for specifying the smallest unit of accessible data in bits is 32. The data_width is particularly important for DNN accelerators supporting mixed precision operations, because certain functional units might support 16-bit operations but read data from a memory component storing each 16-bit operand with a 32-bit data_width. In this case, the compiler must ensure that 16-bit operands are stored in 32-bit chunks rather than packed together, and the increased memory consumption for the operation is accounted for.
In addition, memory nodes use the banks attribute to denote the number of banks in a memory component, as it is common for on-chip memory to include varying number of banks for reading and writing multiple data in parallel to/from coarse grained functional units such as the Vector Unit or Matrix Unit shown in Figure 2b. Each bank is capable of sending data_width bits of data at a time, which means data_width\(\times\) banks determines the size of an addressable element in the memory component. When selecting the sizes of on-chip data to be stored and operated on, the compiler must use this information to ensure the size is correctly aligned in memory by requiring data chunks are divisible by the size of an addressable element. As an example, the Global Scratchpad has 32\(\times\)7 = 224 bit entries, which must be taken into account when generating mnemonics requiring address calculation based on immediate values.
Finally, compilers can exploit large on-chip scratchpads for data reuse by partitioning operands into chunks called tiles which are stored on-chip and operated on together. To validate tile selection, the compiler must ensure all being stored at once is within the capacity of the on-chip memory being used. For the Global Scratchpad, the capacity can be calculated by multiplying the depth attribute by the addressable element size: 224\(\times\)1024 = 229,376 bits, or 28,672 bytes.
#### 2.1.2. Interconnection
When it comes to generating code for transferring data on and off a DNN accelerator, a single binary code is often insufficient due to the limitations imposed by the interconnection between on and off-chip memory. For instance, DRAM in Figure 2a is connected to Global Scratchpad through a bidirectional Off-Chip Memory Interface interconnection. This link constrains the amount of data in bits transferred at a time, or may allow for more than one unit of Global Scratchpad data to be moved in a given cycle. In the running example, a directed edge called Mem. Interface represents these types of interconnection which represent the supported programmable communication capabilities. The directed ACG edges use the bandwidth attribute to define the amount of data in bits capable of being transmitted in a single operation as shown in Figure 4. This information is crucial during compilation, as DNN accelerators provide more flexible data transfer capabilities allowing variable-sized data transfers between on and off-chip memory. Furthermore, the bandwidth determines the number of memory requests the compiler needs to generate for this specific edge to load a tile of data.
In addition, the Interconnection is capable of sending data to multiple parallel programmable functional units, with unique data processing properties, therefore requiring different bandwidths. To distinguish between the different data transmission properties between a single interconnection and different DNN accelerator components, the ACG includes several Interconnection edges with unique bandwidths.
This is particularly important when making scheduling decisions, because a coarse-grained operation could be mapped to multiple parallel functional units with hardware-controlled synchronization, but the interconnection between on-chip storage and certain functional units may require multiple data transfer operations for sending the necessary operand data.
Figure 4. ACG interconnection examples
Figure 5. ACG compute nodes and their capabilities.
Figure 3. ACG storage nodes and their capabilities.
#### 2.1.3. Compute
DNN accelerators provide unique opportunities for mapping coarse-grained operations to a variety of compute resources, as shown in Figure 1(a), which includes a 2x2 Matrix Unit, 2-wide Vector Unit, and Scalar Unit. The ACG represents programmable functional units as compute nodes, using an attribute called capabilities to describe the coarse-grained functionality supported by the corresponding architecture component. Figure 5 demonstrates capabilities for each compute in Figure 1(b), with each compute node supporting varying granularity, datatype, and number of operations. Capabilities encapsulate opportunities for parallelism and type-specific operations in the compute nodes. They are defined by an operation name and an ordered list of datatype and element size pairs for each input/output operand associated with the operation. A subset of the supported operations are defined in Table 1. For example, the Vector Unit supports the ADD operation, taking two input operands with two, 16-bit integer elements and generates two 16-bit integer output elements. The sizes and datatypes are included in the operand specification because the specialized compute units in DNN accelerators are capable of performing different operations in parallel on varying kinds of operand datatypes and sizes.
By defining capabilities this way, the compiler can identify which functional units can execute parts of the DNN layer in parallel by matching the operation name and data type to the functional unit capability, and then breaking the coarse grained DNN operation into the same size chunks. To demonstrate this, consider an element-wise addition operation specified as: (i16,3)-ADD((i16,3),(i16,3)). The compiler can decompose this operation into a scalar addition on the Scalar Unit and a vector addition on the Vector Unit, as both compute nodes support 16-bit integer addition at different granularities. To ensure the full range of layer mappings are exposed to the compiler, capabilities defined for a compute do not require one-to-one mappings between capability primitive and a functional unit's mnemonic. As an example, the Vector Unit might not directly support a multiply-accumulate (MAC) operation using a single mnemonic, but it can be defined as a capability by breaking it into separate multiply-add mnemonics.
#### 2.1.4. Mnemonics
Thus far, the ACG has described the structure and programmability of a DNN accelerator, but the mnemonics which can be composed to carry out the data movement and operations represented in the ACG must also be defined to generate executable binaries. In contrast to general-purpose processors which use instructions and assume a von Neumann compute model, different DNN accelerators depend on different compute models with unique machine code semantics. Thus, machine codes for a DNN accelerator are defined as mnemonics stored as an ACG attribute for generating sequences of mnemonic code. Each individual mnemonic is defined with customizeable attributes for analysis/optimization, and an ordered list of named fields with fixed bitwidth, which can represent either a constant number or an enumerated set of values. As an example, a mnemonic with the ADD id is defined above and includes 4 fields, where src1,src2 and dst are constant fields representing the starting addresses in scratchpad, and target is an enumerated value field which can be set to one of SCALAR or VECTOR depending on the functional unit to be executed on. By generically defining mnemonics in this manner, they can be used for different types of DNN accelerators without binding the mnemonics to certain execution semantics.
## 3. Codelets
To flexibly enable DNN compilation to domain-specific architectures, a programming abstraction must capture both the semantics of an operation, and the relevant microarchitecture components it is tied to. In addition, a construct for enumerating the different types of macro-mnemonics required for code generation must be designed. Covenant uses compute kernel abstractions called Codelets which are complimentary to the ACG to enable compilation. Codelets are defined prior to compilation as a sequence of operations on parametric-shaped operands called _surrogates_ which represent DNN layers. Initially, the operations do not include architecture-specific details, which enables their portability across different architectures. However, during Covenant compilation each Codelet is gradually transformed to define the sequence and mapping of operations based on an ACG. Codelets are declared using a DNNlayer name, and are composed of compute,
\begin{table}
\begin{tabular}{l l|l} \hline
**Type** & **Name** & **Description** \\ \cline{2-3} & **RELU** & **Recified Linear Unit function.** \\ \cline{2-3} & **SIGMOD** & **Logistic sigmoid.** \\ \cline{2-3} & **TANN** & **Hyperbolic tangent function.** \\ \hline \multirow{3}{*}{**Binary**} & **ADDSUB** & **Element-wise addition and subtraction.** \\ \cline{2-3} & **MA-DIVDIV** & **Element-wise multiplication and division.** \\ \cline{2-3} & **MA-DIV** & **Element-wise maximum/minimum.** \\ \cline{2-3} & **MA-DIV** & **Matrix-matrix multiplication.** \\ \hline \multirow{2}{*}{**Ternary**} & **MAC** & **Multiply-accumulate.** \\ \cline{2-3} & **GEM** & **General Matrix Multiply.** \\ \hline \end{tabular}
\end{table}
Table 1. Subset of supported capabilities and their definitions.
Figure 6. Example of a mnemonic definition.
transfer, and loop operations which represent operations on tensors, movement of data, and repetition of operations. As an example, an add Codelet can be defined as shown in Figure (a)a. To integrate ACG information into the compiler, Codelet operations rely on different types of surrogate variables to encompass both data attributes (e.g., datatype, shape) and ACG location throughout execution.
### Surrogate Variables
The process for generating valid sequences and mappings of operations on data is inherently tied to accelerator attributes. As such, surrogate variables in Codelets encode shape information, datatype and ACG location:
\(\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{ \texttt{\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{\texttt{ \texttt{ \texttt{ { \; }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} \} \}} \}\}\}\}\}\\}\ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\
The ADD compute operation in Figure (b)b can be mapped to either PE1 or PE2, as both include the supported capability but with different granularities. The compiler automatically determines the mapping by selecting the ACG node capable of performing the most operations at a time, PE2 in this case, because it can do two element-wise additions at a time. Once selected, the Covenant compiler updates the location field in the compute operation with the target compute node.
_Transfer operations_ After mapping each compute operations in the Codelet, the compiler orchestrates data movement across programmable memory by adding explicit transfer operation to the Codelet. transfer operations are used in Codelets to represent data movements across a DNN accelerator, explicitly codifying scheduling of data locations as required by domain-specific compilers. In Figure (b)b, this can be accomplished by first finding the shortest path between MEM1 and PE2 and adding transfer operations for each operand and each edge. transfer operations are specified with a source, destination, and the transfer size in number of source elements in each dimension of the source operation. The semantics of a transfer operation can differ depending on the type of source and destination used, which accommodates the different operations required by programmable memory:
``` dst=transfer(src[i],MEM1",[b]);#Movedatafrom
A key component of compilation is tiling, and loops offer a familiar construct to apply tiling transformations, as loop splitting is a commonly technique for tiling on general purpose processors. When tiling a Codelet, loops are split into groups according to the number of transfers required to send data from it's source to the compute destination. Splitting a loop operations consists of factoring the number of iterations into an outer loop operations with a step size corresponding to how large a tile will be, and an inner loop operations which has a range equivalent to the outer loops step size.
### Macro-Mnemonics
For DNN accelerators, generating valid mnemonics is conditioned on which functional unit is being used because the same operation can generate different mnemonics depending on the compute unit. The Covenant compiler ensures valid code generation by combining operations types, operand types, and their ACG node attributes to select pre-defined functions for generating sequences of mnemonics called macro-mnemonics. These macro-mnemonics use the Codelet operation type it is matched with, the ACG node(s) it is associated with, and the containing Codelet as contextual input to define mnemonics generation. Each mnemonic is generated by populating it's fields with either statically determined values or by using attributes of Codelet operations.
## 4. Enabling Optimization
Compiler optimizations for DNNs have been shown to enable significant performance improvements when targeting CPU and GPU (Krizhevsky et al., 2017; Krizhevsky et al., 2017). However, state-of-the-art, stochastic optimization techniques which rely on performance measurements to guide the algorithm cannot be applied to domain-specific architectures without the ability to generate executable code. When targeting domain-specific architectures, optimizations have the potential to offer even greater benefits due to their tendency to provide more compute and memory resources with greater programmability.
The Covenant compiler is intended to be a community driven project which improves as a crowd-sourced effort. Therefore, the initial goal is to provide a framework which _enables_ new and existing optimization algorithms to be constructed and benefit from the use of the ACG rather than introducing new optimizations. Below, we discuss how existing optimizations can be transformed by integrating architectural details into the algorithm.
Codelet optimization passes are defined as functions which take an individual Codelet and the ACG as arguments, and return the transformed Codelet. Providing the ACG as an argument allows for retrieval of certain characteristics embedded in the ACG because Codelet operations only contain the ACG node names as attributes. The attributes embedded in ACG nodes bolster common optimizations used in traditional compilers which might otherwise be applied using a heuristic.
``` functionValidTiling(\(codelet\), \(ACG\)) let\(V\leftarrow\emptyset\)//Valid tilings let\(f\)=loopiterationfactorsforloop\({}_{i}\in codelet\) let\(P\)=factor\({}_{i}\in f_{i}\) for\(each\)\(p\in P\)dolet\(constraint\_sat\)=\(True\)//Keep track of data stored on each ACG storage node let\(storage\)[\(s\)]=0forachstorage\(nodes\in\)ACG forach\(n\)=transfer\(codelet\)dolet\(p\_t\)=\(\{factor\in p|factor\in t.offset\}\) let\(xf\)_\(x\)\(\_\)\(\_\)\(\_\)\(\_\)\(\_\)\(\_\)\(\_\)\(\_\)\(\_\)\(\_\)\(\_\_\)\(\_\_\)\(\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\\_\_\\_\\_\\\\\\\\_\
_Loop Unrolling_ Loop unrolling is another common optimization, used to reduce the impact of loop branching as well as memory overheads by transferring more data in a loop body and unrolling computations for the transferred data. Using the ACG, opportunities for loop unrolling can be identified by iterating over transfer operations, and checking the bandwidth of the edge connecting the source and destination ACG nodes. If the transfer size is less than the edge bandwidth, more data can be transferred in a single operation if the destination ACG node does not reach maximum capacity.
_Parallelization_ A central focus of domain-specific architectures for DNNs is providing as many opportunities for parallelism as possible. Taking advantage of the parallelism in such architectures is not always trivial, especially when heterogeneous compute cores are available with varying capabilities. However, the ACG simplifies parallelism identification through the capability attributes in compute nodes, which can be combined to form the equivalent operation and therefore be performed in parallel. As an example, Figure (a)a
demonstrates an ReLU operation on two 25-element tensors targeting an ACG composed of two compute nodes: a "SIMD" capable of performing four ReLU operations at a time, and a processing engine ("PE") capable of a scalar ReLU. The two tensors do not factor perfectly into the SIMD, which demonstrates a common difficulty when trying to identify parallelization. One solution to this problem is to introduce additional operations which pad zeros to each of the tensors so that they can be tiled correctly. Instead, the ACG can be used to identify other compute units, namely "PE", capable of being combined with the SIMD to form tiles of parallel operations.
_Mnemonic Packing_ For micro-architectures using Very Long Instruction Words (VLIW), multiple instructions can be performed in parallel by "packing" them together. In these architectures, compiler needs to identify independent instructions and pack them to increase utilization. With Codelet operation being coarsely defined to represent multiple mnemonics, forming mnemonic packets is performed during code generation as an optimization. To form mnemonic packets, ACG resource availability as well as mnemonic dependencies need to be identified. To enable packing, the ACG node executing each mnemonic is identified to determine the resources consumed by a VLIW packet and integrated into the packing algorithm. For dependency analysis, the field attributes in mnemonics can be annotated with read and write semantics to identify sequences of independent mnemonics. Using both of these mnemonic attributes allows packet formation by iterating over mnemonics for a Codelet and creating a packet with a single mnemonic occupying the tgt resource. Then, independent mnemonics capable of execution in the current packet, determined by the consumed ACG resources and available VLIW slots, can be hoisted into the current packet.
## 5. Evaluation
### Experimental Setup
_Benchmarks._ To evaluate covenant, we use a comprehensive set of benchmarks from various classes of DNNs including image classification (InceptionV3 (Vinyals et al., 2015), ResNet-50 (Vinyals et al., 2015)), object detection (MobileNetV3 (Vinyals et al., 2015)), natural language processing (BERT-Large (Dosov et al., 2016)), and neural recommendation systems (DLRM (Krizhevsky et al., 2014)). For image classification and object detection networks we choose convolutional and fully-connected layers that make up the majority of these networks. For BERT-Large, we benchmark the GEMM layers and the self-attention layer of an encoder block. Finally, for DLRM, we benchmark its Multi-Layer Perceptron (MLP) fully-connected layers. Table 2 lists all the DNN layer benchmarks with their layer dimensions. N shows the sequence length for language models and the batch size for other DNNs. IW/IH and OW/OH show the input/output width/height dimensions of the layers, while KW/KH parameters specify the weight kernel dimensions. Note that for FC/GEMM layers, these dimensions are equal to one. Finally, IC/OC column show the number of input/output channels for the DNN layers. We use INT8 precision for inputs/weights and INT32 precision for outputs of layers.
#### 5.1.1. Target Architectures
To demonstrate the flexibility of Covenant for multi-target compilation, we use two distinct architectures: HVX (Krizhevsky et al., 2014) and an open-source DNN accelerator (Dosov et al., 2016). For each architecture, we use the ACG DSL for Covenant compilation.
_DNNWeaver._ DNNWeaver is a parameterizable DNN architecture which consists of two main compute components: (1) a systolic array connected to several on-chip buffers that is capable of executing various-sized convolution and GEMM layers, and (2) a SIMD vector processing array connected to two vector scratchpad memories that supports the remainder of layers (e.g. pooling, activation, normalization, etc.) As shown in
Figure 9. Parallelization Identification Using an ACG.
Figure 10a, the systolic array is connected to four separate on-chip buffers by unidirectional edges, where it reads input activation data, model weights, and bias data from IBUF, WBUF, and BBUF buffers, respectively, and writes output to OBUF buffer. Additionally, the SIMD array is connected to OBUF with a unidirectional edge to consume its data, while is also connected with bidirectional edges to two scratchpad memories (VMEM1/2) to read/write vectors during computations. Table 3 lists a subset of attributes for DNNWeaver ACG nodes. _HVX_. HVXis a Digital Signal Processor (DSP) created by Qualcomm Technologies, which uses VLIW instructions and includes vector extensions. Figure 10b illustrates the ACG of HVX. As shown, HVX incorporates a scalar core that supports a diverse set of scalar instructions (Add, Mul, MAC, Max, etc.) and uses a General Register File (GRF) for operand read/write. In addition to the scalar core, HVXincludes an additional SIMD processor for vector instructions, with 32 lanes each capable of performing a range of four 8-bit operations to a single 32-bit operation per lane. As opposed to DNNWeaver where all the data transactions between DRAM and on-chip buffers are governed explicitly by the instructions, HVXis similar to typical general-purpose processors and incorporates hardware-managed caching mechanisms for loading/storing from/to DRAM, which is why DRAM is not included in the ACG.
#### 5.1.2. Performance Measurements and Comparisons
_Baseline frameworks._ We compare the performance of our proposed Covenant compilation framework to two other frameworks: nnlib and TVM. For all three comparison points we use the HVXas the target architecture and evaluate the performance of the compiled benchmark DNN layers. For benchmark baselines, we use optimized PyTorch [(28)] implementations on an Intel Xeon E7 CPU. nnlib[(29)] is a framework developed by Qualcomm Technologies for offloading DNN operations to HVX, comprising a set of hand-tuned C code and assembly kernels for DNN layers. TVM [(14)] is a compilation stack that supports a variety of general-purpose architectures as well as its own custom accelerator, VTA [(30)]. To compile to HVXusing TVM, we used hand-tuned schedules and manually defined intrinsics developed by Qualcomm Technologies' experts, which generate optimized LLVM code for HVX.
Figure 11. Performance comparison of various frameworks.
Figure 10. Visualization of ACGs for DNNWeaver and HVX. Blue nodes are memory and green nodes are compute.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Model** & **Layer** & **N** & **H/** & **OH/** & **OH/** & **KH/** & **KG/OC** & **\#** \\ & & **IM** & & & & & **Heads** \\ \hline \multirow{8}{*}{**BERT-LG**} & GEMM1 & 384 & 1 & 1 & 1 & 1024/1024 & - \\ & GEMM2 & 384 & 1 & 1 & 1 & 406/1024 & - \\ \cline{2-6} & ATN1- & 384 & 1 & 1 & 1 & 1024/64 & 16 \\ & GEMM & & & & & & \\ \cline{2-6} & ATN2- & 384 & 1 & 1 & 1 & 64/384 & 16 \\ & GEMM & & & & & & \\ \cline{2-6} & ATN3- & 384 & 1 & 1 & 1 & 384/64 & 16 \\ & GEMM & & & & & & \\ \cline{2-6} & ATM- & 384 & 1 & 1 & 1 & 1024/1024 & 1 \\ & GEMM & & & & & & \\ \cline{2-6} & FG1 & 1 & 1 & 1 & 1 & 745/367 & - \\ \cline{2-6} & FG2 & 1 & 1 & 1 & 1 & 367/512 & - \\ \cline{2-6} & FG3 & 1 & 1 & 1 & 1 & 512/526 & - \\ & FG4 & 1 & 1 & 1 & 1 & 256/1 & - \\ \cline{2-6} & FG1 & 1 & 1 & 1 & 1 & 2048/1000 & - \\ \cline{2-6} & CONV1 & 1 & 299 & 149 & 3 & 3/32 & - \\ \cline{2-6} & CONV1 & 1 & 224 & 112 & 3 & 3/16 & - \\ \cline{2-6} & CONV2 & 1 & 112 & 112 & 3 & 16/64 & - \\ \cline{2-6} & FG1 & 1 & 1 & 1 & 1 & 512/1000 & - \\ \cline{2-6} & CONV1 & 1 & 224 & 112 & 7 & 3/64 & - \\ \cline{2-6} & CONV2 & 1 & 224 & 56 & 3 & 64/64 & - \\ \hline \end{tabular}
\end{table}
Table 2. DNN Layer Benchmarks.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Architecture** & **ACG Node** & **Example Attributes** \\ \hline \multirow{8}{*}{**DNNWeaver**} & Systolic Array & (32,64)–GEMM((6,64),(8,64,64),(32,64)) \\ & SIMD & (32,64)–SDUB([(32,64),(32,64)]) \\ & (32,64)–SIGMOD/TANH([(32,64)]) \\ & VMEM1/2 & data width–32; banks-64; depth=2048 \\ & BUF & data width–36; banks–64; depth=2048 \\ & WBUF & data width–38; banks–4096; depth=4096 \\ & OBUF & data width–32; banks–64; depth=408 \\ & BBUF & data width–32; banks–64; depth=1024 \\ & DRAM & data width–8; banks–1; depth=32 billion \\ \hline \multirow{8}{*}{**Hexagon**} & CORE & (u,8,)-ADD([(u,8,8),(u,8)) \\ & (32,1)-ADD((32,1),(32,1)) \\ & (32,1)-ADD((u,8,4),(32,1)) \\ & (32,1)-ADD((u,8,4),(32,1)) \\ & (32,1)-ADD((32,1),(32,1)) \\ & HVX & (32,32)-ADD(SUB((32,32),(32,32)) \\ & (32,32)-MVM0.L(((u,82,4),(u,8,4)) \\ & (32,32)-GEMM(((u,82,4),(u,8,4),(32,32)) \\ & (32,32)-GEMM((u,82,4),(u,8,4),(4),(32,32)) \\ \cline{2-6} & GRF & data width–32; banks–4; depth=32 \\ & VRF & data width–1024; banks–32; depth=32 \\ & 12 & data width–8; banks-32; depth=1024 \\ \hline \end{tabular}
\end{table}
Table 3. A Subset DNNWeaver and HVXACG Attributes.
_Performance measurements._ To measure the performance of the codes compiled by Covenant, nnlib, and TVM targeting HVX, we use the built-in cycle-accurate Hexagon SDK simulator developed by Qualcomm Technologies' experts. To assure a fair comparison, we include the device execution time, which manifests the actual runtime of the DNN layers on the target hardware, for all the comparison points and use that without considering host execution overheads. To evaluate the capability of the Covenant framework in targeting multiple architectures, we also use DNNWeaver, an open-source DNN accelerator (DNN, 2019). To measure the runtime of the Covenant compiled code on DNNWeaver, we used its open-sourced cycle-accurate simulator (Zhu et al., 2019). To verify the correctness of the compiled codes for all the frameworks and target architectures, we compare the outputs generated by the simulators with the software implementation of the DNN layers in PyTorch.
### Results
#### 5.2.1. Framework Comparison
Figure 11 shows the speedup enabled by the three compilation frameworks targeting Hexagon DSP, compared to a baseline CPU implementation. Across all benchmarks, Covenant provides an average of 31.3% improvement compared to nnlib's hand-tuned kernels. Covenant also achieves 93.8% of TVM's performance on average. As Figure 11 shows, all three frameworks perform better on larger layers having more operations. This results from the compounding parallelization optimizations across more loop iterations. Among all benchmarks, BERT-GEMM1 and BERT-GEMM2 layers see the maximum performance gains, as the larger number of computations in these layers provide highest code optimization opportunities. Relative to TVM, the DLRM-FC4 has a smaller speedup in Covenant because it includes a branch instruction for the single-iteration OC loop, whereas TVM generated code avoids this overhead. With regard to nnlib, the improvements are more significant for larger layers, due to inclusion of hand-tuned tensor transformations allowing more MAC operations per cycle. However, these transformations can be detrimental for smaller layers (e.g., DLRM) and convolutional layers where the total size of reduction dimensions is smaller because the transformations cannot maximally utilized, and the overhead is magnified. Lastly, TVM is also able to achieve consistent speedups across each benchmark, similar to Covenant, with the added advantage of LLVM optimization passes. As a result, TVM manages to achieve high performance for even small benchmarks such as DLRM-FC4, but does not attain the significant speedups of nnlib which required specialized tensor transformations.
#### 5.2.2. Optimization Results
We evaluate the effectiveness of three Codelet optimizations when targeting HVX (DNN, 2019). Figure 12 shows the benefits across the benchmark DNNs enabled by the optimizations. The baseline is vanilla Covenant implementations for the DNN layers. We first use Vectorization based on the parallelization techniques described in Section 4. We then enable Mnemonic Packing, as described in Section 4, on top of Vectorization. Finally, we add the third optimization, Loop Unrolling as discussed in Section 4. As the figure 12 shows, Vectorization is the most effective technique. This is a due to massive data-level parallelism available in both DNN layers, as well as HVX. Among the benchmarks, DLRM-FC4 sees the least improvement due to its relatively smaller matrix dimensions. On average across all DNN layer benchmarks, Vectorization achieves 43.0\(\times\) speedup compared to the baseline CPU implementation. Mnemonic Packing leverages the mnemonic level parallelism opportunities in compiled DNN mnemonics to utilize the four available instruction slots in HVXarchitecture. On average, it brings about an additional 2.4\(\times\) performance improvements. Finally, Loop Unrolling is enabled to facilitate efficient memory accesses, which provides a 1.3\(\times\) extra performance improvements, on average.
#### 5.2.3. Multi-Target Compilation
To demonstrate the flexibility in targeting various hardware architectures, we use Covenant to compile to two different styles of architectures. Hexagon DSP is a more general-purpose-style architecture that supports a wide range of operations. On the other hand, DNNWeaver is a domain-specific specialized DNN accelerator with a systolic array architecture that only supports DNN execution. Figure 13 shows the performance of these two hardwares compared to the baseline CPU implementation. On average, HVXbrings 71.8\(\times\) speedup over baseline CPU, while DNNWeaver provides 490.9\(\times\) performance improvements, both using Covenant for compilation. The higher speedups offered by DNNWeaver are due to two reasons: 1) DNNWeaver
Figure 12. Performance improvements based on code optimizations implemented in Covenant.
Figure 13. Performance results of evaluated hardware, while using Covenant for compilation.
hargors 32\(\times\) more number of compute resources compared to HVXand 2) it utilizes a systolic array architecture which is specialized for vector-matrix multiplications, as opposed to SIMD architecture of HVX. Across all benchmarks, DNNWeaver performance improvements are more pronounced for larger DNN layers, as they require large matrix multiplications, suitable for systolic array architectures.
## 6. Related Work
With the growing interest in DNN accelerators, creating efficient and flexible compilers for them is increasingly vital. This work fundamentally differs from prior works in that it integrates a novel accelerator architecture abstraction (ACG) into the compilation stack through Codelets construct. These two enables seamless reuse of the same compiler across various accelerators. Below, we discuss the most related works.
_Compiler Infrastructure for DNN Accelerators._ MLIR (Zhu et al., 2017) and Glow (Glow, 2017) seek to enable compilation for different targets by offering multiple levels of IR. However, they fall short of code generation due to not offering a mechanism to describe the target hardware. Tensorflow's XLA (Glow, 2017) is another framework that uses a high-level graph IR for compilation to general-purpose processors and domain-specific Google's TPUs. Similarly, XLA is a set of optimizations on a specialized IR that is a representation of the DNN and does not concern itself with abstractions for the hardware (i.e., ACG and Codelets).
_Architecture Abstractions for Scheduling._ A prior work has leveraged architecture abstractions for scheduling on spatial architectures by modeling them as directed graphs (Zhu et al., 2017). This work is focused solely on scheduling methodology and does not deal with code generation, whereas Covenant comes with a complete compilation stack that leverages Codelets to facilitate use of scheduling techniques for code generation.
_Architecture Abstractions for Hardware Generation._ A number of prior works have used DSLs to incorporate architecture features into algorithm specification for the purposes of hardware generation (Zhu et al., 2017; Glow, 2017; Glow, 2017). LLHD (Zhu et al., 2017) uses MLIR (Zhu et al., 2017) to simplify hardware design and generation by defining an architecture description language. Covenant fundamentally differs from these prior works because it aims to leverage architecture abstractions to compile to various existing hardware as opposed to generating new hardware.
_Low-level IRs for DNN Scheduling._ Halide (Zhu et al., 2017) and it's extensions (Zhu et al., 2017) introduced the idea of distinguishing between computation and schedule to compile image processing pipelines, and include schedule transformations for common optimizations. TVM (Glow, 2017) takes inspiration from Halide and uses tensor expressions combined with additional scheduling operations such as tensorization to optimize code generation. Schedules for tensor expressions in TVM's IR support arbitrary transformations regardless of the target backend, but can be constrained with manual construction of a valid schedule templates for each tensor expression to constrain code generation (Zhu et al., 2017). Tensor Comprehensions (Zhu et al., 2017) and PlaidML (Zhu et al., 2017) automate the scheduling process using tensor-based IRs, yet lack flexibility for scheduling to new hardware. These works do not propose or integrate an architecture abstraction into the compiler. Moreover, in contrast to these IRs, the Covenant compiler performs scheduling by integrating architectural details into Codelets, enabling scheduling algorithms be reused across DNN operations and different targets.
_Schedularly for DNN Operations._ Another body of works have focused solely on scheduling for different architectures. FlexTensor (Glow, 2017) and Ansor (Glow, 2017) automate the scheduling process by extending TVM's code generation backend. However, they cannot perform scheduling for the accelerators without a pre-existing compiler and runtime environment. Fireiron (Zhu et al., 2017) is a scheduling language for targeting to only GPUs that explicitly incorporates data movement into schedule definitions. CoSA (Zhu et al., 2017) is a scheduling framework that incorporates hardware features into a mixed-integer programming algorithm to form constraints on schedules, without support for code generation. In contrast, Covenant compiler leverages the combination of ACGs and Codelets to provide a uniform and automated compilation framework with code generation backend for targeting to various DNN accelerators.
## 7. Conclusion
DNN accelerators are introducing a new age of compiler design requiring alternative constructs and abstractions. This paper defines two such building blocks, ACG and Codelets. The ACG is an architecture abstraction which makes various components of the accelerator and their connectivity accessible to the compiler. The ACG is integrated into the Covenant compiler through the Codelet construct which represents mutable operations on DNNs, and is progressively transformed into execution mappings and schedules on the ACG. The encouraging empirical results show this work is an effective step towards developing compilers, targeting different accelerators.
|
2305.02617 | Snapshot Averaged Matrix Pencil Method (SAM) For Direction of Arrival
Estimation | The estimation of the direction of electromagnetic (EM) waves from a radio
source using electrically short antennas is one of the challenging problems in
the field of radio astronomy. In this paper we have developed an algorithm
which performs better in direction and polarization estimations than the
existing algorithms. Our proposed algorithm Snapshot Averaged Matrix Pencil
Method (SAM) is a modification to the existing Matrix Pencil Method (MPM) based
Direction of Arrival (DoA) algorithm. In general, MPM estimates DoA of the
incoherent EM waves in the spectra using unitary transformations and least
square method (LSM). Our proposed SAM modification is made in context to the
proposed Space Electric and Magnetic Sensor (SEAMS) mission to study the radio
universe below 16 MHz. SAM introduces a snapshot averaging method to improve
the incoherent frequency estimation improving the accuracy of estimation. It
can also detect polarization to differentiate between Right Hand Circular
Polarlization (RHCP), Right Hand Elliptical Polarlization (RHEP), Left Hand
Circular Polarlization (LHCP), Left Hand Elliptical Polarlization (LHEP) and
Linear Polarlization (LP). This paper discusses the formalism of SAM and shows
the initial results of a scaled version of a DoA experiment at a resonant
frequency of ~72 MHz. | Harsha A. Tanti, Abhirup Datta, S. Ananthakrishnan | 2023-05-04T07:48:12Z | http://arxiv.org/abs/2305.02617v1 | # Snapshot Averaged Matrix Pencil Method (SAM) For Direction of Arrival Estimation
###### Abstract
The estimation of the direction of electromagnetic (EM) waves from a radio source using electrically short antennas is one of the challenging problems in the field of radio astronomy. In this paper we have developed an algorithm which performs better in direction and polarization estimations than the existing algorithms. Our proposed algorithm Snapshot Averaged Matrix Pencil Method (SAM) is a modification to the existing Matrix Pencil Method (MPM) based Direction of Arrival (DoA) algorithm. In general, MPM estimates DoA of the incoherent EM waves in the spectra using unitary transformations and least square method (LSM). Our proposed SAM modification is made in context to the proposed Space Electric and Magnetic Sensor (SEAMS) mission to study the radio universe below 16 MHz. SAM introduces a snapshot averaging method to improve the incoherent frequency estimation improving the accuracy of estimation. It can also detect polarization to differentiate between Right Hand Circular Polarization (RHCP), Right Hand Elliptical Polarization (RHEP), Left Hand Circular Polarization (LHCP), Left Hand Elliptical Polarization (LHEP) and Linear Polarization (LP). This paper discusses the formalism of SAM and shows the initial results of a scaled version of a DoA experiment at a resonant frequency of \(\sim\)72 MHz.
Direction of Arrival (DoA), Polarization, Electromagnetic wave (EM wave), Matrix Pencil (MP) method, Space Electric and Magnetic Sensor (SEAMS)
The radio frequencies ranging from 0.3 to 16 MHz is one of the unexplored realms of the electromagnetic spectrum in the field of radio astronomy. This frequency range covers the red-shifted 21 cm line from the early Universe (\(\sim 0.38\) to 400 million years after the Big Bang), radio emissions from planetary and exoplanetary magnetosphere, and traces a wide range of astrophysical phenomena (Bentum, 2018; Zarka, 2007; Bentum, 2017; Rajan et al., 2016; Bentum et al., 2020). In this frequency range the ground-based astronomical observations have been infrequent, due to the presence of the Earth's ionosphere and radio frequency interference (RFI). The ionosphere reflects and refracts radio waves at low frequencies. It inhibits transmissions from space below the ionospheric cut-off frequency. The cut-off frequency varies depending on the time of day and the Sun's activity, although it can go down to 10 MHz (Toledo-Redondo et al., 2012). Also, it is difficult to find or create a radio-quiet zone for astronomical observations. This is due to the presence of RFIs caused by the intercontinental communication signals which are broadcast by using the reflective feature of the ionosphere, which in turn makes observations at these radio frequencies complex (Bentum and Boonstra, 2016).
A space or moon based radio telescope can tackle the aforementioned challenges of such low frequency radio observations. There have been a few space missions dedicated to observations at these radio frequencies. The first space mission at these radio frequencies was Radio Astronomy Explorer (RAE-1), made observations of the Galaxy's spectrum from 0.4 to 6.5 MHz (Alexander et al., 1969). A successor, RAE-2, was launched to the lunar orbit for measurements in the frequency range of 0.025 to 13MHz (Alexander et al., 1975). Later, the Interplanetary Monitoring Platform (IMP-6) reported observations of galactic spectra at 22 frequencies, from 0.13 to 2.6 MHz (Brown, 1973). Then the Netherlands Chinese Low-Frequency Explorer (NCLE), was launched in 2018, as part of China's Chang'E 4 Lunar mission. This is the most recent experiment for long-wavelength observations. In addition, missions like Cassini-RPWS (Radio and Plasma Wave Science) and STEREO (Solar TErrestrial RElations Observatory) WAVES were designed, to perform
in-situ low frequency observations of Saturn's magnetosphere and Coronal mass ejections from the Sun respectively (Cecconi, 2007).
Another major development in the past decade is the implementation of the space-based very long baseline interferometers VSOP/HALCA by Japan and RadioAstron by Russia (Gurvits, 2020). Furthermore, there were numerous proposed concepts for space-based observations and instrumentation such as Farside Array for Radio Science Investigations of the Dark ages and Exoplanets (FARSIDE) (Burns et al., 2019), Orbiting Low Frequency Antennas for Radio Astronomy (OLFAR) (Bentum et al., 2020, 2011), Distributed Aperture Array for Radio Astronomy In Space (DARIS) (Bentum et al., 2011), Space-based Ultra-long wavelength Radio Observatory (SURO) (Baan, 2013), and Formation-flying sub-Ionospheric Radio astronomy Science and Technology (FIRST) Explorer (Bergman et al., 2009; Bentum et al., 2011). Also, there are a few projects under development such as America-led Dark Ages Polarimeter Pathfinder (DAPPER) for the frequency range 17 to 38 MHz (Burns et al., 2019) and India-led Space Electric and Magnetic Sensor (SEAMS) for the frequency range 0.3 to 16 MHz (Borade et al., 2021, 2018).
RFI suppression, array pattern (non-redundant baselines), high time resolution, time synchronisation, array element localization, antenna design, data handling, and space qualified instrumentation are some of the technological hurdles, in deploying the space based array (Weiler, 2000). This has led to the development of single satellite missions. Single satellite missions are the most often funded space-based missions due to their low technical complexity as well as budget constraints (Lazio et al., 2020; Weiler, 2000; Shkolnik, 2018). For very low frequency astronomical observation, it is necessary to localise and characterise (in terms of its polarisation property) the emissions from various sources to understand the emission mechanism of the source (Lecacheux, 1978). This can be performed by a space based radio telescope array by use of the triangulation method (Wilson et al., 2013) however, a single radio telescope will require co-located antenna configurations and gonio-polarimetric methods (DoA methods). The advances in antenna design and DoA methods/algorithms may overcome technological and economical obstacles. The widely known space projects that employ gonio-polarimetric methods for source localization are Cassini-RPWS, STEREO/WAVES, and NCLE (Cecconi, 2007; Cecconi and Zarka, 2005; Chen et al., 2010). DoA estimation, is a technique for calculating the direction of an electromagnetic wave that is strongly reliant on the sensors' (antennas) orientation (Rucker et al., 1997).
The antenna array methods are mostly used in the development of methods for DoA estimation (Nehorai and Paldi, 1994). Multi-Signal Classification (MUSIC), Estimation of Signal Parameter via Rotational Invariance Technique (ESPRIT), Root-MUSIC and Modified-ESPRIT are among the widely used antenna array method based DoA estimators (Waweru et al., 2014; Roy et al., 1986; Schmidt, 1986). These approaches are based on the eigen mode decomposition method wherein, a co-variance matrix is generated utilizing spatial smoothing to attain a full rank case. As a result, the DoA is well resolved while the technique is computationally complex (Yilmazer et al., 2006). Thus, these techniques requires high computational resources which is difficult in case of in-situ computation. So, techniques with low computational complexity with adequate resolution is essential. Pseudo-vector based DoA, Analytical inversion method, and MPM based DoA are a few methods with low computational complexity. The Pseudo-vector based DoA method estimates the EM wave direction by calculating a pseudo-vector using the spectral density tensor. This method is developed for tri-axial linear antenna configuration, wherein the antennas are arranged orthogonal to each other (Carozzi et al., 2000). The analytical inversion method is developed for the STEREO/WAVES and Cassini-RPWS space missions (Cecconi and Zarka, 2005). This method is analytically derived by utilizing the correlation between the co-located antennas (Cecconi and Zarka, 2005) and has low computational complexity. The analytical inversion method is developed for a specific antenna orientation as well as for in-situ observations (Cecconi, 2007). The Matrix Pencil Method based DoA (MPM DoA) is another method for estimating the DoA which directly operates on the obtained spectrum without estimating the co-variance matrix depending on array configuration (Yilmazer et al., 2006; Sarkar and Pereira, 1995). This makes the MPM DoA method computationally light (Yilmazer et al., 2006). The MPM DoA technique, which needs a triaxial antenna design, finds the DoA of multiple waves falling on the antenna arrangement using a unitary transformation approach and the LSM (Daldorff et al., 2009; Chen et al., 2010).
In this paper we propose a Snapshot averaged MPM (SAM) DoA algorithm, an improved version of the MPM DoA algorithm towards SEAMS mission (Section 1) along with the preliminary results of a proof of concept experiment performed for the DoA estimation. The SAM DoA algorithm introduces an averaging and polarisation detection method based on the boundary conditions and orientation of the antenna structure in space. This results in an increase in incoherent wave detection simultaneously increasing the accuracy and capability to differentiate between different types of polarisation. This modification provides a way to study the polarisation characteristics of the emissions mechanisms at these radio frequencies bands. To test the practicality of the results from the DoA algorithm, a laboratory experiment at the resonant frequency of the antenna (\(\sim 72\) MHz) has been performed. In addition, this polarisation detection capability can aid in several different remote sensing applications like synthetic aperture radar (SAR) and Vegetation monitoring (Egido et al., 2012; Dvorsky et al., 2020).
The current manuscript is organized in such a way that the Section 1, provides an overview of the SEAMS mission. Section 2, briefs about the principle of the EM wave direction and polarization detection. Section 2.1 and 3, present the DoA algorithm description along with the proposed modification and the simulation setup, respectively. Section 4 describes the analysis and results of the Snapshot averaged Matrix Pencil Method (SAM) DoA algorithm and section 5 demonstrates it at the resonant frequency of the antenna. Section 6 gives the Conclusion.
## 1 Space Electric and Magnetic Sensor(SEAMS)
SEAMS is a Radio telescope which is currently being designed to operate from 300 kHz to 16 MHz. The telescope will have three orthogonal electric and magnetic field sensors on-board. The first phase of the project is under development in SP Pune University and Phase II will follow.
In the first phase, only electric field vector sensors will be used for the measurement of RFI in low earth orbit. The system mainly consists of two orthogonal monopole antennas (electric field vector sensor, EFVS) as a proof of concept and as a precursor to Phase II, RF front end with matching network, filters and gain stages for both arms and a two channel data acquisition and analysis system with Telemetry-Telecommand interface. The first phase will be deployed in the Low Earth Orbit (LEO) on the 4th stage of the ISRO-PSLV rocket with the objectives of analysing the acquired RFI from the Earth, detect Auroral Kilometric Radiation (AKR), lightning in the atmosphere, strong solar bursts, etc. This phase will also provide insight about the feasibility of using Commercial Off-The-Sheff (COTS) components to design payloads in LEO and to reduce production cost and up-gradation required for SEAMS phase-2.
The SEAMS Phase-2, will have three orthogonal electric and magnetic sensors and the payload will be placed on the far side of the Moon or in the Moon-Earth L2 to avoid the RFI from Earth. The details of the science goals for Phase II are evolving but are described in the upcoming SEAMS- Phase I article (Kulakrini, A., etal 2022, under preparation).
## 2 DoA Estimation Method
Analysis of the spectral density tensor (\(S=\vec{E}\vec{E}^{\dagger}\)) can provide the direction and polarisation of the EM wave based on the field vector in Eq (1). Based on the Gell-Mann _SU(3)_ matrix, the anti-symmetric part of \(S\) can be converted into a dual pseudo-vector containing information about the EM wave's direction and polarization(Carozzi et al., 2000).
\[\vec{E}=e_{x}\hat{a_{x}}+e_{y}\hat{a_{y}}+e_{z}\hat{a_{z}} \tag{1}\]
The pseudo-vector (\(\vec{V}\)) associated with the spectral density tensor (Eq. (2)) is a parallel vector to the wave vector \(k\) and represents a three dimensional analogy of Stokes parameter \(V\) (of the \(I,Q,U\), and \(V\)) (Carozzi et al., 2000; Chen et al., 2010; Tanti and Datta, 2021).
\[\vec{V}=-2Im[e_{y}e_{z}^{*}\hat{a_{x}}-e_{x}e_{z}^{*}\hat{a_{y}}+e_{x}e_{y}^{* }\hat{a_{z}}] \tag{2}\]
Thus, the DoA of the EM wave (i.e, azimuth and elevation angle) is determined in the spherical coordinate system (See Eq. (3)). The polarization of the EM wave can be estimated by normalizing the magnitude of \(\vec{V}\) with respect to the three dimensional analogy of the Stokes \(I\) parameter (i.e, \(polarization=|\vec{V}|/I\) where, \(I=e_{x}e_{x}^{*}+e_{y}e_{y}^{*}+e_{z}e_{z}^{*}\)) (Daldorff et al., 2009). This results in a value between 0 to 1 where, 0 signifies completely linear polarization (LP) and 1 signifies completely circular polarization (CP).
\[\vec{V}=v\cdot(sin\theta cos\phi\ \hat{a_{x}}+sin\theta sin\phi\ \hat{a_{y}}+cos\theta\ \hat{a_{z}}) \tag{3}\]
Here, \(\theta\) and \(\phi\) represent the Elevation and Azimuth angles and can be rewritten as (Chen et al., 2010)
\[\theta =arccos(V_{z}/|\vec{V}|) \tag{4}\] \[\phi =\begin{cases}\arctan(V_{y}/V_{x})+\pi/2,&\text{if }V_{x}<0\\ \arctan(V_{y}/V_{x}),&\text{if }V_{x}>0\end{cases}\]
### Algorithm description
A single element of a tri-dipole or a tripole receiving \(N\) incoherent EM waves is expressed as
\[s_{x^{\prime}}(t)=\sum_{n=1}^{N}C_{x^{\prime}}^{n}e^{i\theta^{n}_{x^{\prime} }}e^{i\omega^{n}t}+n(t) \tag{5}\]
where, \(C_{x^{\prime}}\), \(\theta_{x^{\prime}}\), and \(\omega\) are the amplitude, phase and frequency of the incident incoherent EM wave on the antenna along the \(x^{\prime}\) axis which is aligned at an angle to the reference axis [Sec. 3]. Then the sampled signal will be represented as
\[S_{x^{\prime}}[k]=\sum_{n=1}^{N}(C_{x}^{n}e^{i\theta_{x^{\prime}}^{n}}e^{i \omega^{n}t_{0}})e^{i\omega^{n}k\delta}+n(t_{0}+k\delta) \tag{6}\]
where, \(S_{x^{\prime}}[k]\) is a discretized signal containing \(N\) incoherent EM waves, \((t_{0}+k\delta)\) represents the sampling time, \(n(t_{0}+k\delta)\) is noise, \(\delta\) is the sampling period, and \(\omega^{n}\) represents the angular frequency of the \(n^{th}\) incoherent wave. To compute the complex amplitude of the received EM wave and obtain the DoA using Eq. (2), the best estimations of \(N\) and \(\omega^{n}\) must be found. Therefore, the developed algorithm estimates the frequency of emission of the incident incoherent wave. A flow chart of SAM DoA algorithm and its predecessor is shown in figure 1. The following is the description of the developed algorithm:
* The acquired signal from each antenna is averaged for n snapshots by calculating the average value of the phase difference between the antennas.
* Thereafter, to reduce the computational cost a beam forming addition (Chen et al., 2010) of snapshot averaged signal is implemented.
* In order to estimate the \(N\) incoherent frequencies MPM (Sarkar and Pereira, 1995; Yilmazer et al., 2006; Daldorff et al., 2009; Chen et al., 2010) is used and is summarised in Appendix B.
* The N incoherent frequencies \(\omega^{n}\) obtained from the MPM are then used to find the complex amplitudes for each axis using a constrained LSM. Wherein, the prior phase information from the averaging method is provided to the constrained LSM. The least square method calculates the \(min(|KF_{i}-S_{i}|^{2})\) such that \(Qp\leq F_{i}\) where, \(K\) is matrix of dimension \(M\times M\) as the signal consists of \(M\) samples (\(M=0,1,2,...,M-1\)), \(F_{i}\) is the complex amplitude matrix, \(S_{i}\) is the spectral density matrix, \(p\) is the prior phase information from averaging method, and \(Q\) is the amplitude matrix.
* Then the DoA is calculated using the pseudovector formulation \(\vec{V}=-2\{A_{y}A_{z}sin(\delta_{y}-\delta_{z})\hat{a_{x}}-A_{x}A_{z}sin( \delta_{x}-\delta_{z})\hat{a_{y}}+A_{y}A_{z}sin(\delta_{x}-\delta_{y})\hat{a_{ z}}\}\) and by considering equations (2) and (4), as well as the detection of polarization (\(P=|\vec{V}|/I\)) and the plane wave direction constraints in our simulation setup (i.e., azimuth \([0^{\circ},360^{\circ})\) and elevation \([0^{\circ},90^{\circ}]\)). There is only one case where \(V_{z}\) is negative: when the incident wave has a LHCP. Since polarisation is defined as \(P=|\vec{V}|/I\), the \(P\) range is limited to \([0,1]\). As a result, the method will be limited to distinguishing between LP and CP. The polarisation finding equation is updated to Eq. (7) to increase the algorithm's polarisation detecting capacity. \[P=\begin{cases}|\vec{V}|/I,&\text{if }V_{z}>0\\ -|\vec{V}|/I,&\text{if }V_{z}<0\end{cases}\] (7)
## 3 Antenna and Simulation Setup
Two types of antenna configurations were simulated for the testing of the DOA algorithm in relation to the SEAMS mission: (1) triaxial dipoles (Tri-dipole) and (2) triaxial monopoles (Tripole). In the Tri-dipole and Tripole configurations, the dipoles and monopoles are \(\sim\)2 m and \(\sim\)1 m long, respectively. The length of the antenna should be adjusted according to the active matching network's sensitivity or operating frequency (Nordholt and Van Willigen, 1980). As illustrated in Fig. 2, the triaxial dipole and monopole are made up of three mutually orthogonal co-located dipoles and monopoles. The dipoles or monopoles are orientated along the \(x^{\prime}\), \(y^{\prime}\), and \(z^{\prime}\) axes, with each subtending an angle of \(35.3^{\circ}\)1. Equation (8) describes the connection between the unit vectors associated with the antenna axes (\(x^{\prime}\), \(y^{\prime}\), and \(z^{\prime}\), from the antenna frame) and the frame axes or global axes (from the reference frame) 2.
\[\hat{a_{x}} =\frac{\hat{a_{x^{\prime}}}}{\sqrt{3}}-\frac{(3+\hat{\sqrt{3}})a_{y^ {\prime}}}{6}+\frac{(3-\hat{\sqrt{3}})a_{y^{\prime}}}{6} \tag{8}\] \[\hat{a_{y}} =\frac{\hat{a_{x^{\prime}}}}{\sqrt{3}}+\frac{(3-\hat{\sqrt{3}})a_ {y^{\prime}}}{6}-\frac{(3+\hat{\sqrt{3}})a_{y^{\prime}}}{6}\] \[\hat{a_{z}} =\frac{\hat{a_{x^{\prime}}}}{\sqrt{3}}+\frac{\hat{a_{x^{\prime}} }}{\sqrt{3}}+\frac{\hat{a_{x^{\prime}}}}{\sqrt{3}}\]
Figure 1: Estimation Algorithm Flowchart - (a) MPM DoA algorithm, (b) Modified-MPM DoA algorithm. Both the algorithm use Matrix pencil method (F2) (Sarkar and Pereira, 1995) to estimate the number of incoherent waves and angular frequency (Daldorff et al., 2009; Chen et al., 2010). In the modified algorithm (shown in (b)) two modifications are introduced (1) Snapshot averaging algorithm (F1) and (2) Polarization, DoA correction (F3).
The field vector \(\vec{E}\) (Eq. (1)) is characterised in the reference frame axes according to the antenna orientation, and the field vector components in the reference frame are recast in terms of the field vector components in the antenna frame using Eq. (8).
\[\begin{split} e_{x}&=\frac{E_{x^{\prime}}}{\sqrt{3}}- \frac{E_{y^{\prime}}(3+\sqrt{3})}{6}+\frac{E_{z^{\prime}}(3-\sqrt{3})}{6}\\ e_{y}&=\frac{E_{x^{\prime}}}{\sqrt{3}}+\frac{E_{y^{ \prime}}(3-\sqrt{3})}{6}-\frac{E_{z^{\prime}}(3+\sqrt{3})}{6}\\ e_{z}&=\frac{E_{x^{\prime}}}{\sqrt{3}}+\frac{E_{y^{ \prime}}}{\sqrt{3}}+\frac{E_{z^{\prime}}}{\sqrt{3}}\end{split} \tag{9}\]
where, \(e_{x}\), \(e_{y}\) and \(e_{z}\) is the field vector components in reference frame, and \(E_{x^{\prime}}\), \(E_{y^{\prime}}\), and \(E_{z^{\prime}}\) are the field vector components in the antenna frame (Tanti and Datta, 2021).
In addition, the antenna configuration is designed and simulated using the CST antenna design software with a plane wave as an excitation source. This is analogous to a free space environment with no RFIs or noise. The simulation was carried out for plane waves arriving from various directions at different frequencies in the sub 20 MHz band (300 kHz - 16 MHz) and with varying polarizations. As the SEAMS mission will be deployed on the far side of the moon, a side of the antenna will always face the moon. Despite the fact that the satellite will be on the far side of the moon, low-level RFIs and reflections from the moon's surface will cause noise in the system (Bentum et al., 2020, 2019). If the tripole antenna always faces the sky with its ground towards the moon, it can lessen the effect of the noise and RFI. Thus, the plane wave direction in the simulation is constrained to the azimuth - \([0^{\circ},360^{\circ})\) and elevation - \([0^{\circ},90^{\circ}]\) since the moon is on one side of the antenna arrangement with the other side facing space.
## 4 Simulation Results and Discussion
As discussed in the above section (3), the antenna configuration simulation is performed for 1000 trials with RHCP and LHCP. The data set generated by the antenna simulation is of unique frequency. To simulate multiple incoherent frequencies, noise of different Signal to Noise Ratio (SNR) values (1 to 30 dB) is added to the time domain data. The performance of the DoA algorithm is evaluated using the following criteria:
Figure 2: Antenna configurations - (a) Tri-Dipole - 3 orthogonal co-located dipoles and (b) Tripole - 3 orthogonal co-located monopoles
1. Singular Value Ratio (SVR)3 response with change in SNR, Footnote 3: The SVR factor is the summation of the ratio between the consecutive values of the eigen values obtained from Singular value decomposition (SVD). This is a good measure to test the performance of the algorithm because the order of eigen values in the SVD are arranged from the most prominent feature in the signal to least prominent feature. If there are a few incoherent waves incident then the change observed in the consecutive eigen values will be abrupt or steep. If the ratio between the consecutive eigenvalues is high, it means that the signal can be detected easily.
2. SVR response with change in number of incoherent sources.
3. Root Mean Square Error (RMSE = \(\sqrt{N^{-1}\sum_{i=1}^{N}(x_{estimated}-x_{actual})^{2}}\)) of azimuth and elevation angles with change in SNR, Footnote 3: The SVR factor is the summation of the ratio between the consecutive values of the eigen values obtained from Singular value decomposition (SVD). This is a good measure to test the performance of the algorithm because the order of eigen values in the SVD are arranged from the most prominent feature in the signal to least prominent feature. If there are a few incoherent waves incident then the change observed in the consecutive eigen values will be abrupt or steep. If the ratio between the consecutive eigenvalues is high, it means that the signal can be detected easily.
4. RMSE of azimuth and elevation angles with change in \(N\), and Footnote 5: A polarization detection table.
In order to evaluate the algorithm, first a CST simulation was carried out using a plane wave excitation and the output voltage from the simulations are recorded. The simulation was performed at high time resolution such that the difference
Figure 3: (a) Time domain Voltage readings, (b) Magnitude and (c) Phase of the Frequency spectrum of the CST simulation of antenna structure shown in Fig. 2 for RHCP plane wave excitation of 15 MHz.
in the phases and amplitudes of the received signal is clearly visible when plotting time domain data as shown in Fig. 3(a); Fig. 3(b) and 3(c) show the Amplitude and phase of the signal in the frequency domain. The simulation results shown in Fig. 3 were obtained for RHCP plane wave with frequency \(15\,\mathrm{MHz}\) approaching from \(+z\) direction. The propagation vector of the plane wave will subtend an equal angle with all the antennas of the tripole and the tri-dipole given our antenna orientation (Section 3). Thus, the voltage induced on each antenna should be \(120^{\circ}\) out of phase with each other, and the same is observed from the simulation as shown in Fig. 3(a). This phase difference is more distinct in the Fig. 3(c) - phase of the Fourier spectrum. Furthermore, the signal received by the both the antenna configurations is observed to be the same.
The data obtained from the simulation are then contaminated with noise. Thereafter the noisy signal is used in the DoA algorithm to find the direction and polarization of the incident wave. As discussed in the section (2.1), \(N\) and \(\omega^{n}\) should be estimated first using the MPM. To evaluate the efficiency of the algorithm SVR is determined, which is the ratio of the eigen values obtained by decomposing \(\Gamma_{R}\left(\sigma_{N}/\sigma_{N+1}\right)\)(Chen et al., 2010; Kanjilal and Palit, 1995). This singular value obtained as a result of decomposition of the \(\Gamma_{R}\) will gradually decrease depending on the number of incident waves. Thus, as per the definition of SVR, the higher the value of the SVR, the better is the estimation of \(N\) and \(\omega^{n}\)(Kanjilal and Palit, 1995; Chen et al., 2010).
Figure 4(a) and 4(b) shows the effect of the change in SNR and \(N\) on SVR response of the SAM DoA algorithm and MPM DoA algorithm. It is observed that the SVR value increases with the SNR value which is as expected. It is also noticed from Fig. 4(a) and 4(b) that the increase in the incoherent waves or frequency reduces the SVR value. This indicates that the detection of an incident incoherent wave would be difficult for multiple incident waves. Furthermore, it is observed that the SAM DoA algorithm improves the SVR response with respect to change in the SNR along with an increase in the number of incoherent waves. From Fig. 4(b), it is observed that increasing the number of snapshot averages improves the SVR value. Thus, the SAM DoA method provides flexibility to increase the SVR value by averaging a large number of snapshots.
The accuracy of detecting the direction and the polarization is shown in the Fig. 4(c), 4(d), and Table 1. As discussed in the above section (3), the simulation is only performed for the plane wave direction in the range of azimuth \(0^{\circ}\) to \(360^{\circ}\) and elevation \(0^{\circ}\) to \(90^{\circ}\). In this case it observed that the RMSE decreases when SNR increases, as can be expected. It is also observed that the error in the direction estimation increases with an increase in the incoherent frequencies in the signal. However, it is to be noted that the RMSE of the SAM DoA algorithm has a significant improvement for a large number of averaged snapshots. Hence, if the number of snapshots to be averaged is increased then the detection of the direction would improve. This improvement in detection can also be observed by Fig. 5, representing the response of SVR and RMSE with respect to increase in the number of averaged snapshots. In the figure the single RMSE value or the effective RMSE\({}_{eff}\) for multiple incident incoherent wave is calculated by RMSE\({}_{eff}=\sqrt{\sum_{i=1}^{N}RMSE_{i}}\) where, RMSE\({}_{i}\) is error due to individual waves. Also the improvements mention can also be observed in Fig. 6 wherein, polar plot images for 9 incoherent sources having azimuth along the perimeter and elevation along the concentric circles. Fig. 6(b) shows the polar plot of the estimated DoA for SAM DoA are shown for SNR \(=\) 15 dB for 9 incoherent frequencies from different directions with number of snapshot averaged is 50. By comparing the images in Fig. 6(a) and 6(b) it is noticed that the area of the estimated DoA is reduced.
Table 1 illustrates the improvement in the detection of the polarization due to the modified algorithm introduced by equation (7). From the table it is clearly observed that the SAM DoA algorithm is capable of differentiating different polarization.
Table 1 shows that the MPM DoA algorithm can only distinguish between Linear, Circular, and Elliptical Polarization, while the improved SAM DOA method can identify all forms of polarization. Table 2 compares the computational complexity of various methods such as Analytical Inversion Method (Cecconi and Zarka, 2005), Pseudo-vector based method (Carozzi et al., 2000), and MPM DoA method (Daldorff et al., 2009; Chen et al., 2010). The computational complexity is calculated by considering the involved processes like Fourier tranformations, correlations, SVD etc. This is done by studying the formulation of these algorithms. From table 2, it is observed that the SAM DoA is computationally more complex and the accuracy of estimation is the highest for this method. In addition to this, the SAM DoA algorithm provides a flexible parameter \(n\) (i.e., average points), and increasing this parameter increases the accuracy of the estimation (Fig. 4) as well as the computational time. This parameter is a loop parameter, and therefore it will not add to the algorithm's complexity but will affect the computing time.
## 5 Setup and Result of a scaled experiment
To test the algorithm, a proof of concept scaled version of DoA experiment was carried out at the resonant frequency of the antenna (length \(\geq 1\,\mathrm{m}\)). This setup consists of a prototype of the tripole antenna arrangement, fabricated using a
\(\sim\)1 m long monopoles with resonance at \(\sim\)72 MHz. The experiment is performed using a fabricated tripole antenna as receiving element (\(T_{RX}\)) and a synthetic radio source made of an aerial antenna (Nooelec monopole antenna) that transmits a monotone signal (at 72 MHz) generated by an RF generator from a known direction (Azimuth (Az) and Elevation (El)). Figure 7 shows the block diagram of the experimental setup. The fabricated antenna is shown in Figure 8. Figure 9 shows the arrangement of the scaled experiment. In this setup, a synthetic source is formed using a Keysight RF generator N5173B producing a monotone signal at 72 MHz with +10 dBm power, connected to the aerial antenna. The receiving system consists of a tripole antenna connected directly to a digital storage oscilloscope (DSO) via RF cable to keep the phase distortions due to inline components (like amplifiers and filters) to a minimum.
Figure 4: Comparing the performance of the MPM DoA and the proposed SAM DoA estimation. (a) Effect of variation of the SNR on the SVR, a measure of efficiency of the DoA algorithm when estimating \(N\) and \(\omega^{n}\) if the number of incoherent wave incident on the antenna configuration is \(N=3\) and \(N=6\); (b) Effect of the change in the incident incoherent wave on the SVR at SNR = 15dB and \(n=20\); (c) Effect of the SNR on the detection of the fraction of the Radio wave by estimating RMSE and the number of snapshots averaged in the SAM DoA estimation is \(n=20\); (d) Effect of the change in incident incoherent wave on the detection of direction of the Radio wave with \(n=20\) in the SAM DoA estimation and SNR = 15 dB.
Figure 10 and 11 show the tripole antenna's reflection coefficient (\(S_{11}\)) and Radiation Impedance of each element in the configuration demonstrating its resonance at \(\sim\)72 MHz. This experiment was carried out utilizing minimum circuit components, since addition of circuit component would contaminate the phase of the received signal (see Appendix A).
The dataset of this experiment has been recorded with the sampling a frequency of 8 GHz in the DSO. The dataset is then down sampled by a factor of 4 to improve the frequency resolution (\(F_{r}\)) of the data. The resultant \(F_{s}\) = 2 GHz corresponds to \(F_{r}\) = 0.98 MHz for 2048 point FFT.
Figure 5: Response of SAM DoA estimation with change in number of incoherent wave (\(N\)) and number of averaged snapshot (\(n\)). (a) SVR response with change in the \(n\); (b) RMSE response with respect to \(n\).
Figure 6: Polar representation Azimuth and elevation angles of the estimated DoA for 9 incoherent frequencies incident on the antenna simultaneously from different directions of the radio sources at SNR of 15 dB. (a) shows the DoA plot for MPM DoA method and (b) shows the plot for SAM DoA method for \(n=50\).
Figure 12, 13, and 14 shows the received signal in time domain and frequency domain for the synthetic source located at Az/El of \(\sim 197^{\circ}\)/\(\sim 24^{\circ}\), \(210^{\circ}\)/\(29^{\circ}\), and \(\sim 41^{\circ}\)/\(\sim 51^{\circ}\). It is evident from the time and frequency domain plots in figures 12 to 14 that the amplitudes and the phases received by the three elements in the antenna are different4. It is observed that there is a abrupt jump in the received signal phase. This phase jump maybe related to the received amplitude of the signal. For example, in Figure 12(a), the \(x^{\prime}\) and \(z^{\prime}\) antennas have the highest and the smallest amplitude. Correspondingly the highest and lowest step jump are seen in Fig. 13(b) and Fig. 13(d) respectively. The stated phenomenon can be observed in readings from all three directions in this laboratory test. Based on the discussion in section 2 and 4, it is shown that the difference in amplitude and phases between the antennas is utilized for estimating the DoA. From figures 12 to 14 it is observed that the change in the direction of the sources causes a change in the frequency domain.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \(N\) & SNR & Actual Polarization & Detected Polarization & Detected Polarization \\ & (in dB) & of the incident waves & (MPM DoA) & (SAM DoA) \\ \hline \multirow{3}{*}{3} & \multirow{3}{*}{10} & 1 RHCP, & 2 EP, and & 1 RHEP, \\ & & 1 LHCP, and & 1 LP & 1 LHEP, and \\ & & 1 LP & & 1 LP \\ \hline \multirow{3}{*}{3} & \multirow{3}{*}{30} & 1 RHCP, & 2 CP, and & 1 RHCP, \\ & & 1 LHCP, and & 1 LP & 1 LHCP, and \\ & & 1 LP & & 1 LP \\ \hline \multirow{3}{*}{10} & \multirow{3}{*}{10} & 2 RHCP, & 8 EP, and & 4 RHEP \\ & & 2 LHCP, and & 2 LP & 4 LHEP, and \\ & & 2 RHEP, and & & 2 LP \\ & & 2 LHEP, and & & \\ & & 2 LP & & \\ \hline \multirow{3}{*}{10} & \multirow{3}{*}{30} & 2 RHCP, & 3 CP, & 2 RHCP, \\ & & 2 LHCP, and & 5 EP, and & 1 LHCP, \\ \cline{1-1} & & 2 RHEP, and & 2 LP & 2 RHEP, \\ \cline{1-1} & & 2 LHEP, and & & 3 LHEP, and \\ \cline{1-1} & & 2 LP & & 2 LP \\ \hline \end{tabular}
\end{table}
Table 1: Polarization detection table describing the classification of the detected polarization by the MPM DoA algorithm and SAM DoA algorithm. Here \(N\) is the number of incident incoherent waves.
Figure 7: A Block diagram of the experimental setup. A high frequency oscilloscope (DSO - 100 kHz to 2 GHz) is used to record the time domain data in this setup.
amplitude and phases of the received signal. In the estimate algorithm, the phase distortion caused by the antenna and the transmission cable impedance has to be considered (see Appendix A). Oscilloscope calibration with the transmission cable has been performed to account for reading errors by oscilloscope and the phase error due to transmission cable. This removed the losses due to the cable impedance as it has been calibrated along with the oscilloscope. Thus, the modification in the algorithm was done to consider the phase distortion by the antenna impedance.
Table 3 shows the estimated DoA for two different sources radiating at 72 MHz with LP. The table also shows a comparative estimation between Pseudovector estimation based DoA method (Carozzi et al., 2000), MPM DoA method (Sarkar and Pereira, 1995; Yilmazer et al., 2006; Daldorff et al., 2009; Chen et al., 2010), and SAM DoA method. In the experiment the signal is transmitted at the resonant frequency with a power of +10 dBm having SNR of \(>60\) dB; it is observed that the error in the estimation is between \(0\cdot 6^{\circ}\) which is high for the given SNR. However, it may be noted that all the DoA methods were able to characterise the EM wave as LP. The large estimation error might be due to several factors affecting the experiment such as multi-path, non planar wave front due to source being close to the
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline Features & Analytical & Pseudo & MPM & SAM DoA \\ & Inversion & vector & DoA & Method \\ & Method & estimation & Method & \\ & & & DoA & \\ \hline Direction & \(<6^{\circ}\) & \(>10^{\circ}\) & \(<1^{\circ}\) & \(<0.5^{\circ}\) \\ Estimation Error (SNR=\(15\) dB) & & & & \\ \hline Maximum & O(\(5N^{3}log_{2}\)-1 & \(0\)(\(3Nlog_{2}\)N + O(\(3Nlog_{2}\)N + O(\(3n^{2}Nlog_{2}\)N + O(\(N\) - (\(N\) - (\(N\) - (\(L\)) + \(L\))(\(L\) + \(L\))(\(M\) - (\(L\)))\(\(L\))\()\()\()\()\()\)\()\()\)\()\()\)\()\()\)\()\()\)\()\()\)\()\()\)\()\()\)\()\()\)\()\)\()\()\)\()\)\()\()\)\()\)\()\()\)\()\()\)\()\()\)\()\)\()\)\()\()\)\()\)\()\()\)\()\)\()\)\()\()\)\()\)\()\()\)\()\)\()\)\()\)\()\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\()\)\(1)\()\)\()\)\)\(\)\)\(\)\(\)\)\(\)\)\(1\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(1\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(1\)\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(1\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(1\)\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\)\(\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\)\(\)\(\)\)\(\)\)\(\)\(\)\)\(\
receiving antenna and the RFI environment of the laboratory. A more detailed study is required in order to understand the effect of physical or environmental parameters on the DoA estimation and hardware. Similar efforts has been made in the literature of the literature, which has been made in the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the the literature of the literature of the literature of the the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the literature of the of the literature of
to understand the arrival of waves from Saturn Kilometric Radiation by Cecconi et al. (2006). It is planned to carry out an elaborate experiment at our desired frequency band. As discussed in section 3, the SEAMS antenna will be an active antenna. Thus, phase contamination in the received signal by the matching network of the antenna has to be adjusted as per its frequency response (described in Appendix A).
## 6 Conclusion
An optimized MPM DoA estimation algorithm by the addition of an averaging and polarization detection method (SAM DoA) is described in this paper. The salient features of the SAM DoA are shown in table 4.
The averaging algorithm estimates the mean of multiple FFT snapshots before applying the MPM to reduce noise and improves the estimation of the incoherent frequencies in the given spectra. The Polarization detection method enabled the detection of different polarisations (Table 1). With these improvements, the algorithm has become adaptable so that it can respond to the increase in the number of incident incoherent waves. In addition, this algorithm has multiple
Figure 10: Complex Reflection coefficient measurement of all the monopole elements in the tripole antenna wherein, the black vertical marker shows the resonance frequency of 72 MHz.
applications in remote sensing where the polarisation of the reflected wave is important, as in the case of agriculture, cracks in materials, etc (Egido et al., 2012; Dvorsky et al., 2020).
The present analysis elaborates the simulation carried out with tri-dipole and tripole antenna configurations. The simulation results, show that the signal received by both the configurations is the same considering the direction constraints of our simulation setup. This will be used in the SEAMS mission which is proposed as an orbiter mission for the far-side of the moon and the antenna for such low frequencies will be an active antenna.
The total computational complexity of SAM DoA has increased when compared with the other algorithms (Table 2). However, the computational cost remains lower than that of other well known algorithms such as MUSIC, M-MUSIC, MP-MUSIC, ESPRIT and many more (Gentilho et al., 2019). The choice of modifications in the algorithm are done such that the computational cost remains low and most of the processing can be performed by the onboard computational devices or FPGA. This is necessary due to the data transfer limitations in space (Walker and Hoeber, 2013).
A proof of concept scaled experiment of DoA carried out in our laboratory at the resonant frequency of the antenna validated the feasibility of detection of DoA by utilizing a triaxial antenna configuration, but in order to test the performance of different algorithms, extensive tests are required. In the laboratory experiment, the observed phase
Figure 11: Radiation Impedance of all the elements in the tripole antenna along the local reference axis \(x^{\prime}\), \(y^{\prime}\), and \(z^{\prime}\) as illustrated in Fig. 8.
variations before and after the received frequency (Fig. 12 to 14) could be due to noisy environment or test equipments used. In order to better understand the phase variations, the data needs to be recorded using a sensitive data logger's ADC dump and then analyzed.
## Acknowledgements
H.A.T. acknowledges the valuable discussions with Mr. Atharva Kulkarni (SPPU) regarding the SEAMS payload design and electronics and with Mr. Krishna Makhija (NRAO) regarding the CST simulations. Authors are thankful to Department of Electronic Science, SPPU (specially Prof D. Gharpure) for its support right from the beginning of this project (2017). Authors are thankful to the entire team of the SEAMS project. H.A.T. is thankful to Mr. Archisman Guha (IIT Indore) and Mr. Abhijeet Dutta (IIT Indore) for their support in DoA experiment. H.A.T. is thankful to research scholars Ms. Aishrila Majumder (IIT Indore), Ms. Deeptil Ayyagari (IIT Indore) and Mr. Sarvesh Mangla (IIT Indore) for their technical suggestions while drafting this manuscript. Authors also thank Dr. C. Bhattacharya for his critical comments.
Figure 12: Signal received from a synthetic source of \(\sim\)72 MHz having Azimuth of \(\sim 197^{\circ}\) and elevation of \(\sim 24^{\circ}\). Here, (a) shows high time resolution voltage values received from the source using the Tripole, (b) to (d) shows the Amplitude and phase of the received signal in frequency domain by the tripole antenna along the local reference axis \(x^{\prime}\), \(y^{\prime}\), and \(z^{\prime}\) as illustrated in Fig. 8.
## Appendix A Phase Contamination by electronic components
In circuit theory any receiving antenna can be viewed as a independent voltage source with a source impedance called antenna impedance or radiation resistance (Balanis, 2016). Figure 15 is the circuit equivalent diagram of an receiving antenna with a load resistance of 50 \(\Omega\).
Since, the voltage received (\(V_{RX}\)) by the antenna is due to electric field of EM wave then, \(V_{RX}\ =\ h_{eff}\vec{E}\) where \(h_{eff}\) is the effective height of the antenna and \(\vec{E}\) is the electric field present in the EM wave. Using the plane wave consideration the electric field component can be written as \(\vec{E}\ =\ E_{0}e^{j(\vec{E}\cdot\vec{r}-\omega t)}\) here, \(\omega\ =\ 2\pi f\). Thus, the voltage received can be written as following:
\[V_{RX}=h_{eff}E_{0}e^{j\vec{E}\cdot\vec{r}}e^{-j\omega t}=Ae^{-j\omega t} \tag{10}\]
Using equation 10 and circuit in Figure 15 the received signal \(V_{out}\) can be written as
\[V_{out}=\frac{V_{RX}\times 50}{50+Z_{ANT}} \tag{11}\]
Figure 13: Signal received from a synthetic source of \(\sim\)72 MHz having Azimuth of \(\sim 210^{\circ}\) and elevation of \(\sim 29^{\circ}\).
Figure 14: Signal received from a synthetic source of \(\sim\)72 MHz having Azimuth of \(\sim 41^{\circ}\) and elevation of \(\sim 51^{\circ}\).
Figure 15: Receiving antenna circuit equivalent. \(V_{RX}\) is the voltage received by the antenna, \(Z_{ANT}\) is the intrinsic impedance or radiation resistance of the antenna, \(R\) is the load resistance of 50 \(\Omega\), and \(V_{out}\) voltage across the load.
As impedance comprises of resistive (\(R\)) and reactive component thus antenna impedance can be written as \(Z_{ANT}=R_{ANT}+jX_{ANT}\). Considering the antenna impedance and equation 11 on can observe analytically how phase is being modified due to the impedance in equation 12.
\[V_{out}=\frac{50A}{\sqrt{R_{ANT}^{2}+50^{2}}}e^{-j[wt+tan^{-1}(X_{ANT}/(R_{ANT}+ 50))]} \tag{12}\]
In case of addition of several circuit components either in series or in parallel, the antenna impedance \(Z_{ANT}\) in equation 12 has to be replaced by the effective impedance of the circuit also known as the Thevenin's equivalent.
## Appendix B Matrix Pencil Method
The Matrix Pencil method is used to obtain the best estimates since it interacts directly with the data instead of generating a co-variance matrix, reducing computer complexity (Yilmazer et al., 2006). Eq. (6) is used to generate a Hankel matrix
\begin{table}
\begin{tabular}{c|c|c} \hline Features & MPM DoA & SAM DoA \\ \hline RMSE in az/ele @ SNR 10dB & 0.998/0.97 & 0.35/0.77 (for \(n\)=20) \\ Polarization detection & Yes & Yes \\ RHCP or RHEP detection & Yes & Yes \\ LHCP or LHEP detection & No & Yes \\ Preferred Antenna configuration & Tri-dipole & Tripole \\ \hline \end{tabular}
\end{table}
Table 4: A comparison table of Matrix Pencil Method (MPM) DoA algorithm and the proposed SAM DoA Algorithm
\begin{table}
\begin{tabular}{l|c|c} \hline Algorithm & Estimated Az/EL & Actual Az/El \\ \hline Pseudovector estimation & \(212^{o}\)/\(15^{o}\) & \\ based DoA method (Carozzi et al., 2000) & & \\ MPM DoA method (Sarkar and Pereira, 1995; Yilmazer et al., 2006; Daldorff et al., 2009; Chen et al., 2010) & \\ SAM DoA method & \(202^{o}\)/\(18^{o}\) & \\ \hline Pseudovector estimation & \(207^{o}\)/\(32^{o}\) & \\ based DoA method (Carozzi et al., 2000) & & \\ MPM DoA method (Sarkar and Pereira, 1995; Yilmazer et al., 2006; Daldorff et al., 2009; Chen et al., 2010) & \\ SAM DoA method & \(212^{o}\)/\(31^{o}\) & \\ \hline Pseudovector estimation & \(43^{o}\)/\(48^{o}\) & \\ based DoA method (Carozzi et al., 2000) & & \\ MPM DoA method (Sarkar and Pereira, 1995; Yilmazer et al., 2006; Daldorff et al., 2009; Chen et al., 2010) & \\ SAM DoA method & \(40^{o}\)/\(52^{o}\) & \\ \hline \end{tabular}
\end{table}
Table 3: A comparison of the DoA obtained from the Experiment using different methods applicable to SEAMS antenna configuration. The test has been carried out for three different radiation directions at resonant frequency. The “Estimated Az/El” coloumn contains the estimated DoA by the algorithms and the “Actual Az/El” column shows the physically measured DoA of the source. The emitted signal was of LP and has been detected in all the algorithm as LP; the SNR for this experiment was \(>60\) dB.
in order to estimate N and \(\omega^{n}\).
\[\Lambda=\begin{bmatrix}S(0)&S(1)&\cdots&S(L)\\ S(1)&S(2)&\cdots&S(L+1)\\ \vdots&\vdots&\ddots&\vdots\\ S(M-L-1)&S(M-L)&\cdots&S(M-1)\end{bmatrix}_{(M-L)\times(L+1)}\]
where, \(L\) is selected between \((M/3,M/2]\) for optimum performance and is known as the pencil parameter (Sarkar and Pereira, 1995); \(M\) is the total sample length. The real matrix(\(\Lambda_{R}\)) is computed using a Unitary matrix transformation (Sarkar and Pereira, 1995)( \(\Lambda_{R}=U^{\dagger}[\Lambda\mid\Pi_{M-L}\Lambda^{*}\Pi_{L+1}]U\); where, \({}^{\dagger}\) represents hermitian conjugate and \(U\) is the unitary matrix (Daldorff et al., 2009)) and the complex number matrix \(\Lambda\). Later, an estimate of the singular values of \(\Lambda_{R}\) is generated using SVD formulation. Matrix \(A_{s}\) consisting of \(N\) largest singular vectors of \(\Lambda_{R}\) is estimated by performing a thresholding operation on the normalized Eigen value i.e., \(\sigma_{i}/\sigma_{max}\). \(N\) generalized singular values are then calculated using an unitary transformation (\(-[Re(U^{\dagger}J_{1}U)A_{s}]^{-1}\cdot Im(U^{\dagger}J_{1}U)A_{s}\)) which are, \(\psi_{1},\psi_{2},\cdots,\psi_{N}\). Also, \(N\) incoherent frequencies are calculated by \(\omega^{n}=2arctan(\psi_{n})/\delta\) for \(n=1,2,\cdots,N\)(Daldorff et al., 2009; Chen et al., 2010; Tanti and Datta, 2021).
|
2308.02833 | A Comprehensive Analysis of Real-World Image Captioning and Scene
Identification | Image captioning is a computer vision task that involves generating natural
language descriptions for images. This method has numerous applications in
various domains, including image retrieval systems, medicine, and various
industries. However, while there has been significant research in image
captioning, most studies have focused on high quality images or controlled
environments, without exploring the challenges of real-world image captioning.
Real-world image captioning involves complex and dynamic environments with
numerous points of attention, with images which are often very poor in quality,
making it a challenging task, even for humans. This paper evaluates the
performance of various models that are built on top of different encoding
mechanisms, language decoders and training procedures using a newly created
real-world dataset that consists of over 800+ images of over 65 different scene
classes, built using MIT Indoor scenes dataset. This dataset is captioned using
the IC3 approach that generates more descriptive captions by summarizing the
details that are covered by standard image captioning models from unique
view-points of the image. | Sai Suprabhanu Nallapaneni, Subrahmanyam Konakanchi | 2023-08-05T10:06:06Z | http://arxiv.org/abs/2308.02833v1 | A Comprehensive Analysis of Real-World Image Captioning and Scene Identification
###### Abstract:
Image captioning is a computer vision task that involves generating natural language descriptions for images. This method has numerous applications in various domains, including image retrieval systems, medicine, and various industries. However, while there has been significant research in image captioning, most studies have focused on high quality images or controlled environments, without exploring the challenges of real-world image captioning. Real-world image captioning involves complex and dynamic environments with numerous points of attention, with images which are often very poor in quality, making it a challenging task, even for humans. This paper evaluates the performance of various models that are built on top of different encoding mechanisms, language decoders and training procedures using a newly created real-world dataset that consists of over 800+ images of over 65 different scene classes, built using MIT Indoor scenes dataset. This dataset is captioned using the IC3 approach that generates more descriptive captions by summarizing the details that are covered by standard image captioning models from unique view-points of the image.
## 1 Introduction:
Image captioning can be thought of as translating an image's features and attributes into a meaningful sentence. This method has numerous applications in various domains, including image retrieval systems, medicine, and various industries. In image retrieval systems, image captioning can be used to provide more accurate and relevant search results by allowing users to search for images using text-based queries. In the medical field, image captioning can be used to automatically generate captions for medical images, aiding in diagnosis and treatment planning. In industrial sectors, image captioning can be used for automated quality control and visual inspection, as well as in autonomous systems such as self-driving cars.
Image captioning is a vision-language modeling task that has seen remarkable progress over the years, thanks to advancements in computer vision and natural language processing techniques. The earliest image captioning models were based on the visual encoding of images, which were then mapped to natural language descriptions using simple neural networks[16]. As the field progressed, language models such as Recurrent Neural Networks (RNNs)[17] and Long Short-Term Memory (LSTM) networks[1] were introduced to generate more complex and coherent sentences. To further improve the performance and generate more coherent captions, an attention mechanism[2, 3, 20] was introduced. More recent advancements in transformer models with self attention mechanisms[7], BERT[18] and GPT-3[15], have revolutionized image captioning, enabling models to generate more accurate and contextually relevant captions by learning from vast amounts of textual data. In addition to architecture, training strategies such as reinforcement learning[4, 5] and vision-language pre-training(VLP)[18, 19] have also played an important role in improving image captioning performance.
Figure1: Illustration of the timeline depicting the development of different architectures and methodologies in the field of image captioning.
Although these models have achieved state-of-the-art performance in image captioning tasks, most studies have focused on simple images or controlled environments, without exploring the challenges of real-world image captioning or understanding the real-world scenes. [Fig 2, 3] illustrate the factors that contribute to the challenges encountered in real-world scene identification. In this survey paper, we evaluate the performance of various models built on different architectures and methodologies, focusing on those that achieved comparable results on evaluation metrics. Our aim is to identify which architecture and methodology performs well in the challenging task of real-world image captioning and scene identification. To assess the models in real-world scenarios, we will develop a new dataset consisting of over 800+ indoor and outdoor scene images, built using MIT Indoor Scenes dataset. To ensure high-quality captions for the images, we will generate the captions using a model that uses the IC3 approach proposed by [14], instead of manual generation, as this model has yielded good results in capturing most of the details in a given scene, and this procedure will be discussed in detail in the following sections.
### Contributions:
The primary objective of our work is to present a comprehensive analysis of various models using different architectures and methodologies for real-world image captioning and scene identification tasks. While several notable works have surveyed image captioning techniques, architectures, and evaluation metrics [10, 11, 12], we will focus on evaluating models on real-world scenes dataset. To facilitate our analysis, we developed a new dataset consisting of over 65 distinct scene classes using the MIT indoor scenes dataset. To generate captions for evaluation, we used IC3-based image description instead of manual generation since this model has demonstrated good results on new human evaluation metrics based on "Helpfulness" and "Correctness". Along with the dataset, we also introduced a novel evaluation metric named Scene Identification Score (SIS) specifically for scene identification. By comparing the performance of various models on our real-world dataset, we aim to identify the most effective architectures and methodologies for image captioning and scene identification in real-world scenarios.
Figure 2: **3**: These images contain - a large group of people standing and sitting in an airport, possibly waiting in line for their plane. Captioning these images using a model becomes challenging as the visual features that strongly support the word “airport” are scarce. It is not only difficult for machines, but even humans can face similar challenges when interpreting such images.
## 2 Literature Survey:
We have conducted an extensive examination of different studies and architectures in the field of image captioning. We have explored the advantages and limitations of each approach, starting from the initial neural network-based captioning model. We then moved on to attention-based models, followed by transformer-based models. Next, we studied vision language pretrainers and finally Generative Pretrained Transformer-based models. This thorough survey of these studies will greatly assist us in analyzing and understanding real-world image captioning and scene identification.
### Show and Tell [2015]
"Show and Tell" by Vinyals et al. is one of the earliest and state-of-the-art works in the domain of image captioning. It introduced the implementation of a Recurrent Neural Network (RNN) as a language model. This model outperformed Kiros et al., who proposed the first Neural Network (Feedforward Network), and Mao et al., who introduced the first Recurrent NN, in terms of evaluation metrics. This was achieved by directly providing the visual input to the RNN and making changes to the encoding approach. However, employing RNNs as decoders in image captioning becomes an apparent idea when considering it as a language translation task, where the goal is to translate an image into text rather than translating between different spoken languages. In contrast to the encoding process in language translation tasks, where RNNs encode words into a hidden state, image captioning leverages CNNs as encoders to effectively capture visual features. The model utilizes Global Encoding with CNN, RNN/LSTM as the decoder and employs the Cross Entropy Loss training strategy.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Approach** & **BLEU-4** & **METEOR** & **CIDEr** \\ \hline Region Based + RNN/LSTM & **38** & **28** & **126** \\ [5, 21, 24, 25, 26] & & & \\ \hline Graph Based + RNN/LSTM & **38** & **28** & **128** \\ [6, 20, 27, 28, 29 ] & & & \\ \hline Self-Attention Based + RNN/LSTM & **39** & **29** & **129** \\ [7, 30, 31, 32, 33 ] & & & \\ \hline BERT + VLP & **39.5** & **29.3** & **129.3** \\ [Unified VLP] & **5.** & **5.** & **5.** \\ \hline \end{tabular}
\end{table}
Table 1: The study provides an analysis of the performance metrics(mode values) of three different architectures utilized by five different reference models. This analysis demonstrates how well these architectures perform when compared to the Unified VLP model (given in the last row), which incorporates Large-Scale Vision-Language Pre-Training.
Attention plays a crucial role in our ability to focus on important aspects while performing any task. This paper by Xu et al. extended the "Show and Tell" [1] model by introducing a visual attention mechanism to improve the alignment between the visual features and the generated words. The attention mechanism allows the model to selectively focus on different regions of the image at each time step during caption generation. By attending to the relevant regions, the model can generate more accurate and contextually appropriate captions. This study introduces two variants: soft attention, which enables the model to focus on different parts of the input sequence and attend to multiple elements simultaneously, and hard attention, which selects only a subset of elements in a given sequence. The language model and training strategy employed in this approach are broadly similar to those of [1] except for the encoding strategy. Unlike [1] which utilizes global encoding, this model employs a grid-based encoding technique that leads to improved feature extraction.
### Image Captioning with Semantic Attention [2016]
This study by You et al. builds upon the concept of visual attention, similar to the approach presented in [2]. However, notable modifications in the implementation and architecture of this model enabled it to surpass various state-of-the-art models that were previously on par. Unlike the fixed resolution spatial attention modeling in [2], this model allows for the utilization of attention concepts from any location and resolution within the image, even if they are not visually present. Additionally, the model incorporates a feedback process that combines top-down and bottom-up features. Instead of using pre-trained features at specific spatial locations like in [2], this model utilizes word features corresponding to detected visual concepts. Similar to [1], this model also utilizes Global Encoding with CNN, RNN/LSTM as the decoder and employs the Cross Entropy Loss training strategy.
### Self-critical Sequence Training for Image Captioning [2017]
SCST by Rennie et al. is one of the notable works in the field of image captioning as it has provided many insights to a large number of state of the art works of the future. This model learns from its own predictions during testing to evaluate and improve its performance. It doesn't rely on external evaluations but uses its own judgment to assess how well it is doing and make adjustments accordingly. This means that only the samples from the model that perform better than the current system during testing are given importance, while the samples that are not as good are given less attention or suppressed. Image features are encoded using a deep CNN, similar to the approach described in [2], with a few modifications in the architecture of the attention model (Att2in). In this model, the image feature derived from attention is only inputted to the cell node of the LSTM. The encoding strategy employed is grid-based, with an RNN/LSTM decoder, and training is conducted using cross-entropy loss and reinforcement learning techniques.
### Bottom-Up and Top-Down Attention for Image Captioning [2018]
In their study, Anderson et al. propose a combined bottom-up and top-down attention mechanism, inspired by the work in [3]. Unlike previous methods that used global or grid-based encoding, this study introduces a region-based encoding approach based on Faster R-CNN which consists of a Region Proposal Network (RPN) in the first stage and a RoI (Region of Interest) pooling layer in the second stage. This allows for easy extraction of objects and salient regions in the image. The captioning model they propose, while similar to previous approaches [2, 4], incorporates specific architectural changes and achieves state-of-the-art performance even without bottom-up attention. They also employ a reinforcement learning approach similar to [4], but with the addition of sampling distribution restrictions for a reduced training time. By combining the new region-based encoding technique, well-structured top-down attention, language LSTM models, and efficient reinforcement learning as the training strategy, this model achieves breakthroughs and state-of-the-art performance in image captioning.
### Auto-Encoding Scene Graphs for Image Captioning [2019]
When we humans imagine a boat, we naturally assume that it is on water, even if the water is not visible. This kind of assumption is called the inductive bias. To generate more detailed and high-quality captions that mimic human understanding, Yang et al. proposed a method called Scene Graph Auto-Encoder (SGAE). This method incorporates the inductive bias into the encoder-decoder pipeline by using a shared dictionary. The pipeline includes a graph encoder based on Graph Convolutional Networks (GCN) to encode the scene graph, and a visual encoder based on Region Proposal Networks (RPN). The captioning model also includes a language model similar to [5] and is trained using a reinforcement learning strategy similar to [4]. Indeed, most of this study is based on the previous works [4, 5], including the other mentioned studies, ultimately resulting in the achievement of state-of-the-art performance by this model. In summary, this model utilizes graph and region-based encoding, an RNN/LSTM decoder, and is trained using reinforcement learning.
### Meshed-Memory Transformer for Image Captioning [2020]
After the groundbreaking study "Attention is all you need" by Vaswani et al. in 2017, it has become evident that Transformers outperform every other architecture in generation and translation tasks. In contrast to prior methods, Cornia et al. present a fully-attentive model that draws inspiration from the work of Vaswani et al. This model utilizes self-attention and eliminates the necessity for recurrence and convolution layers. Despite this difference, the model can still be conceptualized as an encoder-decoder pipeline. The encoder consists of stacked encoding layers, each incorporating an attention mechanism, feed-forward layer, and two memory vectors. These layers collectively refine the relationships between image regions by leveraging a priori knowledge encoded in the memory vectors, similar to the notion of inductive bias [6]. In contrast to recurrent networks, which struggle with long-range dependencies, self-attention computes attention scores between all pairs of elements in the sequence, weighting their contributions to the representation of other elements. Each encoder layer is connected to a decoder layer, forming a mesh-like structure with learnable connectivity. Thus, due to
its mesh-like structure and the utilization of memory vectors to refine relationships between image regions, this model is named as Meshed Memory Transformer.
### BLIP: Bootstrapping Language-Image Pre-training [2022]
The paper "BLIP: Bootstrapping Language-Image Pretraining" introduces a novel approach that leverages large-scale pretraining to capture billions of parameters while training on vast image datasets. By incorporating both text and image data, the authors address the limitations of text-only pre training methods. Their bootstrapping framework integrates image features into the process and explores the effectiveness of combining textual and visual information in language model (LM) pretraining. The study demonstrates significant improvements in downstream tasks like image captioning and visual question answering. The authors propose a contrastive learning objective to align image and text embeddings and promote the learning of semantically meaningful representations. They also investigate strategies like multimodal transformers and cross-modal distillation to incorporate visual data. The results highlight the advantages of jointly leveraging text and image data, surpassing the performance of traditional text-only pretraining. The paper contributes to multimodal learning research and showcases the potential of integrating visual and textual information in LM pretraining, enabling better understanding and generation of multimodal content.
### Image Captioning by Committee Consensus [2023]
This model proposed by Chan et al. combines the state-of-the-art models OFA or BLIP, and the GPT3. In simpler terms, this method involves selecting an arbitrary number of captions, which are generated by advanced image captioning engines like OFA using a technique called temperature-based sampling. These captions are then summarized using a powerful generative engine such as the GPT-3's Davinci-v3. The authors of this study put considerable effort into crafting a well-designed prompt, which enabled the language model to generate highly detailed summaries that even surpassed the state-of-the-art OFA model. However, evaluating the performance of this model using standard metrics proved to be inadequate due to the high level of detail in the generated captions. Instead, the model's effectiveness was assessed using new human evaluation metrics focused on "Helpfulness" and "Correctness". Remarkably, this method resulted in captions that are significantly more descriptive and informative compared to those generated by individual models.
### Chat Captioner [2023]
Zhu et al. introduced a novel automatic questioning system called ChatCaptioner for image captioning. Building upon the use of Large-Scale Pre-trained models and Large Language Models as seen in previous works like [9], this approach combines the strengths of BLIP-2, a robust image captioning and visual question answering (VQA) model, with ChatGPT, a powerful Large Language Model. ChatGPT interacts with BLIP-2 by posing questions to its VQA engine, extracting additional image details. By leveraging visual question answering, automatic question generation, and conversation summarization, this method produces more detailed captions that cover multiple aspects of the image. While traditional evaluation metrics like BLEU score show lower performance compared to models
like BLIP-2, the quality of the generated captions is significantly improved. To evaluate their approach, the authors employed Human Votes for assessing image information and conducted Correctness Analysis, demonstrating that this model outperforms other models in both aspects.
## 3 Methodology:
In this study, we utilize different models, each constructed with distinct architectures. These architectures are based on visual encoding, language models, and training strategies. The first approach employs region-based encoding with RNN/LSTM as the language model and is trained using cross-entropy loss and reinforcement learning strategies (Sec 2.1). The second approach utilizes graph-based encoding with RNN/LSTM as the language model and is also trained using cross-entropy loss and reinforcement learning (Sec 2.2). The third approach adopts self-attention-based encoding with a transformer as the language model and is trained using cross-entropy loss and reinforcement learning strategies (Sec 2.3). It has been observed that despite the existence of multiple models based on the above three approaches, they perform comparably (Table-1). Despite the differences in their encoding strategies, these three methods share a common decoder and training strategies. This study also provides insights into the effectiveness of various encoding strategies for real-world scene identification. Table-1 also demonstrates the performance of these architectures compared to the Unified VLP model (shown in the last row), which incorporates Large-Scale Vision-Language Pre-Training. Other encoding strategies such as Global and Grid-based encoding were excluded due to significant differences in performance metrics, which would introduce unfair comparisons.
Typically, an image captioning pipeline consists of an encoder and a decoder [Fig 4]. The encoder utilizes a CNN to extract the features from the input image, while the decoder generates captions word by word using the encoded features through an RNN. In addition, a suitable training procedure is employed to train these components effectively. In our analysis, we will provide a comprehensive overview of three models, focusing on their encoding, decoding, and training strategies.
## Visual Encoding or Encoders
Encoding the visual features of an image is a crucial and fundamental step in generating detailed and precise captions. This becomes particularly significant when dealing with real-world images and scenes that encompass a multitude of intricate features and details. The encoder plays a pivotal role in capturing these details from various points of attention. The illustration in [Fig 5, 6, 7] demonstrates
Figure 4: A simple encoder-decoder pipeline for image captioning
the functioning of region-based, graph-based, and self-attention-based encoding mechanisms respectively in an image.
#### 4.2.2 Language Models or Decoders
The main aim of a decoder or language model is to transform the encoded visual features into meaningful and coherent descriptions by estimating the probabilities of a given sequence of words appearing in a sentence. In our analysis, we explore Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM)[Fig 8] and Transformers[Fig 9].
#### 4.2.3 Training Procedures
The development of a robust captioning pipeline relies heavily on an appropriate training procedure. In the domain of image captioning, two commonly employed training methods are Cross-Entropy Loss and Reinforcement Learning.
Figure 5: 6, 7: depict the encoding mechanisms used in this study. Figure 5 illustrates the visual regions used for region-based encoding, Figure 6 represents the graphical representations used for graph-based encoding, and Figure 7 showcases the point-to-point relationships utilized in the self-attention based encoding mechanism.
Figure 8: 9: An LSTM cell in a Recurrent Neural Network [2], Self-attention based transformer in a decoding layer [7] respectively
### Region Based Encoding - RNN/LSTM - Reinforced, Cross-Entropy Loss Training
In their study, Anderson et al. propose a combined bottom-up and top-down attention mechanism, inspired by the work in [3]. Unlike previous methods that used global or grid-based encoding, this study introduces a region-based encoding approach based on Faster R-CNN which consists of a Region Proposal Network (RPN) in the first stage and a RoI (Region of Interest) pooling layer in the second stage. This allows for easy extraction of objects and salient regions in the image. The captioning model they propose, while similar to previous approaches [2, 4], incorporates specific architectural changes and achieves state-of-the-art performance even without bottom-up attention.
They also employ a reinforcement learning approach similar to [4], but with the addition of sampling distribution restrictions for a reduced training time. By combining the new region-based encoding technique, well-structured top-down attention, language LSTM models, and efficient reinforcement learning as the training strategy, this model achieves breakthroughs and state-of-the-art performance in image captioning. However, for generating captions, we employed [21] as our chosen method, which is derived from [3], primarily because it offers advantages in terms of reduced resource usage and time requirements compared to the original [3] approach.
### Graph Based Encoding - RNN/LSTM - Reinforced, Cross-Entropy Loss Training
When we humans imagine a boat, we naturally assume that it is on water, even if the water is not visible. This kind of assumption is called the inductive bias. To generate more detailed and high-quality captions that mimic human understanding, Yang et al. proposed a method called Scene Graph Auto-Encoder (SGAE). This method incorporates the inductive bias into the encoder-decoder pipeline by using a shared dictionary. The pipeline includes a graph encoder based on Graph Convolutional Networks (GCN) to encode the scene graph, and a visual encoder based on Region Proposal Networks (RPN).
The captioning model also includes a language model similar to [5] and is trained using a reinforcement learning strategy similar to [4]. Indeed, most of this study is based on the previous works [4, 5], including the other mentioned studies, ultimately resulting in the achievement of state-of-the-art performance by this model. In summary, this model utilizes graph and region-based encoding, an RNN/LSTM decoder, and is trained using reinforcement learning.
### Self-Attention Based Encoding - Transformer - Reinforced, Cross-Entropy Loss Training
After the groundbreaking study "Attention is all you need" by Vaswani et al. in 2017, it has become evident that Transformers outperform every other architecture in generation and translation tasks. In contrast to prior methods, Cornia et al. present a fully-attentive model that draws inspiration from the work of Vaswani et al. This model utilizes self-attention and eliminates the necessity for recurrence
and convolution layers. Despite this difference, the model can still be conceptualized as an encoder-decoder pipeline. The encoder consists of stacked encoding layers, each incorporating an attention mechanism, feed-forward layer, and two memory vectors. These layers collectively refine the relationships between image regions by leveraging a priori knowledge encoded in the memory vectors, similar to the notion of inductive bias [6].
In contrast to recurrent networks, which struggle with long-range dependencies, self-attention computes attention scores between all pairs of elements in the sequence, weighting their contributions to the representation of other elements. Each encoder layer is connected to a decoder layer, forming a mesh-like structure with learnable connectivity. Thus, due to its mesh-like structure and the utilization of memory vectors to refine relationships between image regions, this model is named as Meshed Memory Transformer.
## 4 Dataset:
The majority of studies have primarily focused on training, testing, and validating their models using three main datasets: MS COCO, Flickr30k, and Flickr8k [10, 11, 12]. These datasets were developed within controlled environments, consisting of high-quality images and accompanying captions. However, real-world scene images often exhibit distortions and lower quality. In our study, it is crucial to validate these models using a more specific dataset that represents real-world scenes rather than a generic one. While there is currently no dataset exclusively dedicated to real-world scenes, we have created our own dataset for this purpose. Instead of manually captioning the images in the dataset, which would practically be not possible given our limited resources and time, we adopted a different approach by utilizing a sophisticated model proposed by [14] that combines the state-of-the-art model OFA, and the GPT3. In simpler terms, this method involves selecting an arbitrary number of captions, which are generated by advanced image captioning engines like OFA using a technique called temperature-based sampling. These captions are then summarized using a powerful language model such as GPT3. This method resulted in captions that are significantly more descriptive and informative compared to those generated by individual models. [Fig 10, 11, 12] displays the images alongside the corresponding generated captions.
Our new validation dataset contains nearly 800 different images belonging to about 65 different scene categories. We used a 10 GB T4 GPU in addition to a 12 GB CPU for the computation. For each image, we generated two captions, and the entire process for the entire dataset took approximately 15 hours. Out of every 100 captions, we randomly selected 10 for manual qualitative analysis. The majority of these captions correctly described the scene and included all the identifiable details.
## 5 Evaluation Metrics:
In our evaluation, we assess models 3.1, 3.2, and 3.3 based on their performance in real-world image captioning and scene identification, taking into account the specific objective at hand. While it is true that image captioning and scene identification are interconnected tasks, as both rely on the quality of the encoder and decoder architecture, as well as the training procedure employed, it is the underlying intuition, inductive bias, or prior knowledge that enables the model to make conclusions about a particular scene based on extracted features, details, attributes, or objects. Therefore, we believe it is necessary to evaluate these two tasks separately in order to gain a comprehensive understanding of the models' capabilities.
For evaluating the quality of image captioning, various metrics such as BLEU, METEOR, ROUGE, and CIDEr are commonly used. In our analysis, we have opted to utilize the BLEU metric. However, when it comes to scene identification, there is currently no established metric available. To address this gap, we have introduced a novel metric called Scene Identification Score (SIS).
### BLEU (Bilingual Evaluation Understudy):
The BLUE metric, short for Bilingual Evaluation Understudy, is a widely used measure in natural language processing (NLP) to evaluate the quality of machine-generated translations. The primary goal of the BLUE metric is to assess the similarity between the machine-generated translation and the human references. It takes into account both the precision of matching n-grams and the brevity penalty to penalize excessively short translations. The higher the BLUE score, which ranges from 0 to 1, the closer the machine-generated translation is to the human references.
### SIS (Scene Identification Score):
The concept behind SIS is straightforward: for each scene class, we measure the percentage of captions generated by a model that accurately identifies the corresponding scene. The SIS score for a model is then calculated as the average of these percentages across all scene classes. This metric provides a means to assess the performance of models in scene identification. To compute the SIS score, we leveraged the OpenAI's Davinci engine API, employing a meticulously crafted prompt design.
## 6 Results and Discussion:
We have employed the SIS metric in conjunction with the LBPF image captioning model to assess its effectiveness in analyzing and identifying the overall scene depicted in an image. To conduct our evaluation, we selected approximately 36 categories of images from the MIT indoor scenes dataset. After generating captions for these images using LBPF, we compared the captions with the corresponding image categories to determine if the model successfully recognized the scene. Unfortunately, the results of this comparison were not satisfactory. The model exhibited poor performance on the majority of the tested images, struggling particularly with images containing
complex features such as "movie theater," "cloister," and "museum." It appears that the model's shortcomings in real-world scene identification and captioning stem from a lack of inductive bias [6] or prior knowledge [7].
**Fig 10 (left): a.) A dining room with a wooden table and chairs, a refrigerator, and two windows. b.) A dining room with a wooden table and chairs in front of a window, and possibly two or three windows in the room.**
**Fig 11 (center): a.) A bedroom with two beds, a lamp, and possibly a window.b.) A hotel room with two twin beds and a lamp near a window.**
**Fig 12 (right): a.) A motorcycle parked in a garage next to a convertible sports car.b.) A garage containing a motorcycle and a convertible sports car parked next to each other.**
**Table-2: Results of image scene detection are mentioned in the following:**
**Image_Category** **Number of Images** **Scene Detection Count** **Percentage**
Airport 12 **1** **8.4**
Auditorium 12 **0** **0**
Bakery 8 **1** **12.5**
Bar 24 **3** **12.5**
Bathroom 11 **9** **81.9**
Bedroom 11 **5** **45.5**
Bookstore 12 **5** **41.7**
Casino 15 **0** **0**
Church 11 **2** **18.2**
Classroom 13 **0** **0**
**Cloister 16 **0** **0**
\begin{tabular}{|p{108.4pt}|p{108.4pt}|p{108.4pt}|} \hline \hline Closet & 16 & 0 & 0 \\ \hline ClothingStore & 5 & 0 & 0 \\ \hline ComputerRoom & 10 & 3 & 30 \\ \hline ConcertHall & 10 & 0 & 0 \\ \hline Corridor & 12 & 0 & 0 \\ \hline Deli & 8 & 4 & 50 \\ \hline Dentaloffice & 6 & 0 & 0 \\ \hline DiningRoom & 16 & 0 & 0 \\ \hline Elevator & 7 & 0 & 0 \\ \hline FastFood Restaurant & 13 & 2 & 15.4 \\ \hline Florist & 8 & 0 & 0 \\ \hline Grocery Store & 17 & 0 & 0 \\ \hline Gym & 16 & 0 & 0 \\ \hline HairsSalon & 9 & 0 & 0 \\ \hline HospitalRoom & 15 & 0 & 0 \\ \hline Bus & 14 & 8 & 57.2 \\ \hline Subway & 16 & 0 & 0 \\ \hline Kindergarten & 7 & 1 & 14.3 \\ \hline Kitchen & 19 & 14 & 73.7 \\ \hline laundromat & 12 & 0 & 0 \\ \hline Library & 9 & 2 & 22.3 \\ \hline Mall & 10 & 0 & 0 \\ \hline Movie Theatre & 13 & 0 & 0 \\ \hline Museum & 6 & 0 & 0 \\ \hline SwimmingPool & 12 & 11 & 91.7 \\ \hline \hline \end{tabular}
## 7 Conclusion and Future Work:
We have presented a comprehensive timeline and literature review of different models used in image captioning, covering various architectures and methodologies. Starting from the initial Feed Forward Neural Network-based model to the latest GPT-based models, we have highlighted the advancements and improvements made over the years. Additionally, we have identified and discussed the challenges faced in real-world image captioning, as well as conducted a detailed analysis of real-world scene identification using three different architectures. Furthermore, we have introduced a novel evaluation metric called SIS (Scene Identification Score) specifically for real-world scene identification.
Moving forward, our future work will concentrate on the application of Large Scale Pretraining to a generic CNN/RNN-based encoder-decoder pipeline, which remains largely unexplored despite being mentioned in several survey papers. We plan to explore training existing models on state-of-the-art image networks, incorporating new training strategies and leveraging advanced technologies. Moreover, considering the recent advancements in large language models like GPT-4, we are interested in exploring the emerging possibilities in the image captioning domain presented by these models.
## 8 Acknowledgements:
We express our sincere gratitude to Prof. Radha Guha for her invaluable guidance and mentorship throughout our study. We are grateful to SRM University AP for generously providing us with the necessary computational resources. Additionally, we would like to acknowledge the support of the OpenAI team for their provision of the partial free tire API, which significantly reduced our computational costs. We extend our thanks to the open source projects and the supportive community that played a crucial role in our research journey.
|
2301.07298 | Nonlocalization of singular potentials in quantum dynamics | Nonlocal modeling has drawn more and more attention and becomes steadily more
powerful in scientific computing. In this paper, we demonstrate the superiority
of a first-principle nonlocal model -- Wigner function -- in treating singular
potentials which are often used to model the interaction between point charges
in quantum science. The nonlocal nature of the Wigner equation is fully
exploited to convert the singular potential into the Wigner kernel with weak or
even no singularity, and thus highly accurate numerical approximations are
achievable, which are hardly designed when the singular potential is taken into
account in the local Schr\"odinger equation. The Dirac delta function, the
logarithmic, and the inverse power potentials are considered. Numerically
converged Wigner functions under all these singular potentials are obtained
with an operator splitting spectral method, and display many interesting
quantum behaviors as well. | Sihong Shao, Lili Su | 2023-01-18T04:24:33Z | http://arxiv.org/abs/2301.07298v1 | # Nonlocalization of singular potentials in quantum dynamics
###### Abstract
Nonlocal modeling has drawn more and more attention and becomes steadily more powerful in scientific computing. In this paper, we demonstrate the superiority of a first-principle nonlocal model -- Wigner function -- in treating singular potentials which are often used to model the interaction between point charges in quantum science. The nonlocal nature of the Wigner equation is fully exploited to convert the singular potential into the Wigner kernel with weak or even no singularity, and thus highly accurate numerical approximations are achievable, which are hardly designed when the singular potential is taken into account in the local Schrodinger equation. The Dirac delta function, the logarithmic, and the inverse power potentials are considered. Numerically converged Wigner functions under all these singular potentials are obtained with an operator splitting spectral method, and display many interesting quantum behaviors as well.
Wigner equation; Singular potential; Nonlocal effect; Spectral method; Operator splitting
81S30; 45K05; 35Q40; 65M70; 35S05
## 1 Background and motivation
It has been shown that the point charge description of electrons usually agrees well with the experimental results [1], where the interaction between them is dominated by the Coulomb potential -- a typical singular potential in quantum science [2, 3, 4, 5, 6]. Such Coulomb interaction has found various applications in physics [2, 7] and chemistry [8, 9, 1]. Apart from that, there exist some other singular potentials to describe the interactions arising from scattering problems [10], the short-range interactions in condensed matter [11, 12], Dirac monopole in the magnetic field [13] and etc. The logarithmic potential is also adopted to measure the entropy density in the study of two-phase flow [14].
Directly plugging a singular potential into the Schrodinger equation
\[\mathrm{i}\hbar\frac{\partial}{\partial t}\psi(x,t)=-\frac{\hbar^{2}}{2m}\nabla _{x}^{2}\psi(x,t)+V(x)\psi(x,t), \tag{1}\]
and then seeking the numerical solutions runs into a problematic situation, where \(\psi(x,t)\) gives the wavefunction, \(m\) and \(\hbar\) signify the mass of the particle and the reduced Planck constant, respectively. Let's take the Dirac delta function potential as an example:
\[V(x)=H\delta(x) \tag{2}\]
with a power of \(H\) (the influence of potential) and the Dirac delta function (see Fig. 1):
\[\delta(x)=\begin{cases}+\infty&x=0\\ 0&x\neq 0\end{cases},\quad\int_{\mathbb{R}}\delta(x)\mathrm{d}x=1. \tag{3}\]
This Dirac delta function potential, which diverges at \(x=0\), is often adopted to model an infinite well and barrier [15]. Obviously, there is no way for the finite difference method to find a suitable approximation to the function (3) and then to the equation
(1) equipped with the singular potential (2). Recourses to the Galerkin method inevitably sacrifices the accuracy or convergence order, which has been already pointed out by [16, 17] in solving the elliptical boundary value problem with the Dirac delta function source \(-\Delta u(x)=\delta(x)\).
In this paper, we adopt a nonlocalization approach based on the integral formulation to deal with the singular potential situation. Specifically, we turns to the Wigner function [18]
\[f(x,k,t)=\int_{\mathbb{R}}\psi(x+\frac{y}{2},t)\,\psi^{\dagger}(x-\frac{y}{2}, t)\,\exp(-\mathrm{i}ky)\ \mathrm{d}y, \tag{4}\]
and its governing equation
\[\frac{\partial}{\partial t}f(x,k,t)+\frac{\hbar k}{m}\nabla_{x}f(x,k,t)=\Theta _{V}[f](x,k,t), \tag{5}\]
both of which are defined in phase space \((x,k)\) with \(x\) being the position and \(k\) the wavenumber. Starting from the definition (4) where the Wigner function is calculated from the density matrix \(\psi(x,t)\psi^{\dagger}(x,t)\) by changing to center-of-mass coordinates followed by a Fourier transform, the Wigner equation (5) can be derived from the Schrodinger equation (1) in a straightforward manner. The nonlocalization of singular potentials is embodied in the pseudo-differential operator
\[\Theta_{V}[f](x,k,t) =\int_{\mathbb{R}}V_{w}(x,k-k^{\prime})f(x,k^{\prime},t)\ \mathrm{d}k^{\prime}, \tag{6}\] \[V_{w}(x,k) =\frac{1}{2\pi\mathrm{i}\hbar}\int_{\mathbb{R}}\exp(-\mathrm{i} ky)\,\left[V(x+\frac{y}{2})-V(x-\frac{y}{2})\right]\mathrm{d}y, \tag{7}\]
and all the information of potential \(V(x)\) is contained in the Wigner kernel \(V_{w}(x,k)\). Substituting the Dirac delta function potential (2) into Eq.(7) leads to
\[V_{w}(x,k)=\frac{2H}{\pi\hbar}\sin(2xk), \tag{8}\]
Figure 1: The Dirac delta function potential and its Wigner kernel with power \(H=1\). It can be intuitively seen that the singular potential is transformed into a non-singular Wigner kernel.
the plot of which is displayed in Fig. 1. It can be readily observed there that the Wigner kernel \(V_{w}(x,k)\) is no longer singular and thus we have a chance to seek highly accurate numerical solutions to the Wigner equation (5) with singular potentials. That is, the point singularity in \(V(x)\) is distributed over the whole space with the nonlocal action of pseudo-differential operator, thereby alleviating or even eliminating the singularity. After obtaining the Wigner function (4), the average of a quantum operator \(\hat{A}\) can be expressed as
\[\langle\hat{A}\rangle_{t}=\iint_{\mathbb{R}\times\mathbb{R}}A(x,k)f(x,k,t)\ \mathrm{d}x \mathrm{d}k, \tag{9}\]
where \(A(x,k)\) gives the corresponding classical function in phase space. In other words, the Wigner function formulation is fully equivalent to the wavefunction formulation for quantum mechanics [19]. Generally speaking, nonlocal models may offer more explanations for phenomena that involve possible singularities including the interaction with singular potentials and occurring at a distance [20].
By exploiting the intrinsic nonlocal nature of the Wigner function approach, we are able to obtain highly accurate numerical approximations to observable quantities in quantum dynamics with singular potentials with the help of spectral methods and operator splitting techniques. For demonstration purposes, this work focuses on the singular potentials the Wigner kernels of which have analytical forms, like the Dirac delta function potential. Otherwise, other extra numerical techniques should be adopted, for example, the truncated kernel method [21, 22]. It should be noted that there exist few high precision numerical simulations of the Wigner equation under singular potentials except for a recent attempt to numerically solve the Wigner-Coulomb system [23] as well as some qualitative analysis results [24, 25].
The rest of the paper is organized as follows. Section 2 presents the numerical results for the Dirac delta function potential and an comparison with the finite size model. Scattering of the Fermi-Dirac distribution in 4-D phase space is shown as well. Extensions to other three types of singular potentials are given in Section 3. Finally, conclusions and discussions are drawn in Section 4.
## 2 Quantum dynamics in a Dirac delta function potential
After truncating the \(k\)-space into \(\mathcal{K}=[k_{min},k_{max}]\)[26], a Fourier spectral approximation with \(N_{k}\) terms to the Wigner function \(f(x,k,t)\) reads
\[f(x,k,t)\approx f_{N_{k}}(x,k,t)=\sum_{\nu=-N_{k}/2+1}^{N_{k}/2}\alpha_{\nu}(x, t)\,\psi_{\nu}(k), \tag{10}\]
where \(\psi_{\nu}(k)=e^{2\pi\mathrm{i}\nu(k-k_{min})/L_{k}}\) with \(L_{k}=k_{max}-k_{min}\) gives the basis. Then the pseudo-differential term (6) can be approximated as follows
\[\Theta_{V}[f](x,k,t) \approx \Theta_{V}^{T}[f_{N_{k}}](x,k,t)=\sum_{\nu=-N_{k}/2+1}^{N_{k}/2}c _{\nu}(x)\,\alpha_{\nu}(x,t)\,\psi_{\nu}(k), \tag{11}\] \[c_{\nu}(x) = \int_{\mathcal{K}^{\prime}}V_{w}(x,k^{\prime})\,\mathrm{e}^{-2 \pi\mathrm{i}\nu k^{\prime}/L_{k}}\,\mathrm{d}k^{\prime},\quad\mathcal{K}^{ \prime}=[-L_{k},\,L_{k}]. \tag{12}\]
For the situation of the Dirac delta function potential (2), plugging Eq. (8) into Eq. (11) leads to the following close formula
\[c_{\nu}(x)=\frac{2H\mathrm{i}}{\pi\hbar}\left(\frac{\sin(\omega_{\nu}^{+}(x)L_ {k})}{\omega_{\nu}^{+}(x)}-\frac{\sin(\omega_{\nu}^{-}(x)L_{k})}{\omega_{\nu}^ {-}(x)}\right), \tag{13}\]
where \(\omega_{\nu}^{\pm}(x)=2x\pm\frac{2\pi}{L_{k}}\nu\) and the limits must be used when \(\omega_{\nu}^{\pm}(x)=0\). It can be easily observed in Eq. (2) that only \(c_{\nu}(x)\) involves the singular potential (2), through the non-singular Wigner kernel (8), thereby implying that the formula (4) treats the singularity with high accuracy, where the numerical errors only come from the truncation of \(k\)-space and the spectral approximation of \(f(x,k,t)\). After that, we adopt the Chebyshev spectral element method with inflow boundary conditions[27] in \(x\)-space and the fourth-order operator splitting technique[28] in \(t\)-direction to determine the remaining unknowns \(\alpha_{\nu}(x,t)\). For simplicity, we use the same \(M\) collocation points in all \(Q\) cells in \(x\)-space. Moreover, the above numerical method can be readily extended to 4-D and higher-dimensional scenarios in a dimension-by-dimension manner by using the tensor product of 2-D basis functions.
The \(L^{2}\)-error
\[\epsilon_{2}(t)=\left[\iint_{{\cal X}\times{\cal K}}\left(F(x,k,t)-f^{\rm ref }(x,k,t)\right)^{2}{\rm d}x{\rm d}k\right]^{1/2} \tag{5}\]
and \(L^{\infty}\)-error
\[\epsilon_{\infty}(t)=\max_{(x,k)\in{\cal X}\times{\cal K}}\left\{|F(x,k,t)-f^ {\rm ref}(x,k,t)|\right\} \tag{6}\]
are used to analyze the convergence of the errors, where \({\cal X}:=[X_{L},X_{R}]\) is the computational domain in \(x\)-space, \(F(x,k,t)\) represents the numerical solution, and the numerical solution obtained on the finest mesh is taken as the reference \(f^{\rm ref}(x,k,t)\). For convenience, the above errors in Eqs. (5) and (6) are numerically calculated on the following uniform mesh
\[(x_{i},k_{j})=((i-1/2)(X_{R}-X_{L}),(j-1/2)(k_{max}-k_{min}))/N_{um},\ i,j=1, \ldots,N_{um}, \tag{7}\]
where \(N_{um}\) denotes the mesh size.
### 2-D scattering of Gaussian wave packet
As stated in,[26, 27] the Gaussian wave packet
\[f(x,k,0)=\frac{1}{\pi}\,e^{-\frac{(x-a^{0})^{2}}{2\sigma^{2}}-2\sigma^{2}(k-k ^{0})^{2}} \tag{8}\]
is usually adopted as the initial function to test the convergence rate as well as to investigate the quantum tunneling, where \((x^{0},k^{0})\) gives the initial center position and \(\sigma\) is the minimum position spread. We will simulate its quantum scattering in the Dirac delta function potential, which has never been reported before in the literature. For the purpose of testing only, we set \(\hbar=1\ {\rm eV}\cdot{\rm fs}\), \(m=1\ {\rm eV}\cdot{\rm fs}^{2}\cdot{\rm nm}^{-2}\), \(x^{0}=-10\ {\rm nm}\), \(k^{0}=2\ {\rm nm}^{-1}\), and \(\sigma=2\ {\rm nm}\). The computational domain is chosen as \([X_{L},X_{R}]=[-30\ {\rm nm},30\ {\rm nm}]\) which is divided evenly into \(Q=20\) cells, \(-k_{min}=k_{max}=\pi\ {\rm nm}^{-1}\), and the quantum evolution with a fixed time step \(\Delta t=0.01\ {\rm fs}\) is stopped at \(t_{fin}=10\ {\rm fs}\).
The first test tries \(H=1\) in Eq. (2) and the resulting Wigner functions at \(t=2.5,5,7.5,10\ {\rm fs}\) are displayed in Fig. 2, which is obtained on the finest mesh we have tried: \(N_{k}=512,M=55\). It is clearly observed there that the wave packet still goes partially through the barrier though the barrier whose height is infinite. That is a clear manifestation of quantum tunneling effect in the Dirac delta function potential,[29] reflecting a fundamental difference between quantum world and macroscopic world. A possible explanation is that the width of the Dirac delta function barrier tends to
be extremely small, very close to \(0\) in particular, albeit the infinitely large height. To study the convergence rate with respect to \(N_{k}\), the number of collocation points in each \(x\)-cell is fixed to be \(M=55\). Similarly, when studying the convergence rate with respect to \(M\), the number of collocation points in \(k\)-space is fixed to be \(N_{k}=512\). Fig. 3 displays the spectral convergence of \(\epsilon_{2}(10)\) and \(\epsilon_{\infty}(10)\) against \(N_{k}\) or \(M\) where we have set \(N_{um}=600\) in Eq. (7). That is, the numerical results in Fig. 2 are numerically converged.
To be more specific, the uncertainty
\[\sigma_{x}(t)\,\sigma_{p}(t)=\sqrt{\left\langle\left(\hat{x}-\left\langle\hat{ x}\right\rangle_{t}\right)^{2}\right\rangle_{t}}\,\sqrt{\left\langle\left( \hat{p}-\left\langle\hat{p}\right\rangle_{t}\right)^{2}\right\rangle_{t}} \tag{9}\]
is adopted to measure the nonlocality, and its numerical value is still obtained on the uniform mesh given in Eq. (7), where \(p=\hbar k\) is the momentum. We show in Table 1 that the numerical results with different mesh sizes and find that the numerically converged value for \(\sigma_{x}(10)\sigma_{p}(10)\) is \(20.8510\) for \(H=1\).
Figure 2: The Wigner functions at different instants: A Gaussian wave packet runs through the Dirac delta function barrier Eq. (2) with power \(H=1\). Quantum tunneling effect can be clearly observed.
_Numerical values for \(\sigma_{x}(10)\sigma_{p}(10)\) with respect to increasing \(N_{k}\), \(M\) and \(N_{um}\) and the converged value is \(20.8510\)._
In addition to the uncertainty, we continue to use the partial mass
\[P_{r}(t)=\int_{[0,X_{R}]\times\mathcal{K}}f(x,k,t)\ \mathrm{d}x\mathrm{d}k \tag{10}\]
for investigating the tunneling effect for ten different powers: \(H=\pm 0.5\), \(\pm 1\), \(\pm 1.5\), \(\pm 2\), and \(\pm 2.5\). \(P_{r}\) can also be regarded as the tunneling rate in view of that the total mass equals to one. Figs. 4 and 5 show the tunneling rates and uncertainties for potential barriers \(H>0\) and potential wells with \(H<0\), respectively. The curves in Fig. 4 exhibit the deceleration of \(P_{r}(t)\) as \(H\) increases, and uncertainty peaking at \(H=1\). It can be further found that the moment when the uncertainty reaches the maximum coincides with that the tunneling rate reaches to about \(0.5\). At this moment, the variances accumulate the most and it is difficult to observed the position and momentum of the wavepacket simultaneously. Moreover, when the power is high enough, i.e., \(H\geq 1.5\), there are significant fluctuations of \(\sigma_{x}(t)\,\sigma_{p}(t)\) and \(P_{r}(t)\). That could be comprehended as follows. The influence of the power \(H\) leads to the high oscillations in the Wigner function at the center \(x=0\) nm. Therefore, these two observables fluctuate during the wave packet interacting with the barrier. When it comes to the wells with negative power, it can be observed in Fig. 5 that the trend
\begin{table}
\begin{tabular}{c c|c c|c c} \hline \hline \(N_{k}\) & \(\sigma_{x}(10)\sigma_{p}(10)\) & \(M\) & \(\sigma_{x}(10)\sigma_{p}(10)\) & \(N_{um}\) & \(\sigma_{x}(10)\sigma_{p}(10)\) \\ \hline
32 & 19.9616 & 21 & 21.0169 & 100 & 20.8355 \\
64 & 20.8938 & 26 & 20.8682 & 200 & 20.8467 \\
128 & 20.8570 & 31 & 20.8406 & 300 & 20.8510 \\
256 & 20.8529 & 36 & 20.8502 & 400 & 20.8510 \\
300 & 20.8524 & 41 & 20.8515 & 450 & 20.8510 \\
400 & 20.8515 & 45 & 20.8509 & 500 & 20.8510 \\
500 & 20.8510 & 51 & 20.8510 & 550 & 20.8510 \\
512 & 20.8510 & 55 & 20.8510 & 600 & 20.8510 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Numerical values for \(\sigma_{x}(10)\sigma_{p}(10)\) with respect to increasing \(N_{k}\), \(M\) and \(N_{um}\) and the converged value is \(20.8510\).
Figure 3: Spectral convergence with respect to \(N_{k}\) (left) and \(M\) (right) during the scattering of a Gaussian wave packet in the Dirac delta function potential Eq. (2) with \(H=1\).
of the tunneling rate \(P_{r}(t)\) is opposite to that of the uncertainty \(\sigma_{x}(t)\,\sigma_{p}(t)\): \(P_{r}(t)\) decreases and \(\sigma_{x}(t)\,\sigma_{p}(t)\) increases as as \(|H|\) increases.
### Finite size effect
The point charge causes singularity because it has no size. In view of this, one may use a finite size model to avoid the singularity. The Gaussian function with size of \(a\), denoted by \(V_{a}(x)\), is usually used to mimic the point charge model [1, 30], the validity of which relies on the following limit
\[\delta(x)=\lim_{a\to 0+0}V_{a}(x)=\lim_{a\to 0+0}\frac{1}{\sqrt{2\pi}a}\exp{(- \frac{x^{2}}{2a^{2}})}. \tag{11}\]
However, we would like to point out that there is a huge gap between the quantum behavior caused by the point charge \(\delta(x)\) and finite-size charge \(V_{a}(x)\).
Figure 4: Tunneling rates and uncertainties for the Dirac delta function barriers with powers \(H=0.5\), \(1\), \(1.5\), \(2\) and \(2.5\). The moment when the uncertainty reaches the maximum coincides with that the tunneling rate reaches about \(50\%\).
Figure 5: Tunneling rates and uncertainties for the Dirac delta function well with powers \(H=-0.5\), \(-1\), \(-1.5\), \(-2\) and \(-2.5\).
Table 2 displays the numerically converged uncertainties at \(t_{fin}=10\) fs for decreasing sizes. When \(a=10^{-16}\) nm, \(\sigma_{x}(10)\,\sigma_{p}(10)\) is about 0.8003, which is far less that 20.8510 caused by \(\delta(x)\). In fact, as the size gradually becomes smaller, the uncertainty first grows to 1.2272, then gradually decreases, and finally stays around 0.8003. Such apparent discrepancy can be also observed from the Wigner functions at the final instant in Fig. 6. Compared the Wigner function under \(V_{a}(x)\) with \(a=0.5\) nm in Fig. 6(a), it is obvious that the Wigner function under \(\delta(x)\) in Fig. 6(b) reaches the extrema around \(x=0\) nm, where the singular point exactly locates at, and the oscillation between positive and negative values is more vicious. More specifically, at the
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \(a\) nm & 10 & 5 & 0.5 & 0.1 & 0.01 & 1E-3 & 1E-4 & 1E-16 & 0 \\ \hline \(\sigma_{x}(10)\,\sigma_{p}(10)\) & 0.9811 & 1.0663 & 1.2272 & 1.0041 & 0.8027 & 0.8005 & 0.8003 & 0.8003 & 20.8510 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Uncertainties for different sizes at \(t_{fin}=10\) fs. \(a=0\) nm signifies the Dirac delta function barrier. The uncertainty of the Dirac delta function barrier \(\delta(x)\) is much larger than that of the Gaussian barrier of any finite size \(V_{a}(x)\).
Figure 6: Different quantum behavior caused by the Gaussian barrier with finite size \(V_{a}(x)\) and Dirac delta function barrier \(\delta(x)\). The Wigner functions at \(t_{fin}=10\) fs are displayed in (a) and (b), the tunneling rate in (c), and the uncertainty in (d). Both tunneling rates and uncertainties are obviously different.
final instant, the average position and moment are \(\left(\left\langle\hat{x}\right\rangle_{10},\left\langle\hat{p}\right\rangle_{10} \right)=\left(9.7732,1.9840\right)\) under the Gaussian barrier, but alter to \(\left(\left\langle\hat{x}\right\rangle_{10},\left\langle\hat{p}\right\rangle_{10 }\right)=\left(1.1313,0.0047\right)\) under \(\delta(x)\). Fig. 6(c) provides the tunneling rate \(P_{r}(t)\). It reaches almost to 1 as expected under the Gaussian barrier, which is consistent with the weak presence of negative part of the Wigner function in Fig. 6(a). By contrast, it is manifested in Fig. 6(d) that the same wave packet can just partially pass through the Dirac delta function barrier and its uncertainty increases significantly, which should result from the infinite height of the potential.
In a word, our numerical experiments suggests an essential difference between the singular potential and its regularized counterpart as already shown in investigating nuclear magnetic shielding.[30] No matter how small the size of Gaussian barrier we choose, it is still a smooth and local potential. The Dirac delta function potential, on the contrary, an ideal model widely used to simulate the point charge field source of quantum chemical reactions, has some essentially difference from such regularized one in studying quantum phenomena.
### 4-D scattering of Fermi-Dirac distribution
In view of the high dimensionality of phase space, the foregoing treatment of singular potentials using the Wigner function approach can be also extended to high-dimensional scenarios. This section is devoted to scattering of the Fermi-Dirac distribution in 4-D phase space under a singular potential. Specifically, we adopt the following position-independent 2-D Fermi-Dirac distribution function [31, 32] as the initial data for the Wigner equation (1.5)
\[f(x_{1},x_{2},k_{1},k_{2},0)=\frac{\sqrt{2mk_{B}T}}{\pi\hbar}\int_{0}^{\infty }\frac{1}{1+\exp(y^{2}+\frac{((\hbar k_{1})^{2}+(\hbar k_{2})^{2})/(2m)-E_{ \rm F}}{k_{B}T})}{\rm d}y, \tag{2.12}\]
where \(m=0.067\,m_{e}\), \(m_{e}=5.68562966\ {\rm eV}\cdot{\rm fs}^{2}\cdot{\rm nm}^{-2}\), \(\hbar=0.658211899\ {\rm eV}\cdot{\rm fs}\), \(k_{B}=8.61734279\times 10^{-5}\ {\rm eV}\cdot{\rm K}^{-1}\), \(T\) is taken as 300 K, and \(E_{\rm F}=0.1\) eV signifies the Fermi energy. Meanwhile, we choose an annular singular potential \(V(x_{1},x_{2})=\sum_{i=1}^{8}\delta(x_{1}-d_{1}^{i})\delta(x_{2}-d_{2}^{i})\), where all singular points, numbered 1 to 8 in an anti-clockwise direction with the right-most one being the first, are evenly distributed in a circle with radius equal to 2 nm and \((d_{1}^{i},d_{2}^{i})\) gives the position of the \(i\)-th singular point.
Figure 7: 4-D scattering of Fermi-Dirac distribution: Errors of the spatial marginal distribution against \(N_{k}\) (left) and \(M\) (right).
The computational domain is \(\mathcal{X}\times\mathcal{X}\times\mathcal{K}\times\mathcal{K}\) with \(\mathcal{X}=[-10\text{ nm},10\text{ nm}]\) and \(\mathcal{K}=[-\pi\text{ nm}^{-1},\pi\text{ nm}^{-1}]\). We use the same \(N_{k}\) collocation points for all \(\mathcal{K}\), the same number of elements \(Q=5\) and \(M\) collocation points in each element for all \(\mathcal{X}\). The quantum dynamics is evolved to \(t_{fin}=2.5\) fs with time step \(\Delta t=0.01\) fs. We measure the errors of spatial marginal distribution of the Wigner function,
\[F_{sm}(x_{1},x_{2},t)=\iint_{\mathcal{K}\times\mathcal{K}}f(x_{1},x_{2},k_{1},k _{2},t)\,\mathrm{d}k_{1}\mathrm{d}k_{2}, \tag{13}\]
in a similar way to calculating Eqs. (5) and (6). Fig. 7 presents the errors against
Figure 8: 4-D scattering of Fermi-Dirac distribution: Spatial marginal distributions in Eq. (13) subtracted by the corresponding constant distributions in the free space. Eight Dirac delta function potentials give eight singular points (small white dots), which are numbered \(1\) to \(8\) in an anti-clockwise direction with the right-most one being the first and evenly distributed in a circle with radius equal to \(2\) nm. In (i), we plot the contour inside the circle at \(t=2.5\) fs with \(10\) equally spaced contour lines from \(-0.01216\) to \(0.004087\).
\(N_{k}\) and \(M\) after fixing \(N_{um}=400\) in Eq. (7) and the spectral convergence is evident again. The spatial marginal distributions on the finest mesh at different instants are displayed in Fig. 8 where the corresponding constant distributions in the free space, \(F_{sm}^{free}(x_{1},x_{2},t)\equiv 0.05384\), have been subtracted. It can be observed there that the Fermi-Dirac distribution first reacts strongly to the eight singular points in the circle during which many small oscillations are produced in the central area of the circle, and then gradually expands to the surroundings. Obviously, such expanding is blocked by the eight Dirac delta function potentials and the resulting interference forms 12 branches outside the circle: 4 big branches lie in the main directions, north, east, south and west, respectively, and the remaining 8 small branches are equally distributed between them, where six lines that are determined by pairs of singular points: \((1,7)\), \((2,6)\), \((3,5)\), \((1,3)\), \((4,8)\) and \((5,7)\) serve as the boundaries of branches. Inside the circle, the interference pattern shows a clear square structure that is also shaped by the same six lines. At the final time \(t=2.5\) fs, a basin structure emerges: The spatial marginal distribution inside the circle is reduced to less than \(F_{sm}^{free}\) (see Fig. 8(i)), and that above \(F_{sm}^{free}\) is all outside the circle (see Fig. 8(h)).
## 3 Extensions to other singular potentials
In this section, we devote ourselves into using the Wigner function approach to the following singular potentials:
* The logarithmic potential (10) \[V(x)=H\log(x)\] (11) which is naturally related to the entropy expression [14];
* The inverse power potential for \(\alpha\in(0,1)\) (12) \[V(x)=H|x|^{-\alpha}\] which can be found in various quantum mechanical models [5, 33, 34];
* The inverse square potential (13) \[V(x)=H|x|^{-2}\] which has strong singularity at \(x=0\) and is extensively used in high-energy scattering studies [34].
### The logarithmic potential
Plugging Eq. (10) into Eq. (7) yields the corresponding Wigner kernel
\[V_{w}(x,k)=-\frac{H}{\hbar}\frac{\sin(2xk)}{|k|}, \tag{14}\]
and then substituting it in Eq. (3) leads to
\[\frac{\hbar}{2H\mathrm{i}}c_{\nu}(x)=\int_{0}^{L_{k}}\frac{\sin(2xk^{\prime}) \sin(\tilde{\nu}k^{\prime})}{k^{\prime}}\mathrm{d}k^{\prime}:=\left(\int_{0}^ {\varepsilon}+\int_{\varepsilon}^{L_{k}}\right)g_{\nu}(x,k^{\prime})\mathrm{d }k^{\prime}, \tag{15}\]
where \(\varepsilon\) is a prescribed small parameter and \(\tilde{\nu}=2\pi\nu/L_{k}\). Using the the Taylor expansion in \((0,\varepsilon)\) gives
\[\int_{0}^{\varepsilon}g_{\nu}(x,k^{\prime})\mathrm{d}k^{\prime}=\tilde{\nu}x \varepsilon^{2}-(\frac{1}{3}\tilde{\nu}x^{3}+\frac{1}{12}\tilde{\nu}^{3}x) \varepsilon^{4}+\mathcal{O}(\varepsilon^{7}),\]
and with the help of the cosine integral function \(\mathrm{Ci}(x)=-\int_{x}^{+\infty}\frac{\cos t}{t}\,\mathrm{d}t\),we have
\[\int_{\varepsilon}^{L_{k}}g_{\nu}(x,k^{\prime})\mathrm{d}k^{\prime}=\frac{ \mathrm{Ci}(|\omega_{\nu}^{+}(x)|\varepsilon)-\mathrm{Ci}(|\omega_{\nu}^{-}(x) |\varepsilon)-\mathrm{Ci}(|\omega_{\nu}^{+}(x)|L_{k})+\mathrm{Ci}(|\omega_{\nu }^{-}(x)|L_{k})}{2}.\]
Accordingly, the expansion coefficient \(c_{\nu}(x)\) given in an oscillatory improper integral (3.5) can be approximated to the machine resolution by choosing \(\varepsilon=1\)E-5. Other parameters are set to be: \(H=1\), \(-X_{L}=X_{R}=30\) nm, \(-k_{min}=k_{max}=\pi\) nm\({}^{-1}\), \(N_{k}=512\), \(Q=20\), \(M=55\), \(\Delta t=0.01\) fs, and \(N_{um}=600\). We use an initial Gaussian wave packet close to the origin by setting \(x^{0}=-1\) nm and \(k^{0}=0.5\) nm\({}^{-1}\) in Eq. (2.8). Numerical convergence tests are given in Fig. 9 and show clearly the spectral accuracy. The left column of Fig. 10 plots the numerically converged Wigner functions at \(t=2.5\), \(5\), \(7.5\), \(10\) fs obtained on the finest mesh. It can be observed there that, the wave packet is attracted by the logarithmic potential (3.1), and keeps moving around the singular point, during which many small oscillations appear around the origin along with the singularity.
It has been shown that the Poisson summation formula can be used to approximate the Wigner kernel (1.7) well (\(y_{\zeta}=\zeta\,\Delta y\) and \(\Delta y\) being the spacing):
\[V_{w}(x,k)\approx\frac{\Delta y}{2\pi\mathrm{i}\hbar}\sum_{\zeta=-\infty}^{+ \infty}\left[V(x+\frac{y_{\zeta}}{2})-V(x-\frac{y_{\zeta}}{2})\right]\,e^{- \mathrm{i}ky_{\zeta}} \tag{3.6}\]
when the external potential \(V(x)\) is smooth and localized,[26] but fails when taking the Coulomb interaction into account.[23] Here we would like to confirm this failure using the logarithmic potential. After using the "approximate" Wigner kernel (3.6) to replace the analytical one (3.4) and keeping all other settings unchanged, we rerun the simulations in the left column of Fig. 10 and display the resulting Wigner functions in the middle column of Fig. 10, which shows some obvious discrepancy. Compared with the reference solutions, the Wigner functions obtained with Eq. (3.6) display much more severe oscillations with higher peaks and deeper valleys, and the corresponding spatial marginal distributions show some spurious oscillations and non-physical negative values (see the right column of Fig. 10).
Figure 9: The logarithmic potential: Numerical convergence against \(N_{k}\) (left) and \(M\) (right) at \(t_{fin}=10\) fs.
### The inverse power potential
According to the Fourier transform of the inverse power function with \(\alpha\in(0,1)\): \(\mathcal{F}[|x|^{\alpha}](\xi)=-2\sin(\frac{\pi}{2}(\alpha-1))\Gamma(\alpha)|\xi| ^{-\alpha}\), the Wigner kernel of Eq. (10) reads
\[V_{w}(x,k)=\frac{H\sin(\frac{\pi}{2}\alpha)\Gamma(\alpha)2^{2-\alpha}}{\pi\hbar }\frac{\sin(2xk)}{|k|^{\alpha}}, \tag{11}\]
Figure 10: The logarithmic potential: The Wigner functions corresponding to the analytical Wigner kernel (11) (left), to the “approximate” Wigner kernel (12) (middle) and their spatial marginal distributions (right).
Figure 11: The inverse power potential: Numerical convergence with respect to \(N_{k}\) (left) and \(M\) (right) at \(t_{fin}=10\) fs.
Figure 12: The inverse power potential: The Wigner functions at different instants. We set \(H=1\) and \(\alpha=1/2\) in Eq. (10).
where \(\Gamma(x)\) gives the Gamma function. Combining Eqs. (10) and (11) together yields
\[c_{\nu}(x)=\frac{2H{\rm i}\sin(\frac{\pi}{2}\alpha)\Gamma(\alpha)2^{2-\alpha}}{ \pi\hbar}\int_{0}^{L_{k}}\frac{\sin(2xk)\sin(2\pi\nu k/L_{k})}{k^{\alpha}}\,{ \rm d}k, \tag{12}\]
which can be efficiently approximated to the machine accuracy with the help of the Generalized Hypergeometric function \({}_{1}F_{2}((1-\alpha)/2;\,1/2,(3-\alpha)/2;\,x)\)[35]. For example, when \(\alpha=1/2\), that Generalized Hypergeometric function reduces to the Fresnel function \({\rm C}(x)=\int_{0}^{x}\cos(\pi t^{2}/2)\,{\rm d}t\). We adopt the same parameters as in Section 3.1 to simulate the scattering between the Gauss wave packet and the inverse power potential. Fig. 11 shows the spectral convergence with respect to \(N_{k}\) and \(M\) again, and Fig. 12 the Wigner functions at four different instants. We are able to observe there that the wave packet partially passes through the inverse power potential barrier; and the negative Wigner function, sandwiched between two scattered wave packets in opposite directions, strongly implies the uncertainty principle around the singularity. Fig. 13 further plots the effect of power \(\alpha\) on the tunneling rate \(P_{r}(t)\) in Eq. (11) and uncertainty \(\sigma_{x}(t)\,\sigma_{p}(t)\) in Eq. (10). It is evident that, the tunneling rate gradually increases as \(\alpha\) rises, which reflects that the width of the potential becomes steadily smaller. Since \(P_{r}(t)\) never rises over \(0.5\) throughout the scattering, the uncertainty \(\sigma_{x}(t)\,\sigma_{p}(t)\) shows a mounting tendency whilst \(P_{r}(t)\) ascends, which is incurred by the shape change of the potential, namely the increase of \(\alpha\).
### The inverse square potential
The Wigner kernel of the inverse square potential (11) is
\[V_{w}(x,k)=-\frac{4H}{\hbar}|k|\sin(2xk) \tag{13}\]
and plugging it into Eq. (11), we are able to get a close form for \(c_{\nu}(x)\) like Eq. (5). The parameters are: \([X_{L},X_{R}]=[-30\;{\rm nm},30\;{\rm nm}]\), \([k_{min},k_{max}]=[-\pi\;{\rm nm}^{-1},\pi\;{\rm nm}^{-1}]\), \(N_{k}=512\), \(Q=40\), \(M=55\), \(N_{um}=600\), \(\Delta t=0.005\) fs, \(H=1\), \(x^{0}=-5\) nm, \(k^{0}=1\;{\rm nm}^{-1}\). Fig. 14 verifies the spectral convergence against both \(N_{k}\) and \(M\). Fig. 15 displays the quantum dynamics where the Gaussian wave packet is almost totally
Figure 13: The inverse power potential: Tunneling rates and uncertainties. It is clearly seen that the uncertainty \(\sigma_{x}(t)\,\sigma_{p}(t)\) shows a mounting tendency whilst \(P_{r}(t)\) ascends.
reflected after hitting the singular barrier. The tunneling rate \(P_{r}(t)\) are 0.01158, 0.009115, 0.006731 and 0.002025 at \(t=2\), 4, 5, and 8 fs, respectively, indicating transparently that it is difficult for the wave packet to pass through the barrier due to the strong singularity of the inverse power potential. This is very different from the scattering shown in Section 3.2 with the inverse power potential which has much weaker singularity. Moreover, severe oscillations of positive and negative Wigner functions clearly appear around the origin, which accords with the summary that the quantum behavior near the singularities is difficult to be measured.
## 4 Conclusions
With the help of the Wigner function approach for quantum mechanics -- a first-principle nonlocal model, we perform highly accurate numerical simulations of quantum dynamics under singular potentials, during which the nonlocal characteristic of Wigner function contributes to the attenuation of singularity of the potentials. Numerically converged Wigner functions under the Dirac delta function, the logarithmic, and the inverse power potentials are obtained with an operator splitting spectral method. Many interesting quantum behaviors are also revealed during the scattering under these singular potentials. It should be noted that all existing Wigner simulations truncate the nonlocal integral in \(k\)-space, but the effect of such truncation on long-time simulations of quantum dynamics is hardly estimated in advance. Instead, motivated by recently proposed adaptive technologies on unbounded domains [36, 37, 38], we are developing numerical methods to solve the Wigner equation without truncating the nonlocal \(k\)-integral.
## Acknowledgements
This research was supported by the National Key R&D Program of China (Nos. 2020AAA0105200, 2022YFA1005102) and the National Natural Science Foundation of China (Nos. 12288101, 11822102). SS is partially supported by Beijing Academy of Artificial Intelligence (BAAI).
|
2303.11283 | Resource Saving via Ensemble Techniques for Quantum Neural Networks | Quantum neural networks hold significant promise for numerous applications,
particularly as they can be executed on the current generation of quantum
hardware. However, due to limited qubits or hardware noise, conducting
large-scale experiments often requires significant resources. Moreover, the
output of the model is susceptible to corruption by quantum hardware noise. To
address this issue, we propose the use of ensemble techniques, which involve
constructing a single machine learning model based on multiple instances of
quantum neural networks. In particular, we implement bagging and AdaBoost
techniques, with different data loading configurations, and evaluate their
performance on both synthetic and real-world classification and regression
tasks. To assess the potential performance improvement under different
environments, we conduct experiments on both simulated, noiseless software and
IBM superconducting-based QPUs, suggesting these techniques can mitigate the
quantum hardware noise. Additionally, we quantify the amount of resources saved
using these ensemble techniques. Our findings indicate that these methods
enable the construction of large, powerful models even on relatively small
quantum devices. | Massimiliano Incudini, Michele Grossi, Andrea Ceschini, Antonio Mandarino, Massimo Panella, Sofia Vallecorsa, David Windridge | 2023-03-20T17:19:45Z | http://arxiv.org/abs/2303.11283v2 | # Resource Saving via Ensemble Techniques for Quantum Neural Networks
###### Abstract
Quantum neural networks hold significant promise for numerous applications, particularly as they can be executed on the current generation of quantum hardware. However, due to limited qubits or hardware noise, conducting large-scale experiments often requires significant resources. Moreover, the output of the model is susceptible to corruption by quantum hardware noise. To address this issue, we propose the use of ensemble techniques, which involve constructing a single machine learning model based on multiple instances of quantum neural networks. In particular, we implement bagging and AdaBoost techniques, with different data loading configurations, and evaluate their performance on both synthetic and real-world classification and regression tasks. To assess the potential performance improvement under different environments, we conduct experiments on both simulated, noiseless software and IBM superconducting-based QPUs, suggesting these techniques can mitigate the quantum hardware noise. Additionally, we quantify the amount of resources saved using these ensemble techniques. Our findings indicate that these methods enable the
construction of large, powerful models even on relatively small quantum devices.
## 1 Introduction
The emerging field of quantum machine learning [1] holds promise for enhancing the accuracy and speed of machine learning algorithms by utilizing quantum computing techniques. Although the potential of quantum machine learning is expected to be advantageous for certain classes of problems in chemistry, physics, material science, and pharmacology [2], its applicability to more conventional use cases remains uncertain [3]. Notably, utilizable quantum machine learning algorithms generally need to be adapted to run on 'NISQ' devices [4], that are current noisy quantum computer, no error corrected and with modest number of qubits and circuit depth capabilities. In the quantum machine learning scenario, the quantum counterparts of classical neural networks, quantum neural networks [5], have emerged as the de facto standard model for solving supervised and unsupervised learning tasks in the quantum domain.
While quantum neural networks have generated much interest, they presently have some issues. The first is _barren plateau_[6] characterised by the exponentially-fast decay of the loss gradient's variance with increasing system size. This problem may be exacerbated by various factors, such as having overly-expressive quantum circuits [7]. To address this issue, quantum neural networks need to be carefully designed [8] and to incorporate expressibility control techniques such as projection [9] and bandwidth control [10]. The second problem, which is the one addressed in this work, concerns the amount of resources required to run quantum neural networks (the limited number of total qubits -currently up to over a hundred- and the low fidelity of operations on current quantum devices severely restrict the size of the quantum neural network in terms of input dimension and layers).
In order to address the latter issue, we propose employing of NISQ-appropriate implementation of ensemble learning [11], a widely used technique in classical machine learning for tuning the bias and variance of a specific machine learning mechanism via the construction of a stronger classifier using multiple weak components, such that the ensemble, as a whole, outperforms the best individual classifier. The effectiveness of ensemble systems has been extensively demonstrated empirically and theoretically [12], although there does not currently exist any overarching theoretical framework capable of e.g. covering the requirements of ensemble components diversity to guarantee its out-performance. We here seek to provide and quantify a motivation for employing classical ensemble techniques in relation to NISQ-bases quantum neural networks, which we address via the following three arguments.
The first argument concerns the potential for the superior performance of an ensemble system composed of small quantum neural networks compared to a single larger quantum neural network. This notion is based on the rationale that while quantum neural networks are inherently powerful machine learning
models, they exhibit intrinsic variance due to the nature of highly non-convex loss landscape, implying that different predictors will result from randomly-initialised stochastic gradient descent training, in common with classical neural networks. (Modern deep learning practice often deliberately overparameterises the network in order to render the loss more convex [13], with the asymptotic case of infinitely wide neural networks exhibiting a fully convex loss landscape, making it effectively a linear model [14]). Although overparameterization in quantum neural networks has been studied theoretically [15, 16, 17] and has been shown to be beneficial to generalization performances within certain settings, the increase in resource requirements makes this approach almost completely impractical on NISQ devices. In the classical literature, however, it has been demonstrated that ensemble techniques can perform comparably to the largest (generally overparameterized) models with significantly fewer resources (especially in relation to overall model parameterization), c.f. for example [18, Figure 2].
The second argument pertains to the resource savings achievable by ensemble systems, particularly in terms of the number of qubits, gates, and training samples required. For example, the boosting ensemble technique involves progressive dividing of the training dataset into multiple, partially overlapping subsets on the basis of their respective impact on the performance of the cumulative ensemble classifier created by summing of the partial weak classifiers trained on previously-selected data subsets. This enables the ensemble quantum neural network to be constructed in parallel with individual quantum neural networks operating on datasets of reduced size. The random subspace technique, by contrast, trains each base predictor on a random subset of features, but also provides an advantage in terms of the overall number of qubits and gates required. Employing the random subspace technique in a quantum machine learning setting would parallel the various quantum circuit splitting techniques (c.f. for example [19]), and divide-and-conquer approaches, that have been utilized in the field of quantum chemistry [20] and quantum optimization [21].
Our third argument, which is specific to quantum computing, examines the potential of ensembles' noise-canceling ability. Previous works have demonstrated that ensembles can enhance the performance of several noisy machine-learning tasks (see [22]). Our investigation aims to determine whether and to what extent these techniques can reduce the impact of noise during the execution on a NISQ device _at the applicative level_. This approach differs from most current approaches, which aim to reduce noise at a lower level, as described in [23].
We here examine the impact of ensemble techniques based on bagging (bootstrap aggregation) and boosting ensembles in a quantum neural network setting across seven variant data loading schemes. Bagging techniques are selected for their applicability in high-variance settings, i.e. those exhibiting significant fluctuations in relation to differ initialisations and differ sample subselections; contrarily, boosting techniques are effective in relation to high-bias models, i.e. those which are relatively insensitive to data subsampling.
Our first objective is to quantify the amount of resources (in particular, the number of qubits, gates, parameters, and training samples) saved by the respective approaches. Secondly, we evaluate the performance using quantum
neural networks as base predictors to solve a number of representative synthetic and real-world regression and classification tasks. Critically, the accuracy and loss performance of these approaches are assessed with respect to the number of layers of the quantum neural networks in a simulated environment. We thus obtain a layer-wise quantification of performance that addresses one of the fundamental questions in architecting deep neural systems, namely, how many layers of abstraction to incorporate? Note that this question is fundamentally different in a quantum setting compared to classical neural systems; in the latter, the possibility of multi-level feature learning exists, and thus the potential for indefinite performance improvement with neural layer depth [17]. This contrast with the quantum neural networks, in which an increase in the number of layers affects the expressibility of the ansatz and thus might introduce a barren plateau [7].
Finally, the noise-canceling capabilities of ensembles will be investigated by testing a synthetic linear regression task on IBM's superconductor-based quantum processing unit (QPU) Lagos.
ContributionsOur contributions are the following:
* We evaluate various ensemble schemes that incorporate bagging and boosting techniques into quantum neural networks, and quantify the benefits in terms of resource savings, including the number of qubits, gates, and training samples required for these approaches.
* We apply our approach to the IBM Lagos superconductor-based quantum processing unit to investigate the potential advantages of bagging techniques in mitigating the effects of noise during the execution of quantum circuits on NISQ devices.
* We conduct a layer-wise analysis of quantum neural network performance in the ensemble setting with a view to determining the implicit trade-off between ensemble advantage and layer-wise depth.
## 2 Related Works
The quest for quantum algorithms able to be executed on noisy small-scale quantum systems led to the concept of Variational Quantum Circuits (VQCs), i.e. quantum circuits based on a hybrid quantum-classical optimization framework [24, 25]. VQCs are currently believed to be promising candidates to harness the potential of QC and achieve a quantum advantage [26, 27, 28]. VQCs rely on a hybrid quantum-classical scheme, where a parameterized quantum circuit is iteratively optimized with the help of a classical co-processor. This way, low-depth quantum circuits can be efficiently designed and implemented on the available NISQ devices; the noisy components of the quantum process are mitigated by the low number of quantum gates present in the VQCs. The basic structure of a VQC include a data encoding stage, where classical data are embedded
into a complex Hilbert space as quantum states, a processing of such quantum states via an ansatz made of parameterized rotation gates and entangling gates, and finally a measurement of the circuit to retrieve the expected outcome. Many different circuit architectures and ansatzes have been proposed for VQCs [29; 30; 31; 32], depending on the structure of the problem or on the underlying quantum hardware. VQCs demonstrated remarkable performances and a good resilience to noise in several optimization tasks and real-world applications. For example, researchers in [33] introduced a circuit-centric quantum classifier based on VQC that could effectively be implemented on a near-term quantum device. It correctly classified quantum encoded data and demonstrated to be robust against noise. Authors in [25] proposed a VQC that successfully approximated high-dimensional regression and classification functions with a limited number of qubits.
VQCs are incredibly well-suited for the realization of quantum neural networks with a constraint on the number of qubits [34]. A quantum neural network is usually composed of a layered architecture able to encode input data into quantum states and perform heavy manipulations in a high-dimensional feature space. The encoding strategy and the choice of the circuit ansatz are critical for the achievement of superior performances over classical NNs: more complex data encoding with hard-to-simulate feature maps could lead to a concrete quantum advantage [35], but too expressive quantum circuits may exhibit flatter cost landscapes and result in untrainable models [7]. An example of quantum neural network was given in [36], where a shallow NN was employed to perform classification and regression tasks using both simulators and real quantum devices. In [37], authors proposed a multi-layer Quantum Deep Neural Network (QDNN) with three variational layers for an image classification task. They managed to prove that QDNNs have more representation capacity with respect to classical deep NN. A hybrid Quantum-classical Recurrent Neural Network (QRNN) was presented in [38] to solve a time series prediction problem. The QRNN, composed of a quantum layer as well as two classical recurrent layers, demonstrated superior performances over the classical counterpart in terms of prediction error.
However, quantum neural networks suffer from some non-negligible problems, which deeply affect their performances and limit their impact in the quantum ecosystem. Firstly, they are still subject to quantum noise, and it gets worse as the number of layers (i.e., the depth of the quantum circuit) increases [39; 40]. Secondly, barren plateaus phenomena may occur depending on the ansatz and the number of qubits chosen, reducing the trainability of such models [7; 41; 6]. Finally, data encoding on NISQ devices continues to represent an obstacle when the number of features is considerable [34], making them hard to implement and train [38].
In classical ML, ensemble learning has been investigated for years to improve generalization and robustness over a single estimator [42; 11]. Ensembling is based on the so-called "wisdom of the crowd" principle, namely it combines the predictions of several base estimators with the same learning algorithm to build a single stronger model. Despite there are many different ensemble methods, the latter can be easily grouped into two different categories: bagging methods, which
build and train several estimators independently and then compute an average of their predictions [43], and boosting methods, which in turn train the estimators sequentially so that the each one corrects the predictions of the prior models and output a weighted average of such predictions [44]. Ensemble methods for NNs have also been extensively studied, yielding remarkable performances in both classification and regression tasks [45, 46, 47].
In the quantum setting, the adoption of an ensemble strategy has received little consideration in the past few years, with very few approaches focusing on near-term quantum devices and VQC ensembles. In [48, 49], the authors exploit the superposition principle to obtain an exponentially large ensemble wherein each instance is weighted according to its accuracy on the training dataset. However, they make use of a fault-tolerant approach rather than considering limited quantum resources. A similar approach is explored in [50], where authors create an ensemble of Quantum Binary Neural Networks (QBNNs) with reduced computational training cost without taking into consideration the amount of quantum resources necessary to build the circuit. An efficient strategy for bagging with quantum circuits is proposed in [51] instead. Very recently, [52] has proposed a distributed framework for ensemble learning on a variety of NISQ quantum devices, although it requires many NISQ devices to be actually implemented. A quantum ECOC multiclass ensemble approach was proposed in [53]. In [54], the authors investigated the performance enhancement of a majority-voting-based ensemble system in the quantum regime. Authors in [55] studied the role of ensemble techniques in the context of quantum reservoir computing. Finally, an analysis of robustness to hardware error as applied to quantum reinforcement learning, and presenting compatible results, is given in [56].
In this paper, we propose a classical ensemble learning approach to the outputs of several quantum neural networks in order to reduce the quantum resources for a given quantum model and provide superior performances in terms of error rate over single quantum neural network instances. To the best of our knowledge, no one has ever proposed such an ensemble framework for VQCs. We also compare both bagging and boosting strategy to provide an analysis on the most appropriate ensemble methods for quantum neural networks in a noiseless setting. An error analysis with respect to the number of layers of the quantum neural networks reveals that bagging models greatly outperform the baseline model with low number of layers, with remarkable performances as the number of layers increase. Finally, we apply our approach to the IBM Lagos superconductor-based QPU to investigate the potential advantages of bagging techniques in mitigating the effects of noise during the execution of quantum circuits on NISQ devices.
## 3 Background and Notation
We provide a brief introduction to the notation and concepts used in this work. The sets \(\mathcal{X}\) and \(\mathcal{Y}\) represent the set of features and targets, respectively. Typically,
\(\mathcal{X}\) is equal to \(\mathbb{R}^{d}\), with \(d\) equal to the dimensionality in input, whereas \(\mathcal{Y}\) is equal to \(\mathbb{R}\) for regression tasks and \(\mathcal{Y}\) is equal to \(\{c_{1},...,c_{k}\}\) for \(k\)-ary classification tasks. Sequences of elements are indexed in the apex with \(x^{(j)}\), where the \(i\)-th component is denoted as \(x_{i}\). The notation \(\epsilon\sim\mathcal{N}(\mu,\sigma^{2})\) indicates that the value of \(\epsilon\) is randomly sampled from a univariate normal distribution with mean \(\mu\) and variance \(\sigma^{2}\). We use the function \(\llbracket P\rrbracket\) to denote one when the predicate \(P\) is true and zero otherwise.
### Models in quantum machine learning
We define the state of a quantum system as the density matrix \(\rho\) having unitary trace and belonging to the Hilbert space \(\mathcal{H}\equiv\mathbb{C}^{2^{n}\times 2^{n}}\) where \(n\) is the number of qubits. The system starts in the state \(\rho_{0}=|0\rangle\!\langle 0|\). The evolution in a closed quantum system is described by a unitary transformation \(U=\exp(-itH)\), \(t\in\mathbb{R}\), \(H\) Hermitian operator, and acts like \(\rho\mapsto U^{\dagger}\rho U\). The measurement of the system in its computational basis \(\{\Pi_{i}=|i\rangle\!\langle i|\}_{i=0}^{2^{n}-1}\) applied to the system in the state \(\rho\) will give outcome \(i\in 0,1,...,2^{n}-1\) with probability \(\mathrm{Tr}[\Pi_{i}\rho\Pi_{i}]\) after which the state collapses to \(\rho^{\prime}=\Pi_{i}\rho\Pi_{i}/\,\mathrm{Tr}[\Pi_{i}\rho\Pi_{i}]\). A different measurement operation is given by the expectation value of an observable \(O=\sum_{i}\lambda_{i}\Pi_{i}\) acting on the system in state \(\rho\), whose value is \(\langle O\rangle=\mathrm{Tr}[\rho O]\).
Quantum computation can be described using a quantum circuit, a sequence of gates (i.e. elementary operations) acting on one or more qubits of the system terminating with the measurement operation over some or all of its qubits. The output of the measurement can be post-processed using a classical function. "The set of gates available shall be _universal_", i.e. the composition of such elementary operation allows the expression of any unitary transformation with arbitrary precision. An exemplar universal gate set is composed of parametric operators \(R_{x}^{(i)}(\theta)=\exp(-i\frac{\theta}{2}\sigma_{x}^{(i)})\), \(R_{y}^{(i)}(\theta)=\exp(-i\frac{\theta}{2}\sigma_{y}^{(i)})\), \(R_{z}^{(i)}(\theta)=\exp(-i\frac{\theta}{2}\sigma_{z}^{(i)})\), and the operator \(\mathrm{CNOT}^{(i,j)}=\exp(-i\frac{\pi}{4}(I-\sigma_{z}^{(i)})(I-\sigma_{x}^{ (j)}))\). The gate \(I\) is the identity. The matrices \(\sigma_{x}=\left(\begin{smallmatrix}0&1&0\\ 1&0\end{smallmatrix}\right),\sigma_{y}=\left(\begin{smallmatrix}0&1&0\\ 1&0\end{smallmatrix}\right),\sigma_{z}=\left(\begin{smallmatrix}0&1&0\\ 1&0\end{smallmatrix}\right)\) are the Pauli matrices. The apex denotes explicitly the qubits in which the transformation acts.
Quantum machine learning forms a broad family of algorithms, some of which require fault-tolerant quantum computation while others are ready to execute on current generation 'NISQ' (noisy) quantum devices. The family of NISQ-ready techniques of interest in this document is denoted _variational quantum algorithms_[24]. These algorithms are based on the tuning of a cost function \(C(\theta)\) dependent on a set of parameters \(\theta\in[0,2\pi]^{P}\) and optimized classically (possibly via gradient descent-based techniques) to obtain the value \(\theta^{*}=\arg\min_{\theta}C(\theta)\). Optimization through gradient-descent thus involves computation of the gradient of \(C\). This can be done using finite difference methods or else the parameter-shift rule [57]. The parameter-shift rule is particularly well-suited for NISQ devices as it can utilise a large step size relative to finite difference methods, making it less sensitive to noise in calculations.
In general, \(C(\theta)\) is a function corresponding to a parametric quantum transformation \(U(\theta)\) of a length polynomial in the number of qubits, the set of input
states \(\{\rho_{i}\}\), and the set of observables \(\{O_{k}\}\). Specifically, a _quantum neural network_ is a function in the form
\[f(x;\theta)=\mathrm{Tr}[U^{\dagger}(\theta)V^{\dagger}(x)\rho_{0}V(x)U(\theta)O] \tag{1}\]
where \(\rho_{0}\) is the initial state of the system, \(V(x)\) is a parametric quantum circuit depending on the input parameters \(x\in\mathcal{X}\), \(U(\theta)\) is a parametric quantum circuit named an _ansatz_ that depends on the trainable parameters \(\theta\in[0,2\pi)^{P}\), and \(O\) is an observable. Given the training dataset \(\{(x^{(i)},y^{(i)})\}_{i=1}^{M}\in(\mathcal{X}\times\mathcal{Y})^{M}\), the cost function of a quantum neural network, being a supervised learning problem, is the empirical risk
\[C(\theta)=\sum_{i=1}^{M}\ell(f(x^{(i)};\theta),y^{(i)}) \tag{2}\]
where \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) is any convex loss function, e.g. the mean square error.
The quantum neural network constitutes a linear model in the Hilbert space of the quantum system as a consequence of the linearity of quantum dynamics. It behaves, in particular, as a _kernel machine_ that employs the unitary \(V(x)\) as the feature map \(\rho\mapsto\rho_{x}=V(x)\rho\), while the variational ansatz \(\rho\mapsto\rho_{\theta}=U(\theta)\rho\) adjusts the model weights. Note that although the model is linear in the Hilbert space of the quantum system, the measurement projection makes it nonlinear in the parameter space, enabling a set of rich dynamics. quantum neural networks can have a layer-wise structure, i.e., \(U(\theta)=\prod_{i=1}^{\ell}U_{i}(\theta_{i})\), which provides it with further degrees of freedom for optimization (however, due to the lack of nonlinearity between the layers, the model does not possess the hierarchical feature learning capabilities of classical neural networks).
The selection of the ansatz is thus a crucial aspect in defining the quantum neural network, and it is required to adhere to certain classifier-friendly principles. Expressibility is one such, being the property governing the extent of the search space that can be explored by the optimization method. Although there are various ways to formalize expressibility, one of the most widely used definitions is based on the generation of state ensembles \(\{\rho_{\theta}=U(\theta)\rho_{0}\mid\theta\in\Theta\}\) that are similar to Haar-random (i.e. uniform) distributions of states. Expressible unitaries are those for which the operator norm of a certain expression involving the Haar measure and the state ensemble is small. However, expressible circuits are susceptible to the barren plateau problem, where the variance of the gradient decreases exponentially with the number of qubits, making parameter training infeasible. The varieties of ansatz and their expressibilities are presented in [58]. Expressibility is tightly connected to the concept of controllability in quantum optimal control, and authors in [8] show that the asymptotic limit of the number of layers \(\ell\rightarrow\infty\) in the expressible circuits are the controllable ones, i.e. those whose ansatz is underlied by a Lie algebra matching the space of skew-Hermitian matrices \(\mathfrak{u}(2^{n})\).
### Ensemble techniques
The purpose of using ensemble systems is to improve the generalization performance through reducing the bias or variance of a decision system. Such a result is obtained by training several models and combining the outcomes according to a combination rule. A large body of literature on ensemble techniques exists; the reader is referred to [11] for a general overview.
The idea behind the ensemble system may be motivated by Condorcet's jury theorem [12]: a jury of \(m\) peers, each having probability \(p=\frac{1}{2}+\epsilon,0<\epsilon\ll 1\), of giving the correct answer, implies that the probability of the verdict given by majority voting to be correct is
\[p_{\text{jury}}=\sum_{k=\lceil m/2\rceil+1}^{m}\binom{m}{k}p^{k}(1-p)^{m-k} \tag{3}\]
and quickly approaches 1 as \(m\rightarrow\infty\). The theorem, broadly interpreted, suggests that a combination of small, individually ineffective machine learning models \(h_{1},...,h_{m}\) (_weak learners_) can be combined to constitute a more powerful one, with arbitrarily good performance depending on the nature of data manifold and the base classifiers \(h_{\text{ens}}\) (_strong learner_). According to [11], three aspects characterize an ensemble system: a data selection strategy, the composition plus training strategies of the single model instances, and the combination rule of its output. Some of the possible choices are summarized in Figure 1.
The data selection strategy determines how the data should be distributed to the individual instances. If all instances are trained on the same dataset, their predictions will be highly correlated, resulting in similar output. The _bootstrapping_ technique creates smaller, overlapping
Figure 1: Taxonomy of the three aspects characterizing an ensemble system.
replacement from the dataset, which are then assigned to different instances. Alternatively, the _pasting_ technique can be used for processing larger datasets by subsampling without replacement. Another approach is to divide the dataset by randomly assigning different sets of features with replacement, known as the random subspace technique (when the bootstrapping and random subspace techniques are combined, the result is the _random patch_ technique).
There are numerous schemes for combining predictors, with _bagging_ being the most straightforward and commonly used. Bagging, short for bootstrap aggregation, involves the creation of multiple homogeneous model instances trained on bootstrapped datasets. An instance of a bagging scheme is the random forest, which involves bagging decision trees trained on differing sample subsets (in some cases, random forests may favor a random patch data selection strategy over bagging). Another predictor combination scheme is _boosting_, which involves training a sequence of predictors via subsampling data according to the following strategy: an initial predictor is trained on a uniformly drawn subset of samples, while the \(i\)-th instance of the predictor is trained on a subset of elements that the previous ensemble classifier incorrectly predicted. The ensemble is itself the convex cumulative sum over predictors. Numerous variations of boosting exist, one of the most notable being AdaBoost [59]. Contrary to vanilla boosting, AdaBoost employs an exponential loss such that the ensemble error function allows for the fact that it is only the sign of outcome that is significant. These two scheme are illustrated in Figure 2. The other major ensemble scheme is _stacking_ in which a collection of heterogeneous classifiers trained on the same dataset are combined via an optimised meta-classifier.
The combination rule merges the output of individual models \(h_{1},...,h_{m}\). In classification tasks i.e. where the label output is discrete \(y\in C=\{c_{1},...,c_{k}\}\), the most commonly used rule is majority voting. This is calculated as \(y_{\text{ens}}=\arg\max_{c\in C}\sum_{i=1}^{m}\llbracket h_{i}(x)=c\rrbracket\). Where there exists prior knowledge regarding the performance of individual predictors, positive weights \(w_{i}\) can be assigned, such that the output is a weighted majority vote. The ensemble prediction in this case will be \(y_{\text{ens}}=\arg\max_{c\in C}\sum_{i=1}^{m}w_{i}\llbracket h_{i}(x)=c\rrbracket\). Alternatively, the
Figure 2: Comparison between bagging (left) and ‘vanilla’ boosting (right) techniques. The bagging ensemble trains the models in parallel over a subset of the dataset drawn uniformly; each prediction is then merged via an average function. The boosting ensemble trains the models sequentially, the first predictor draws the samples uniformly, and the subsequent models draw the elements from a probability distribution biased toward previously misclassified items.
_borda count_ method sorts labels in descending order by likelihood, with the ensemble prediction being the highest ranking sum. Nevertheless, averaging functions can also be utilised for ensemble classifiers. For regression tasks where \(y\in\mathbb{R}\), common combination rules are (possibly weighted) mean, minimum, and maximum.
## 4 Discussion
Ensemble techniques, while well-established in the classical realm, have been largely overlooked in the quantum literature, leaving a number of open questions in this setting, such as whether bagging techniques, which reduce variance, can be deployed as effectively as boosting techniques, which reduce bias (both of which are also data-manifold and base-model dependent). It is also unclear as to the relative resource saving in terms of circuit size (number of qubits) and depth (number of gates), and also samples required for training, that can be obtained by using an ensemble of quantum neural networks instead of a single, large quantum network. Furthermore, it is not currently well understood the extent to which an ensemble system can mitigate hardware noise. Our experiments are designed to explore these questions.
To investigate the first two aspects, we conduct a suite of experiments within a simulation environment, employing seven distinct ensemble schemes with varying strategies for data selection, model training and decision combination applied to four synthetic and real-world datasets, encompassing both regression and classification tasks. Specifically, we analyze: a synthetic linear regression dataset, the Concrete Compressive Strength regression dataset, the Diabetes regression dataset, and the Wine classification dataset, which are widely used benchmarks for evaluating machine learning models.
Six of the proposed techniques are classified as bagging methods, employing bootstrapped data to generate the ensemble, while the seventh is a sequential boosting technique, namely AdaBoost. In particular, we implemented the AdaBoost.R2 version [60] for the regression tasks and the AdaBoost SAMME.R version [61] for the classification problem. The bagging ensembles are characterized by two parameters: the sample ratio \(r_{n}\in[0,1]\), which determines the percentage of training samples used for each base predictor (with replacement), and the feature ratio \(r_{f}\in[0,1]\), which indicates the percentage of features used for each predictor (without replacement). We test six bagging schemes by varying \((r_{n},r_{f})\in\{0.2,1.0\}\times\{0.3,0.5,0.8\}\). For both the classification and regression tasks, the outputs of the base predictors are combined via averaging. In the case of the AdaBoost ensemble, the training set for each base predictor has the same size and dimensionality as the original training set. However, the samples are not uniformly drawn but are selected and weighted based on the probability of misclassification by previous classifiers composing the cumulative ensemble; single predictors are hence combined using a weighted average. Each ensemble system comprises 10 base predictors. The characteristics of these ensemble schemes are summarized in Table 1, where FM identifies the baseline quantum neural network
model, whereas Bag_\(r_{f}\)_\(r_{n}\) represents a bagging model with \(r_{f}\) percentage of the features and \(r_{n}\) percentage of the samples. Our experiments aim to evaluate the performance of each of the ensemble frameworks in comparison to the baseline model, as well as to assess the overall resource saving, including the number of qubits and overall parametric requirements.
To investigate the impact of quantum hardware noise, we conduct additional experiments on the IBM Lagos QPU. Such a device is a 7-qubit superconducting-based quantum computer. The topology of Lagos is depicted in Figure 3. Specifically, we compare the performance of the baseline model FM with that of the Bag_0.8_0.2 configuration on the linear regression dataset. Our goal is to determine whether ensemble techniques can effectively mitigate quantum noise, and whether the difference in performance between single predictors and ensemble systems is more pronounced within a simulated environment in comparison with real-world execution on quantum hardware.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Data Loading} & \multirow{2}{*}{Ensemble} & \multirow{2}{*}{\#BP} & \multirow{2}{*}{Rule} \\ \cline{2-2} \cline{5-6} & & RSBS (\(r_{f}\)) & & & & \\ \hline FM & - & - & - & - & - \\ Bag\_0.3\_0.2 & 0.3 & 0.2 & Bagging & 10 & Avg \\ Bag\_0.3\_1.0 & 0.3 & 1.0 & Bagging & 10 & Avg \\ Bag\_0.5\_0.2 & 0.5 & 0.2 & Bagging & 10 & Avg \\ Bag\_0.5\_1.0 & 0.5 & 1.0 & Bagging & 10 & Avg \\ Bag\_0.8\_0.2 & 0.8 & 0.2 & Bagging & 10 & Avg \\ Bag\_0.8\_1.0 & 0.8 & 1.0 & Bagging & 10 & Avg \\ AdaBoost & 1.0 & 1.0 & AdaBoost & 10 & W.Avg \\ \hline \hline \end{tabular}
\end{table}
Table 1: Characteristics of the baseline benchmark model (0) and ensemble systems (I to VII). The ensemble system is identified by its broad data loading method (BST for Boosting and RSBS for Random Subspace), predictor composition & training type (Ensemble), number of base predictors (#BP), composition rule (Rule, with Avg representing the average function and W.Avg representing weighted average).
Figure 3: Topology of IBM Lagos quantum processing unit
### Experimental setup
This section outlines experimental protocols used to evaluate the performance of the various ensemble approaches in terms of both the experimental structure and specific parameters/settings used to configure the algorithm and hardware.
Choice of quantum neural networksWe utilize a quantum neural network of the form \(f(x;\theta)=\mathrm{Tr}[U^{\dagger}(\theta)V^{\dagger}(x)\rho_{0}V(x)U(\theta)O]\), which operates on \(n\) qubits, with \(n\) corresponding to the number of features in the classification/regression problem. For the feature map, we opted for the simple parametric transformation \(V(x)=\bigotimes_{i=1}^{n}R_{y}^{(i)}(x_{i})\). This choice was motivated by the findings in [62], suggesting that more complex feature maps can lead to unfavorable generalization properties, incorporation of which may thus unnecessarily bias our findings. (In [63], various feature maps are compared).
The ansatz is implemented with the parametric transformations structured layer-wise with, for \(\ell\) the number of layers, a total of \(3\ell n\) parameters. It is thus defined as:
\[U_{\ell}(\theta)= \prod_{k=1}^{\ell}\Bigg{[}\left(\bigotimes_{i=1}^{n}R_{x}^{(i)}( \theta_{3kn+2n+i})\right)\left(\prod_{i=1}^{n-1}\mathrm{CX}^{(i,i+1)}\right) \left(\bigotimes_{i=1}^{n}R_{z}^{(i)}(\theta_{3kn+n+i})\right)\] \[\left(\prod_{i=1}^{n-1}\mathrm{CX}^{(i,i+1)}\right)\left(\bigotimes _{i=1}^{n}R_{x}^{(i)}(\theta_{3kn+i})\right)\Bigg{]} \tag{4}\]
The role of CNOT gates is the introduction of entanglement in the system, which would otherwise be efficiently classical simulable. We select as the observable \(O=\sigma_{z}^{(0)}\), which operates on a single qubit. Local observables like this one are less susceptible to the barren plateau problem than global ones, for example, \(O=\otimes_{i=1}^{n}\sigma_{z}^{(i)}\) (as noted in [41]). The quantum neural network described in our investigation is pictured in Figure 4.
Training of the modelTo train models, we utilize a standard state-of-the-art gradient descent-based algorithm, ADAM. The Mean Squared Error (MSE) was selected as the loss function and error metric to evaluate the performances of the models in the regression tasks, as it is a standard error metric in supervised learning. MSE was selected as the loss function to train the networks because it is more sensitive to larger errors. Categorical Cross Entropy (CCE) was used as the loss function for the classification task instead, while Accuracy score was employed as error metric to assess the goodness of the classification. Given the output \(f\) of the model, the computation of its gradient \(\nabla f\), which is required to calculate the gradient of the loss function, is accomplished using the parameter-shift rule [57], since the commonly-used finite difference method \(\nabla f(x;\theta)\approx(f(x;\theta)-f(x;\theta+\epsilon))/\epsilon\) is highly susceptible to hardware noise. The optimization hyper-parameters used are the learning rate, set to 0.1, and the number of training epochs, which was selected through empirical investigation (specifically, we carry out 150 training epochs to obtain the simulated results,
while for QPU-based results, we perform just 10 epochs due to technological constraints on current hardware).
DatasetsWe assess the performance of our approach using both synthetic and real-world datasets, across both regression and classification problems. The linear regression dataset is artificially generated with parametric control over the number of samples \(n\), the dimensionality \(d\), and the noise variance \(\sigma\). It is procedurally generated by randomly sampling a weight vector \(w\) uniformly over \([-1,1]^{d}\) such that the training set \(\{(x^{(i)},y^{(i)})\}_{i=1}^{n}\) is constructed with \(x^{(i)}\) uniformly sampled from \([-1,1]^{d}\), \(y^{(i)}=w\cdot x^{(i)}+\epsilon^{(i)}\), and \(\epsilon^{(i)}\) sampled from a normal distribution with zero mean and variance \(\sigma\). In our case we have \(n=250\) (jointly the training and testing datasets), \(d=5\) and \(\sigma=0.1\). The other datasets involved in the experiments are the _Concrete Compressive Strength_ dataset, the _Diabetes_ dataset, and the _Wine_ dataset. The first of these is a multivariate regression problem calculating the strength of the material based on its age and ingredients. The second is a multivariate regression problem correlating the biological and lifestyle characteristic of patients to their insulin levels. The third one is a multivariate, three-class classification problem investigating the geographic origin of wine samples from their chemical characteristics. All are freely available and open source. Table 2 summarizes the characteristics of these datasets. Every dataset is divided into 80% train samples and 20% test samples. Moreover, in a data preprocessing phase, raw data were scaled in the range \([-1,1]\) to best suit the output of the quantum neural networks; the scaler was fitted using training data only. No other preprocessing technique, i.e. PCA, has been applied.
Implementation detailsOur implementation is written in Python3, and utilizes Pennylane as a framework to define and simulate quantum circuits, with the Pennylane-Qiskit plugin used to execute circuits on IBM Quantum devices
Figure 4: Quantum Neural Network used to classify the linear regression dataset, having 5 qubits and \(\ell=1\) layers. The rotational gates parameterized by the feature \(x_{i}\) form the feature map, while those parameterized via the \(\theta\)s form the ansatz.
via the Qiskit software stack. To improve simulation times, we employed the JAX linear algebra framework as the simulation backend. By using JAX, the quantum circuit can be just-in-time compiled to an intermediate representation called XLA, which can significantly speed up simulation times (by up to a factor of 10). Our simulations were run on a commercial computer with an AMD Ryzen 7 5800X (8-core CPU with a frequency of 3.80 GHz) and 64 GB of RAM. The experiments on the noise canceling properties of ensemble systems were conducted on the ibm_lagos quantum processing unit, which consists of 7 qubits arranged in the topology \(\{(0,1);(1,2);(1,3);(3,4);(4,5);(4,6)\}\). The single-gate fidelity and CNOT fidelity of this QPU did not exceed \(2.89e^{-4}\) and \(8.63e^{-3}\), respectively (according to the latest calibration available).
### Resource efficiency of quantum neural network ensembles
Besides performance, resource efficiency is a key argument for the utilization of quantum neural network ensembles. Efficiency can be measured by various metrics: for example, number of qubits, gates, parameters, and training samples required to achieve comparable performance.
To determine the potential savings in the number of qubits we here deploy the random subspace technique (also known as _attribute bagging_ or _attribute bootstrap aggregation_). Our experiments (cf Figure 5) suggest a potential saving of 20% to 80% of the total qubit budget via this approach. However, such a saving is made at the cost of the ensemble was a whole having the potential for less rich class-discrimination behaviour, dependent on both the sampling required to achieve full feature coverage and the nature of the underlying data manifold. A positive consequence of reducing the number of qubits, though, is that each quantum circuit will have fewer gates and parameters, resulting in improved noise robustness on real hardware (i.e less decoherence, higher overall fidelity), as well as faster gradient calculation (individual gradient calculations require \(P+1\) quantum circuit evaluations for \(P\) parameters). This allows for a saving of the parameter budget of up to 75% in the indicated experimental regime, while the saving on gates corresponds proportionately (cf Figure 4). Savings for each dataset and ensemble technique are as depicted in Figure 5.
\begin{table}
\begin{tabular}{l l l r r l} \hline \hline Dataset & Source & Nature & \# Features & \# Samples & Task \\ \hline Linear & - & Synthetic & 5 & 250 & Regression \\ Concrete & UCI & Real-world & 8 & 1030 & Regression \\ Diabetes & Scikit-Learn & Real-world & 10 & 442 & Regression \\ Wine & UCI & Real-world & 13 & 178 & Classification \\ \hline \hline \end{tabular}
\end{table}
Table 2: Characteristics of the datasets analyzed. UCI stands for the open source _UCI Repository_. _Scikit-Learn_ is an open-source software library for Python3. The number of features does not include the target.
### Simulated Domain Experiments
Initially, we evaluate our method in a simulated environment, one free of noise, such that the output estimation is infinitely precise. This differs significantly from execution on a NISQ quantum processing unit, which introduces various types of hardware error (such as decoherence and infidelity of operations) as well as sampling error caused via the measurement operation. We examine the performance of both the baseline models and ensemble systems in a scenario where the number of layers (i.e. quantum neural network depth) is gradually increased. To establish robustness to random initialization of parameters (that is, susceptibility to local minima effects), each simulation is repeated ten times.
#### 4.3.1 Experiment I
The first experiment seeks to perform linear regression on a synthetic noisy 5-dimensional dataset. The function generating the targets is as follows: \(y=w\cdot x+\epsilon\), where \(x\in(-1,1)^{5}\subseteq\mathbb{R}^{5}\), \(w\in\mathbb{R}^{5}\) is randomly generated from a uniform distribution having as support the range \(-1\) to \(1\), and \(\epsilon\) is a Gaussian noise of mean zero and standard deviation \(0.1\). The total number of samples composing this synthetic dataset is 250. Each experimental data point instantiates a layer number, a number of bagged features, and a percentage of training data points available to the ensemble.
The results of the first experiment are indicated in Figure 6. Both FM and AdaBoost achieve the lowest MSE generalisation error of about 0.021 at 10 layers, reaching a performance plateau at 5 layers. The bagging models utilising 80% of the features are able to reach satisfactory results with 10 layers, which are only 0.03 - 0.05 points higher than the error obtained by the best performing models. In general, it appears that quantum bagging models with a high number of features are able to generalize well on unseen data in this setting, even with only 20% of the training samples (unsurprisingly, the performance of bagging models with only 20% of training samples are worse than those of the counterparts using 100% of the training samples). Nevertheless, they still achieve remarkable results and show impressive generalization capabilities, confirming the effectiveness of
Figure 5: Number of qubits & parameters employed in individual experiments.
bagged quantum models in generalizing well with relatively little training data [64].
It is also notable that all of the bagging models have a lower MSE generalisation error as compared to FM and AdaBoost when the number of layers is low. In particular, with just 1 layer, all of the bagging models outperform FM and AdaBoost. However, as the number of layers increases, the performances of bagging models begin to plateau more rapidly than FM and Adaboost which, in contrast, continue their trend of decreasing error with increasing circuit depth. This is consistent with the notion that as base classifiers become expressive their risk of overfitting increases (i.e. they develop an intrinsically low bias). Adaboost, in particular, is known to be most effective in relation to weak, under-fitting base classifiers.
Finally, the decreasing error trend seen in the more complex bagging models as well as the FM and AdaBoost models is not visible in relation bagging with 30% of the features. We conjecture that since this bagging configuration utilises only 1 qubit, it cannot appropriately model the evolution of the quantum state with respect to the input. Hence, despite leveraging 10 different submodels of 1 qubit (i.e., one feature) each, the performance of bagging models with 30% of the features cannot improve as the number of layers increases (adding more layers in this case translates in performing rotations on the single qubit only, without the possibility of further CNOTs or other entangling gate operations). This result hence highlights the importance of entanglement in quantum neural network models as a means of improving performance.
#### 4.3.2 Experiment II
The second experiment seeks to assess the performance of the respective ensemble techniques on the Concrete Compressive Strength dataset, which consists in 1030 samples of 8 features. The target value to predict in this regression case is hence the concrete compressive strength, measured in Megapascal (MPa), a highly nonlinear function of age and composition of the material.
The results of the regression experiment are in line with the findings of Experiment I, and are reported in Figure 7. FM, AdaBoost and the two bagging models applied in relation to 80% of features achieve comparable results at 10 layers, with the Bag_0.8_1.0 configuration obtaining the lowest MSE error, followed by Bag_0.8_0.2, FM and finally by AdaBoost. Also in this case, the differential between bagging models with 20% of samples and with 100% of samples is marginal, confirming the effectiveness of bagging quantum models in relation to reduced training dataset size. In contrast with Experiment I, bagging models having 30% of available features now have 2 qubits, and therefore demonstrate a relative improvement in test error when \(l=2\). However, their expressive power soon saturates and their error curves plateau.
In general, the generalization capability of bagging models decreases monotonically with the number of layers, in contrast to FM and AdaBoost. In fact, they exhibit episodes of overfitting when utilising 5 (and up to 7) layers, while bagging appears to be able to evade this outcome. This is again not surprising,
Figure 6: Evolution of MSE error with respect to the number of quantum neural network layers in Experiment I. Each experimental data point instantiates a layer number, a number of bagged features and a percentage of training data points available to the ensemble.
since AdaBoost is designed to reduce bias, while bagging ensembles are designed to reduce variance.
All of the bagging models analyzed still outperform FM and AdaBoost at a low number of layers, suggesting that they may be the right choice for implementation on NISQ devices, or else when there is any necessity of implementing low-depth quantum circuits. As in the first experiment, it is also of interest to note that all the bagging models with \(l=1\) here have very similar MSE values, while their performances vary as the number of layers increases. This may indicate that the MSE value reached at \(l=1\) is the optimal for that family of bagging models, given their expressibility. Moreover, a sharp decrease in MSE beyond the first layers would appear to be a common pattern, both with respect to the ensembles and the FM model. For example, at \(l\geq 3\), the MSE error of FM and AdaBoost dramatically decrease, while bagging models with 50% of the features exhibit this trend between \(l=1\) and \(l=2\). (A future analysis of this topic might seek to exploit this characteristic in order to predict _a priori_ how many layers one would need to attain an error level within a given bound).
#### 4.3.3 Experiment III
The dataset used in Experiment III is the reference Diabetes dataset from Scikit-learn, consisting of 10 numerical features, including age, sex, body mass index, blood serum measurements, and also a target variable, a quantitative measure
Figure 7: Evolution of MSE error with respect to the number of quantum neural network layers in Experiment II.
of disease progression one year after baseline. The dataset is composed of 442 instances and is often used for non-trivial regression analysis in ML.
Figure 8 illustrates the results of this experiment. The performance of the quantum models is notably different from those of the previous two experiments. It may be seen that the best performing models are the bagging models containing 80% of the features, while FM and AdaBoost achieve satisfactory results up to 6 layers, at which point their MSE begins to increase. At \(l=10\), every model has stabilized, however. Bag_0\(8\)1.0 and Bag_0\(8\)0.2 have an MSE of respectively 8.8% and 6.1% lower than that of FM. AdaBoost has an MSE comparable to the error of Bag_0\(3\)1.0, being only 0.9% higher than FM. Bagging models with 50% of the features have surprisingly good results, better than those of FM and very close to bagging models with 80% of the features.
As in Experiment I and II, a very sharp MSE reduction between \(l=1\) and \(l=3\) is evident for all of the models. Less complex models like bagging with 30% and 50% of the features immediately reach a plateau, while the error curves for bagging with 80% of the features, FM and AdaBoost evolves as the number of parameters increases. Considering layer numbers between \(l=6\) and \(l=8\), it is clear that FM and AdaBoost overfit as the number of model parameters increases, and thus they perform poorly on test data. In particular, they overfit to such an extent that they almost reach the same performance level of the simplest bagging models with 30% of the features. The latter show no indication of overfitting however, in common with bagging models having 50% of the features. Bagging with 80% of the features shows light overfitting when \(l>6\), but still achieve the best results from among all of the tested algorithms.
The robustness of bagging models to overfitting with respect to AdaBoost and FM arises from their ability to reduce variance via averaging of decorrelated error across the predictions of each submodel. By contrast, when the number of layers is high, AdaBoost and FM utilise a model that is too complex and expressive for the underlying task, leading to overfitting. In concordance with Experiment II, this results suggests that attribute bagging is an effective solution to overfitting in the NISQ setting in common with that of the classical domain.
In addition, this experiment also highlights more markedly the discrepancy between the error level of bagging models with the same number of features but a distinct number of training samples. The difference between the MSE of the bagging model with 30% and 20% of samples and that with 100% of samples is now far more apparent, suggesting that when the variance of the dataset is very high, even bagging models require a sufficient threshold of training samples to perform well in the NISQ setting.
#### 4.3.4 Experiment IV
For the classification task in Experiment IV, we used the reference UCI Wine dataset. It is a multi-class classification dataset corresponding to the results of a chemical analysis of wines grown within a specific region of Italy. It consists of 13 numerical features representing various chemical properties, such as alcohol, malic acid, and ash content, and a target variable indicating the class of the
e. The dataset has 178 samples and is a common baseline ML benchmark for low-parametric complexity classifiers.
Results from Experiment IV are reported in Figure 9. Although they cannot be directly compared to the previous results due to the intrinsically different nature of the problem, there are few comparative insights that can be gained from the respective plot of Accuracy curves. First, all the models except bagging with 30% of the features achieve the same accuracy score of 97.2% using 10 layers. The performances of Bag_0.3_0.2 and Bag_0.3_1.0 are still relatively strong, however, having an accuracy score of 94.2% and 96.9% respectively. Given the very low complexity of these two models, this is a striking result.
A further notable aspect of the Accuracy curves is that all ensemble models converge with far fewer layers than FM. In particular, they require 3 layers in order to reach a performance plateau on average, after which they saturate and the accuracy score reaches saturation as well. By contrast, FM struggles to achieve a comparable accuracy score, only achieving an accuracy greater than 90% when \(l\geq 7\). This means that the ensemble models are able to learn and capture the complex relationships between the input features far more efficiently than FM, which requires a much deeper architecture to attain comparable results. This observation is particularly relevant when considering the implementation of these models on NISQ devices, where the number of qubits and the coherence time are severely limited.
Figure 8: Evolution of MSE error with respect to the number of quantum neural network layers in Experiment III.
Moreover, as expected, bagging models with 100% of the samples obtain a higher accuracy score than their counterparts with 20% of the features given the same number of layers. This suggests that using more training samples can improve the performance of ensemble models provided that the number of layers is low, as it allows them to better capture the underlying patterns of class discriminability in the data.
### Experiments executed on superconducting-based QPU
For the real-hardware evaluation, we compare the performance of the baseline quantum neural network with the Bag_0.8_0.2 ensemble on the same synthetic linear regression dataset used in Experiment I. We selected the Bag_0.8_0.2 model as representative ensemble technique for its outstanding performance in the simulated experiments despite the low number of training samples. To ensure statistical validity, we repeat each experiment 10 times. However, due to technological constraints on real quantum hardware, we analyze only the linear dataset with a quantum neural network having a single layer.
Figure 10 presents the real-world experimental findings, which indicate that the bagging ensemble reduces the expected mean square error by one-third and the expected variance by half when executed on quantum hardware, compared to the baseline model. Such results demonstrate that the noise-canceling capabilities
Figure 9: Evolution of Accuracy score with respect to quantum neural network depth in Experiment IV.
of ensemble technique can be effectively exploited to work on NISQ devices in realistic settings. Additionally, the performance of the ten bagging models varied significantly, underlining the need to reinitialise the ensemble multiple times and validate it against a suitable validation dataset to ensure that the best model is selected.
## 5 Conclusion
We propose the use of ensemble techniques for practical implementation of quantum machine learning models on NISQ hardware. In particular, we justify the application of these techniques based on their capacity for significant reduction in resource usage, including in respect to the overall qubit, parameter, and gate budget, which is achieved via the random subspace (attribute bagging) technique. This resource-saving is especially crucial for noisy hardware, which is typically limited to a small number of qubits, being vulnerable to decoherence, noise, and operational errors. Consequently, the contribution of ensemble techniques may be seen as a form of quantum noise reduction.
To establish this, we evaluated and compared various configurations of bagging and boosting ensemble techniques on synthetic and real-world datasets, tested in both a simulated, noise-free environment and a superconducting-based QPU by IBM, and subtending a range of layer depths.
Our experimental findings showed that bagging ensembles can effectively train quantum neural network instances using fewer features and qubits, which leads to ensemble models with superior performance compared to the baseline model. Reducing the number of features in bagging models of quantum neural networks directly translates into a reduction in the number of qubits, that is a desirable characteristics for practical quantum applications. Ensembles of
Figure 10: Comparison of average performance of the baseline model and the Bag_0\(8\)0.2 ensemble technique on IBM quantum hardware. (10a) shows the difference in terms of MSE over 10 executions. (10b) shows the performance of the bagging model with respect to its estimators.
quantum neural network can also help addressing some of the toughest challenges associated with noise and decoherence in NISQ devices, as well as to mitigate barren plateau effects. These can be key considerations in the development of quantum machine learning models, particularly when working with limited resources on modern quantum systems.
Moreover, bagging models were found to be extremely robust to overfitting, being able to effectively capture the underlying patterns in the data with high generalization ability. This makes them better suited for tasks where generalization is important, such as in real-world applications. However, it is important to notice that the effectiveness of bagging quantum models diminishes with a decrement in the number of features, which suggests that complex bagging models are still needed to obtain satisfactory results. Using only a subset of the features can reduce the computational complexity of the model and prevent overfitting, but it may also result in a loss of information and a decrease in performance. On the contrary, the number of training samples do not seem to have a deep impact on bagging quantum models, hence this bagging strategy may be used when executing quantum neural network instances on real hardware in order to deal with long waiting queues and job scheduling issues. In this regard, having a low number of training data leads to faster training procedures and quantum resource savings. The training of ensembles can also be done in parallel on multiple QPUs in a distributed learning fashion. Therefore, it is important to strike a balance between model complexity and performance to achieve the best possible outcomes.
Additionally, the fact that the bagging models outperform FM and AdaBoost at low number of layers suggests that the former models are better suited for low-depth quantum circuits, which have limited capacity and are prone to noise and errors. For quantum machine learning tasks with NISQ devices, using bagging models with a low number of layers may be a good strategy to achieve good generalization performance while minimizing the impact of noise and errors in the circuit.
Overall, our results suggest that ensembles of quantum neural network models can be a promising avenue for the development of practical quantum machine learning applications on NISQ devices, both from a performance and resource usage perspective. A careful evaluation of the trade-offs between model complexity, performance, quantum resources available and explainability may be necessary to make an informed decision.
In a future work, we plan to further investigate the relationship between ensembles and quantum noise, which is a key consideration when developing quantum neural network models. Our findings could potentially contribute to the development of more efficient and accurate quantum machine learning algorithms, which could have significant implications for real-world applications.
## Acknowledgements
The contribution of M. Panella in this work was supported by the "NATIONAL CENTRE FOR HPC, BIG DATA AND QUANTUM COMPUTING" (CN1, Spoke 10) within the Italian "Piano Nazionale di Ripresa e Resilenza (PNRR)", Mission 4 Component 2 Investment 1.4 funded by the European Union - NextGenerationEU - CN00000013 - CUP B83C22002940006. MG and SV are supported by CERN through CERN Quantum Technology Initiative. Access to the IBM Quantum Services was obtained through the IBM Quantum Hub at CERN. The views expressed are those of the authors and do not reflect the official policy or position of IBM and the IBM Q team. MI is part of the Gruppo Nazionale Calcolo Scientifico of "Istituto Nazionale di Alta Matematica Francesco Severi". AM is supported by Foundation for Polish Science (FNP), IRAP project ICTQT, contract no. 2018/MAB/5, co-financed by EU Smart Growth Operational Programme.
## Declaration
### Authors' contributions
MI, MG, and AC had the initial idea, implemented the interface for executing experiments on the IBM QPUs, performed the experiments, and analyzed the data. MG, SV, DW, AM, and MP supervised the project. All authors contributed to the manuscript.
### Availability of data and materials
The data and source code utilized in our study are freely accessible at [https://github.com/incud/Classical-ensemble-of-Quantum-Neural-Networks](https://github.com/incud/Classical-ensemble-of-Quantum-Neural-Networks). The procedural generation code for the Linear Regression dataset is also accessible at the same URL. In addition, the UCI Repository provides open access to Concrete and Wine datasets, which can be found at https://[https://archive.ics.uci.edu/ml/index.php](https://archive.ics.uci.edu/ml/index.php). The Diabetes dataset provided by Scikit-Learn is also freely available and included with the Python3 package.
|
2308.14089 | MedAlign: A Clinician-Generated Dataset for Instruction Following with
Electronic Medical Records | The ability of large language models (LLMs) to follow natural language
instructions with human-level fluency suggests many opportunities in healthcare
to reduce administrative burden and improve quality of care. However,
evaluating LLMs on realistic text generation tasks for healthcare remains
challenging. Existing question answering datasets for electronic health record
(EHR) data fail to capture the complexity of information needs and
documentation burdens experienced by clinicians. To address these challenges,
we introduce MedAlign, a benchmark dataset of 983 natural language instructions
for EHR data. MedAlign is curated by 15 clinicians (7 specialities), includes
clinician-written reference responses for 303 instructions, and provides 276
longitudinal EHRs for grounding instruction-response pairs. We used MedAlign to
evaluate 6 general domain LLMs, having clinicians rank the accuracy and quality
of each LLM response. We found high error rates, ranging from 35% (GPT-4) to
68% (MPT-7B-Instruct), and an 8.3% drop in accuracy moving from 32k to 2k
context lengths for GPT-4. Finally, we report correlations between clinician
rankings and automated natural language generation metrics as a way to rank
LLMs without human review. We make MedAlign available under a research data use
agreement to enable LLM evaluations on tasks aligned with clinician needs and
preferences. | Scott L. Fleming, Alejandro Lozano, William J. Haberkorn, Jenelle A. Jindal, Eduardo P. Reis, Rahul Thapa, Louis Blankemeier, Julian Z. Genkins, Ethan Steinberg, Ashwin Nayak, Birju S. Patel, Chia-Chun Chiang, Alison Callahan, Zepeng Huo, Sergios Gatidis, Scott J. Adams, Oluseyi Fayanju, Shreya J. Shah, Thomas Savage, Ethan Goh, Akshay S. Chaudhari, Nima Aghaeepour, Christopher Sharp, Michael A. Pfeffer, Percy Liang, Jonathan H. Chen, Keith E. Morse, Emma P. Brunskill, Jason A. Fries, Nigam H. Shah | 2023-08-27T12:24:39Z | http://arxiv.org/abs/2308.14089v2 | # MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records
###### Abstract
The ability of large language models (LLMs) to follow natural language instructions with human-level fluency suggests many opportunities in healthcare to reduce administrative burden and improve quality of care. However, evaluating LLMs on realistic text generation tasks for healthcare remains challenging. Existing question answering datasets for electronic health record (EHR) data fail to capture the complexity of information needs and documentation burdens experienced by clinicians. To address these challenges, we introduce MedAlign, a benchmark dataset of 983 natural language instructions for EHR data. MedAlign is curated by 15 clinicians (7 specialities), includes clinician-written reference responses for 303 instructions, and provides 276 longitudinal EHRs for grounding instruction-response pairs. We used MedAlign to evaluate 6 general domain LLMs, having clinicians rank the accuracy and quality of each LLM response. We found high error rates, ranging from 35% (GPT-4) to 68% (MPT-7B-Instruct), and an 8.3% drop in accuracy moving from 32k to 2k context lengths for GPT-4. Finally, we report correlations between clinician rankings and automated natural language generation metrics as a way to rank LLMs without human review. We make MedAlign available under a research data use agreement to enable LLM evaluations on tasks aligned with clinician needs and preferences.
## 1 Introduction
Large language models (LLMs) have revolutionized natural language processing in tasks such as reading comprehension, reasoning, and language generation [2, 55], prompting researchers to explore applications in healthcare [10, 3]. Recent LLMs like Med-PaLM [41] and GPT-4 [30] have demonstrated expert-level performance on medical question-answering benchmarks including MedQA [19], PubMedQA [20], MedMCQA [32], MMLU [16], and the USMLE [30, 23]. However, these benchmarks employ multiple-choice, exam-style evaluations where question stems summarize key information and a single answer choice is best. It is not known if performance on these tasks will translate when a model is deployed in complex clinical environments.
To be useful, LLMs need to perform well on the specific information-related tasks that clinicians currently complete themselves while caring for patients. These tasks are a significant burden on clinicians, who spend 49% of their day interacting with computers instead of patients [45] and 10 hours a week generating documentation [13], in part contributing to professional burnout [43, 28]. Examples of these tasks include summarizing a patient's asthma treatment history from different specialists the patient has visited, generating a differential diagnosis based on partially resulted laboratory data, or searching through the clinical notes for mentions of a patient's family support system in order to create the best plan for the patient's hospital discharge (see Table 2). Such tasks could be passed as instructions to an LLM in the form of a question or imperative (e.g., "Write a discharge summary") grounded in a patient's Electronic Health Record (EHR, an electronic representation of a patient's medical history). However, despite the excitement about LLMs to transform the practice of medicine, evaluations to date have not authentically represented the variety of tasks and idiosyncrasies of EHR data that clinicians face in the real world [50].
Given the recent emergence of instruction-following capabilities in LLMs [48], there is potential for LLMs to ameliorate such administrative burden. Hand-curated exemplars of instructions and responses have been critical to improve performance of models [6], especially on clinical reasoning and knowledge recall tasks in the healthcare domain [41]. Thus, a high quality dataset of instruction-EHR-response tuples that represents the breadth of clinical tasks is essential not only as a shared benchmark [39], but potentially to accelerate the training of specialized LLMs for healthcare.
However, building such a dataset requires an extraordinary effort from a multidisciplinary collaboration. In particular, generating an instruction-following benchmark dataset with representative EHR-based tasks and expert responses is challenging due to the substantial cost and logistical complexity of clinician review. There is a need for an EHR dataset that (1) contains a diverse set of questions and instructions generated by practicing clinicians [39]; (2) pairs these queries with EHRs from both inpatient and ambulatory care settings; (3) leverages both structured and unstructured data from the longitudinal EHR; and (4) is available to the broader academic community.
Figure 1: Instruction following with electronic health record (EHR) data. In MedAlign, individual patient EHRs are transformed into XML markup (example provided in Figure S3) and paired with clinician-generated instructions (blue) and responses (orange) to evaluate language models.
In light of these challenges and opportunities, we present three contributions:
1. **MedAlign Dataset:** We introduce a benchmark dataset called MedAlign consisting of 983 questions and instructions submitted by 15 practicing clinicians spanning 7 medical specialties. For 303 of these instructions, we provide a clinician-written reference answer and paired EHR for grounding prompts. Each clinician evaluated and ranked outputs from 6 different LLMs on these 303 instructions and wrote "gold standard" answers. To our knowledge, MedAlign is the first dataset of EHR-based instruction-answer pairs (not just question-answer pairs) written by clinicians, with clinician evaluations of LLM-generated outputs. Table 1 summarizes MedAlign and its distinction from existing datasets for clinical information needs.
2. **Automated Instruction-EHR Matching:** We demonstrate the feasibility of a simple retrieval-based approach to pair an instruction with a relevant patient EHR. By isolating the process of instruction solicitation, we were able to scale and diversify the set of clinicians who submitted instructions. Furthermore, we show that our process for matching instructions to relevant EHRs produces a relevant pairing 74% of the time -- at least twice as frequently as randomly pairing instructions to EHRs.
3. **Automated Evaluation of LLM Responses:** We analyze the correlation between clinician rankings and automated natural language generation (NLG) metrics as a way to scalably reproduce such analyses, potentially reducing future needs for clinicians to label and rank LLM responses.
## 2 Background and Related Work
The volume of patient care data is growing exponentially, with a compound annual growth rate approaching 36% [7]. Utilizing LLMs to more efficiently interact with patient data and medical knowledge holds great potential to help clinicians manage increasingly complicated information needs and circumvent low-usability EHR interfaces [26]. However, evaluation of LLMs to improve meaningful outcomes like clinician burnout or patient health has been inadequately studied, mainly due to benchmark datasets which do not represent true clinician needs [17], narrowly focus on a specific medical specialty or subset of EHR data [24], and/or are overly simplistic due to templated question construction [33, 52]. These works highlight the challenges in collecting high-quality clinician-generated questions and answers; we consider each in turn.
Questions and instructions in an EHR-based benchmark dataset should be paired with relevant patient EHRs. In order to ensure relevancy, prior works have provided clinicians with specific patient EHRs and asked them to generate questions based on those patients' data [24]. Unfortunately, requiring EHRs as context for question generation limits scalability, as medical institutions restrict access to patient data to preserve patient privacy. Pampari et al. [33] attempted to overcome these scalability issues by generating questions via a template-based approach, but this led to issues with question quality and diversity [52]. Our method of soliciting clinician-generated instructions without a specific patient's EHR as context overcomes these scaling issues, albeit at the cost of potentially less relevant instruction-to-EHR pairings (we discuss our approach to addressing this problem in the Dataset Curation section).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Dataset** & **Questions** & **Documents** & **Patients** & **Specialties** & **Labeler** & **Source** \\ \hline Raghavan et al. [37] & 5696 & 71 & 71 & - & Medical Students & Clinical Note \\ Pampari et al. [33] & 73111 & 303 & 303 & - & Programmatic & Discharge Summary \\ Fan [11] & 245 & 138 & - & 1 & Author & Discharge Summary \\ Yue et al. [52] & 50 & - & - & - & Medical Experts & Clinical Note \\ Yue et al. [53] & 1287 & 36 & - & - & Medical Experts & Clinical Note \\ Oliveira et al. [31] & 18 & 9 & 9 & - & Author & Clinical Note \\ Soni et al. [42] & 3074 & 1009 & 100 & 1 & Clinicians & Radiology Note \\ \hline MedAlign (Ours) & 983 & 37264 & 276 & 7 & Clinicians & EHR \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of our work, MedAlign, to existing EHR QA datasets.
Beyond generating questions, generating expert answers at scale is also prohibitively difficult. Reviewing an EHR to answer patient-specific queries can take 30+ minutes for a single patient [40]. This excludes any time required to generate a response to the query. Prior works have attempted to overcome the bottleneck of generating responses by extracting answers verbatim from individual clinical notes or discharge summaries [42, 31, 11]. However, many clinical tasks require synthesizing information from multiple documents and structured data to arrive at an adequate response. In such cases, answers extracted from a single note in the patient's record may not be an adequate; free-text text generation is required. While there is at least one example of an EHR-based question answering dataset in the literature that includes both structured and unstructured data [37], it neither contains free-text responses nor is publicly available. Finally, all of the aforementioned datasets focus on question answering specifically and not on instruction following more broadly. To the best of our knowledge, there does not exist _any_ EHR-based benchmark dataset that addresses instruction following.
The significant costs of clinician review present barriers not only for _de novo_ dataset generation, but also for reliable evaluation of new methods on existing datasets. Automated metrics for evaluating Natural Language Generation (NLG) systems have shown moderate to high correlation with human judgments on tasks like machine translation [12], but it is unclear whether these findings extend to other domains and tasks. While there is precedent [24] for _applying_ automated metrics like BLEU [34], ROUGE-L [25], METEOR [1], and BERTScore [54] to NLG tasks in the clinical domain, there is comparatively very little work assessing correspondence between these metrics and human judgment on clinical NLG tasks. Thus not only do we have a poor understanding of how LLMs perform on EHR-based instruction-following tasks, but also we do not know whether it is possible to reliably automate such evaluations. Automation could substantially reduce the "barrier to entry" for research teams with limited resources.
## 3 Dataset Curation Process
Electronic Health Records (EHRs)EHR systems are software for managing patient medical record data. From a clinician's view, a patient EHR is accessed via a graphical user interface that provides access to data elements associated with medical care, such as medication lists and treatment plans. These data are stored as a collection of timestamped structured (tabular) and unstructured (e.g., text) events, which when ordered by time form a patient's longitudinal EHR timeline. Our EHR data is represented using the OMOP CDM [47], a standardized schema for exchanging medical data, translated into a single, XML markup document per record (example provided in Figure S3). Figure 1 outlines the workflow for prompting a language model using MedAlign instructions, responses, and EHR markup.
Collection ProtocolReviewing patient medical data requires adhering to strict security protocols to protect patient privacy and prevent protected health information (PHI) leaks. This motivated our 3-stage curation process: (1) online instruction collection from clinicians; (2) instruction-EHR matching; and (3) response generation. Note we deliberately decouple instruction collection from response generation. This enables sampling a larger set of instructions from a more diverse set of clinician specialities while minimizing exposure to patient data. However, this approach requires defining a matching function to pair instructions with relevant patient EHRs, a process which may generate errors due to irrelevant instruction-EHR pairings. We discuss the performance of a retrieval-based matching system below.
Stage 1: Collecting InstructionsClinicians were recruited in our academic medical center via email. Through the use of an online form, clinicians were asked to submit instructions as posed to a hypothetical AI assistant designed to facilitate EHR-based tasks. Participants were instructed to envision a clinical vignette typical of their daily practice and to formulate an instruction that the AI could perform to make their work easier, faster, and less stressful. For each instruction, participants were asked to provide metadata to assist in matching the instruction to a patient, including pertinent clinical characteristics and the clinical context where the instruction could be used, e.g., "when deciding whether to use contrast in a CT scan". See Appendix C for all collected fields.
Stage 2: Instruction-EHR matchingAll instructions include information on their clinical context and the patient population which the instruction targets. We used instructions tagged "applicable to patients generally" to maximize their relevance in EHR matching. We evaluated 2 methods for matching instructions with EHRs: a simple baseline based on uniform random sampling; and (2) a retrieval-based method using BM25Okapi [46].
For the retrieval approach, we concatenated every instruction with its corresponding patient characteristics and clinical context to construct a search query. We used this query to retrieve the 5 most relevant EHRs within a randomly selected subsample of patients (77200) from our hospital database. This same subsample was used to match patients for our baseline uniform random sample. After matching, the authors conducted a manual review to assess binary relevance of all generated instruction-EHR pairs.
Stage 3: Instruction Response GenerationFor this stage, clinicians were tasked with reviewing the instruction and associated EHR data, then writing a response to that instruction. Clinicians were asked whether the instruction could be feasibly applied to the patient in the EHR (e.g., not asking about smoking history in an infant) and if the EHR contained all necessary information to answer the instruction. They then manually generated an expert response to the instruction. This response was intended to be brief and clinically relevant, drawing on any information available in the supplied EHR record, as well as any appropriate external references. The most recent timestamp in the EHR was designated as the "time anchor", meaning the response was written as if the instruction had been posed at that point in time.
## 4 Dataset Description
Instructions CollectedA total of 15 clinicians submitted instructions during the data collection process. These medical practitioners represented 7 distinct specialties, which included Internal Medicine (492 instructions submitted), Neurology (320), Radiology (402), Cardiology (71), Oncology (14), Surgery (12), and Primary Care (3). Clinicians provided a varying number of instructions ranging from 1 to 278 with a mean of 93. From the 1314 instructions collected, 455 were marked as applicable to patients generally and 859 were relevant only to patients with specific clinical characteristics. We removed near-identical instructions (defined by a ROUGE-L similarity above 0.7), yielding 983 instructions of which 407 were marked as applicable to patients generally.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Category** & **Example Instruction** & **Gold** & **All** \\ \hline Retrieve \& Summarize the most recent annual physical with the & 223 & 667 \\ & PCP & & \\ Care Planning & Summarize the asthma care plan for this patient & 22 & 136 \\ & including relevant diagnostic testing, exacerbation & & \\ & history, and treatments & & \\ Calculation \& Scoring & Identify the risk of stroke in the next 7 days for this & 13 & 70 \\ & TIA patient & & \\ Diagnosis Support & Based on the information I’ve included under HPI, what & 4 & 33 \\ & is a reasonable differential diagnosis? & & \\ Translation & I have a patient that speaks only French. Please & 0 & 2 \\ & translate these FDG-PET exam preparation & & \\ & instructions for her & & \\ Other & What patients on my service should be prioritized for & 41 & 75 \\ & discharge today? & & \\ \hline Total & & 303 & 983 \\ \hline \hline \end{tabular}
\end{table}
Table 2: MedAlign instruction categories and example instructions.
Instruction-EHR MatchesBased on evaluation by the authors, for 240 (59%) of the instructions applicable to "patients in general" the first record retrieved by BM25 was relevant. For 303 instructions (74%), at least one of the top 5 EHRs returned by BM25 was relevant. In contrast, only 38% of EHRs retrieved via uniform random sampling were deemed relevant.
Instruction TaxonomyTo better understand higher-level themes within the instructions submitted, a practicing clinician developed a taxonomy of instructions. This taxonomy, described in detail in Table S2, includes 6 categories spanning 20 subcategories. The overall distribution of submitted instructions across categories is summarized in Table 2.
## 5 Benchmarking LLM Performance
LLM SelectionWe evaluated six distinct LLMs, chosen to capture both state-of-the-art, closed-source LLM capabilities available to consumers via an API as well as smaller, open-source and user-modifiable LLMs with more lenient commercial licensing (e.g., MosaicML's MPT-7B-Instruct model). Additionally, we designed our experiments to directly evaluate the impact of model parameters and context length.
For a state-of-the-art LLM, we selected GPT-4 (through Microsoft's Azure OpenAI HIPAA compliant gpt-4-32k-0301 API) due to its state-of-the-art performance on various medical tasks, its long 32k context length, and its availability to researchers and clinics. However, despite this context length, it proved insufficient for accommodating full EHRs (more than 80% of EHRs in MedAlign contain more than 32k tokens, see Table S11). To address this limitation, we explored a multi-step refinement (MR) approach [44] to maximize effective context length. In this approach, the EHR is divided into "chunks" designed to be as big as possible while still fitting within the model's context length. A response to the instruction is generated using the first EHR "chunk", then the second "chunk" is given to the model and the model can refine its response or maintain the same response, and so on, until the entire EHR has been passed through the model.
For smaller, open-source models we evaluated Vicuna-7B and Vicuna-13B [5] as well as MPT-7B-Instruct [27]. These models are widely available and user-modifiable with favorable licensing agreements, but they have considerably smaller context lengths (2048 tokens) compared to GPT-4. To enable more direct comparisons, we assessed GPT-4 under a restricted context length designed to exactly match the context length of the Vicuna model.
Generating LLM Responses to EHR-based Questions and InstructionsUsing a standard prompt template (Figure S8), each model was tasked to fulfill the given instruction grounded on its corresponding EHR pair. Due to current models' context length restrictions, EHRs needed to be truncated. To calculate the number of tokens of EHR context to include in the prompt, we took each model's maximum context length (in number of tokens under that model's specific tokenizer), reserved 256 tokens for generation, and subtracted any tokens used for the corresponding structured prompt and instruction. This truncation was performed
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Context** & **Correct**\(\uparrow\) & **WR**\(\uparrow\) & **Rank**\(\downarrow\) \\ \hline GPT-4 (MR) & 32768\({}^{\dagger}\) & **65.0\%** & 0.658 & 2.80 \\ GPT-4 & 32768 & 60.1\% & **0.676** & **2.75** \\ GPT-4 & 2048\({}^{*}\) & 51.8\% & 0.598 & 3.11 \\ Vicuna-13B & 2048 & 35.0\% & 0.401 & 3.92 \\ Vicuna-7B & 2048 & 33.3\% & 0.398 & 3.93 \\ MPT-7B-Instruct & 2048 & 31.7\% & 0.269 & 4.49 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Human evaluation of LLM responses. **Context**: The model’s context length, using its native tokenizer. **Correct**: The percentage of model responses deemed correct by clinicians. **WR**: Average win rate marginalizing over model pairings. **Rank**: Empirical mean of human-assigned rankings. \({}^{\dagger}\)With multi-step refinement the effective context length is infinite, as the model observes the entire EHR albeit in small chunks at a time. \({}^{*}\)For GPT-4 (2k) we used the GPT-4 32k models from OpenAI but restricted its context length using the Vicuna-native tokenizer for direct comparison.
by counting tokens from the end of the record, ensuring that as much recent information as possible was retained.
Clinician Evaluation of LLM ResponsesNine physicians were asked to evaluate and rank the responses generated by 6 separate LLMs. The instructions and EHRs reviewed by the clinicians were exactly the same in structure and content as those provided to the LLMs. Clinicians recorded a binary evaluation of whether the response was correct or incorrect, with "incorrect" defined as meeting at least one of the following criteria:
1. Response is not clinically appropriate based on the available EHR information;
2. Response includes errors that, if corrected, would change the clinical interpretation;
3. Response does not address the instruction.
Responses _not_ marked as "incorrect" were deemed to be "correct". Clinicians then ranked the quality of the LLM responses based on which provided the most clinically relevant and appropriate response. Equivalent ranks were permitted. The clinicians were blinded to which LLM generated each output, and the order of LLM output was reshuffled for each instruction. Each clinician reviewed 49 instruction-patient pairs on average, yielding 303 pairs reviewed overall with 50 instruction-EHR pairs being reviewed by three clinicians.
Overall, we found that more than half of the responses generated by the GPT-4 variants we tested were deemed correct by clinicians (65% for GPT-4 (32k + MR), 60.1% for GPT-4 (32k), 51.8% for GPT-4 (2k)). By contrast, only about one in three responses generated by the Vicuna and MPT-7B models were considered correct (35.0% for Vicuna-13B, 33.3% for Vicuna-7B, 31.7% for MPT-7B-Instruct; see Table 3). In head-to-head comparisons, GPT-4 without context length restriction was preferred over the Vicuna-13B model in 72% of instances, and preferred over MPT-7B-Instruct 81% of the time (see Figure 2). The GPT-4 model with 32k context length and no multi-step refinement had the highest overall average win-rate against all other models (0.676).
Figure 2: (Left) Head-to-head comparison of model performance based on human ranks. The number in row \(i\), column \(j\) indicates the proportion of instructions for which the response generated by the model in row \(i\) was strictly preferred over the model in row \(j\). (Right) Head-to-head evaluation of model performance using COMET Ranks (the same matrix structure as on the left, but using rankings derived from COMET, an automated metric, rather than clinician-generated rankings). Model win rates using COMET follow a similar pattern as to model win rates using human rankings.
## 6 Automated Evaluation of LLM Responses
With the aim to to find an automated proxy for clinician-in-the-loop evaluation, we analyzed the correlation between a suite of automated metrics and human preference rankings using Kendall's Rank Correlation ("Kendall's Tau") [22]. We also calculated the inter-rater correlation between human rankers, yielding a mean Kendall's Tau coefficient of 0.44. The average correlations between metrics and human rankings is shown in Table 4. As noted by previous studies [29, 56], the majority of these metrics have shown moderate correlation with human preference and are widely reported in NLG tasks.
We evaluated each model output using both source-free (SF) and source-augmented (SA) automated metrics. Source-free metrics compare a model's output to a gold standard reference answer (in our case generated by a clinician) without the use of any additional context or sources (i.e., without any information from the EHR). We selected BERTScore [54], METEOR [1], chrF++ [35, 36], GoogleBLEU [51], and ROUGE-L [25] due to their availability and wide use. Source-augmented metrics consider source (e.g., the EHR) in addition to a gold reference and model output. The SA metrics we considered (and the LMs they use) include UniEval (T5-large) [56], COMET (XLM-RoBERTa) [38], and CTC Summary Consistency (BERT) [9]. As these models have limited context length we used the BM25Okapi algorithm to retrieve relevant snippets from within the patient's EHR using the instruction as a search query.
Overall, COMET [38] exhibited the strongest correlation with clinician preference rankings, approaching the level of human inter-reviewer reliability (0.37 vs. 0.44). As seen in Figure 2, the overall trends of head-to-head comparisons were preserved when using model output rankings from COMET vs. clinician-generated rankings. Specifically, GPT-4 was consistently preferred over the Vicuna and MPT-7B models by both COMET and clinicians, and the Vicuna models were consistently preferred over the MPT-7B model. Within the GPT-4 variants and between the two Vicuna models considered, win-rate preferences were not necessarily preserved, suggesting utility of COMET as a reasonable but perhaps coarse measure of model performance in this setting. The next most correlated metric with human rankings after COMET was BERTScore, a source-free metric, with an average correlation coefficient of 0.34.
## 7 Security, Privacy, and Compliance
A university institutional review board granted approval for this study, designated under the reference number 57916. The data utilized in this research adhered to the university's de-identification protocol. All authors working with the data have individually undergone and successfully completed institutional training in HIPAA
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Automated Metric**} & **Source** & **Avg.** & **95\% CI** \\ & **Augmented** & **Corr.** & \\ \hline COMET & ✓ & 0.37 & 0.33-0.41 \\ BERTScore & & 0.34 & 0.30-0.38 \\ METEOR & & 0.32 & 0.28-0.36 \\ chrF++ & & 0.29 & 0.25-0.33 \\ GoogleBLEU & & 0.29 & 0.25-0.33 \\ ROUGE-L & & 0.27 & 0.23-0.31 \\ BLEURT & & 0.25 & 0.21-0.30 \\ LENS & & 0.18 & 0.14-0.22 \\ UniEval Relevance & ✓ & 0.27 & 0.23-0.32 \\ UniEval Fluency & ✓ & 0.11 & 0.06-0.15 \\ UniEval Coherence & ✓ & 0.09 & 0.04-0.13 \\ UniEval Consistency & ✓ & 0.09 & 0.04-0.13 \\ UniEval Overall & ✓ & 0.20 & 0.15-0.24 \\ \hline Inter-Rater Reliability & & 0.44 & 0.34-0.53 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Correlation (mean Kendall’s Tau) between automated metrics’ rankings and human ranking of LLM outputs. Mean Kendall’s Tau between human reviewers (inter-rater reliability) was 0.44.
and data privacy, prior to their engagement with the de-identified data. All models fed with de-identified data were either deployed in an in-house HIPAA compliant compute infrastructure or through a HIPAA business associate agreement.
## 8 Discussion and Conclusion
Readily available datasets and benchmarks for easy-to-evaluate tasks like closed-form question answering have helped to measure the remarkable progress of LLMs, even in medical domains [23]. However, logistical difficulties and significant labeling costs have hindered progress towards establishing a shared dataset and benchmark for tasks amenable to LLMs and which truly represent clinician needs. We share such a benchmark dataset with the research community, which takes a novel approach towards instruction gathering by modularizing and isolating the process of solicitation and EHR pairing. To the best of our knowledge, our dataset is the first to evaluate LLM performance on clinician-generated instructions using comprehensive, longitudinal EHRs. This affords several new insights.
The importance of context length.While GPT-4 with a restricted context length of 2048 tokens achieved a correctness rate of 51.8%, the exact same GPT-4 model given 32000 tokens of context from the EHR achieved a correctness rate of 60.1%. Thus the additional context length yielded an additional 8.3% in the proportion of correct responses. Given the sheer quantity of tokens and concepts contained within comprehensive EHRs, including in MedAlign as shown in Table S11, it is perhaps not surprising that instruction following performance was poor with a limited context length. Indeed, not a single EHR in MedAlign can fit entirely within the Vicuna or MPT-7B's 2048 context length, and only 19.6% of these records can entirely fit within the 32k context length afforded by GPT-4. This highlights the importance of context length in applying LLMs to EHR-based tasks and motivates efforts to increase context lengths via e.g., methods that do so implicitly via position interpolation [4] or approaches that explicitly improve the training efficiency of mathematical operations [8].
The importance of pre-training data.The biggest difference in both percentage of responses deemed correct and mean human ranking amongst the LLMs we considered was between the Vicuna/MPT models and GPT-4 variants. Comparing Vicuna-13B (the best performing of the open-source LLMs considered) with GPT-4 constrained to the exact same prompt and context (2048 tokens using the Vicuna tokenizer), we observed a 16.8% improvement in the proportion of responses deemed correct and a 0.81 improvement in mean rank. Because the pre-training data, architecture, and other details of the GPT-4 models are not shared publicly, it is difficult to ascertain exactly what differentiates these two models. However, given the strong performance of GPT-4 on clinical reasoning challenges [21, 30] and the fact that fine-tuning LLMs on medical data improves performance in these domains [15], we suspect this gap in performance is due to more biomedically-focused text being included in GPT-4's pre-training corpus as compared to that of Vicuna-13B. It is also possible that GPT-4 was simply trained on more data generally relative to Vicuna-13B.
The importance of model size.One intriguing result was the similarity in mean rank and proportion of correct responses between the Vicuna-7B model (33.3% correct, mean rank 3.93) and the Vicuna-13B model (35.0% correct, mean rank 3.92). The lack of any meaningful difference despite having double the number of parameters suggests that simply increasing neural network size alone may not lead to as sizeable gains as e.g., increasing the number of tokens seen during pre-training or increasing the diversity in the pre-training data. This corroborates recent findings on the importance of balance between model parameters and pre-training dataset size [18] as well as the importance of dataset quality [14].
Limitations.Our approach of first soliciting instructions and _then_ pairing these instructions to EHRs can increase the scale and diversity of instructions collected, but at a cost. Despite yielding almost twice as many relevant pairings as simply randomly selecting an EHR for each instruction, our BM25 approach did not yield a relevant match for 26% of instructions. In other words, while an instruction submitted by a clinician was of course relevant to the _hypothetical_ patient they had in mind at the time of submission, it frequently ended up not being relevant to an _actual_ patient EHR. There are potential ways to improve this matching process e.g.,
by using vector databases powered by BERT-style models which could better capture semantic alignment between queries and EHRs relative to BM25 [49]. Additionally, while we solicited instructions from a large number of clinicians at our academic medical center with diverse specialties and backgrounds, the clinicians who submitted data to MedAlign represent only a small fraction of the overall clinician workforce.
Conclusion.This work establishes, for the first time, the performance of some of the most capable LLMs available -- GPT-4, LLaMA, and MPT-7B-Instruct -- on EHR-based instruction-following tasks. We find that approximately one-third of the best-performing LLM's responses are incorrect. The benchmark dataset we share, MedAlign enables researchers to measure what matters and focus on tasks that are clinically relevant with significant potential positive impact. In addition, our findings establishing significant correlation between human preference and existing automated metrics provide a path for researchers to make technical progress without requiring the organizational infrastructure for clinical labeling. Finally, our novel approach towards soliciting clinician instructions paves the way for even larger-scale data collection efforts, both for training and evaluation purposes.
|
2303.01090 | Cross-helicity effect on $α$-type dynamo in non-equilibrium
turbulence | Turbulence is typically not in equilibrium, i.e. mean quantities such as the
mean energy and helicity are typically time-dependent. The effect of
non-stationarity on the turbulent hydromagnetic dynamo process is studied here
with the use of the two-scale direct-interaction approximation (TSDIA), which
allows to explicitly relate the mean turbulent Reynolds and Maxwell stresses
and the mean electromotive force (EMF) to the spectral characteristics of
turbulence, such as e.g. the mean energy, as well as kinetic and
cross-helicity. It is demonstrated, that the non-equilibrium effects can
enhance the dynamo process when the magnetohydrodynamic (MHD) turbulence is
both helical and cross-helical. This effect is based on the turbulent
infinitesimal-impulse cross-response functions, which do not affect turbulent
flows in equilibrium. The evolution and sources of the cross-helicity in MHD
turbulence is also discussed. | Krzysztof A. Mizerski, Nobumitsu Yokoi, Axel Brandenburg | 2023-03-02T09:18:55Z | http://arxiv.org/abs/2303.01090v1 | # Cross-helicity effect on \(\alpha\)-type dynamo
###### Abstract
Turbulence is typically not in equilibrium, i.e. mean quantities such as the mean energy and helicity are typically time-dependent. The effect of non-stationarity on the turbulent hydromagnetic dynamo process is studied here with the use of the two-scale direct-interaction approximation (TSDIA), which allows to explicitly relate the mean turbulent Reynolds and Maxwell stresses and the mean electromotive force (EMF) to the spectral characteristics of turbulence, such as e.g. the mean energy, as well as kinetic and cross-helicity. It is demonstrated, that the non-equilibrium effects can enhance the dynamo process when the magnetohydrodynamic (MHD) turbulence is both helical and cross-helical. This effect is based on the turbulent infinitesimal-impulse cross-response functions, which do not affect turbulent flows in equilibrium. The evolution and sources of the cross-helicity in MHD turbulence is also discussed.
## 1 Introduction
The effect of hydromagnetic dynamo action is ubiquitous in astrophysical plasmas e.g. in stellar and planetary interiors, accretion discs or the interstellar medium (cf. Roberts and Soward 1972, Brandenburg and Subramanian 2005, Dormy and Soward 2007, Roberts and King 2013, Balbus and Hawley 1991a,b). This is particularly important in view of the recent advancement of tokamak devices, reaching very high plasma temperatures, thus giving hope for the production of thermonuclear fusion power (cf. Li _et al._ 2019, Gibney 2022). The investigations of the large-scale dynamo mechanisms in magnetohydrodynamic (MHD) turbulence, that is those that lead to generation of large-scale magnetic fields, is mainly limited to equilibrium, i.e. statistically stationary turbulence.
One of the widely known and often invoked mechanisms is the so-called \(\alpha\)-effect, which requires chirality (lack of reflexional symmetry) in the turbulent flow, and this requires some mechanism that breaks the 'up-down' symmetry of the system, cf. Krause
and Radler (1980), Dormy and Soward (2007), Moffatt and Dormy (2019). A large-scale electromotive force (EMF) is then generated and this leads to the amplification of magnetic energy. The lack of reflectional symmetry is typically introduced by stratification and background rotation and a useful measure of the flow chirality is the kinetic helicity, \(\langle{\bf U}\cdot\nabla\times{\bf U}\rangle\), where \(\langle\cdot\rangle\) denotes the ensemble mean. Another pseudoscalar quantity of importance in dynamo theory is the cross-helicity \(\langle{\bf U}\cdot{\bf B}\rangle\), cf. e.g. Hamba and Tsuchiya (2010), Yokoi (2013); see Yokoi (2023) for a review.
The aim of this paper can be shortly stated as a demonstration of the influence of non-equilibrium effects in MHD turbulence on the \(\alpha\)-effect and thereby on large-scale dynamos. This issue has already been investigated in a series of papers by Mizerski (2018a,b, 2020, 2021, 2022), which however, assumed that the turbulence was stirred by a Gaussian and helical forcing; the physical properties of the forcing were then present in the expressions for the \(\alpha\) coefficient. On the contrary, here we apply the Two-Scale Direct-Interaction Approximation (TSDIA), which allows to remove the stirring force, but instead we need to assume some statistical properties of the background turbulence. Nevertheless, this approach allows to explicitly relate the mean electromotive force to kinetic and cross-helicities, through consideration of the Green response functions, which describe the responses of the turbulent flow and magnetic field to infinitesimal perturbations, cf. e.g. Yoshizawa (1985, 1990, 1998), Yokoi (2013, 2018). We show that the infinitesimal-impulse cross-responses affect the mean EMF through non-equilibrium effects in MHD turbulence, and the \(\alpha\)-effect is potentially enhanced, provided that the kinetic and cross-helicities are both non-zero. We also discuss the evolution equation of the cross-helicity, its sources and sinks in MHD turbulence, hence the possibility of a coexistence of the kinetic and cross-helicities; this issue is also investigated numerically.
## 2 Mathematical formulation
To study the magnetohydrodynamic turbulence in an incompressible conducting fluid we consider the following dynamical equations describing the evolution of the velocity field of the fluid flow \({\bf U}({\bf x},t)\) and the magnetic field \({\bf B}({\bf x}.t)\)
\[\frac{\partial{\bf U}}{\partial t}+\left({\bf U}\cdot\nabla\right){\bf U}=- \nabla\Pi-2\mathbf{\Omega}\times{\bf U}+\left({\bf B}\cdot\nabla \right){\bf B}+\nu\nabla^{2}{\bf U},\]
\[\frac{\partial{\bf B}}{\partial t}+\left({\bf U}\cdot\nabla\right){\bf B}= \left({\bf B}\cdot\nabla\right){\bf U}+\eta\nabla^{2}{\bf B},\]
\[\nabla\cdot{\bf U}=0\qquad\nabla\cdot{\bf B}=0,\]
where
\[\Pi=\frac{p}{\rho}+\frac{B^{2}}{2}-\frac{1}{2}(\mathbf{\Omega}\times {\bf x})^{2},\]
is the total pressure, \(\rho\) is the density, \(\Omega\) is the angular velocity, \(\nu\) is the viscosity, \(\eta\) is the magnetic diffusivity. For the purpose of simplicity we rescaled the magnetic field in the following way \({\bf B}/\sqrt{\mu_{0}\rho}\rightarrow{\bf B}\), where \(\mu_{0}\) is the vacuum permeability (so that the prefactor \(1/\mu_{0}\rho\) in the Lorentz-force term in the Navier-Stokes equation is lost); in the following we also rescale the currents, \(\sqrt{\mu_{0}/\rho}{\bf J}\rightarrow{\bf J}\), so that \({\bf J}=\nabla\times{\bf B}\). Next, denoting by angular brackets the ensemble mean,
\[\langle\cdot\rangle-\mbox{ensemble mean}\]
we put forward the standard decomposition
\[{\bf U}=\langle{\bf U}\rangle+{\bf u}^{\prime},\quad{\bf B}=\langle{\bf B} \rangle+{\bf b}^{\prime},\quad p=\langle p\rangle+p^{\prime},\]
and write down separately the equations for the mean fields \(\left\langle\mathbf{U}\right\rangle\) and \(\left\langle\mathbf{B}\right\rangle\) and the turbulent fluctuations \(\mathbf{u}^{\prime}\) and \(\mathbf{b}^{\prime}\); this yields
\[\frac{\partial\left\langle\mathbf{U}\right\rangle}{\partial t}+ \left(\left\langle\mathbf{U}\right\rangle\cdot\nabla\right)\left\langle \mathbf{U}\right\rangle= -\nabla\left\langle\varPi\right\rangle-2\boldsymbol{\Omega} \times\left\langle\mathbf{U}\right\rangle+\left(\left\langle\mathbf{B} \right\rangle\cdot\nabla\right)\left\langle\mathbf{B}\right\rangle+\nu\nabla^ {2}\left\langle\mathbf{U}\right\rangle\] \[-\nabla\cdot\left(\left\langle\mathbf{u}^{\prime}\mathbf{u}^{ \prime}\right\rangle-\left\langle\mathbf{b}^{\prime}\mathbf{b}^{\prime}\right\rangle \right), \tag{4a}\] \[\frac{\partial\left\langle\mathbf{B}\right\rangle}{\partial t}=\nabla\times \left(\left\langle\mathbf{U}\right\rangle\times\left\langle\mathbf{B}\right \rangle\right)+\nabla\times\left\langle\mathbf{u}^{\prime}\times\mathbf{b}^{ \prime}\right\rangle+\eta\nabla^{2}\left\langle\mathbf{B}\right\rangle,\] (4b) \[\nabla\cdot\left\langle\mathbf{B}\right\rangle=0,\quad\nabla\cdot\left\langle \mathbf{U}\right\rangle=0, \tag{4c}\]
where
\[\boldsymbol{\mathcal{E}}=\left\langle\mathbf{u}^{\prime}\times\mathbf{b}^{ \prime}\right\rangle, \tag{5}\]
is the large-scale electromotive force (EMF) and
\[\frac{\partial\mathbf{u}^{\prime}}{\partial t}-\nu\nabla^{2} \mathbf{u}^{\prime}+2\boldsymbol{\Omega}\times\mathbf{u}^{\prime}+\left(\left \langle\mathbf{U}\right\rangle\cdot\nabla\right)\mathbf{u}^{\prime}+\left( \mathbf{u}^{\prime}\cdot\nabla\right)\left\langle\mathbf{U}\right\rangle- \left(\left\langle\mathbf{B}\right\rangle\cdot\nabla\right)\mathbf{b}^{\prime }-\left(\mathbf{b}^{\prime}\cdot\nabla\right)\left\langle\mathbf{B}\right\rangle\] \[+\nabla\varPi^{\prime}=-\nabla\cdot\left(\mathbf{u}^{\prime} \mathbf{u}^{\prime}-\mathbf{b}^{\prime}\mathbf{b}^{\prime}\right)+\nabla \cdot\left(\left\langle\mathbf{u}^{\prime}\mathbf{u}^{\prime}\right\rangle- \left\langle\mathbf{b}^{\prime}\mathbf{b}^{\prime}\right\rangle\right), \tag{6a}\] \[\frac{\partial\mathbf{b}^{\prime}}{\partial t}-\eta\nabla^{2} \mathbf{b}^{\prime}+\left(\left\langle\mathbf{U}\right\rangle\cdot\nabla \right)\mathbf{b}^{\prime}-\left(\left\langle\mathbf{B}\right\rangle\cdot \nabla\right)\mathbf{u}^{\prime}+\left(\mathbf{u}^{\prime}\cdot\nabla\right) \left\langle\mathbf{B}\right\rangle-\left(\mathbf{b}^{\prime}\cdot\nabla \right)\left\langle\mathbf{U}\right\rangle\] \[=\nabla\times\left(\mathbf{u}^{\prime}\times\mathbf{b}^{\prime}- \left\langle\mathbf{u}^{\prime}\times\mathbf{b}^{\prime}\right\rangle\right),\] (6b) \[\nabla\cdot\mathbf{b}^{\prime}=0,\quad\nabla\cdot\mathbf{u}^{ \prime}=0. \tag{6c}\]
## 3 Non-equilibrium effects in dynamo theory
Previous results of Mizerski (2018a,b, 2020, 2021, 2022), obtained in the absence of the Coriolis force but with chiral stochastic forcing, in the context of the geodynamo and galactic dynamos suggest that the non-stationary \(\alpha\)-effect is proportional to the energy production rate resulting from the presence of the forcing (e.g. stochastic buoyancy) and is oscillatory on time scales induced by the forcing, which could be long (cf. also Mizerski _et al_. 2012 for non-stationary dynamo in the context of the elliptical instability). Here we utilize the Two-Scale Direct Interaction Approximation, in order to extract the effect of non-stirred, non-equilibrium turbulence on the large-scale hydromagnetic dynamo. In other words, the new approach allows to study non-stationary MHD turbulence and the turbulent dynamo effect in the absence of external stochastic forcing although with assumed statistical properties of the background turbulence. We demonstrate, that in non-equilibrium turbulence the quantity \(\left\langle\mathbf{u}^{\prime}\cdot\mathbf{j}^{\prime}\right\rangle\) plays a significant role in generation of the large-scale EMF through the \(\alpha\)-effect and the effect of \(\left\langle\mathbf{u}^{\prime}\cdot\mathbf{j}^{\prime}\right\rangle\) vanishes in stationary turbulence.
### Application of the TSDIA method
Let us introduce a small parameter \(\delta\) and define slow and fast spatial and temporal variables
\[\boldsymbol{\xi}=\mathbf{x},\quad\mathbf{X}=\delta\mathbf{x},\quad\tau=t,\quad T =\delta t. \tag{7}\]
The large-scale fields depend only on the slow variables, \(\left\langle\mathbf{U}\right\rangle(\mathbf{X},T)\) and the fluctuations depend on both, \(\mathbf{u}^{\prime}(\boldsymbol{\xi},\mathbf{X};\tau,T)\). We also define the Fourier transform, involving Galilean
transformation to the frame moving with the velocity \(\left\langle\mathbf{U}\right\rangle\)
\[u_{i}^{\prime}(\boldsymbol{\xi},\mathbf{X};\tau,T)=\int\mathrm{d}^{3}k\hat{u}_{ i}^{\prime}(\mathbf{k},\mathbf{X};\tau,T)\mathrm{e}^{-\mathrm{i}\mathbf{k}\cdot( \boldsymbol{\xi}-\left\langle\mathbf{U}\right\rangle\tau)}, \tag{10}\]
but the explicit dependence on the slow variables \(\mathbf{X}\) and \(T\) will be typically suppressed in notation for clarity. The details of the TSDIA approach are provided in Appendix A (see also SS 9.6 of Yoshizawa 1998, Yoshizawa 1985, 1990 and Yokoi 2023) and here we present the major results. The method involves introduction of the concept of background turbulence with given statistical properties, uninfluenced by the large-scale field and rotation, hence isotropic; this background turbulence is defined by the following correlation functions
\[\left\langle\hat{f}_{i}(\mathbf{k};\tau)\hat{g}_{j}(\mathbf{k}_{1};\tau_{1}) \right\rangle=\left[P_{ij}(\mathbf{k})Q_{fg}\left(k;\tau,\tau_{1}\right)+\frac {1}{2}\mathrm{i}\epsilon_{ijk}\frac{k_{k}}{k^{2}}H_{fg}\left(k;\tau,\tau_{1} \right)\right]\delta(\mathbf{k}+\mathbf{k}_{1}), \tag{11}\]
\[\left\langle G_{fgij}^{\prime}(\mathbf{k};\tau,\tau_{1})\right\rangle=\delta _{ij}G_{fg}\left(k;\tau,\tau_{1}\right), \tag{12}\]
where \(f\) and \(g\) represent one of the variables \(\mathbf{u}_{00}^{\prime}\) and \(\mathbf{b}_{00}^{\prime}\) and \(G_{fij}^{\prime}(\mathbf{k};\tau,\tau_{1})\) denote the Green's functions describing the system's response to infinitesimal disturbances. It is useful at this stage to write down explicitly the following quantity
\[\left\langle\mathbf{u}_{00}^{\prime}(\mathbf{x},\tau)\cdot\mathbf{ j}_{00}^{\prime}(\mathbf{x},\tau_{1})\right\rangle =-\mathrm{i}\epsilon_{ijk}\int\mathrm{d}k\int\mathrm{d}k^{\prime} k_{j}^{\prime}\left\langle\hat{u}_{00i}^{\prime}(\mathbf{k};\tau)\hat{b}_{00k}^{ \prime}(\mathbf{k}^{\prime};\tau_{1})\right\rangle\mathrm{e}^{-\mathrm{i}( \mathbf{k}+\mathbf{k}^{\prime})\cdot\mathbf{x}}\] \[=\int\mathrm{d}kH_{ub}\left(k;\tau,\tau_{1}\right)=\int\mathrm{d }kH_{bu}\left(k;\tau_{1},\tau\right), \tag{13}\]
since this quantity will play an important role in the theory of non-equilibrium \(\alpha\)-effect, developed below.
The derivation of the formula for the EMF presented in Appendix A leads to
\[\boldsymbol{\mathcal{E}}=\alpha\left\langle\mathbf{B}\right\rangle-\left( \beta+\zeta\right)\left\langle\mathbf{J}\right\rangle-\nabla\zeta\times \left\langle\mathbf{B}\right\rangle+\gamma\left(\left\langle\mathbf{W}\right \rangle+2\boldsymbol{\Omega}\right), \tag{14}\]
where \(\mathbf{J}=\nabla\times\mathbf{B}=\left\langle\mathbf{J}\right\rangle+ \mathbf{j}^{\prime}\) and \(\mathbf{W}=\nabla\times\mathbf{U}=\left\langle\mathbf{W}\right\rangle+ \mathbf{w}^{\prime}\) denote electric currents and the vorticity respectively. The statistically stationary case has been studied in detail in Yoshizawa (1998) and Yokoi (2013, 2018).
We now concentrate on the \(\alpha\)-effect, which can be decomposed into two contributions,
\[\alpha=\alpha_{S}+\alpha_{\mathrm{X}}, \tag{15}\]
the standard one, related to the so-called residual helicity
\[\alpha_{S}= \frac{1}{3}\int\mathrm{d}^{3}k\int_{-\infty}^{\tau}\mathrm{d} \tau_{1}\left[G_{uu}\left(k,\mathbf{X};\tau,\tau_{1},T\right)H_{bb}\left(k, \mathbf{X};\tau,\tau_{1},T\right)\right.\] \[\left.-G_{bb}\left(k,\mathbf{X};\tau,\tau_{1},T\right)H_{uu} \left(k,\mathbf{X};\tau_{1},\tau,T\right)\right], \tag{16}\]
and a less obvious one, related to the cross helicity and the quantity \(\left\langle\mathbf{u}^{\prime}\cdot\mathbf{j}^{\prime}\right\rangle\) which takes the form
\[\alpha_{\mathrm{X}}= -\frac{1}{3}\int\mathrm{d}^{3}k\int_{-\infty}^{\tau}\mathrm{d} \tau_{1}G_{bu}\left(k,\mathbf{X};\tau,\tau_{1},T\right)H_{ub}\left(k,\mathbf{X };\tau,\tau_{1},T\right)\] \[+\frac{1}{3}\int\mathrm{d}^{3}k\int_{-\infty}^{\tau}\mathrm{d} \tau_{1}G_{ub}\left(k,\mathbf{X};\tau,\tau_{1},T\right)H_{bu}\left(k,\mathbf{X };\tau,\tau_{1},T\right). \tag{17}\]
Since the helical functions of the background turbulence satisfy
\[H_{bu}\left(\tau,\tau_{1}\right)=H_{ub}\left(\tau_{1},\tau\right), \tag{18}\]
we obtain
\[\alpha_{\rm X}= -\frac{1}{3}\int{\rm d}^{3}k\int_{-\infty}^{\tau}{\rm d}\tau_{1}G_{ bu}\left(k,{\bf X};\tau,\tau_{1},T\right)H_{ub}\left(k,{\bf X};\tau,\tau_{1},T\right)\] \[+\frac{1}{3}\int{\rm d}^{3}k\int_{-\infty}^{\tau}{\rm d}\tau_{1}G_ {ub}\left(k,{\bf X};\tau,\tau_{1},T\right)H_{ub}\left(k,{\bf X};\tau_{1},\tau,T \right). \tag{3.11}\]
We now introduce the following symmetric and antisymmetric parts of \(H_{ub}\) with respect to exchange of time variables
\[H_{ub}^{(s)}\left(\tau,\tau_{1}\right)=\frac{1}{2}\left(H_{ub}\left(\tau,\tau_ {1}\right)+H_{ub}\left(\tau_{1},\tau\right)\right), \tag{3.12a}\] \[H_{ub}^{(a)}\left(\tau,\tau_{1}\right)=\frac{1}{2}\left(H_{ub}\left(\tau, \tau_{1}\right)-H_{ub}\left(\tau_{1},\tau\right)\right), \tag{3.12b}\]
which allows to further separate the \(\alpha_{\rm X}\) term into two contributions
\[\alpha_{\rm X}= -\frac{1}{3}\int{\rm d}^{3}k\int_{-\infty}^{\tau}{\rm d}\tau_{1} \left[G_{ub}\left(k,{\bf X};\tau,\tau_{1},T\right)+G_{bu}\left(k,{\bf X};\tau,\tau_{1},T\right)\right]H_{ub}^{(a)}\left(k,{\bf X};\tau,\tau_{1},T\right)\] \[+\frac{1}{3}\int{\rm d}^{3}k\int_{-\infty}^{\tau}{\rm d}\tau_{1} \left[G_{ub}\left(k,{\bf X};\tau,\tau_{1},T\right)-G_{bu}\left(k,{\bf X};\tau,\tau_{1},T\right)\right]H_{ub}^{(s)}\left(k,{\bf X};\tau_{1},\tau,T\right). \tag{3.13}\]
The first term in equation (3.13), i.e.
\[\alpha_{\rm neq}=-\frac{1}{3}\int{\rm d}^{3}k\int_{-\infty}^{\tau}{\rm d}\tau _{1}\left[G_{ub}\left(k,{\bf X};\tau,\tau_{1},T\right)+G_{bu}\left(k,{\bf X}; \tau,\tau_{1},T\right)\right]H_{ub}^{(a)}\left(k,{\bf X};\tau,\tau_{1},T\right), \tag{3.14}\]
clearly constitutes a contribution from non-stationarity of the turbulence, as the antisymmetric part \(H_{ub}^{(a)}\) is clearly a non-equilibrium effect.
### Physics of the non-equilibrium \(\alpha_{\rm neq}\)-effect
If we further assume that the function
\[{\cal G}\left(\tau,\tau_{1}\right)=G_{ub}\left(\tau,\tau_{1}\right)+G_{bu} \left(\tau,\tau_{1}\right) \tag{3.15}\]
is independent of \(k\), the non-equilibrium \(\alpha\)-effect can be expressed as follows:
\[\alpha_{\rm neq}=-\frac{1}{3}\int_{-\infty}^{\tau}{\rm d}\tau_{1}{\cal G} \left(\tau,\tau_{1}\right)\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{ \prime}\rangle^{(a)}\left({\bf x},\tau,\tau_{1}\right), \tag{3.16}\]
where
\[\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{\prime}\rangle^{(a)}\left({\bf x },\tau,\tau_{1}\right)=\frac{1}{2}\left[\langle{\bf u}_{00}^{\prime}\left({\bf x },\tau\right)\cdot{\bf j}_{00}^{\prime}\left({\bf x},\tau_{1}\right)\rangle- \langle{\bf u}_{00}^{\prime}\left({\bf x},\tau_{1}\right)\cdot{\bf j}_{00}^{ \prime}\left({\bf x},\tau\right)\rangle\right]. \tag{3.17}\]
The memory effect, expressed by the time integral in (3.17) is clearly crucial, as \(\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{\prime}\rangle^{(a)}\left({\bf x },\tau,\tau\right)=0\). Next, inspection of the evolution equations for the Green's functions leads to the conclusion that \(G_{ub}\) must be an odd function of \({\bf b}_{00}^{\prime}\). This is expected, since the \(\alpha_{\rm X}\) contribution to the \(\alpha\)-effect results from the action of the Lorentz force, and since \(H_{ub}\) is associated with the quantity \(\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{\prime}\rangle\), i.e. \(H_{ub}\) is linear in \({\bf b}_{00}^{\prime}\), it follows that \(G_{ub}\) must be an odd function of the latter. Moreover, since \(\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{\prime}\rangle\) is a scalar quantity (does not change sign under reflections), \(G_{ub}\) must be skew. The only dynamical quantity that is skew and odd in \({\bf b}_{00}^{\prime}\) is the cross helicity, \(\langle{\bf u}_{00}^{\prime}\cdot{\bf b}_{00}^{\prime}\rangle\), hence we expect that \(G_{ub}\sim Q_{ub}\). Having in mind that the response function \({\cal G}(\tau,\tau_{1})\) is non-dimensional we can now provide
the following rough estimate of the non-equilibrium \(\alpha_{\rm neq}\)-effect
\[\alpha_{\rm neq}\sim-\frac{2}{3}\int_{-\infty}^{\tau}{\rm d}\tau_{1}\Upsilon^{ (s)}\left({\bf x},\tau,\tau_{1}\right)\left\langle{\bf u}_{00}^{\prime}\cdot{ \bf j}_{00}^{\prime}\right\rangle^{(a)}\left({\bf x},\tau,\tau_{1}\right), \tag{3.18}\]
where
\[\Upsilon\left({\bf x},\tau,\tau_{1}\right)=\frac{\left\langle{\bf u}_{00}^{ \prime}\left({\bf x},\tau\right)\cdot{\bf b}_{00}^{\prime}\left({\bf x},\tau_{ 1}\right)\right\rangle}{\sqrt{\left\langle{\bf u}_{00}^{\prime 2}\right\rangle \left({\bf x},\tau\right)\left\langle{b}_{00}^{\prime 2}\right\rangle \left({\bf x},\tau_{1}\right)}}, \tag{3.19}\]
\[\Upsilon^{(s)}\left({\bf x},\tau,\tau_{1}\right)=\frac{1}{2}\left[\Upsilon \left({\bf x},\tau,\tau_{1}\right)+\Upsilon\left({\bf x},\tau_{1},\tau\right) \right], \tag{3.20}\]
and the cross helicity has been normalized by the geometric mean of the kinetic and magnetic fluctuational energies (see Yokoi 2011 for a discussion of different cross-helicity normalizations). The latter equation expresses an effect which results from the lack of equilibrium in the turbulent state.
The second term in (3.13) is likely to be small because of the factor \(G_{ub}(\tau,\tau_{1})-G_{bu}(\tau,\tau_{1})\). For example in the case when \(\nu=\eta\) the two response functions \(G_{ub}\) and \(G_{bu}\) are equal and \(\alpha_{\rm X}=\alpha_{\rm neq}\). This still holds approximately true, when the diffusivities are unequal but weak,
\[G_{ub}\approx G_{bu},\qquad{\rm and}\qquad\alpha_{\rm X}\approx\alpha_{\rm neq}. \tag{3.21}\]
The same symmetry arguments as in the case of \(\alpha_{\rm neq}\) can be also applied to the second term in (3.13) which is therefore proportional to the non-dimensional cross helicity \(\Upsilon=\Upsilon({\bf x},\tau,\tau)\) and the quantity \(\left\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{\prime}\right\rangle= \left\langle{\bf u}_{00}^{\prime}\left({\bf x},\tau\right)\cdot{\bf j}_{00}^{ \prime}\left({\bf x},\tau\right)\right\rangle\), i.e. \(\alpha_{\rm X}-\alpha_{\rm neq}\sim\tau_{t}\Upsilon\left\langle{\bf u}_{00}^{ \prime}\cdot{\bf j}_{00}^{\prime}\right\rangle\), where \(\tau_{t}\) is the turn over time of the most energetic turbulent eddies. However, as remarked above this effect should be weak, when the diffusion is weak or the magnetic Prandtl number \({\rm Pr}_{M}=\nu/\eta\approx 1\).
Finally, we also expect the \(\left\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{\prime}\right\rangle\) correlations in fully turbulent flows to be proportional to the kinetic helicity \(\left\langle{\bf u}_{00}^{\prime}\cdot{\bf w}_{00}^{\prime}\right\rangle\), since typically the velocities and magnetic fields tend to align in such flows. Again, the prefactor must be skew and odd in \({\bf b}_{00}^{\prime}\), therefore we propose
\[\left\langle{\bf u}_{00}^{\prime}\cdot{\bf j}_{00}^{\prime}\right\rangle\approx \Upsilon\langle{\bf u}_{00}^{\prime}\cdot{\bf w}_{00}^{\prime}\rangle=\frac{ \left\langle{\bf u}_{00}^{\prime}\cdot{\bf b}_{00}^{\prime}\right\rangle \langle{\bf u}_{00}^{\prime}\cdot{\bf w}_{00}^{\prime}\rangle}{\sqrt{\left \langle{u}_{00}^{\prime 2}\right\rangle\left\langle{b}_{00}^{\prime 2}\right\rangle}}. \tag{3.22}\]
Introducing the latter relation into (3.18) leads to
\[\alpha_{\rm neq}\sim-\frac{2}{3}\int_{-\infty}^{\tau}{\rm d}\tau_{1}\left( \Upsilon^{(s)}\left({\bf x},\tau,\tau_{1}\right)\right)^{2}\left\langle{\bf u }_{00}^{\prime}\cdot{\bf w}_{00}^{\prime}\right\rangle^{(a)}\left({\bf x},\tau,\tau_{1}\right), \tag{3.23}\]
which shows, that the non-equilibrium \(\alpha_{\rm neq}\)-effect relies on coexistence of the kinetic and cross helicities and their history in MHD turbulence (more precisely, in the case of kinetic helicity only the antisymmetric part of the time correlations \(\left\langle{\bf u}_{00}^{\prime}\cdot{\bf w}_{00}^{\prime}\right\rangle^{(a)} \left({\bf x},\tau,\tau_{1}\right)\) contributes to the new effect).
### Calculation of the \(\alpha_{\rm neq}\)-effect
We will now investigate this dynamo mechanism in some more detail. In order to calculate the effect of non-equilibrium turbulence we adopt a similar approach to that in SS 7 of Yoshizawa (1998). In stationary turbulence the functions \(H_{fg}\left(k,{\bf X};\tau,\tau_{1},T\right)\) and \(G_{f}\left(k,{\bf X};\tau,\tau_{1},T\right)\) depend only on \(|\tau-\tau_{1}|\), hence to study the non-equilibrium effects we postulate a similar formulae for these functions as those of Yoshizawa (1998) (cf. formulae 6.53-6.54 of this book), but modified in order to introduce simple explicit and distinct
dependencies on \(\tau\) and \(\tau_{1}\)
\[H_{fg}\left(k,\mathbf{k}\cdot\left\langle\mathbf{B}\right\rangle,\mathbf{X}; \tau,\tau_{1},T\right)=\sigma\left(k,\mathbf{X},T\right)\mathrm{e}^{-\varpi \left(k,\mathbf{X},T\right)\left|\tau-\tau_{1}\right|}\mathcal{H}\left(\tau \right)\mathcal{H}_{1}\left(\tau_{1}\right), \tag{3.24}\]
\[G_{fg}\left(k,\mathbf{X};\tau,\tau_{1},T\right)=\theta\left(\tau-\tau_{1} \right)\varsigma\left(k,\mathbf{X},T\right)\mathrm{e}^{-\varpi\left(k,\mathbf{ X},T\right)\left|\tau-\tau_{1}\right|}\mathcal{G}\left(\tau\right)\mathcal{G}_{1} \left(\tau_{1}\right), \tag{3.25}\]
for some functions \(\mathcal{H}\left(\tau\right)\), \(\mathcal{H}_{1}(\tau_{1})\), \(\mathcal{G}(\tau)\) and \(\mathcal{G}_{1}(\tau_{1})\). We can decompose these functions into Fourier modes, which allows to adopt the following, simpler, generic model
\[H_{fg}\left(\tau,\tau_{1}\right)=\sigma\mathrm{e}^{-\varpi\left|\tau-\tau_{1} \right|}\sin\left(\varpi_{h0}\tau\right)\sin\left(\varpi_{h1}\tau_{1}\right), \tag{3.26a}\]
\[G_{fg}\left(\tau,\tau_{1}\right)=\theta\left(\tau-\tau_{1}\right)\varsigma \mathrm{e}^{-\varpi\left|\tau-\tau_{1}\right|}\sin\left(\varpi_{g0}\tau\right) \sin\left(\varpi_{g1}\tau_{1}\right), \tag{3.26b}\]
where the dependence on the slow variables and the wavenumber \(k\) was suppressed in notation for clarity; moreover \(\varpi>0\) and to fix ideas we also assume \(\varpi_{h0}>0\), \(\varpi_{g0}>0\), \(\varpi_{h1}>0\) and \(\varpi_{g1}>0\). For the sake of simplicity we also assume
\[G_{ub}\approx G_{bu}. \tag{3.27}\]
The following calculation
\[\int_{-\infty}^{\tau}\mathrm{d}\tau_{1}G_{fg}\left(\tau,\tau_{1} \right)H_{fg}\left(\tau,\tau_{1}\right)\\ =\frac{\sigma\varsigma}{4}\left(\cos\Delta_{0}\tau-\cos\Sigma_{0} \tau\right)\left[\frac{1}{4\varpi^{2}+\Delta_{1}^{2}}\left(2\varpi\cos\Delta_ {1}\tau+\Delta_{1}\sin\Delta_{1}\tau\right)\right.\\ \left.-\frac{1}{4\varpi^{2}+\Sigma_{1}^{2}}\left(2\varpi\cos \Sigma_{1}\tau+\Sigma_{1}\sin\Sigma_{1}\tau\right)\right], \tag{3.28}\]
where
\[\Delta_{i}=\varpi_{hi}-\varpi_{gi},\qquad\Sigma_{i}=\varpi_{hi}+\varpi_{gi}, \tag{3.29}\]
shows, that in non-equilibrium turbulence both contributions to the \(\alpha\)-effect, the'standard' \(\alpha_{S}\) and the one associated with cross helicity \(\alpha_{X}\), are enhanced by non-stationarity. Since the frequencies correspond to the fast oscillations of turbulent fluctuations in most of the cases the cosines and sines do not contribute to large time scales (their time average vanishes). Under the time average over long time scales \(\delta^{-1}t\) the non-zero contribution comes from the cases \(\varpi_{hi}=\varpi_{gi}\) (or \(\varpi_{hi}\approx\varpi_{gi}\)). Therefore we pick (\(\varpi\), \(\varpi_{h}\), \(\varpi_{g}\))-modes such that the following relations are satisfied
\[\Delta_{i}\ll\varpi\ll\varpi_{hi},\,\varpi_{gi},\quad\mathrm{for}\quad i=0,1, \tag{3.30}\]
in which case
\[\int_{-\infty}^{\tau}\mathrm{d}\tau_{1}G_{fg}\left(\tau,\tau_{1}\right)H_{fg} \left(\tau,\tau_{1}\right)\approx\frac{\sigma\varsigma}{8\varpi}; \tag{3.31}\]
for comparison in the stationary case one obtains \(\sigma_{s}\varsigma_{s}/2\varpi_{s}\) with \(H_{fg}=\sigma_{s}\exp(-\varpi_{s}\left|\tau-\tau_{1}\right|)\), \(G_{fg}=\varsigma_{s}\exp(-\varpi_{s}\left|\tau-\tau_{1}\right|)\). However, the influence of non-stationarity on the'standard' \(\alpha_{S}\) contribution has been studied using different methods in Mizerski (2018a,b, 2020, 2021, 2022). Here we concentrate on the cross-helicity contribution \(\alpha_{\mathrm{X}}\approx\alpha_{\mathrm{neq}}\), which is apparent within the TSDIA approach. Introduction of the formulae (3.26a,b) into (3.14) yields
\[\alpha_{\mathrm{neq}}\approx-\frac{\pi}{6}\int\mathrm{d}k\frac{\sigma\varsigma k ^{2}}{\varpi}. \tag{3.32}\]
According to our previous observations in the above we have \(\varsigma\sim\Upsilon\). We note that a very similar result is obtained if one assumes a simpler non-stationary form of the \(H_{ub}\) and
\(G_{ub}\) functions
\[H_{ub}\left(\tau,\tau_{1}\right)=\sigma\mathrm{e}^{-\varpi|\tau-\tau_{1}|}\sin \left[\varpi_{h}\left(\tau-\tau_{1}\right)\right],\]
\[G_{ub}\left(\tau,\tau_{1}\right)=\theta\left(\tau-\tau_{1}\right)\varsigma \mathrm{e}^{-\varpi|\tau-\tau_{1}|}\sin\left[\varpi_{g}\left(\tau-\tau_{1} \right)\right],\]
which satisfies \(H_{ub}(\tau,\tau_{1})=-H_{ub}(\tau_{1},\tau)\), and considers the limit (3.30).
In the above calculation we have used some standard models of the statistical properties of turbulence in order to emphasize the importance of the history of evolution of the helicities in the turbulent dynamo process. The \(\alpha_{\mathrm{neq}}\)-effect, induced by the simultaneous presence of cross and kinetic helicities, can be strong and depends on their magnitude.
## 4 Coexistence of the kinetic and cross helicities in turbulence
We now consider the question of the likelihood of coexistence of the cross and kinetic helicities in developed turbulence. Although it is not possible to draw definite conclusions in this matter, it is still instructive to study the sources and sinks of the cross helicity in turbulent flows in order to develop some intuition about its generation.
In the Appendix B we consider a stirred turbulence (with homogeneous, isotropic, stationary and helical Gaussian forcing) and show that under the first-order smoothing approximation the kinetic helicity is proportional to the helicity of the forcing, whereas the cross-helicity is defined by the product \(\langle\mathbf{f}\cdot\nabla\times\mathbf{f}\rangle(\langle\mathbf{B} \rangle\cdot\boldsymbol{\Omega})\). In other words, within the FOSA approach the existence of the cross-helicity is dependent on the existence of the mean field component parallel to the background rotation vector.
A more general calculation is presented in the Appendix C, where we have derived the general evolution equation for the cross-helicity (cf. also Yokoi and Homba 2007, Yokoi 2011, Yokoi and Balarac 2011, Yokoi and Hoshino 2011, Yokoi 2013). This equation involves mean quantities such as the mean EMF \(\boldsymbol{\mathcal{E}}\) and the mean Reynolds and Maxwell stresses \(\langle u^{\prime}_{i}u^{\prime}_{j}-b^{\prime}_{i}b^{\prime}_{j}\rangle\). For the former we utilize the result (3.6) and for the latter we take the expression obtained also via the TSDIA approach in Yokoi and Hoshino (2011), i.e.
\[-\langle u^{\prime}_{i}u^{\prime}_{j}-b^{\prime}_{i}b^{\prime}_{j}\rangle \frac{\partial\langle B\rangle_{i}}{\partial x_{j}}=\frac{7}{10}\beta\mathcal{ S}_{ij}\mathcal{M}_{ij}-\frac{7}{10}\gamma\mathrm{Tr}\left(\boldsymbol{ \mathcal{M}}^{2}\right),\]
where
\[\mathcal{S}_{ij}=\frac{\partial\langle U\rangle_{i}}{\partial x_{j}}+\frac{ \partial\langle U\rangle_{j}}{\partial x_{i}},\qquad\mathcal{M}_{ij}=\frac{ \partial\langle B\rangle_{i}}{\partial x_{j}}+\frac{\partial\langle B \rangle_{j}}{\partial x_{i}}.\]
This leads to
\[\frac{D}{Dt}\left\langle\mathbf{u}^{\prime}\cdot\mathbf{b}^{ \prime}\right\rangle= -\alpha\left(\langle\mathbf{B}\rangle\cdot\langle\mathbf{W} \rangle+2\left\langle\mathbf{B}\right\rangle\cdot\boldsymbol{\Omega}\right)+ \left(\beta+\zeta\right)\left(\langle\mathbf{J}\rangle\cdot\langle\mathbf{W} \rangle+2\left\langle\mathbf{J}\right\rangle\cdot\boldsymbol{\Omega}\right)\] \[-\gamma\left(\langle\mathbf{W}\rangle+2\boldsymbol{\Omega} \right)^{2}-\frac{7}{10}\gamma\mathrm{Tr}\left(\boldsymbol{\mathcal{M}}^{2}\right)\] \[+\frac{7}{10}\beta\mathrm{Tr}\left(\boldsymbol{\mathcal{S}} \cdot\boldsymbol{\mathcal{M}}\right)+\left(\nabla\zeta\times\langle\mathbf{B }\rangle\right)\cdot\left(\langle\mathbf{W}\rangle+2\boldsymbol{\Omega}\right)\] \[+\nabla\cdot\left[\left\langle\left(-\Pi^{\prime}+\frac{ \mathbf{u}^{\prime 2}+\mathbf{b}^{\prime 2}}{2}\right)\mathbf{b}^{\prime}\right\rangle+ \left\langle\frac{\mathbf{u}^{\prime 2}+\mathbf{b}^{\prime 2}}{2}\right\rangle \langle\mathbf{B}\rangle-\nu\left\langle\mathbf{w}^{\prime}\times\mathbf{b}^ {\prime}\right\rangle+\eta\left\langle\mathbf{u}^{\prime}\times\mathbf{j}^{ \prime}\right\rangle\right]\] \[-\left(\nu+\eta\right)\left\langle\mathbf{w}^{\prime}\cdot \mathbf{j}^{\prime}\right\rangle. \tag{4.3}\]
Of course if the turbulence is stirred with some forcing \(\mathbf{f}\) there is also another production term \(\langle\mathbf{f}\cdot\mathbf{b}^{\prime}\rangle\).
According to (3.23) the magnitude of the non-equilibrium \(\alpha\)-effect depends on both, the kinetic and cross helicities and their history. The total \(\alpha\)-effect consists of the two
contributions \(\alpha=\alpha_{\rm S}+\alpha_{\rm X}\), where the standard one can be assumed proportional to the kinetic helicity, \(\alpha_{\rm S}\approx-\tau_{t}\langle{\bf u}^{\prime}\cdot{\bf w}^{\prime} \rangle/3\). The final balance between the two contributions \(\alpha_{\rm S}\) and \(\alpha_{\rm X}\) determines whether the \(\alpha\) coefficient has the same or the opposite sign to the kinetic helicity. The effect of different terms in the equation (10) has been studied in the aforementioned works of Yokoi and Hama (2007), Yokoi (2011), Yokoi and Balarac (2011) and Yokoi (2013) under some simplifying assumptions, in particular under the neglect of the effects from the \(G_{ub}\) and \(G_{bu}\) response functions, responsible for the non-equilibrium effects studied here. Assuming that \(\alpha=-\tau_{t}\langle{\bf u}^{\prime}\cdot{\bf w}^{\prime}\rangle/3\) they showed, that the first term \(-\alpha\langle{\bf B}\rangle\cdot\langle{\bf W}\rangle\) always leads to destruction of the cross helicity. This is no longer true, when \(\Upsilon\neq 0\) in non-equilibrium turbulence, since depending on the balance between the \(\alpha_{\rm S}\) and \(\alpha_{\rm X}\) terms the term \(-\alpha\langle{\bf B}\rangle\cdot\langle{\bf W}\rangle\) in (10) may either amplify or destroy the cross helicity. Furthermore, Yokoi and Hoshino (2011) take \(\beta+\zeta\sim\langle{\bf u}^{\prime 2}\rangle\) and \(\gamma\sim\langle{\bf u}^{\prime}\cdot{\bf b}^{\prime}\rangle\) which allows them to identify another two terms that always lead to destruction of the cross-helicity, namely
\[-\gamma\left(\langle{\bf W}\rangle+2\mathbf{\Omega}\right)^{2}- \frac{7}{10}\gamma{\rm Tr}\left(\mathbf{\mathcal{M}}^{2}\right). \tag{11}\]
In addition Yokoi and Hoshino (2011) have described various situations when the terms \(\left(\beta+\zeta\right)\langle{\bf J}\rangle\cdot\langle{\bf W}\rangle\), \(\beta{\rm Tr}\left(\mathbf{\mathcal{S}}\cdot\mathbf{ \mathcal{M}}\right)\) and \(\nabla\cdot\left[\left\langle{\bf u}^{\prime 2}+{\bf b}^{\prime 2}\right\rangle \langle{\bf B}\right)\right]\) may lead to production of the cross-helicity in the geometry of the tokamak devices. Finally, in the term \(-2\alpha\left\langle{\bf B}\right\rangle\cdot\mathbf{\Omega}\) we recover the action of the mean field component parallel to the rotation vector, as in the FOSA approach.
The action of all the other terms in (10) is difficult to predict and, in general, they can either amplify or destroy the cross-helicity in developed turbulence. The final balance on the right hand side of (10) depends on many dynamical features of turbulence and is expected to be time dependent. Therefore in order to demonstrate the possibility of coexistence of the cross- and kinetic helicities in magnetized turbulence we have performed numerical simulations of the compressible version of equations (1a-c) in the presence of gravity, density stratification and an imposed magnetic field \({\bf g}\parallel\nabla\rho\parallel{\bf B}_{0}\parallel\mathbf{\Omega}\) in a periodic box with the use of the Pencil Code (Pencil Code Collaboration) with \(256^{3}\) mesh points; stress-free and perfectly conducting boundary conditions were imposed at the top and bottom boundaries; see Appendix D. The action of rotation along the direction of stratification leads to kinetic helicity (see figure 5 of Jabbari et al. 2014 for simulation results) and the action of a magnetic field along the direction of stratification leads to cross helicity (Rudiger et al. 2011).
The values of the physical parameters are as follows: working again with the unscaled magnetic field \(B=0.01\,c_{\rm s}\sqrt{\mu_{0}\bar{\rho}}\) and gravity \(g=1\,c_{\rm s}^{2}k_{1}\) (these are varied in other runs), where \(c_{\rm s}\) is the speed of sound, \(\Omega=0.5\,c_{\rm s}k_{1}\) is kept fixed in all runs, \(\bar{\rho}\) is the mean density and \(k_{1}\) is the box wavenumber; the remaining parameters, which are constant for all runs are listed in table 1, where we used the Alfven speed \(v_{A}=B/\sqrt{\mu_{0}\overline{\rho}}\) to quantify the strength of the imposed and rms magnetic fields through \(v_{A0}\) and \(v_{A}^{\rm rms}\), respectively. The results obtained for two values of the imposed magnetic field which differ by an order of magnitude at variable gravity strength are depicted in figure 1 and tables 1 and 2; see also Appendix E for additional figures. The normalized helicities, \(\langle{\bf u}^{\prime}\cdot{\bf b}^{\prime}\rangle/\sqrt{\langle{u^{\prime 2}} \rangle\langle{b^{\prime 2}}\rangle}\) and \(\langle{\bf u}^{\prime}\cdot{\bf w}^{\prime}\rangle/\sqrt{\langle{u^{\prime 2 }}\rangle\langle{w^{\prime 2}}\rangle}\) are plotted against time and they are both clearly non-zero in all the considered cases; the cross-helicity is plotted in red and the blue lines correspond to the kinetic helicity whereas their time averages are marked with white lines. In addition, only for the sake of reference, the figures also show the estimates of the non-equilibrium
effect in the form
\[\alpha_{\rm neq}\approx-\frac{1}{3}\frac{\langle{\bf u}^{\prime}\cdot{\bf b}^{ \prime}\rangle}{\sqrt{\langle u^{\prime 2}\rangle\langle b^{\prime 2}\rangle}} \int_{-\infty}^{\tau}{\rm d}\tau_{1}\left[\langle{\bf u}^{\prime}\left({\bf x },\tau\right)\cdot{\bf j}^{\prime}\left({\bf x},\tau_{1}\right)\rangle- \langle{\bf u}^{\prime}\left({\bf x},\tau_{1}\right)\cdot{\bf j}^{\prime}\left( {\bf x},\tau\right)\rangle\right], \tag{10}\]
which can be compared with the following standard estimate of the \(\alpha\)-effect, associated with the presence of the kinetic and current helicities
\[\alpha_{S}\approx-\frac{1}{3}\tau_{t}\left(\langle{\bf u}^{\prime}\cdot{\bf w} ^{\prime}\rangle-\langle{\bf b}^{\prime}\cdot{\bf j}^{\prime}\rangle\right), \tag{11}\]
where \(\tau_{t}=1/u_{\rm rms}k_{\rm f}\) is the turnover time of most energetic turbulent eddies, with \(u_{\rm rms}=\sqrt{\langle u^{\prime 2}\rangle}\) and \(k_{\rm f}=30\,k_{1}\) denoting the forcing the wavenumber (\(k_{1}=2\pi/L\) is the wavenumber of the box of length \(L\)).
Although in the numerically studied cases the statistical non-stationarity of turbulence is rather weak and the estimate of the \(\alpha_{\rm neq}\) coefficient is always at least an order of magnitude weaker than \(\alpha_{\rm S}\), the former is clearly different from zero and its relative importance seems to correlate with the magnitude of the cross-helicity. The relative
enhancement of the \(\alpha_{\rm neq}\)-effect visible for a stronger magnetic field (Run B) and weaker gravity (Run E) corresponds to the enhancement of the cross-helicity with respect to the kinetic one. Of course in the latter case (see figure 2), although the \(\alpha_{\rm neq}\) coefficient has the largest relative magnitude it also has a different sign than \(\alpha_{\rm S}\), hence in this case the non-equilibrium effects tend to suppress the standard dynamo effect. In figure 3 we see, that weak magnetic field and strong gravity have suppressed the non-equilibrium effect to a very small relative magnitude.
At smaller scale separation, i.e., for smaller values of \(k_{\rm f}\), we expect the turbulence to be more intermittent and degree of non-stationarity to be enhanced. To address this possibility, we have performed additional simulations for smaller values of \(k_{\rm f}\) with the other parameters being the same as for Run A. The results shown in table 2 do show that \(\alpha_{\rm neq}\) is twice as large when \(k_{\rm f}\) is reduced from 30 to 10, but an additional decrease of \(k_{\rm f}\) from 10 to 3 does not lead to an additional increase of \(\alpha_{\rm neq}\). To some extent, however, this is caused by the normalization by \(\alpha_{0}\), which has increased by about 60%.
We conclude, that in fully developed helical turbulence, that is in turbulence with strong kinetic helicity, the cross-helicity is rather likely to be produced as well and at least for some periods of time the two helicities can coexist.
## 5 Conclusions
We have analysed the hydromagnetic dynamo process in non-equilibrium turbulence. It was shown that in non-equilibrium MHD turbulence the effect of the infinitesimal-impulse cross responses \({\bf u}^{\prime}\leftrightarrow{\bf b}^{\prime}\) is pronounced, which vanishes in stationary state. This creates additional terms in the expression for the large-scale electromotive force.
The main conclusion is that the non-equilibrium effects in MHD turbulence modify the \(\alpha\)-effect by introducing a correction dependent on the square of the non-dimensional cross-helicity \(\Upsilon=\langle{\bf u}^{\prime}\cdot{\bf b}^{\prime}\rangle/\sqrt{\langle u^ {\prime 2}\rangle\langle b^{\prime 2}\rangle}\), the kinetic helicity and their history in the MHD turbulence, which takes the form provided in (3.23). This requires coexistence of both, the kinetic and cross-helicities in the turbulent flow. The discussion of the production mechanisms of the cross-helicity, provided in section 4 and the results of numerical
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline & \(\frac{g}{c_{\rm e}^{2}k_{\rm f}}\) & \(\frac{v_{\rm A0}}{c_{\rm a}}\) & \(\frac{\langle{\bf u}^{\prime}\cdot{\bf b}^{\prime}\rangle}{\sqrt{\langle u^{ \prime 2}\rangle\langle b^{\prime 2}\rangle}}\) & \(\frac{\langle{\bf u}^{\prime}\cdot{\bf w}^{\prime}\rangle}{\sqrt{\langle u^{ \prime 2}\rangle\langle w^{\prime 2}\rangle}}\) & \(\frac{\langle{\bf b}^{\prime}\cdot{\bf y}^{\prime}\rangle}{\sqrt{\langle u^{ \prime 2}\rangle\langle w^{\prime 2}\rangle}}\) & \(\frac{\alpha_{\rm neq}}{\alpha_{0}}\) & \(\frac{\alpha_{\rm S}}{\alpha_{0}}\) & \(\frac{u_{\rm rms}}{c_{\rm a}}\) & \(\frac{v_{\rm rms}^{\rm rms}}{c_{\rm a}}\) \\ C & 0.5 & 0.01 & \(-9.8\times 10^{-3}\) & \(-1.6\times 10^{-2}\) & \(-2.0\times 10^{-4}\) & \(7.8\times 10^{-4}\) & \(1.8\times 10^{-2}\) & 0.10 & 0.03 \\ A & 1.0 & 0.01 & \(-1.7\times 10^{-2}\) & \(-3.0\times 10^{-2}\) & \(-3.3\times 10^{-4}\) & \(1.1\times 10^{-3}\) & \(3.5\times 10^{-2}\) & 0.11 & 0.04 \\ D & 2.0 & 0.01 & \(-2.0\times 10^{-2}\) & \(-3.6\times 10^{-2}\) & \(-2.8\times 10^{-4}\) & \(6.1\times 10^{-4}\) & \(4.1\times 10^{-2}\) & 0.16 & 0.04 \\ E & 0.5 & 0.10 & \(-5.5\times 10^{-2}\) & \(-1.9\times 10^{-2}\) & \(-6.2\times 10^{-4}\) & \(-5.6\times 10^{-3}\) & \(1.5\times 10^{-2}\) & 0.08 & 0.07 \\ B & 1.0 & 0.10 & \(-5.3\times 10^{-2}\) & \(-3.2\times 10^{-2}\) & \(-1.2\times 10^{-2}\) & \(2.3\times 10^{-3}\) & \(1.8\times 10^{-2}\) & 0.09 & 0.12 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the simulation results for Runs A–E.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline & \(k_{\rm f}\) & \(\frac{\langle{\bf u}^{\prime}\cdot{\bf b}^{\prime}\rangle}{\sqrt{\langle u^{ \prime 2}\rangle\langle b^{\prime 2}\rangle}}\) & \(\frac{\langle{\bf u}^{\prime}\cdot{\bf w}^{\prime}\rangle}{\sqrt{\langle u^{ \prime 2}\rangle\langle w^{\prime 2}\rangle}}\) & \(\frac{\langle{\bf b}^{\prime}\cdot{\bf y}^{\prime}\rangle}{\sqrt{\langle u^{ \prime 2}\rangle\langle w^{\prime 2}\rangle}}\) & \(\frac{\alpha_{\rm neq}}{\alpha_{0}}\) & \(\frac{\alpha_{\rm S}}{\alpha_{0}}\) & \(\frac{u_{\rm rms}}{c_{\rm a}}\) & \(\frac{v_{\rm rms}^{\rm rms}}{c_{\rm a}}\) \\ A & 30 & \(-1.7\times 10^{-2}\) & \(-3.0\times 10^{-2}\) & \(-3.3\times 10^{-4}\) & \(1.1\times 10^{-3}\) & \(3.5\times 10^{-2}\) & 0.11 & 0.04 \\ A & 210 & \(-1.3\times 10^{-1}\) & \(-1.2\times 10^{-1}\) & \(1.3\times 10^{-3}\) & \(-1.7\times 10^{-2}\) & \(6.9\times 10^{-2}\) & 0.12 & 0.12 \\ A & 3 & \(-6.4\times 10^{-2}\) & \(-2.1\times 10^{-1}\) & \(-3.0\times 10^{-2}\) & \(-6.0\times 10^{-3}\) & \(5.5\times 10^{-2}\) & 0.19 & 0.09 \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the simulation results for Runs A, A2, and A3.
simulations, lead to a conclusion that such coexistence is possible and perhaps even ubiquitous in many natural systems. Simple strong production mechanisms have been identified already and thoroughly discussed in earlier works, e.g. Yokoi and Hoshino (2011).
The non-equilibrium effects in turbulence affect also other components of the mean EMF (11), that is the turbulent diffusivity \(\beta\) and the coefficients \(\zeta\) and \(\gamma\) in a non-trivial way, through the effect of the Green's cross-response functions \(G_{ub}\) and \(G_{bu}\). This interesting topic should be investigated in more detail in future studies.
## Acknowledgements
We would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme "Frontiers in dynamo theory: from the Earth to the stars" (DYT2) where much of the work on this paper was undertaken.
## Funding
KAM was supported by a subsidy from the Polish Ministry of Education and Science for the Institute of Geophysics, Polish Academy of Sciences. NY was supported by the Japan Society of the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research JP18H01212. We also acknowledge the support of the EPSRC grant no EP/R014604/1 and the Swedish Research Council (Vetenskapsradet, 2019-04234). Nordita is sponsored by Nordforsk. We acknowledge the allocation of computing resources provided by the Swedish National Allocations Committee at the Center for Parallel Computers at the Royal Institute of Technology in Stockholm and Linkoping.
## Declaration of Interests
The authors report no conflict of interest.
## Data availability statement
The data that support the findings of this study are openly available on Zenodo at doi:10.5281/zenodo.7683615 (v2023.02.28). All calculations have been performed with the Pencil Code; DOI:10.5281/zenodo.3961647.
## Author ORCID
K. A. Mizerski, [https://orcid.org/0000-0003-0106-675X](https://orcid.org/0000-0003-0106-675X)
N. Yokoi, [https://orcid.org/0000-0002-5242-7634](https://orcid.org/0000-0002-5242-7634)
A. Brandenburg, [https://orcid.org/0000-0002-7304-021X](https://orcid.org/0000-0002-7304-021X)
Appendix A Outline of the two-scale direct-interaction approximation (TSDIA) with self- and cross-interaction response functions for the velocity and magnetic fields.
The two-scale direct-interaction approximation (TSDIA) is a combination of the direct-interaction approximation (DIA) for strongly nonlinear homogeneous isotropic
turbulence and the multiple-scale analysis with the derivative expansion with respect to the large-scale inhomogeneity. The TSDIA provides a powerful tool for investigating strongly-nonlinear turbulence with large-scale inhomogeneities. In applying the TSDIA scheme to the magnetohydrodynamic turbulence, the Elsasser variable formulation has been often adopted. In this formulation, symmetries of the velocity and magnetic-field equations are fully utilized, which reduces the complexities in treating the original MHD equations. The correspondence between the Elsasser variable formulation and the usual velocity-magnetic-field formulation in the TSDIA has been discussed in some literature (Yoshizawa, 1998, Hamba and Sato, 2008, Yokoi, 2013). Here, we present the outline of the TSDIA formulation under the velocity and magnetic-field variables with special references to the self- and cross-interaction response functions in the MHD turbulence. For the outline of the DIA in the context of the TSDIA, the reader is referred to textbooks such as Yoshizawa (1998) and Yokoi (2020).
Wave-number space equations We introduce the Fourier representation concerning the fast space variable \(\mathbf{\xi}\) as
\[f^{\prime}(\mathbf{\xi},\mathbf{X};\tau,T)=\int d\mathbf{k}f(\mathbf{k},\mathbf{X}; \tau,T)\exp[-i\mathbf{k}\cdot(\mathbf{\xi}-\langle\mathbf{U}\rangle\tau)], \tag{10}\]
where the Fourier transform of the fast variable is taken in the frame co-moving with the local mean velocity \(\langle\mathbf{U}\rangle\). Hereafter, for the sake of simplicity of notation, the arguments of the slow variable for the fluctuation field \(f(\mathbf{\xi},\mathbf{X};\tau,T)\) is suppressed and just denoted as \(f(\mathbf{\xi};\tau)\).
The system of two-scale differential equations under the velocity and magnetic-field variables in the wavenumber space is written as
\[\frac{\partial u^{i}(\mathbf{k};\tau)}{\partial\tau}+\nu k^{2}u^{ i}(\mathbf{k};\tau)+ik^{j}\langle B\rangle^{j}b^{i}(\mathbf{k};\tau)\] \[-iM^{ij\ell}(\mathbf{k})\iint d\mathbf{p}d\mathbf{q}\ \delta(\mathbf{k}-\mathbf{p}-\mathbf{q})\times\left[u^{j}(\mathbf{p};\tau)u^{ \ell}(\mathbf{q};\tau)-b^{j}(\mathbf{p};\tau)b^{\ell}(\mathbf{q};\tau)\right]\] \[=\delta\left[-D^{ij}(\mathbf{k})\frac{\widehat{D}u^{j}(\mathbf{k} ;\tau)}{DT_{1}}-D^{ij}(\mathbf{k})u^{m}(\mathbf{k};\tau)\left(\frac{\partial \langle U\rangle^{j}}{\partial X^{m}}+\epsilon^{mj\ell}\Omega_{0}^{\ell}\right)\right.\] \[\left.\qquad\qquad+\langle B\rangle^{j}\frac{\partial b^{i}( \mathbf{k};\tau)}{\partial X_{1}^{j}}+D^{ij}(\mathbf{k})b^{m}(\mathbf{k};\tau )\frac{\partial\langle B\rangle^{j}}{\partial X^{m}}\right], \tag{11}\]
\[-ik^{j}u^{j}(\mathbf{k};\tau)+\delta\frac{\partial u^{j}(\mathbf{k};\tau)}{ \partial X^{j}}=0, \tag{12}\]
\[\frac{\partial b^{i}(\mathbf{k};\tau)}{\partial\tau}+\eta k^{2}b^{ i}(\mathbf{k};\tau)+ik^{j}\langle B\rangle^{j}u^{i}(\mathbf{k};\tau)\] \[+iN^{ij\ell}(\mathbf{k})\iint d\mathbf{p}d\mathbf{q}\ \delta(\mathbf{k}-\mathbf{p}-\mathbf{q})\times\left[b^{j}(\mathbf{p};\tau)u^{ \ell}(\mathbf{q};\tau)-u^{j}(\mathbf{p};\tau)b^{\ell}(\mathbf{q};\tau)\right]\] \[=\delta\left[-D^{ij}(\mathbf{k})\frac{\widehat{D}b^{j}(\mathbf{k} ;\tau)}{DT_{1}}+D^{ij}(\mathbf{k})b^{m}(\mathbf{k};\tau)\left(\frac{\partial \langle U\rangle^{j}}{\partial X^{m}}+\epsilon^{mj\ell}\Omega_{0}^{\ell}\right)\right.\] \[\left.\qquad\qquad+\langle B\rangle^{j}\frac{\partial u^{i}( \mathbf{k};\tau)}{\partial X_{1}^{j}}-D^{ij}(\mathbf{k})u^{m}(\mathbf{k};\tau )\frac{\partial\langle B\rangle^{j}}{\partial X^{m}}\right], \tag{13}\]
\[-ik^{j}b^{j}(\mathbf{k};\tau)+\delta\frac{\partial b^{j}(\mathbf{k};\tau)}{ \partial X^{j}}=0, \tag{14}\]
where
\[\left(\nabla\mathbf{x}_{\mathrm{I}},\frac{D}{DT_{\mathrm{I}}}\right)=\exp\left(-i \mathbf{k}\cdot\mathbf{U}\tau\right)\left(\nabla\mathbf{x},\frac{D}{DT}\right) \exp\left(i\mathbf{k}\cdot\mathbf{U}\tau\right) \tag{10}\]
is the differential operators in the interaction representation. Here in (11) and (12),
\[M^{ijk}(\mathbf{k})=\frac{1}{2}\left(k^{j}D^{ik}(\mathbf{k})+k^{k}D^{ij}( \mathbf{k})\right), \tag{11}\]
with the solenoidal projection operator
\[D^{ij}(\mathbf{k})=\delta^{ij}-\frac{k^{i}k^{j}}{k^{2}}, \tag{12}\]
and
\[N^{ijk}(\mathbf{k})=k^{j}\delta^{ik}-k^{k}\delta^{ij}. \tag{13}\]
The operators \(M\) and \(N\) are point vertices showing the wave-number conservation among the nonlinear mode coupling with \(\delta(\mathbf{k}-\mathbf{p}-\mathbf{q})\).
In (11) and (12), to keep the material derivatives objective (invariant with respect to rotations), we adopt a co-rotational derivative
\[\frac{\widehat{D}{u^{\prime}}^{i}}{DT}=\frac{\partial{u^{\prime}}^{i}}{ \partial T}+\langle U\rangle^{j}\frac{\partial{u^{\prime}}^{i}}{\partial X^ {j}}+\epsilon^{jik}\Omega_{0}^{k}{u^{\prime}}^{j} \tag{14}\]
with
\[\boldsymbol{\Omega}_{0}=\boldsymbol{\Omega}/\delta \tag{15}\]
in place of the Lagrange or advective derivative
\[\frac{{D{u^{\prime}}^{i}}}{DT}=\frac{\partial{u^{\prime}}^{i}}{\partial t}+ \langle U\rangle^{j}\frac{\partial{u^{\prime}}^{i}}{\partial x^{j}}, \tag{16}\]
which is not objective with respect to a rotation.
Scale-parameter expansion We expand a field \(f(\mathbf{k};\tau)\) with respect to the scale parameter \(\delta\), and further expand each field by the external field (the mean magnetic field in the present case) as
\[f^{i}(\mathbf{k};\tau) = \sum_{n=0}^{\infty}\delta^{n}f^{i}_{n}(\mathbf{k};\tau)-\sum_{n=0 }^{\infty}\delta^{n+1}i\frac{k^{i}}{k^{2}}\frac{\partial}{\partial X^{j}_{ \mathrm{I}}}f^{j}_{n}(\mathbf{k};\tau) \tag{17}\] \[= \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\delta^{n}f^{i}_{nm}( \mathbf{k};\tau)-\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\delta^{n+1}i\frac{k^ {i}}{k^{2}}\frac{\partial}{\partial X^{j}_{\mathrm{I}}}f^{j}_{nm}(\mathbf{k}; \tau).\]
In this two-scale formulation, inhomogeneities and anisotropies enter with the scale parameter \(\delta\) and the external parameters \(\langle\mathbf{B}\rangle\) in higher-order fields. The lowest-order fields \(f_{00}\) fields correspond to the homogeneous and isotropic turbulence.
Using the expansion (17), we write the equations of each order in matrix form. With the abbreviated form of the spectral integral
\[\int_{\Delta}=\iint d\mathbf{p}d\mathbf{q}\ \delta(\mathbf{k}-\mathbf{p}- \mathbf{q}), \tag{18}\]
the \(f_{00}({\bf k};\tau)\) equations are given as
\[\left(\begin{array}{c}0\\ 0\end{array}\right)=\left(\begin{array}{cc}\frac{\partial}{\partial\tau}+\nu k^{ 2}&0\\ 0&\frac{\partial}{\partial\tau}+\eta k^{2}\end{array}\right)\left(\begin{array}[] {c}u^{i}_{00}({\bf k};\tau)\\ b^{i}_{00}({\bf k};\tau)\end{array}\right)\] \[+i\left(\begin{array}{cc}-M^{ij\ell}({\bf k})\int_{\Delta}u^{j} _{00}({\bf p};\tau)&M^{ij\ell}({\bf k})\int_{\Delta}b^{j}_{00}({\bf p};\tau)\\ N^{ij\ell}({\bf k})\int_{\Delta}b^{j}_{00}({\bf p};\tau)&-N^{ij\ell}({\bf k}) \int_{\Delta}u^{j}_{00}({\bf p};\tau)\end{array}\right)\left(\begin{array}[] {c}u^{\ell}_{00}({\bf q};\tau)\\ b^{\ell}_{00}({\bf q};\tau)\end{array}\right), \tag{10}\]
the \(f_{01}({\bf k};\tau)\) equations are given as
\[\left(\begin{array}{cc}\frac{\partial}{\partial\tau}+\nu k^{2} &0\\ 0&\frac{\partial}{\partial\tau}+\eta k^{2}\end{array}\right)\left(\begin{array} []{c}u^{i}_{01}({\bf k};\tau)\\ b^{i}_{01}({\bf k};\tau)\end{array}\right)\] \[+i\left(\begin{array}{cc}-2M^{ij\ell}({\bf k})\int_{\Delta}u^{j }_{00}({\bf p};\tau)&2M^{ij\ell}({\bf k})\int_{\Delta}b^{j}_{00}({\bf p};\tau) \\ N^{ij\ell}({\bf k})\int_{\Delta}b^{j}_{00}({\bf p};\tau)&-N^{ij\ell}({\bf k}) \int_{\Delta}u^{j}_{00}({\bf p};\tau)\end{array}\right)\left(\begin{array}[] {c}u^{\ell}_{01}({\bf q};\tau)\\ b^{\ell}_{01}({\bf q};\tau)\end{array}\right)\] \[=-ik^{j}\langle B\rangle^{j}\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{c}u^{i}_{00}({\bf k};\tau)\\ b^{i}_{00}({\bf k};\tau)\end{array}\right)\equiv\left(\begin{array}{c}F^{i}_ {01u}\\ F^{i}_{01b}\end{array}\right), \tag{11}\]
and the \(f_{10}({\bf k};\tau)\) equations are
\[\left(\begin{array}{cc}\frac{\partial}{\partial\tau}+\nu k^{2} &0\\ 0&\frac{\partial}{\partial\tau}+\eta k^{2}\end{array}\right)\left(\begin{array} []{c}u^{i}_{10}({\bf k};\tau)\\ b^{i}_{10}({\bf k};\tau)\end{array}\right)\] \[+i\left(\begin{array}{cc}-2M^{ij\ell}({\bf k})\int_{\Delta}u^{j }_{00}({\bf p};\tau)&2M^{ij\ell}({\bf k})\int_{\widehat{\cal A}}b^{j}_{00}({ \bf p};\tau)\\ N^{ij\ell}({\bf k})\int_{\Delta}b^{j}_{00}({\bf p};\tau)&-N^{ij\ell}({\bf k}) \int_{\Delta}u^{j}_{00}({\bf p};\tau)\end{array}\right)\left(\begin{array}[] {c}u^{\ell}_{10}({\bf q};\tau)\\ b^{\ell}_{10}({\bf q};\tau)\end{array}\right)\] \[=\langle B\rangle^{j}\frac{\partial}{\partial X^{j}_{1}}\left( \begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{c}u^{i}_{00}({\bf k})\\ b^{i}_{00}({\bf k})\end{array}\right)-D^{ij}({\bf k})\frac{\widehat{D}}{DT_{1}} \left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)\left(\begin{array}{c}u^{j}_{00}({\bf k})\\ b^{j}_{00}({\bf k})\end{array}\right)\] \[+\left(\begin{array}{cc}-D^{ij}({\bf k})\left(\frac{\partial \langle U\rangle^{j}}{\partial X^{\ell}}+\epsilon^{fjn}\Omega^{n}_{0}\right)&D^ {ij}({\bf k})\frac{\partial\langle B\rangle^{j}}{\partial X^{\ell}}\\ -D^{ij}({\bf k})\frac{\partial\langle B\rangle^{j}}{\partial X^{\ell}}&D^{ij}({ \bf k})\left(\frac{\partial\langle U\rangle^{j}}{\partial X^{\ell}}+\epsilon^ {fjn}\Omega^{n}_{0}\right)\end{array}\right)\left(\begin{array}{c}u^{\ell}_ {00}({\bf k};\tau)\\ b^{\ell}_{00}({\bf k};\tau)\end{array}\right)\] \[\equiv\left(\begin{array}{c}F^{i}_{10u}\\ F^{i}_{10b}\end{array}\right), \tag{12}\]
where, \(F_{01u}\), \(F_{01b}\), \(F_{10u}\), and \(F_{10b}\) denote each component of the second right-hand sides (r.h.s.) of (12) and (12). They can be regarded as the forcing for the evolution equations of \(f_{01}({\bf k};\tau)\) and \(f_{10}({\bf k};\tau)\), respectively.
Introduction of Green's functions For the purpose of solving these differential equations, we introduce the Green's functions associated with (12). We consider the response of the turbulence to an infinitesimal disturbance. Reflecting the structure of
the MHD equations and the field expansion (A.13), the left-hand side of the linearized differential equations for the Green's function is in the same form as the l.h.s. of (A.16) and (A.17) or the differential operators to the \(f_{01}({\bf k};\tau)\) and \(f_{10}({\bf k};\tau)\) fields. In order to treat mutual interaction between the velocity and magnetic field, we consider four Green's functions; the Green function \(G_{uu}\) representing the response of the velocity field \({\bf u}\) to the velocity perturbation \({\bf u}\), \(G_{ub}\) the response of \({\bf u}\) to the magnetic perturbation \({\bf b}\), \(G_{bu}\) the response of \({\bf b}\) to the velocity perturbation \({\bf u}\), and \(G_{bb}\) the response of magnetic field \({\bf b}\) to the magnetic perturbation \({\bf b}\). From the l.h.s. of (A.16) and (A.17) we construct the system of equations representing the responses to the infinitesimal forcing. It follows that these four Green's functions should be defined by their evolution equations as
\[\left(\begin{array}{cc}\frac{\partial}{\partial\tau}+\nu k^{2}& 0\\ 0&\frac{\partial}{\partial\tau}+\eta k^{2}\end{array}\right)\left(\begin{array} []{cc}G_{uu}^{ij}&G_{bu}^{ij}\\ G_{ub}^{ij}&G_{bb}^{ij}\end{array}\right)\] \[+i\left(\begin{array}{cc}-2M^{ikm}\int_{\Delta}u_{00}^{k}&2M^{ ikm}\int_{00}^{k}\\ N^{ikm}\int_{\Delta}b_{00}^{k}&-N^{ikm}\int_{\Delta}^{\Delta}u_{00}^{k}\end{array} \right)\left(\begin{array}{cc}G_{uu}^{mj}&G_{bu}^{mj}\\ G_{ub}^{mj}&G_{bb}^{mj}\end{array}\right)=\delta^{ij}\delta(\tau-\tau^{\prime}) \left(\begin{array}{cc}1&0\\ 0&1\end{array}\right).\] (A.18)
Considering that the r.h.s. of (A.16) and (A.17) are the force terms, we formally solve \(f_{01}\) and \(f_{10}\) fields with the aid of the Green's functions. The \(f_{01}\) fields are expressed as
\[\left(\begin{array}{c}u_{01}^{i}\\ b_{01}^{i}\end{array}\right)=\int_{-\infty}^{\tau}\!\!d\tau_{1}\left(\begin{array} []{cc}G_{uu}^{ij}&G_{ub}^{ij}\\ G_{bu}^{ij}&G_{bb}^{ij}\end{array}\right)\left(\begin{array}{c}F_{01u}^{j} \\ F_{01b}^{j}\end{array}\right).\] (A.19)
Note that \({\bf u}_{01}\) and \({\bf b}_{01}\) are expressed the \({\bf b}_{00}\) and \({\bf u}_{00}\) coupled with the mean magnetic field \(\langle{\bf B}\rangle\), respectively. As this result, \({\bf u}_{01}\) and \({\bf b}_{01}\) multiplied by \({\bf b}_{00}\) and \({\bf u}_{00}\) in an external product manner will not contribute to the EMF.
On the other hand, the \(f_{10}\) fields are expressed as
\[\left(\begin{array}{c}u_{10}^{i}\\ b_{10}^{i}\end{array}\right)=\int_{-\infty}^{\tau}\!\!d\tau_{1}\left(\begin{array} []{cc}G_{uu}^{ij}&G_{ub}^{ij}\\ G_{bu}^{ij}&G_{bb}^{ij}\end{array}\right)\left(\begin{array}{c}F_{10u}^{j} \\ F_{10b}^{j}\end{array}\right).\] (A.20)
Statistical assumption on the basic fields We assume that the basic or lowest-order fields are homogeneous and isotropic.
\[\frac{\left\langle\vartheta_{00}^{i}({\bf k};\tau)\chi_{00}^{j}({\bf k}^{ \prime};\tau^{\prime})\right\rangle}{\delta({\bf k}+{\bf k}^{\prime})}=D^{ ij}({\bf k})Q_{\vartheta\chi}({\bf k};\tau,\tau^{\prime})+\frac{i}{2}\frac{k^{ \ell}}{k^{2}}e^{ij\ell}H_{\vartheta\chi}({\bf k};\tau,\tau^{\prime}),\] (A.21)
where \(\boldsymbol{\vartheta}_{00}\) and \(\boldsymbol{\chi}_{00}\) represent one of \({\bf u}_{00}\) and \({\bf b}_{00}\), and the indices \(\vartheta\) and \(\chi\) do one of \(u\) and \(b\). The Green's functions are written as
\[\langle G_{\vartheta\chi}^{ij}({\bf k};\tau,\tau^{\prime})\rangle=D^{ij}({ \bf k})G_{\vartheta\chi}({\bf k};\tau,\tau^{\prime}).\] (A.22)
The spectral functions, \(Q_{uu}\), \(Q_{bb}\), \(Q_{ub}\), \(H_{uu}\), \(H_{bb}\), \(H_{ub}\), and \(H_{bu}\), are related to the turbulent statistical quantities (the turbulent kinetic energy, magnetic energy, cross helicity, kinetic helicity, electric-current helicity, torsional correlations between velocity and magnetic field) of the basic or lowest-order fields as
\[\int d{\bf k}\ Q_{uu}(k;\tau,\tau)=\langle{\bf u^{\prime}_{00}}^{2}\rangle/2,\] (A.23)
\[\int d{\bf k}\ Q_{bb}(k;\tau,\tau)=\langle{\bf b}^{\prime}_{00}{}^{2}\rangle/2, \tag{100}\]
\[\int d{\bf k}\ Q_{ub}(k;\tau,\tau)=\langle{\bf u}^{\prime}_{00}\cdot{\bf b}^{ \prime}_{00}\rangle, \tag{101}\]
\[\int d{\bf k}\ H_{uu}(k;\tau,\tau)=\langle{\bf u}^{\prime}_{00}\cdot{\bf\omega }^{\prime}_{00}\rangle, \tag{102}\]
\[\int d{\bf k}\ H_{bb}(k;\tau,\tau)=\langle{\bf b}^{\prime}_{00}\cdot{\bf j}^{ \prime}_{00}\rangle, \tag{103}\]
\[\int d{\bf k}\ H_{ub}(k;\tau,\tau)=\langle{\bf u}^{\prime}_{00}\cdot{\bf j}^{ \prime}_{00}\rangle, \tag{104}\]
\[\int d{\bf k}\ H_{bu}(k;\tau,\tau)=\langle{\bf b}^{\prime}_{00}\cdot{\bf\omega }^{\prime}_{00}\rangle. \tag{105}\]
Calculation of the electromotive force (EMF) The turbulent electromotive force (EMF) is expressed in terms of the wave-number representation of the velocity and magnetic-field as
\[E^{i}_{\rm M}\equiv\epsilon^{ijk}\langle u^{\prime j}b^{\prime k}\rangle= \epsilon^{ijk}\int d{\bf k}\ \langle u^{j}({\bf k};\tau)b^{k}({\bf k}^{\prime};\tau)\rangle/\delta({\bf k}+{ \bf k}^{\prime}). \tag{106}\]
Using the results of (102) and (103), we calculate the velocity-magnetic-field correlation up to the \(f_{01}g_{00}\) and \(f_{10}g_{00}\) orders as
\[\langle u^{j}b^{k}\rangle=\langle u^{j}_{00}b^{k}_{00}\rangle+\langle u^{j}_{ 01}b^{k}_{00}\rangle+\langle u^{j}_{00}b^{k}_{01}\rangle+\delta\langle u^{j}_ {10}b^{k}_{00}\rangle+\delta\langle u^{j}_{00}b^{k}_{10}\rangle+\cdots. \tag{107}\]
In the direct-interaction approximation (DIA) formalism, the lowest-order spectral functions \(Q_{uu}\), \(Q_{bb}\), \(Q_{ub}\), \(H_{uu}\), \(H_{bb}\), \(H_{ub}\), and \(H_{bu}\), and the lowest-order Green's functions \(G_{uu}\), \(G_{bb}\), \(G_{ub}\), and \(G_{bu}\) are replaced with their exact counterparts, \(\tilde{Q}_{uu}\), \(\tilde{Q}_{bb}\), \(\cdots\), and \(\tilde{G}_{uu}\), \(\tilde{G}_{bb}\), \(\cdots\), respectively. Under this renormalization procedure on the propagators (spectral and response functions), important turbulent correlation functions are calculated. For the sake of simplicity, hereafter, the tilde denoting an exact propagator will be omitted as \(\tilde{Q}_{uu}\to Q_{uu}\), \(\tilde{G}_{uu}\to G_{uu}\), etc.
Here we present the final results of the turbulent EMF as
\[\langle{\bf u}^{\prime}\times{\bf b}^{\prime}\rangle=\alpha\langle{\bf B} \rangle-(\beta+\zeta)\nabla\times\langle{\bf B}\rangle-(\nabla\zeta)\times \langle{\bf B}\rangle+\gamma\left(\langle{\bf W}\rangle+2\mathbf{ \Omega}\right), \tag{108}\]
where transport coefficients \(\alpha\), \(\beta\), \(\zeta\), and \(\gamma\) are given as
\[\alpha=\frac{1}{3}\left[-I\{G_{bb},H_{uu}\}+I\{G_{uu},H_{bb}\}-I\{G_{bu},H_{ub }\}+I\{G_{ub},H_{bu}\}\right], \tag{109}\]
\[\beta=\frac{1}{3}\left[I\{G_{bb},Q_{uu}\}+I\{G_{uu},Q_{bb}\}-I\{G_{bu},Q_{ub}\} -I\{G_{ub},Q_{bu}\}\right], \tag{110}\]
\[\zeta=\frac{1}{3}\left[I\{G_{bb},Q_{uu}\}-I\{G_{uu},Q_{bb}\}+I\{G_{bu},Q_{ub} \}-I\{G_{ub},Q_{bu}\}\right], \tag{111}\]
\[\gamma=\frac{1}{3}\left[I\{G_{bb},Q_{ub}\}+I\{G_{uu},Q_{bu}\}-I\{G_{bu},Q_{uu} \}-I\{G_{ub},Q_{bb}\}\right] \tag{112}\]
with the abbreviate form of integral
\[I\{A,B\}=\int d{\bf k}\int_{-\infty}^{\tau}\!\!d\tau_{1}A(k;\tau,\tau_{1})B(k; \tau,\tau_{1}). \tag{113}\]
## Appendix B Cross helicity and \(\langle{\bf u}^{\prime}\cdot{\bf j}^{\prime}\rangle\) under FOSA
In the presence of the Coriolis force and under the 'first-order smoothing approximation' (FOSA), in the Fourier space the linearised equations take the form
\[\left(-{\rm i}\omega+\nu k^{2}\right)\hat{u}_{i}({\bf q})+2\Omega\epsilon_{i3j} \hat{u}_{j}({\bf q})= \hat{f}_{i}({\bf q})+{\rm i}{\bf k}\cdot\langle{\bf B}\rangle\,\hat{b}_{ i}({\bf q}), \tag{10}\]
\[\left(-{\rm i}\omega+\eta k^{2}\right)\hat{b}_{i}({\bf q})= {\rm i}{\bf k}\cdot\langle{\bf B}\rangle\,\hat{u}_{i}({\bf q}), \tag{11}\]
where the the forcing is assumed Gaussian with zero mean, homogeneous, stationary, and isotropic
\[\left\langle\hat{f}_{i}({\bf k},\omega)\hat{f}_{j}({\bf k}^{\prime},\omega^{ \prime})\right\rangle=\left[\frac{D_{0}}{k^{3}}P_{ij}({\bf k})+{\rm i}\frac{D _{1}}{k^{5}}\epsilon_{ijk}k_{k}\right]\delta({\bf k}+{\bf k}^{\prime})\delta (\omega+\omega^{\prime}), \tag{12}\]
and \(P_{ij}({\bf k})=\delta_{ij}-k_{i}k_{j}/k^{2}\) is the projection operator on the plane perpendicular to the wave vector \({\bf k}\). Introducing
\[\gamma_{\nu}=-{\rm i}\omega+\nu k^{2},\qquad\gamma_{\eta}=-{\rm i}\omega+\eta k ^{2}, \tag{13}\]
and considering the weak seed field limit defined by
\[\langle{\bf B}\rangle^{2}\ll\langle{\bf U}\rangle^{2}\,,\quad\mbox{hence also}\quad\left\langle{\bf b}^{\prime 2}\right\rangle\ll\left\langle{ \bf u}^{\prime 2}\right\rangle, \tag{14}\]
the equations reduce to
\[\hat{u}_{i}({\bf q})\approx\mathfrak{G}_{ij}\hat{f}_{j}^{>}({\bf q}), \tag{15}\]
\[\hat{b}_{i}({\bf q})\approx{\rm i}\frac{{\bf k}\cdot\langle{\bf B}\rangle}{ \gamma_{\eta}}\mathfrak{G}_{ij}\hat{f}_{j}({\bf q}), \tag{16}\]
where
\[\mathfrak{G}_{ij}=\frac{1}{\gamma_{\nu}^{2}+4\Omega^{2}\frac{k_{i}^{2}}{k^{2} }}\left[\gamma_{\nu}\delta_{ij}-2\Omega\epsilon_{i3j}+2\Omega\frac{k_{i}k_{m} }{k^{2}}\epsilon_{jm3}\right]. \tag{17}\]
The cross-helicity takes the form
\[\left\langle h_{ub}\right\rangle = \left\langle u_{i}^{\prime}({\bf x},t)b_{i}^{\prime}({\bf x},t)\right\rangle \tag{18}\] \[= {\rm i}\int{\rm d}^{4}q\int{\rm d}^{4}q^{\prime}{\rm e}^{{\rm i} \left[({\bf k}+{\bf k}^{\prime})\cdot{\bf x}-\left(\omega+\omega^{\prime} \right)t\right]}\frac{{\bf k}^{\prime}\cdot\langle{\bf B}\rangle}{\gamma_{ \eta}\left({\bf q}^{\prime}\right)}\mathfrak{G}_{ij}\left({\bf q}\right) \mathfrak{G}_{ik}\left({\bf q}^{\prime}\right)\left\langle\hat{f}_{j}({\bf q} )\hat{f}_{k}({\bf q}^{\prime}\right)\right\rangle\] \[= -{\rm i}\left\langle B\right\rangle_{m}\int{\rm d}^{4}q\frac{k_{m }}{\gamma_{\eta}\left(-{\bf q}\right)}\mathfrak{G}_{ij}\left({\bf q}\right) \mathfrak{G}_{ik}\left(-{\bf q}\right)\left[\frac{D_{0}}{k^{3}}P_{jk}({\bf k} )+{\rm i}\frac{D_{1}}{k^{5}}\epsilon_{jks}k_{s}\right]\] \[= -8\Omega\left\langle B\right\rangle_{m}\int\frac{{\rm d}k}{k}\int _{-\infty}^{\infty}{\rm d}\omega\int_{0}^{2\pi}{\rm d}\varphi\int_{-1}^{1}{\rm d }X\frac{\omega^{2}D_{1}}{\left(\omega^{2}+\eta^{2}k^{4}\right){\cal F}( \omega,X)}\frac{k_{z}k_{m}}{k^{2}}\] \[= -16\pi D_{1}{\cal I}\left(\nu,\eta,\Omega,k\epsilon\right)\left( \langle{\bf B}\rangle\cdot\mathbf{\Omega}\right),\]
where
\[{\cal F}(\omega,X)= \left(\omega^{2}+\nu^{2}k^{4}\right)^{2}-8\Omega^{2}X^{2}\left( \omega^{2}-\nu^{2}k^{4}\right)+16\Omega^{4}X^{4}\] \[= \left(\omega^{2}-4\Omega^{2}X^{2}\right)^{2}+2\omega^{2}\nu^{2}k ^{4}+8\nu^{2}k^{4}\Omega^{2}X^{2}+\nu^{4}k^{8}>0, \tag{19}\]
and
\[D_{1}\sim\left\langle{\bf f}\cdot\nabla\times{\bf f}\right\rangle,\qquad{\cal I }\left(\nu,\eta,\Omega,k\epsilon\right)>0. \tag{20}\]
On the other hand for the scalar quantity \(\left\langle s_{uj}\right\rangle=\left\langle\mathbf{u}^{\prime}\cdot\mathbf{j}^{ \prime}\right\rangle\) this approach yields
\[\left\langle s_{uj}\right\rangle = \tag{12}\] \[= -\epsilon_{irt}\left\langle B\right\rangle_{m}\int\mathrm{d}^{4}q \int\mathrm{d}^{4}q^{\prime}\mathrm{e}^{\mathrm{i}\left[\left(\mathbf{k}+ \mathbf{k}^{\prime}\right)\cdot\mathbf{x}-\left(\omega+\omega^{\prime}\right) t\right]}\frac{k_{r}^{\prime}k_{m}^{\prime}}{\gamma_{\eta}\left(\mathbf{q}^{\prime} \right)}\mathfrak{G}_{ij}\left(\mathbf{q}\right)\mathfrak{G}_{tk}\left(\mathbf{ q}^{\prime}\right)\left\langle\hat{f}_{j}(\mathbf{q})\hat{f}_{k}(\mathbf{q}^{ \prime})\right\rangle\] \[= -\epsilon_{irt}\left\langle B\right\rangle_{m}\int\mathrm{d}^{4} q\frac{k_{r}k_{m}}{\gamma_{\eta}\left(-\mathbf{q}\right)}\mathfrak{G}_{ij} \left(\mathbf{q}\right)\mathfrak{G}_{ik}\left(-\mathbf{q}\right)\left[\frac{D_ {0}}{k^{3}}P_{jk}(\mathbf{k})+\mathrm{i}\frac{D_{1}}{k^{5}}\epsilon_{jks}k_{s }\right]\] \[= 8\Omega\left\langle B\right\rangle_{m}\int k\mathrm{d}k\int_{- \infty}^{\infty}\mathrm{d}\omega\int_{0}^{2\pi}\mathrm{d}\varphi\int_{-1}^{1} \mathrm{d}X\frac{\omega^{2}D_{0}}{\left(\omega^{2}+\eta^{2}k^{4}\right) \mathcal{F}(\omega,X)}\frac{k_{z}k_{m}}{k^{2}}\] \[= 16\pi\left(\left\langle\mathbf{B}\right\rangle\cdot\boldsymbol{ \Omega}\right)\int k\mathrm{d}k\int_{-\infty}^{\infty}\mathrm{d}\omega\int_{- 1}^{1}\mathrm{d}X\frac{\omega^{2}X^{2}D_{0}}{\left(\omega^{2}+\eta^{2}k^{4} \right)\mathcal{F}(\omega,X)}\] \[= 16\pi D_{0}\widetilde{\mathcal{I}}\left(\nu,\eta,\Omega,k_{\ell} \right)\left(\left\langle\mathbf{B}\right\rangle\cdot\boldsymbol{\Omega} \right),\]
where
\[D_{0}\sim\left\langle\mathbf{f}^{2}\right\rangle,\qquad\widetilde{\mathcal{I}} \left(\nu,\eta,\Omega,k_{\ell}\right)>0. \tag{13}\]
In the above we have used
\[\int\frac{k_{j}}{k}f\left(\cos^{2}\theta\right)\mathrm{d}\hat{\Omega}=0,\qquad \int\frac{k_{i}k_{j}k_{k}}{k^{3}}f\left(\cos^{2}\theta\right)\mathrm{d}\hat{ \Omega}=0, \tag{14}\]
\[\int\frac{k_{j}k_{n}}{k^{2}}f\left(\cos^{2}\theta\right)\mathrm{d}\hat{ \Omega}=\pi\int_{-1}^{1}f(X^{2})\left\{\delta_{jn}\left(1-X^{2}\right)+\delta_ {j3}\delta_{n3}\left(3X^{2}-1\right)\right\}\mathrm{d}X. \tag{15}\]
where \(\hat{\Omega}\) denotes the solid angle and the spherical coordinates \((k,\theta,\varphi)\) have been used (with a substitution \(X=\cos\theta\)). Furthermore, in a similar way we can calculate the kinetic helicity and turbulent energy
\[\left\langle h_{kin}\right\rangle = \left\langle u_{i}(\mathbf{x},t)w_{i}(\mathbf{x},t)\right\rangle \tag{16}\] \[= \mathrm{i}\epsilon_{ijk}\int\mathrm{d}^{4}q\int\mathrm{d}^{4}q^{ \prime}k_{j}^{\prime}\mathfrak{G}_{in}\left(\mathbf{q}\right)\mathfrak{G}_{km} \left(\mathbf{q}^{\prime}\right)\left\langle\hat{f}_{n}(\mathbf{q})\hat{f}_{ m}(\mathbf{q}^{\prime})\right\rangle\mathrm{e}^{\mathrm{i}\left[\left( \mathbf{k}+\mathbf{k}^{\prime}\right)\cdot\mathbf{x}-\left(\omega+\omega^{ \prime}\right)t\right]}\] \[= -\mathrm{i}\epsilon_{ijk}\int\mathrm{d}^{4}qk_{j}\mathfrak{G}_{ in}\left(\mathbf{q}\right)\mathfrak{G}_{km}\left(-\mathbf{q}\right)\left[\frac{D_{0}}{k^{3}}P_{ nm}(\mathbf{k})+\mathrm{i}\frac{D_{1}}{k^{5}}\epsilon_{nmp}k_{p}\right]\] \[= -\int\frac{\mathrm{d}k}{k}\int_{-\infty}^{\infty}\mathrm{d} \omega\int_{0}^{2\pi}\mathrm{d}\varphi\int_{-1}^{1}\mathrm{d}X\frac{2D_{1} \left(\omega^{2}+\nu^{2}k^{4}+4\Omega^{2}X^{2}\right)}{\left(\omega^{2}+\nu^ {2}k^{4}\right)^{2}-8\Omega^{2}X^{2}\left(\omega^{2}-\nu^{2}k^{4}\right)+16 \Omega^{4}X^{4}}\] \[= -4\pi D_{1}\mathcal{I}_{u^{2}}\left(\nu,\Omega,k_{\ell}\right)\]
\[\left\langle\mathbf{u}^{\prime 2}\right\rangle = \left\langle u_{i}(\mathbf{x},t)u_{i}(\mathbf{x},t)\right\rangle \tag{17}\] \[= \int\mathrm{d}^{4}q\int\mathrm{d}^{4}q^{\prime}\mathfrak{G}_{ in}\left(\mathbf{q}\right)\mathfrak{G}_{im}\left(\mathbf{q}^{\prime}\right)\left\langle \hat{f}_{n}(\mathbf{q})\hat{f}_{m}(\mathbf{q}^{\prime})\right\rangle\mathrm{e }^{\mathrm{i}\left[\left(\mathbf{k}+\mathbf{k}^{\prime}\right)\cdot\mathbf{x}- \left(\omega+\omega^{\prime}\right)t\right]}\] \[= \int\mathrm{d}^{4}q\mathfrak{G}_{in}\left(\mathbf{q}\right) \mathfrak{G}_{im}\left(-\mathbf{q}\right)\left[\frac{D_{0}}{k^{3}}P_{nm}( \mathbf{k})+\mathrm{i}\frac{D_{1}}{k^{5}}\epsilon_{nmp}k_{p}\right]\] \[= 2\int\frac{\mathrm{d}k}{k}\int_{-\infty}^{\infty}\mathrm{d} \omega\int_{0}^{2\pi}\mathrm{d}\varphi\int_{-1}^{1}\mathrm{d}X\frac{D_{0}\left( \omega^{2}+\nu^{2}k^{4}+4\Omega^{2}X^{2}\right)}{\left(\omega^{2}+\nu^{2}k^{4} \right)^{2}-8\Omega^{2}X^{2}\left(\omega^{2}-\nu^{2}k^{4}\right)+16\Omega^{4}X^ {4}}\] \[= 4\pi D_{0}\mathcal{I}_{u^{2}}\left(\nu,\Omega,k_{\ell}\right).\]
Note, that in the weak seed field limit (14) the turbulent energy reduces to \(\left\langle\mathbf{u}^{\prime 2}\right\rangle+\left\langle\mathbf{b}^{\prime 2}\right\rangle \approx\left\langle\mathbf{u}^{\prime 2}\right\rangle\). We can now utilize the above results to show
\[\left\langle\mathbf{u}^{\prime}\cdot\mathbf{j}^{\prime}\right\rangle= \left(\frac{\mathcal{I}_{uj}\left(\nu,\eta,\Omega,k_{\ell}\right) }{\mathcal{I}_{ub}\left(\nu,\eta,\Omega,k_{\ell}\right)}\frac{D_{0}^{2}}{D_{1} ^{2}}\right)\frac{\left\langle\mathbf{u}^{\prime}\cdot\mathbf{b}^{\prime} \right\rangle}{\left\langle\mathbf{u}^{\prime 2}\right\rangle}\left\langle\mathbf{u}^{ \prime}\cdot\mathbf{w}^{\prime}\right\rangle\] \[\sim\frac{\left\langle\mathbf{u}^{\prime}\cdot\mathbf{b}^{\prime }\right\rangle}{\left\langle\mathbf{u}^{\prime 2}\right\rangle}\left\langle \mathbf{u}^{\prime}\cdot\mathbf{w}^{\prime}\right\rangle \tag{15}\]
Appendix C Evolution equations for \(\left\langle\mathbf{u}^{\prime}\cdot\mathbf{b}^{\prime}\right\rangle\) and \(\left\langle\mathbf{u}^{\prime}\cdot\mathbf{j}^{\prime}\right\rangle\)
Utilizing the evolution equations
\[\frac{D\mathbf{u}^{\prime}}{Dt}=-\nabla\Pi^{\prime}-2\boldsymbol{ \Omega}\times\mathbf{u}^{\prime}-\left(\mathbf{u}^{\prime}\cdot\nabla\right) \left\langle\mathbf{U}\right\rangle+\left(\left\langle\mathbf{B}\right\rangle \cdot\nabla\right)\mathbf{b}^{\prime}+\left(\mathbf{b}^{\prime}\cdot\nabla \right)\left\langle\mathbf{B}\right\rangle+\nu\nabla^{2}\mathbf{u}^{\prime}\] \[+\nabla\cdot\left(\mathbf{b}^{\prime}\mathbf{b}^{\prime}\right)+ \nabla\cdot\left(\left\langle\mathbf{u}^{\prime}\mathbf{u}^{\prime}\right\rangle -\left\langle\mathbf{b}^{\prime}\mathbf{b}^{\prime}\right\rangle\right), \tag{16}\]
\[\frac{D\mathbf{b}^{\prime}}{Dt}=\left(\left\langle\mathbf{B}\right\rangle \cdot\nabla\right)\mathbf{u}^{\prime}-\left(\mathbf{u}^{\prime}\cdot\nabla \right)\left\langle\mathbf{B}\right\rangle+\left(\mathbf{b}^{\prime}\cdot \nabla\right)\left\langle\mathbf{U}\right\rangle+\eta\nabla^{2}\mathbf{b}^{ \prime}+\left(\mathbf{b}^{\prime}\cdot\nabla\right)\mathbf{u}^{\prime}- \nabla\times\boldsymbol{\mathcal{E}}, \tag{17}\]
where
\[\frac{D}{Dt}=\frac{\partial}{\partial t}+\left(\left\langle\mathbf{U}\right\rangle +\mathbf{u}^{\prime}\right)\cdot\nabla, \tag{18}\]
we arrive at
\[\frac{D}{Dt}\left\langle\mathbf{u}^{\prime}\cdot\mathbf{b}^{ \prime}\right\rangle= -\boldsymbol{\mathcal{E}}\cdot\left(\left\langle\mathbf{W}\right\rangle +2\boldsymbol{\Omega}\right)-\left\langle u_{i}^{\prime}u_{j}^{\prime}-b_{i}^{ \prime}b_{j}^{\prime}\right\rangle\partial_{j}\left\langle B\right\rangle_{i}\] \[+\nabla\cdot\left[\left\langle\left(-\Pi^{\prime}+\frac{\mathbf{ u}^{\prime 2}+\mathbf{b}^{\prime 2}}{2}\right)\mathbf{b}^{\prime}\right\rangle+\left\langle \frac{\mathbf{u}^{\prime 2}+\mathbf{b}^{\prime 2}}{2}\right\rangle\left\langle\mathbf{B}\right\rangle\right]\] \[-\mu_{0}\left(\nu+\eta\right)\left\langle\mathbf{w}^{\prime} \cdot\mathbf{j}^{\prime}\right\rangle, \tag{19}\]
and
\[\frac{D}{Dt}\left\langle\mathbf{u}^{\prime}\cdot\mathbf{j}^{ \prime}\right\rangle= -\left\langle\mathbf{u}^{\prime}\times\mathbf{j}^{\prime} \right\rangle\cdot\left(\left\langle\mathbf{W}\right\rangle+2\boldsymbol{ \Omega}\right)-\left\langle w_{i}^{\prime}u_{j}^{\prime}-j_{i}^{\prime}b_{j}^{ \prime}\right\rangle\partial_{j}\left\langle B\right\rangle_{i}-\partial_{j} \left\langle U\right\rangle_{m}\left\langle\epsilon_{ijk}u_{i}^{\prime} \partial_{m}b_{k}^{\prime}\right\rangle\] \[+\left\langle\left[\left(\left\langle\mathbf{B}\right\rangle+ \mathbf{b}^{\prime}\right)\cdot\nabla\right]\mathbf{b}^{\prime}\cdot\mathbf{j }^{\prime}+\left[\left(\left\langle\mathbf{B}\right\rangle+\mathbf{b}^{ \prime}\right)\cdot\nabla\right]\mathbf{u}^{\prime}\cdot\mathbf{w}^{\prime}-u_ {i}^{\prime}\epsilon_{ijk}\partial_{j}u_{m}^{\prime}\partial_{m}b_{k}^{\prime}\right\rangle\] \[-\nabla\cdot\left\langle\Pi^{\prime}\mathbf{j}^{\prime}\right\rangle -\left(\nu-\eta\right)\left\langle\mathbf{w}^{\prime}\cdot\nabla^{2}\mathbf{b} ^{\prime}\right\rangle, \tag{20}\]
where in the last equation, apart from no-slip boundary conditions, we have also assumed vanishing of the helical quantity \(\left\langle\mathbf{w}^{\prime}\cdot\mathbf{j}^{\prime}\right\rangle\) at the boundaries.
## Appendix D Basic equations used in the compressible case
In the numerical simulations, instead of equations (1a-c), we solve the following set of equations for a compressible isothermal gas with constant sound speed \(c_{\mathrm{s}}\) for \(\mathbf{U}\), \(\rho\), and the magnetic vector potential \(\mathbf{A}\):
\[\frac{\partial\mathbf{U}}{\partial t}+\left(\mathbf{U}\cdot\nabla\right) \mathbf{U}=-c_{\mathrm{s}}^{2}\nabla\ln\rho-2\boldsymbol{\Omega}\times \mathbf{U}+\frac{1}{\rho}\mathbf{J}\times\mathbf{B}-\nu\mathbf{Q}+\mathbf{g}+ \mathbf{f}, \tag{21a}\] \[\frac{\partial\rho}{\partial t}=-\nabla\cdot(\rho\mathbf{U}),\] (21b) \[\frac{\partial\mathbf{A}}{\partial t}=\mathbf{U}\times\mathbf{B}- \eta\mu_{0}\mathbf{J}, \tag{21}\]
where
\[\mathbf{Q}=-\nabla^{2}\mathbf{U}-\frac{1}{3}\nabla\nabla\cdot\mathbf{U}-\mathbf{S} \nabla\ln\rho, \tag{10}\]
\[\mu_{0}\mathbf{J}=-\nabla^{2}\mathbf{A}+\nabla\nabla\cdot\mathbf{A}, \tag{11}\]
\[\mathbf{B}=\mathbf{B}_{0}+\nabla\times\mathbf{A}, \tag{12}\]
and
\[\mathsf{S}_{ij}=\frac{1}{2}(\partial_{i}U_{j}+\partial_{j}U_{i})-\frac{1}{3} \delta_{ij}\nabla\cdot\mathbf{U} \tag{13}\]
are the components of the traceless rate-of-strain tensor and \(\mathbf{f}\) is a random forcing function consisting of plane unpolarized waves with typical wavenumber \(k_{\mathrm{f}}\) and an amplitude such that \(u_{\mathrm{rms}}/c_{\mathrm{s}}\approx 0.1\); see table 1. Here, \(\boldsymbol{\Omega}=(0,0,\Omega)\) is the angular velocity, \(\mathbf{g}=(0,0,-g)\) is gravity, \(\mathbf{B}_{0}=(0,0,B_{0})\) is the imposed magnetic field, \(\eta\) is the magnetic diffusivity, and \(\nu\) is the kinematic viscosity, whose value is such that \(u_{\mathrm{rms}}/\nu k_{1}\approx 1000\). A resolution of \(N^{3}=256^{3}\) mesh points is then sufficient. Since we chose \(k_{\mathrm{f}}/k_{1}=30\), we have for the Reynolds number \(\mathrm{Re}\equiv u_{\mathrm{rms}}/\nu k_{\mathrm{f}}\approx 30\). For the magnetic Prandtl number we chose, as in Jabbari et al. (2014) the value \(\mathrm{Pr}_{M}\equiv\nu/\eta=0.5\), so the magnetic Reynolds number is \(\mathrm{Re}_{M}\equiv u_{\mathrm{rms}}/\eta k_{\mathrm{f}}\approx 15\). The equilibrium stratification is given by \(\ln(\rho/\rho_{0})=-z/H_{\rho}\), where \(H_{\rho}=c_{\mathrm{s}}^{2}/g\) is the density scale height.
## Appendix E Results for Runs B-E
In figure 2, we present the results for Runs B and E with a stronger magnetic field: \(B=0.1\,c_{\mathrm{s}}\sqrt{\mu_{0}\rho}\), and two values of gravity \(g=1\,c_{\mathrm{s}}^{2}k_{1}\) and \(g=0.5\,c_{\mathrm{s}}^{2}k_{1}\). Finally, in figure 3, we present the results for Run C and D with weaker magnetic field \(B=0.01\,c_{\mathrm{s}}\sqrt{\mu_{0}\rho}\), and two values of gravity: \(g=0.5\,c_{\mathrm{s}}^{2}k_{1}\) and \(g=2\,c_{\mathrm{s}}^{2}k_{1}\). |
2306.04736 | BU-CVKit: Extendable Computer Vision Framework for Species Independent
Tracking and Analysis | A major bottleneck of interdisciplinary computer vision (CV) research is the
lack of a framework that eases the reuse and abstraction of state-of-the-art CV
models by CV and non-CV researchers alike. We present here BU-CVKit, a computer
vision framework that allows the creation of research pipelines with chainable
Processors. The community can create plugins of their work for the framework,
hence improving the re-usability, accessibility, and exposure of their work
with minimal overhead. Furthermore, we provide MuSeqPose Kit, a user interface
for the pose estimation package of BU-CVKit, which automatically scans for
installed plugins and programmatically generates an interface for them based on
the metadata provided by the user. It also provides software support for
standard pose estimation features such as annotations, 3D reconstruction,
reprojection, and camera calibration. Finally, we show examples of behavioral
neuroscience pipelines created through the sample plugins created for our
framework. | Mahir Patel, Lucas Carstensen, Yiwen Gu, Michael E. Hasselmo, Margrit Betke | 2023-06-07T19:12:03Z | http://arxiv.org/abs/2306.04736v1 | # BU-CVKit: Extendable Computer Vision Framework for Species Independent Tracking and Analysis
###### Abstract
A major bottleneck of interdisciplinary computer vision (CV) research is the lack of a framework that eases the re-use and abstraction of state-of-the-art CV models by CV and non-CV researchers alike. We present here BU-CVKit, a computer vision framework that allows the creation of research pipelines with chainable Processors. The community can create plugins of their work for the framework, hence improving the re-usability, accessibility, and exposure of their work with minimal overhead. Furthermore, we provide MuSeqPose Kit, a user interface for the pose estimation package of BU-CVKit, which automatically scans for installed plugins and programmatically generates an interface for them based on the metadata provided by the user. It also provides software support for standard pose estimation features such as annotations, 3D reconstruction, re-projection, and camera calibration. Finally, we show examples of behavioral neuroscience pipelines created through the sample plugins created for our framework.
## 1 Introduction
Computer Vision has the potential to become an integral part of interdisciplinary research. The advances in sub-fields such as Pose Estimation, Object Detection, Segmentation, and 3D Reconstruction can directly impact research in sciences as diverse as Conservation Ecology, Neuroscience, Physiology, Psychology, and many more. Integration of state-of-the-art computer vision with applied sciences can be supported by open-source python packages or wrappers like OpenCV [4], TensorFlow [1], PyTorch [13], Scikit-learn [15], OpenPose [5], MMPPose [7], DeepLabCut [12], and SLEAP [16] with different levels of abstraction and complexity. However, researchers in the applied sciences usually have to rely on themselves to implement or, at best, adapt computer vision methods for their research pipeline. There is an absence of an extendable framework that provides a high-level abstraction for accessing the methods implemented through these open-source software. Such a framework could allow for easier integration of these methods in interdisciplinary pipelines, thus benefiting non-computer-vision specialists on the one hand and also providing computer vision researchers the tools to further their research on the other.
We present here BU-CVKit framework, which bridges the accessibility gap to state-of-the-art computer vision research for researchers from diverse backgrounds and application disciplines. The core idea behind BU-CVKit is to provide chainable modules that can be used to create abstract research pipelines which can be easily modified and shared with the community. We also present MuSeqPose Kit, which provides an intuitive user interface to the pose estimation sub-module of the framework.
We demonstrate the potential of our framework by implementing plugins for two state-of-the-art 2D/3D pose estimation methods, DeepLabCut [12] and OptiPose [14]. DeepLabCut is a widely-used feature-rich framework that provides state-of-the-art markerless 2D pose estimation of animals. OptiPose is a denoising auto-encoder that encodes postural dynamics to optimize coarse 3D poses. We use the plugins to create standard behavioral neuroscience pipelines.
## 2 BU-CVKit Framework
BU-CVKit is an extendable framework that provides standard functionalities such as efficient input/output, evaluation metrics, geometric transformations, camera calibration, multi-view reconstruction, and other abstractions.
To illustrate how BU-CVKit can reduce programming overhead and thus potentially accelerate computer vision research, we give the following pipeline example. We denote a 2D pose estimation method as function \(f\), a 3D reconstruction method as function \(g\), a 3D pose filtering method as function \(h\), and a data analysis method as function \(i\). BU-CVKit enables the user to design a research pipeline that outputs
\[o=i(h(g(f(data)))). \tag{1}\]
Furthermore, consider a single-step 3D-pose estimation method \(j\), where \(j\approx g*f\). A user can replace \(f\) and \(g\) without affecting the semantics or flow of the pipeline and can thus explore different methods.
In the remainder of this section, we discuss the three major modules of the framework.
### Input/Output Modules
Efficient and intuitive input/output is necessary for the application domains of computer vision. We here describe two types of input/output modules. First, BU-CVKit contains an abstract buffered VideoReader class that can be extended to provide sequential or random access to video frames using different backbone libraries. With our package, we provide buffered implementation of OpenCV [4], Deffcode [17], Decord, and an Image plugin, which supports reading a directory of images and providing them as a video stream to support the data format adopted by the datasets. We compared the throughput of the standard OpenCV video reader versus the buffered modules of BU-CVKit under different CPU loads reading high-resolution video frames, see Table 1. The buffered module performs better than the standard OpenCV video readers and the improvement is much more evident when the CPU is idle. This performance improvement may be beneficial for scientists who need to review videos of animal behavior and their analysis in real-time. Users can also create their own IO plugins, perhaps one that utilizes GPU codecs to achieve even better performance.
The second I/O Module we describe here is the pose estimation data reader module. It provides an abstract DataReader class that can be extended to provide intuitive access to 2D or 3D pose estimation data. Each instance of a person's or animal's pose is converted into a Skeleton object which in turn contains multiple Part Objects. Each Part object extends a Numpy array and therefore supports efficient vectorized operations. In addition, the Skeleton object supports further Pose-Estimation-specific features such as behavior annotations and unified arithmetic and geometric operations.
To support input/output operations with a wide area of toolkits, the module includes implementations for reading, writing, and translating flattened CSV files, our custom \(n\)-dimensional CVKit files for storing pose data, HDF5 [8] files, Mat [10] files, and 2D DeepLabCut [12] files.
### Processor Modules
The pose estimation package provides an abstract Processor class that can be extended to implement plugins for state-of-the-art computer vision methods. The instances of these Processors are chainable and, therefore, can be used to create a pipeline that takes raw data and generate the desired output. The Processors can be classified into three categories - filters, generative, and utility. The filter package contains the Processors that denoise the input data. The included dimension-independent filters are a constant acceleration Kalman filter, linear interpolation, a statistical distance filter, a moving average filter, and a velocity filter. The generative module contains the Processors that generate new data from the provided input. This module includes 3D reconstruction and reprojection processors, a kinematics generator, and plot generators. Finally, the utility module contains a file loader, a file saver, and an input statistics generator. Although these Processors are not directly used in processing or analyzing data, they are required to facilitate chaining and other utilities.
### Pose Estimation Utility Modules
The framework provides several pose estimation utility modules. The camera calibration is a standard first step in any pose estimation work. Therefore we provide a calibration module that uses EasyWand [18] to generate Direct Linear Transformation (DLT) [2, 19] coefficients and camera parameters. The 3D reconstruction and reprojection modules use DLT to estimate the 3D positions of matching 2D keypoints and mapping 3D coordinates into 2D camera planes. The metrics module for pose estimation provides the Mean Per Joint Position Error (MPJPE) [11] and dynamic percentage of correct keypoints (PCK@x) [11] metrics. Finally, the geometric transformations module enables 3D rotation, translation, and axes alignment.
## 3 Plugin Sub-system
The abstract code design of the CVKit enables researchers to extend the framework by creating plugins for their own work. The Processor class described in the previous section acts as the main entry point for plugins. We here showcase the potential of the plugin system by extending the capabilities of BU-CVKit through our three plugins. Our DeepLabCut [12] plugin provides a Processor for running a DeepLabCut model on videos and providing 2D pose predictions. Our OptiPose [14] processor takes in the
\begin{table}
\begin{tabular}{l c c} \hline \hline VideoReader & \multicolumn{2}{c}{Throughput [fps]} \\ & CPU Load & CPU Idle \\ \hline OpenCV VideoCapture(original) & 47.23 & 162.07 \\ CVReader (buffered OpenCV) & 48.59 & 193.05 \\ Deffcode & 49.30 & 193.09 \\ Decord & 64.71 & 193.42 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average frames-per-second throughput of baseline OpenCV implementation and different BU-CVKit buffered implementations, when computed over 1,000 1,024x1,024 H264-encoded frames under two CPU work modes on a 6-core Xeon processor.
3D poses generated by the reconstruction Processor and optimizes them through a pre-trained OptiPose model. It also supports generating a view direction vector that can be fed into a game engine to generate a simulated egocentric view of the subject. The resulting pipeline created by BU-CVKit is shown in Fig. 1. It starts with the DeepLabCut plugin to compute 2D predictions, the reconstruction to perform 3D Reconstruction, a series of filters, pose optimization through an OptiPose model, and viewpoint simulation.
The NeuroAnalysis plugin processes the tracking data generated from the OptiPose plugin (not shown in Fig. 1). It provides a single-ray-tracing Processor that is used by other behavioral analysis generators. With the plugin, the gaze direction of the animal can be analyzed by tracing the view direction vector to surfaces in the animal's environment. Here we make the assumption that the rodent is generally looking at the direction it is facing, i.e., where its snout is pointing. We also model the rodent's attention focus, where the attention score attenuates radially outwards from the assumed gaze point. With these tools, we can localize the rodent's approximate focus of interest on the four walls of the blue tower, shown in Fig. 1. Summative analysis then leads to heat maps that highlight to which areas on the tower the animal seemed to have paid attention most often, see Fig. 2.
The EBC-Processor helps detect Egocentric Boundary Cells (EBC), neurons that exhibit increased firing when a boundary is at a specific distance and direction relative to the subject [3]. Analytical plots, as in Fig. 3, are generally used to distinguish non-EBC patterns from EBCs. Each plot is produced for a specific cell. The plot on the left in Fig. 3 shows no discernible directional or spatial pattern in the cell activity, whereas the plot on the right exemplifies a firing pattern when the boundary is on the left of the rodent.
Animal behavioral analysis studies how an animal explores an environment under a spectrum of novel conditions. The amount of time the rodent spends occupying certain regions and the instances of rearing are used as indicators of exploration and memory [6, 9]. The Occupancy Processor can generate 2D plots that show the distribution of where the time was spent globally as well as locally near the objects of interest (Fig. 4). Similarly, the Rearing Processor detects rearing instances by monitoring selected keypoints to generate 3D bar plots of rearing frequency (Fig. 4).
We stress that a major advantage of the framework architecture is that the processors are easily replaceable. The DeepLabCut Processor and the OptiPose Processor can be
Figure 1: Chainable Processors: A Processor pipeline using the DeepLabCut [12] and OptiPose [14] plugins. (a) The raw input thermal and RGB images. (b) Predicted 2D poses generated by the DeepLabCut plugin. (c) Optimized 3D poses generated by the pose-optimization Processor from the OptiPose plugin. (d) A simulated binocular egocentric view generated by the viewpoint reconstruction Processor from the OptiPose plugin.
Figure 3: Results of a Generative plugin that combines neural recording with animal tracking to plot cellular activity and their corresponding spatial locations. The black lines indicate the trajectory of the rodent exploring an area (as in Fig.1 but without objects), and each dot is the location where the cell fired. The color of the dot represents the head direction of the rodent at the time the cell fired. (a) Example of non-deterministic cell firing. (b) Example of an Egocentric Boundary Cell firing.
Figure 2: Gaze heatmaps computed by a Generative plugin that uses ray tracing with radially attenuated focus to approximate the gaze of the animal on the blue tower shown in Fig. 1, for each wall of the tower (the west wall of the blue tower faces the red tower). The dimensions of the heatmaps correspond to the dimensions of the walls, i.e., 62 mm (width) and 247 mm (height). The rodent most often looked at the center of the East wall (hot colors).
replaced by any 2D pose estimator and 3D pose optimizer, respectively, without affecting the neuro-processors. They can also be bypassed entirely by creating a plugin for one of the single-stage 3D pose estimators. This allows researchers to try different methods without drastically changing their analysis pipeline.
## 4 MuSeqPose Kit
We provide MuseqPose Kit, a user interface to the pose-estimation sub-package of BU-CVKit. It provides a powerful video annotation widget that supports interpolating annotations across frames. The widget also provides a reprojection toolbox to mitigate the need for annotating every view. The camera calibration widget automatically picks a diverse set of synchronized annotated frames and generates the necessary files for the EasyWand Calibration package. The synchronized playback widget allows for displaying videos with real-time plots. Finally, the Pipeline widget automatically scans for the installed plugin metadata and generates a user interface to access the underlying Processors. Using this widget, users can create, visualize, execute, and save their research pipelines using this widget.
## 5 Conclusions
With BU-CVKit Framework, we have provided an extendable framework for simplifying and accelerating interdisciplinary research. We showcase the usability of such a framework by implementing plugins for state-of-the-art computer vision research and using them for behavioral neuroscience. We plan to add extendable Segmentation and Object Detection packages. The plug-and-play aspect of the Processors enables researchers to try different methods without extensively modifying their pipeline. Finally, since non-computer-science researchers may have a varying range of programming skills, we provide a powerful user interface that automatically adapts to the installed plugins while providing pose estimation features.
Figure 4: Top: Results of a behavioral analysis plugin that generates Occupancy Maps for comparing the change in behavior corresponding to the novelty of objects. Bottom: Results of a behavioral analysis plugin that generates ”rearing” plots to compare exploratory behavior under novel or familiar conditions. |
2308.15397 | Color Aesthetics: Fuzzy based User-driven Method for Harmony and
Preference Prediction | Color is the most important intrinsic sensory feature that has a powerful
impact on product sales. Color is even responsible for raising the aesthetic
senses in our brains. Account for individual differences is crucial in color
aesthetics. It requires user-driven mechanisms for various e-commerce
applications. We propose a method for quantitative evaluation of all types of
perceptual responses to color(s): distinct color preference, color harmony, and
color combination preference. Preference for color schemes can be predicted by
combining preferences for the basic colors and ratings of color harmony.
Harmonious pallets are extracted from big data set using comparison algorithms
based on fuzzy similarity and grouping. The proposed model results in useful
predictions of harmony and preference of multicolored images. For example, in
the context of apparel coordination, it allows predicting a preference for a
look based on clothing colors. Our approach differs from standard aesthetic
models, since in accounts for a personal variation. In addition, it can process
not only lower-order color pairs, but also groups of several colors. | Pakizar Shamoi, Atsushi Inoue, Hiroharu Kawanaka | 2023-08-29T15:56:38Z | http://arxiv.org/abs/2308.15397v1 | # Color Aesthetics: Fuzzy based User-driven Method for Harmony and Preference Prediction
###### Abstract
Color is the most important intrinsic sensory feature that has a powerful impact on product sales. Color is even responsible for raising the aesthetic senses in our brains. Account for individual differences is crucial in color aesthetics. It requires user-driven mechanisms for various e-commerce applications. We propose a method for quantitative evaluation of all types of perceptual responses to color(s): distinct color preference, color harmony, and color combination preference. Preference for color schemes can be predicted combining preferences for the basic colors and ratings of color harmony. Harmonious pallets are extracted from big data set using comparison algorithms based on fuzzy similarity and grouping.The proposed model results in useful predictions of harmony and preference of multicolored images. For example, in the context of apparel coordination, it allows predicting a preference for a look based on clothing colors. Our approach differs from standard aesthetic models, since in accounts for a personal variation. In addition, it can process not only lower-order color pairs, but also groups of several colors.
## I Introduction
Understanding human aesthetic preference is a challenging task that can be highly useful for a number of industries, including design [1], marketing, and fashion (e.g., refined user-driven results for visual search engines). Fashion aesthetics involves many aspects, including the color, various styles (e.g., sleeves types), materials, spatial compositions, etc. However, consumers usually judge an item within 90 seconds of viewing and initial assessments are mostly driven by colors (i.e. when a human perceives colors, a rich network of associations gets activated). So, we pay a lot of attention to color aesthetics in particular, for making personalized recommendations for images. But we need to remember that preference for harmonious color stimuli is just one factor underlying aesthetic response.
Color aesthetics involves studying of visually appealing color combinations hidden in the interior, fashion look or even some piece of art. So that a user sees a composition (of any items) and gets the aesthetic pleasure. Color is generally considered to be one of the most important and distinguishing visual features. Additionally, it is often treated as an aesthetic issue, having a significant impact on product sales, accounting for 85% of the reason why consumer purchases a product. How? By creating an impression and raising the aesthetic senses, color influences decision-making (buy or not to buy) processes in our brains.
Humans have different levels of visual sensitivity and different color perception abilities. Differentiating between millions of colors, wherein describing a whole image by only using a few colors is a challenging task. That is why the majority of shopping portals, like Amazon and eBay use TBIR, have limitations like subjectivity, manual image tagging and incompleteness. So, we need a human-consistent way to represent color images.
By adopting fuzzy set based representations and the necessary calculus for them we can solve the problem of semantic gap between low-level color visual features and high-level concepts. In our previous works [2, 3], we discuss how we use fuzzy sets to deal with the uncertainty linked to apparel's images for the online shopping coordination. Color channels distributions in our space are expressed with fuzzy membership functions [2].
We claim that color theories must be shaped by aesthetic norms, including taste (preference for single colors) and trends (context-aware harmonious palettes). We propose a technique to predict the aesthetic preference for color combinations by introducing the new variables, color harmony and color preference. Aesthetic responses to colors are highly influenced by harmony between colors, since the same color can create different impression when viewed together with different colors. We believe that concepts of _preference_ and _harmony_ and their relationship can serve as a tool for future investigations in aesthetics across multiple domains.
Difference between preference and harmony is crucial. Aesthetic preference for color combinations is mostly driven by color harmony and there are some common tendencies in defining them. However, there are also small differences in the degree to which people prefer harmony, with correlations in the range from 0.03 to 0.75. [4, 5]. In the context of e-commerce shopping, there are two main related questions:
1. How do colors of apparel's influence preference and harmony judgements?
2. How to predict a preference for a look based on colors of apparel's?
In this paper, we try to answer the above questions by predicting combination preference from components preference and harmony. We use results of experiments of Berkeley Color Project [6, 4, 5], and claim that higher individual preference
for distinct colors and higher harmony ratings imply higher preference.
We extract harmonious pallets from big data sets, test them by conducting a questionnaire and then use comparison algorithms based on fuzzy similarity. Our previous method just gave us true/false response on a harmony of a particular combination. In this work we propose the method to quantify the harmony and add the phenomenon of preference.
## II Motivation
In our previous works [2, 3] we proposed an approach based on fuzzy sets and logic towards the creation of a perceptual color space (FHSI, fuzzy HSI). FHSI colors are modelled by means of fuzzy sets defined on an HSI color space and a fuzzy partition is defined in the corresponding color feature domain (fuzzy color space). There are 92 colors in FHSI. The soft boundaries between the color categories were derived experimentally through an online survey based on human color categorization. We also defined methods for finding the perceptual difference between colors and the degree of similarity between images based on FHSI system [2, 3].
Currently we are working on developing a human-consistent image retrieval system for apparel coordination based on the FHSI perceptual color space. We faced the problem of quantitative evaluation of a color combination harmony.
Humans often experience colors not in an isolation, but in a combination. The aesthetic perception of a color group is strongly influenced by its overall harmony. Hence, it is essential to consider the congruency of chromatic compositions, rather than how much people like single colors [4].
Throughout history, it has been the contradiction in research works of color theorists studying harmony. In [7] Chevreul defined the law of simultaneous contrast of colors and proposed harmonies of analogous and contrasting colors. Another great color theorist, Itten, stated that "harmonious" color combinations are composed of closely similar chromos (e.g. tones, tints and shades), or else of different colors of the same nuance [8]. In essence, his theory defines harmonious colors as colors producing neutral gray when mixed. Furthermore, Munsell and Ostwald put ahead the idea that colors are harmonious if they have some relation in the given color space (e.g., if colors are similar in hue) [4].
As we see, the color literature is full of discrepancies. If we unite these theories into one system we will obtain that nearly every color combination can be considered as harmonious [4]. The source of this contradiction lies in the fact that there is still neither human consistent color space nor single best representation of color. There are multiple spaces that characterize the color features from different perspectives instead. Therefore, it is a very challenging task to represent, measure and process colors and their harmony.
III Evaluation of Color Harmony and Preference Preference and harmony are often used interchangeably. However, these two phenomenon are similar only in case an observer likes the colors in the combination. To avoid a confusion we need to accurately define and measure them.
### _Typology of Color Judgements_
In order to better customize the system for each user, we need to define and carefully understand the difference between _Single Color Preference (SCP)_, _Color Scheme Harmony (CSH)_, and _Color scheme preference (CSP)_. They are all three distinct types of judgments and different ways of evaluation of perceptual responses to color schemes.
#### Iii-A1 Single Color Preference
SCP reflects the contextless preference ratings. Usually, people tend to prefer some colors over others, and nearly everyone has its favourite color.
SCP measures can be obtained during or after the registration in the system. We just offer a small survey with simple visual content (colors) with questions (e.g., How much do you like the display?) and collect the responses using a rating scale (e.g., Not at all, Good, Very much, etc.). An example of Single Color Ratings can be seen in Fig. 1. The idea is to account for basic colors without a considerable loss in precision.
#### Iii-A2 Color Scheme Harmony
In contrast, a CSH reflects how strongly the colors in the combination are going well together, regardless of whether an individual likes the given combination or not. In other words, CSH indicates the harmony of the color combination as a whole.
#### Iii-A3 Color Scheme Preference
A color palette (combination, scheme, group) preference is defined as how much an individual likes a given combination of colors. So, it reflects an aesthetic preference for a given palette as a whole
SCP ratings are not enough to predict CSP. To better define palette preference we need a relational factor like harmony. The problem is how to derive it? Conventional methods are no longer valid to meet current requirements [3].
### _Harmonious Palettes Derivation_
In [6, 4, 5] it was concluded that color harmony is a function of color similarity. What is more, people tend to prefer color combinations, which contain colors that are similar in hue, cool, and desaturated. However, we believe that harmony is a very complex phenomenon and it is nearly impossible to purely define it in terms of strict rules.
We want to extract harmonious, fuzzy color-based pallets from data set. This is done by forming groups containing looks with similar color compositions. To achieve this, we use the fuzzy model we developed, formulas for fuzzy color difference, palettes similarity (described in [2, 3]).
As a data set we took 10000 images with fashion looks from various sources, including the most popular fashion sites, like polyvore.com, lookbook.nu, instyle.com, dailylook.com and
Fig. 1: Example of Single Color Ratings.
The preference was given to looks having more likes.
For each image M in a data set we perform the following:
1. Compute the fuzzy dominant color histogram of M
2. Compute the mean average perceptual difference \(Dp_{avg}\) between \(CH\) and members of each harmonious groups.
3. If minimal \(Dp_{avg}\) is more than difference threshold, form a new group and add M to it. Otherwise, add M to a group with which M has minimal \(Dp_{avg}\).
4. Choose groups having at least 100 similar looks falling into a similar fuzzy color scheme.
Processing of data set took almost 7 hours (1-2 seconds for each image, depending on resolution, current number of groups, etc.). On 8nd thousand the convergence was achieved, since almost all new coming images were falling into existing groups and very few new palettes were added during the processing of the last 2000 images.
As a result, we got 139 groups in total, 59 of them contained more than 100 images. For each group we took averaged harmonious fuzzy color palette. Some of the schemes we extracted were similar to Analogous, Contrasting, Triadic, etc. (classified by Itten [8]), but there were also schemes which are out of any rules. Examples of harmonious groups with more than 100 similar images can be seen in Fig. 2.
It needs to be emphasized that color preferences may change depending on semantic context [6]. Therefore, derived palettes provide context-aware harmony. We can perform palettes derivation for other domains - art or interior design, for example. The very essence of this generation process is that it is becoming possible due to FHSI space.
### _User Preference Prediction in Fashion Industry_
How to predict a preference for a look based on colors of apparels? As mentioned before, a preference for color schemes is influenced by preferences for the component basic colors and ratings of color harmony [6]. Let's combine preferences for the single component colors and ratings of color harmony:
\[Pref(A,B)=\frac{Pref(A)\times w_{A}+Pref(B)\times w_{B}}{w_{A}+w_{B}}+Harm(A,B) \tag{1}\]
In Eq. (1) \(w\) is a weight importance of an apparel, \(Pref(A),Pref(B)\) are user preferences for single colors \(A\) and \(B\). Eq. (1) also works for three, four, five colors as well.
Generally, aesthetic judgements of fashion looks are influenced by dominance order. For example, dress or skirts usually make more impact on an overall impression rather than accessories. That is why we need to take into account \(w\), apparel weight parameter:
* For dresses/costumes \(w=1\).
* For up & down clothes (e.g.,skirts, blouses.) \(w=0.75\).
* For shoes and bags \(w=0.5\).
* For accessories (e.g., glasses, watches) \(w=0.25\).
Now we can find \(Harm(A,B)\) between two fuzzy colors \(A\) and \(B\). If there is a palette containing both \(A\) and \(B\), then \(Harm(A,B)=1\). Otherwise we take the most similar color harmonious palette and find the similarity value (which will serve as a harmony value too in this case). Harmony of a group of fuzzy colors which is not in a knowledge base is equal to its similarity with the closest harmonious group. We use similarity measure defined in [2].
## IV Results
### _Application_
The proposed model results in useful predictions of perceived harmony and preference of multi-colored images.
In a popular application Polyvore, users create looks by dragging items from the menu. It is clear that the color of the new object you add needs to harmonize well with colors of other existing objects, but it is very difficult for a majority of people to choose an object considering such things.
Using FHSI model [2, 3] we can index the apparels database. Furthermore, the preference formula Eq. (1) allows us to sort all the commodity images according to its harmony to the given apparel or look, i.e., the more harmonious items (suiting the whole look) will be ranked more ahead. Some of the features include:
* Having measures for harmony and preference, we can sort the apparels on the right by harmony to the apparels on the left. The suggested apparels will be unique for each user, since single color ratings allow us to refine color harmony to individual color scheme preference.
* Some of the pallets have style names like modern, classic, retro, romantic, elegant, formal, etc. User can choose several styles and see the corresponding apparels only.
Fig. 3: Shopping Advisor Application Mockup.
Fig. 2: Examples of Derived Harmonious Palettes.
* User can input several favourite colors and the system selects colors which are in a harmony and we get many finished looks by the system and also use these harmonious colors to filter the apparels in the menu.
The method can also be applied to automatic labelling in large image databases [9] and customized services. One example for this is furniture coordination (e.g., a user uploads a room image and requests sofas fitting perfectly the interior).
### _Preliminary Experiments_
**Example 1**. Predict the preference of user \(X\) for the look in Fig. 8. As we defined, \(w_{dress}=1\), \(w_{bag}=0.5\)
Since the apparels were preprocessed, we know the ids of fuzzy dominant colors of the dress (A, \(12\)) and the bag (B, \(1\)). Next, suppose the single color ratings of user \(X\) areas represented in Fig. 1. So, \(Pref(A)=0.8\) and \(Pref(B)=0.5\). \(Harm(A,B)=1\), since fuzzy colors \(1\) and \(12\) are both in a color harmony palette 27, see Fig. 8. Finally, according to our preference formula Eq. (1), \(Pref(A,B)\) for User \(X\) is \(0.83\) (value is normalized to a scale [0; 1] ). Note that for an unregistered user (e.g. for a guest) \(Pref(A,B)=Harm(A,B)=1\).
**Example 2**. Predict the preference of user \(Y\) for the look in Fig. 7. Apparels weights for this example: \(w_{up}=w_{down}=0.75\), \(w_{bag}=w_{shoes}=0.5\), and \(w_{acc}=0.25\).
Suppose Eq. (6) depicts single color ratings of user \(Y\). There is no such color harmony group containing all the fuzzy colors in the look, but there is a group \(14\) which is the most similar, similarity is equal to 83%, so as harmony. According to Eq. (1), \(Pref(A,B)\) for User \(Y\) normalized to a scale \([0;1]\approx 0.8\).
## V Conclusions and Future Work
Current work presents an overview of the findings in current research focusing on quantification of the harmony and preference phenomena. The intended application is apparel online shopping coordination system.
The research shown in this paper describes our preliminary results and there are a number of next steps to take in the future. First, we plan to analyse the correlation between preference and harmony ratings through regression analysis. Specifically, the model can be validated by an experiment, where human observers are judging fashion images: whether they are harmonious or disharmonious, and whether they are liked or disliked. We can provide various samples of stimuli and explain the pattern of variation in scheme preferences. Second, we plan to perform the system evaluation by measuring the precision and recall for \(\sim\)20 queries (Eq. (2)).
\[P_{T}=\frac{\sum_{i=1}^{T}P_{i}}{T},R_{T}=\frac{\sum_{i=1}^{T}R_{i}}{T},Relevance =\frac{P_{T}}{R_{T}} \tag{2}\]
\(T\) is the number of selected queries, \(P_{i}\) = the number of relevant apparels retrieved divided by the number of apparels retrieved, \(R_{i}\) = the number of relevant apparels retrieved divided by the number of relevant apparels in DB.
Next, for each user we can compute the average difference score as the absolute value of the difference between his real preference ratings and predicted preference. Note that \(T\) is a number of looks offered to a survey participant.
\[D=\frac{\sum_{i=1}^{T}|Pref_{real}-Pref_{pred}|}{T} \tag{3}\]
Finally, knowing how to measure preference and harmony properly, we are particularly interested in how much such aesthetic preferences might covary across different semantic contexts. We need to check the palettes relevance in the different aesthetic domains in order to find out whether preferences for harmony are correlated across various domains or not. This will also be the subject of future work.
|
2305.03509 | Diffusion Explainer: Visual Explanation for Text-to-image Stable
Diffusion | Diffusion-based generative models' impressive ability to create convincing
images has garnered global attention. However, their complex structures and
operations often pose challenges for non-experts to grasp. We present Diffusion
Explainer, the first interactive visualization tool that explains how Stable
Diffusion transforms text prompts into images. Diffusion Explainer tightly
integrates a visual overview of Stable Diffusion's complex structure with
explanations of the underlying operations. By comparing image generation of
prompt variants, users can discover the impact of keyword changes on image
generation. A 56-participant user study demonstrates that Diffusion Explainer
offers substantial learning benefits to non-experts. Our tool has been used by
over 10,300 users from 124 countries at
https://poloclub.github.io/diffusion-explainer/. | Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, ShengYun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng Chau | 2023-05-04T16:14:43Z | http://arxiv.org/abs/2305.03509v3 | # Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion
###### Abstract
Diffusion-based generative models' impressive ability to create convincing images has captured global attention. However, their complex internal structures and operations often make them difficult for non-experts to understand. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements. By comparing the evolutions of image representations guided by two related text prompts over refinement timesteps, users can discover the impact of prompts on image generation. Diffusion Explainer runs locally in users' web browsers without the need for installation or specialized hardware, broadening the public's education access to modern AI techniques. Our open-sourced tool is available at: [https://poloclub.github.io/diffusion-explainer/](https://poloclub.github.io/diffusion-explainer/). A video demo is available at [https://youtu.be/Zg4gxdIWDds](https://youtu.be/Zg4gxdIWDds).
**Index Terms:** Human-centered computing--Visualization--Visualization systems and tools--Visualization toolkits;
## 1 Introduction
Diffusion-based generative models [31, 37, 43] like Stable Diffusion [43] and DALL-E [31] have captured global attention for their impressive image creation abilities, from AI developers, designers, to policymakers. The integration of Stable Diffusion in Lensa AI [35], a photo editing application that transforms selfies into different styles of artwork like amine and fantasy, led to a surge of 5.8 million downloads in the first week of December 2022 [13].
However, the popularity and progress of generative AI models have sparked ethical and social concerns [14, 15, 12, 44], such as accusations of artistic style their bitly developers of AI image generators [14, 15]. Policynakers are also discussing ways to combat malicious data generation and revise copyright policies [16, 17, 39, 1, 1]. There is an urgent need for individuals from many different fields to understand how generative AI models function and communicate effectively with AI researchers and developers [15, 20].
**Key challenges in designing learning tools for Stable Diffusion.** At the high level, Stable Diffusion iteratively refines _noise_ into a high-resolution image's vector representation, guided by a text prompt. Internally, the prompt is tokenized and encoded into vector representations by the _Text Encoder_ of the _CLIP_ neural network [36]. With text representations' guidance, Stable Diffusion improves the image quality and adherence to the prompt by incrementally refining the image's vector representation using the _UNet_ neural network [38] and the _Scheduler_ algorithm [28] to predict and remove noise. The final image representation is upscaled to a high-resolution image [25]. The crux of learning about Stable Diffusion, therefore, originates from the complex interplay between the multiple neural network subcomponents, their intricate operations, and the iterative nature of image representation refinements. Such complex interactions are challenging even for experts to comprehend [47]. While some articles [4] and video lessons [22, 5] explain Stable Diffusion, they presume knowledge of machine learning and focus on model training and mathematical details.
**Contributions.** In this work, we contribute:
* **Diffusion Explainer, the first interactive visualization tool de
signed for non-experts to learn how Stable Diffusion transforms a text prompt into a high-resolution image (Fig. 1), overcoming key design challenges in developing interactive learning tools for Stable Diffusion (SS 3). Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex structure with detailed explanations of their underlying operations (Fig. 2, Fig. 3) enabling users to fluidly transition between multiple abstraction levels through animations and interactive elements (SS 4.2).
* **Novel interactive visualization design** that enables users to discover the impacts of prompts on image generation. It compares how image representations evolve differently over refinement timesteps when guided by two related text prompts (Fig. 4), revealing how the keyword differences in prompts affect evolution trajectories that start from the same initial random noise. Since prompt engineering for Stable Diffusion has far been highly heuristic [26, 32], Diffusion Explainer provides a new way for people to gain a better understanding of the impacts of text prompts on the complex image generation process (SS 4.3).
* **An open-sourced, web-based implementation** that broadens the public's education access to modern generative AI techniques without requiring any installation, advanced computational resources, or coding skills. We develop Diffusion Explainer as a web-based tool that runs locally in users' browsers, allowing large number of concurrent users to easily learn about Stable Diffusion directly on their laptops and tablets (SS 4.1). Diffusion Explainer is open-sourced1 and available at the following public demo link: [https://poloclub.github.io/diffusion-explainer/](https://poloclub.github.io/diffusion-explainer/). A video demo is available at [https://youtu.be/Zg4gxdIWDds](https://youtu.be/Zg4gxdIWDds).
Footnote 1: [https://github.com/poloclub/diffusion-explainer](https://github.com/poloclub/diffusion-explainer)
## 2 Related Works
**Interactive Visualizations for Explaining Deep Learning.** Several web-based visualization tools, such as CNN Explainer [48], GAN Lab [24], and Adversarial-Playground [29], have been developed to help people understand deep learning. Google's Machine Learning Crash Course [19] employs Tensorflow Playground [41], which provides interactive visualizations for training simple neural networks. Moreover, various deep learning concepts are explained by many machine learning researchers and practitioners in their web articles [2, 18, 40] and blog posts [30] through the use of interactive visualizations. Inspired by the success of these previous works, we present Diffusion Explainer, an interactive visualization tool that explains text-to-image Stable Diffusion.
**Explanations for Stable Diffusion.** Online articles that explain Stable Diffusion [4, 7, 21, 46, 24] often assume that the audience has knowledge about machine learning and use jargon and mathematical equations that can be daunting for non-experts [21, 4, 8]. Tutorials in the form of Google Colab notebooks [33, 49] primarily focus on code implementation, while blog posts for beginners [46, 7] mostly address deployment and prompt engineering. To help users quickly understand how Stable Diffusion generates an image, Diffusion Explainer provides easy-to-understand explanations of its complex architecture and operations, integrating multiple abstraction levels through fluid animations and interactive elements.
## 3 Design Goals
By reviewing literature and online resources, we have identified four design goals (G1-G4) aimed at addressing the challenges people may face while learning about Stable Diffusion:
1. [leftmargin=*]
2. **Visual summary of Stable Diffusion.** Stable Diffusion is comprised of multiple model components, each with a complex structure [37, 47]. Additionally, its incremental image generation, which refines noise into the vector representation of a high-resolution image, is a cyclic process that is uncommon in neural networks. Diffusion Explainer aims to provide an overview of the model architecture and data flow to help users quickly understand its overall structure (SS 4.2).
3. **Interactive interface tightly integrating different levels of abstraction.** The image generation process of Stable Diffusion's image generation involves a complex interplay between multiple neural network subcomponents [36, 37] (Fig. 2, Fig. 3), their intricate operations, and iterative image representation refinements. Such complex interactions are challenging even for experts to comprehend [47]. To effectively explain these low-level operations and help users conceptually connect them with a high-level overview, we design Diffusion Explainer to bridge multiple abstraction levels through fluid animations and interactive elements [24, 48] (SS 4.2.1, SS 4.2.2).
4. **Visualizing how keywords in text prompts affect image generation.** Stable Diffusion incrementally refines noise into the vector representation of a high-resolution image, while being guided by a text prompt. However, the refinement process, which involves multiple iterations of intricate vector computations, can be challenging to understand [37]. Due to the lack of understanding about how text prompts impact the refinements, writing prompts has been highly heuristic [26, 32]. We aim to visualize the refinement process for two text prompts that differ only in a few keywords to enable users to compare how image representations evolve differently when guided by each prompt. (SS 4.3).
5. **Broadening access via web-based deployment.** As more and more individuals from different fields are now interested in understanding how generative AI models work [1, 16, 39, 15, 1],
Figure 2: Diffusion Explainer tightly integrates different levels of abstractions to help users conceptually connect the overview of Stable Diffusion’s structure with the underlying operations of each component. To learn how Stable Diffusion converts a text prompt into vector representations, users click the _Text Representation Generator_, which smoothly expands to (A) the _Text Operation View_, which explains how the prompt is split into tokens that are then encoded into vector representations. (B) The _Text-image Linkage Explanation_ demonstrates how Stable Diffusion connects text and image, enabling text representations to guide the image generation process.
we have developed Diffusion Explainer as a web-based tool that runs locally in users' web browsers without requiring any installation, specialized hardware, or coding skills [37]. This allows users to learn about this latest AI technology on their laptops or tablets (SS4.1).
## 4 System Design and Implementation
### Overview
Diffusion Explainer is an interactive visualization tool that explains how Stable Diffusion generates a high-resolution image from a text prompt, selected from the _Prompt Selector_ (Fig. 1A). It incorporates an animation of random noise gradually refined and a _Timestep Controller_ (Fig. 1D) that enables users to visit each refinement timestep. Diffusion Explainer consists of two views: _Architecture View_ (SS4.2) and _Refinement Comparison View_ (SS4.3). The Architecture View provides an overview of Stable Diffusion's architecture (G1), which can be expanded into details via user interactions (G2; Fig. 2, Fig. 3). The Refinement Comparison View visualizes the incremental image generation process for two related text prompts to allow users to discover how prompts affect image generation (G3). Diffusion Explainer is implemented using a standard web technology stack (HTML, CSS, JavaScript) and the D3,js [11] visualization library (G4). Diffusion Explainer has 13 text prompts based on the prompt template from _A Traveler's Guide to the Latent Space_[42]. Most prompts include popular keywords (e.g., _detailed_, _trending on artstation_) identified from literature and articles [32, 34, 9].
### Architecture View
The Architecture View provides an overview (G1; Fig. 1) of how the _Text Representation Generator_ (Fig. 1B) converts a text prompt into vector representations that guides the _Image Representation Refiner_ (Fig. 1C) to incrementally refine noise into the vector representation of a high-resolution image. Clicking on the generators provides more details about their underlying operations (G2; Fig. 2, Fig. 3).
#### 4.2.1 Text Representation Generator
The _Text Representation Generator_ (Fig. 1B) converts text prompts into vector representations. Clicking on it expands to the _Text Operation View_ (G2; Fig. 2A), that explains how the Tokenizer splits the prompt into tokens and how the Text Encoder encode the tokens into vector representations. Clicking on the Text Encoder displays the _Text-image Linkage Explanation_ (G2; Fig. 2B), which visually explains how Stable Diffusion connects text and image by utilizing the CLIP [36] text encoder to generate text representations with image-related information.
#### 4.2.2 Image Representation Refiner
The _Image Representation Refiner_ (Fig. 1C) incrementally refines random noise into the vector representation of a high-resolution image that adheres to the input text prompt. Diffusion Explainer visualizes the image representation of each refinement step in two ways: (1) decoding it as a small image using linear operations [45] and (2) upscaling it to the Stable Diffusion's output resolution (Fig. 1E). Users expands the Image Representation Refiner to access the _Image Operation View_ (G2; Fig. 3A), which explains how the UNet neural network [38] predicts the noise to be removed from the image representation to improve its adherence to the prompt. The predicted noise is weakened before removal.
The guidance scale hyperparameter, which controls the image's adherence strength to the text prompt, is described at the bottom, and further explained in the _Interactive Guidance Explanation_ (G2; Fig. 3B) through a slider that allows users to experiment with different values, to better understand how higher values lead to stronger adherence of the generated image to the text prompt.
### Refinement Comparison View
The _Refinement Comparison View_ demonstrates how Stable Diffusion generates different images based on two related text prompts, helping users understand the impact of prompts on image generation (G3; Fig. 4). Each prompt in Diffusion Explainer is paired with a prompt that differs only in a few keywords (e.g., "a cute and adorable bummy..." vs. "a cute and adorable bummy..." _pixar character_"). We use UMAP [27] to visualize the incremental refinement of image representations for each paired prompts, revealing how the keywords in prompts affect the evolution of image representations from the same initial random noise (G3).
## 5 Usage Scenarios
We present two usage scenarios for Diffusion Explainer, demonstrating how it may enhance user learning of Stable Diffusion. The scenarios highlight: (1) how practitioners can discover the impact of text prompts on image generation (SS 5.1); and (2) how non-experts can discern challenges in attributing AI-generated images (SS 5.2).
### Discovering Prompts' Impact on Image Generation
Jenny is a graphic designer at a media company who wants to use generative AI models to create images in specific artistic styles, but she is uncertain how text prompts affect image generation. In particular, she wants to experiment with different styles while maintaining object composition consistency. Jenny activates the _Refinement Comparison View_ (Fig. 4A), in Diffusion Explainer to compare two related text prompts and images generated from each. Both prompts begin with the phrase "a cute and adorable bummy", but only one includes "_in the style of cute pixar character_". The bunnies in both images have the same pose, but the _pixar_ version is more cartoon
Figure 3: Users learn how Stable Diffusion incrementally refines noise into the vector representation of a high-resolution image that adheres to the text prompt by clicking the _Image Representation Refiner_ in the high-level overview, which smoothly expands to (A) the _Image Operation View_ that demonstrates how the noise is iteratively weakened and removed from the image representation as predicted by the UNet neural network. (B) The _Interactive Guidance Explanation_ allows users to interactively experiment with different guidance scale values (0, 1, 7, 20) to better understand how higher values lead to stronger adherence of the generated image to the text prompt.
and has more vibrant colors and textures, typical of characters in Pixar animations. Curious about whether the pose preservation is a coincidence, Jenny adds the same _pixar_ phrase to prompts for an elephant and a squirrel (Fig. 4B) and notices that their poses are also preserved. Intrigued by the effect of the _pixar_ phrase on image generation, she examines the trajectories of the image representations and discovers that adding the _pixar_ phrase leads to only slight divergence.
Jenny wonders if other phrases may also similarly modify only styles while maintaining overall image compositions. To explore this, she asks her colleagues about commonly used "modifiers" keywords. Some suggest that repeating words such as "very very..." could produce better images by more reliably activating neural network regions associated with subject terms." [3, 32] Intrigued, Jenny compares the prompts "_a very very very very beautiful cityscape_" [33] and "_a beautiful cityscape_." Surprisingly, the two prompts generate significantly different images. To understand why, Jenny analyzes the image representation trajectories and observes a detachment occurring at step 24, resulting in their final representations being much farther apart. From this, she concludes that the pose preservation of the _pixar_ phrase is a unique characteristic attributable to its slight divergence and decides to identify more such keywords.
### Discerning Challenges in Attributing AI Generations
Troy is a government policymaker responsible for creating policies related to AI-generated images in the entertainment and media industries. Recently, he has received numerous concerns from artists that their artwork has been exploited by AI models to create commercial products without their consent [6]. Troy is keen to help these artists be compensated for their contributions. In his research, he has learned about emergent tools that aim to help attribute AI-generated images to human artists [10, 23], which could potentially address artists' concerns. However, before proposing any policies, he needs to understand how and if such attribution may work.
Troy starts by launching Diffusion Explainer on his laptop, arriving at the Overview that describes how Stable Diffusion transforms a text prompt into a high-resolution image (Fig. 1). He realizes that the process of generating an image is iterative and involves refining noise into a vector representation of a high-resolution image that aligns with the text prompt. Curious about how the text prompt is processed, he clicks on the _Text Representation Generator_ to expand it to the _Text Operation View_ (Fig. 2A). Here, he discovers that the prompt is split into tokens and converted into vector representations. However, he is still unsure about how text guides image generation, so he displays the _Text-image Linkage Explanation_ (Fig. 2B). Here, he learns that the text representations with image-related information act as a bridge between text and images.
Troy proceeds to explore the incremental refinement process of image representation by examining the _Image Operation View_ (Fig. 3A). He discovers that each refinement step involves noise prediction and removal; UNet, a neural network, predicts the noise in the image representation of the step. He also learns about the _guidance scale_, a hyperparameter that adjusts how well the generated image adheres to the text prompt. Intrigued by the guidance scale, Troy accesses the _Interactive Guidance Explanation_ (Fig. 3B). After experimenting with different guidance scale values, he observes that a guidance scale value of 7 generates a realistic image that closely follows the text prompt. In contrast, values of 1 and 20 result in images that are either difficult to interpret or overly exaggerated.
Troy has now gained a good understanding of the image generation process of Stable Diffusion, including the factors involved such as text prompts, guidance scale, and the link between text and image. Based on this understanding, he realizes that relying solely on image analysis, without considering text prompts, will be insufficient in determining how an artist's works have been used to create AI-generated images. Troy is of the opinion that more research is necessary to reliably identify attributions of AI-generated images.
## 6 Conclusion
We introduce Diffusion Explainer, the first interactive web-based visualization tool that explains how Stable Diffusion generates high-resolution images from text prompts. Our tool tightly integrates a visual overview of Stable Diffusion's complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements. Its novel interactive visualization design enables users to discover the impacts of prompts on image generation. Diffusion Explainer runs in modern web browsers and is open-sourced. We hope our work will inspire further research and development of visualization tools that helps enhance people's understanding of generative AI technologies so they may be used responsibly.
Figure 4: **(A) The _Refinement Comparison View_ enables users to discover the impacts of prompts on image generation by comparing how image representations evolve differently over refinement timesteps, using UMAP, when guided by two related text prompts. Adding ”_pixar_” phrase changes the generated bunny’s style to be more cartoony and vibrant in colors and textures while preserving its pose. (B) The same _pixar_ phrase consistently preserves the poses of the elephant and squirrel.** |
2307.15512 | Catching a robber on a random $k$-uniform hypergraph | The game of \emph{Cops and Robber} is usually played on a graph, where a
group of cops attempt to catch a robber moving along the edges of the graph.
The \emph{cop number} of a graph is the minimum number of cops required to win
the game. An important conjecture in this area, due to Meyniel, states that the
cop number of an $n$-vertex connected graph is $O(\sqrt{n})$. In 2016,
Pra{\l}at and Wormald [Meyniel's conjecture holds for random graphs, Random
Structures Algorithms. 48 (2016), no. 2, 396-421. MR3449604] showed that this
conjecture holds with high probability for random graphs above the
connectedness threshold. Moreoever, {\L}uczak and Pra{\l}at [Chasing robbers on
random graphs: Zigzag theorem, Random Structures Algorithms. 37 (2010), no. 4,
516-524. MR2760362] showed that on a $\log$-scale the cop number demonstrates a
surprising \emph{zigzag} behaviour in dense regimes of the binomial random
graph $G(n,p)$. In this paper, we consider the game of Cops and Robber on a
hypergraph, where the players move along hyperedges instead of edges. We show
that with high probability the cop number of the $k$-uniform binomial random
hypergraph $G^k(n,p)$ is $O\left(\sqrt{\frac{n}{k}}\, \log n \right)$ for a
broad range of parameters $p$ and $k$ and that on a $\log$-scale our upper
bound on the cop number arises as the minimum of \emph{two} complementary
zigzag curves, as opposed to the case of $G(n,p)$. Furthermore, we conjecture
that the cop number of a connected $k$-uniform hypergraph on $n$ vertices is
$O\left(\sqrt{\frac{n}{k}}\,\right)$. | Joshua Erde, Mihyun Kang, Florian Lehner, Bojan Mohar, Dominik Schmid | 2023-07-28T12:14:07Z | http://arxiv.org/abs/2307.15512v2 | # Catching a robber on a random \(k\)-uniform hypergraph
###### Abstract.
The game of _Cops and Robber_ is usually played on a graph, where a group of cops attempt to catch a robber moving along the edges of the graph. The _cop number_ of a graph is the minimum number of cops required to win the game. An important conjecture in this area, due to Meyniel, states that the cop number of an \(n\)-vertex connected graph is \(O(\sqrt{n})\). In 2016, Pralat and Wormald (Meyniel's conjecture holds for random graphs, Random Structures Algorithms. 48 (2016), no. 2, 396-421. MR3449604] showed that this conjecture holds with high probability for random graphs above the connectedness threshold. Moreover, Luczak and Pralat (Chasing robbers on random graphs: Zigzag theorem, Random Structures Algorithms. 37 (2010), no. 4, 516-524. MR2760362] showed that on a log-scale the cop number demonstrates a surprising _zigzag_ behaviour in dense regimes of the binomial random graph \(G(n,p)\). In this paper, we consider the game of Cops and Robber on a hypergraph, where the players move along hyperedges instead of edges. We show that with high probability the cop number of the \(k\)-uniform binomial random hypergraph \(G^{k}(n,p)\) is \(O\left(\sqrt{\frac{n}{k}}\log n\right)\) for a broad range of parameters \(p\) and \(k\) and that on a log-scale our upper bound on the cop number arises as the minimum of _two_ complementary zigzag curves, as opposed to the case of \(G(n,p)\). Furthermore, we conjecture that the cop number of a connected \(k\)-uniform hypergraph on \(n\) vertices is \(O\left(\sqrt{\frac{n}{k}}\right)\).
Key words and phrases:Cops and Robber game, cop number, random hypergraph, expansion properties \({}^{1}\)Supported in part by the Austrian Science Fund (FWF): W 1230 and P 36131. For the purpose of open access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. \({}^{3}\)Supported in part by the NSERC Discovery Grant R611450 (Canada). This work is the result of author's visit of TU Graz under Oberwolfach's Simons Visiting Professors program in December 2021.
is at most three. More generally, it is known that the cop number is bounded for any proper minor-closed class of graphs [3], and there has been much research into determining the largest cop number of a graph that can be embedded in a fixed surface [7, 11, 14, 17, 25, 26].
Perhaps the most well-known conjecture in this area is Meyniel's conjecture (communicated by Frankl [13]).
**Conjecture 1.1**.: _Let \(G\) be a connected graph on \(n\) vertices. Then \(c\left(G\right)=O\left(\sqrt{n}\right)\)._
Despite much interest in this conjecture, there has been relatively little improvement to the trivial bound of \(O(n)\). Frankl [13] gave the first non-trivial upper bound on the cop number of \(O\left(\frac{n\log\log n}{\log n}\right)\), and this bound was improved to \(O\left(\frac{n}{\log n}\right)\) by Chiniforooshan [10]. As of today, the best known general upper bound on the cop number is \(n2^{-(1+o(1))\sqrt{\log n}}\), given independently by Lu and Peng [20] and by Scott and Sudakov [27]. We note that this bound is still \(\Omega\left(n^{1-o(1)}\right)\), and it remains an open question as to whether the cop number can be bounded by \(O\left(n^{1-c}\right)\) for any fixed \(c>0\)[4].
A natural step towards understanding Conjecture 1.1 is to consider the cop number of the _random_ graph \(G(n,p)\). For \(p\) constant, it was shown by Bonato, Hahn and Wang [6] that whp1 the cop number of \(G(n,p)\) is logarithmic in \(n\), and hence Conjecture 1.1 holds for almost all graphs. However, if we let \(p\) vary as a function of \(n\), then more interesting behaviour can be seen to develop. Indeed, Luczak and Pralat [21] showed that the cop number of \(G(n,p)\) behaves in a rather interesting manner in _dense_ regimes. Their result can be roughly summarised as follows, where we use \(\tilde{\Theta}(\cdot)\) to indicate a bound which holds up to logarithmic factors.
Footnote 1: Throughout the paper, all asymptotics are considered as \(n\to\infty\) and so, in particular, whp (with high probability) means with probability tending to one as \(n\to\infty\).
**Theorem 1.2** ([21, Theorem 1.1]).: _Let \(0<\alpha<1\) and \(d=np=n^{\alpha+o(1)}\)._
1. _If_ \(\frac{1}{2j+1}<\alpha<\frac{1}{2j}\)_, for some_ \(j\in\mathbb{N}\)_, then whp_ \[c\left(G(n,p)\right)=\Theta\left(d^{j}\right).\]
2. _If_ \(\frac{1}{2j}<\alpha<\frac{1}{2j-1}\)_, for some_ \(j\in\mathbb{N}\)_, then whp_ \[c\left(G(n,p)\right)=\tilde{\Theta}\left(\frac{n}{d^{j}}\right).\]
In particular, Theorem 1.2 implies that the function \(f\colon(0,1)\to\mathbb{R}\), defined as
\[f(x)=\frac{\log\left(\tilde{c}\left(G\left(n,n^{x-1}\right)\right)\right)}{ \log n}, \tag{1.1}\]
where \(\tilde{c}\) denotes the median of the cop number, has a characteristic zigzag shape (see Figure 1).
In particular, Theorem 1.2 implies that whp \(c\left(G(n,p)\right)=\tilde{O}\left(\sqrt{n}\right)\) throughout this range of \(p\), and that conversely there are choices of \(p\) where whp \(c\left(G(n,p)\right)=\tilde{\Theta}\left(\sqrt{n}\right)\) and Conjecture 1.1 is close to tight for almost all graphs of this density. Bollobas, Kun and Leader [5] gave a similar bound which holds also for sparser regimes of \(p\).
Meyniel's conjecture was finally resolved for all random graphs above the connectedness threshold by Pralat and Wormald [23]. In fact, their result holds for all random graphs with density above \(\frac{1}{2}\log n\).
**Theorem 1.3** ([23], Theorem 1.2).: _Let \(c>0\), and \(p(n-1)\geq\left(\frac{1}{2}+\epsilon\right)\log n\). Then whp_
\[c\left(G(n,p)\right)=O\left(\sqrt{n}\right).\]
In this paper we consider a variant of the Cops and Robber game on hypergraphs, and in particular \(k\)-uniform hypergraphs, which we call \(k\)_-graphs_, for \(k\in\mathbb{N}_{\geq 2}\). The game is defined analogously to the \(2\)-graph case, with the only difference being that the pieces move along hyperedges instead of edges. For the sake of brevity, when it is clear from the context that we are talking about a hypergraph, we will
refer to hyperedges as simply edges. Similarly to \(2\)-graphs, we define the _cop number_ of a hypergraph \(H\) to be
\[c\left(H\right)\coloneqq\min\left\{m\in\mathbb{N}\colon\,m\text{ cops have a winning strategy to catch a robber on }H\right\}.\]
This game was first considered by Gottlob, Leone and Scarcello [15] and by Adler [1]. For more recent results on the hypergraph game we refer the reader to [28], where some classic results on the cop number of \(2\)-graphs are generalised to this setting.
Note that by replacing every edge in a hypergraph by a clique, we arrive at an equivalent \(2\)-graph game on the same vertex set. Thus, the game of Cops and Robber on hypergraphs is equivalent to the \(2\)-graph game played on a restricted class of graphs. On the other hand, we can transform a graph \(G\) into a \(2k\)-uniform hypergraph \(H\) with \(c\left(G\right)=c\left(H\right)\) via a simple blow-up construction: We replace each vertex \(v\) in \(G\) by \(k\) vertices \(\{v_{1},v_{2},\ldots v_{k}\}\) and form a hypergraph \(H=H(G)\) on \(\{v_{i}\colon\,v\in V(G),i\in[k]\}\) by taking an edge of the form \(\{u_{1},u_{2},\ldots,u_{k},v_{1},v_{2},\ldots,v_{k}\}\) for each edge \(e=\{u,v\}\) of \(G\) (see Figure 2). It is then easy to check that \(c\left(G\right)=c\left(H\right)\), and moreover \(|V(H)|=k|V(G)|\).
From these two observations, it is easy to see that the following holds:
\[\max\left\{c\left(G\right)\colon\,G\text{ a graph },|V(G)|=\frac{2n}{k}\right\} \leq\max\left\{c\left(H\right)\colon\,H\text{ a }k\text{-graph },|V(H)|=n\right\} \tag{1.2}\] \[\leq\max\left\{c\left(G\right)\colon\,G\text{ a graph },|V(G)|=n\right\}.\]
In particular, as there are graphs with \(c\left(G\right)=\Omega\left(\sqrt{n}\right)\), there are also \(k\)-graphs with \(c\left(H\right)=\Omega\left(\sqrt{\frac{n}{k}}\right)\). It would seem surprising that such a simple construction, which is essentially graphical in nature,
Figure 1. Zigzag shape of the function \(f\)
Figure 2. An example of the blow-up construction to generate a \(2k\)-graph \(H\) from a \(2\)-graph that has the same cop number. In this case, \(k=5\), \(|V(H)|=20\) and \(c\left(H\right)=2\).
could capture the worst case behaviour for the cop number in hypergraphs of higher uniformity, but we conjecture that this bound is in fact tight.
**Conjecture 1.4**.: _Let \(H\) be a connected \(k\)-graph on \(n\) vertices. Then \(c\left(H\right)=O\left(\sqrt{\frac{n}{k}}\right)\)._
Clearly Conjecture 1.4 is a generalisation of Meyniel's conjecture, but we note further that for any polynomial function \(k:=k(n)=n^{\alpha}\) with \(\alpha<1\), Conjecture 1.4 implies Meyniel's conjecture. Indeed, if \(c\left(H\right)=O\left(\sqrt{\frac{n}{k(n)}}\right)=O\left(n^{\frac{1-n}{2}}\right)\) for all \(n\)-vertex \(n^{\alpha}\)-graphs, then by (1.2)
\[\max\left\{c\left(G\right):\,G\text{ a graph },|V(G)|=2m\right\}\leq\max\left\{c \left(H\right):\,H\text{ a }m^{\frac{n}{1-\alpha}}\text{-graph },|V(H)|=m^{\frac{1}{1-\alpha}}\right\}\leq O\left(\sqrt{m}\right).\]
As with Meyniel's Conjecture, a first step towards Conjecture 1.4 is to consider the behaviour of the cop number of _random_\(k\)-graphs.
### Main results
The _\(k\)-uniform binomial random hypergraph_, which we denote by \(G^{k}(n,p)\), is a random \(k\)-graph with vertex set \(\left\{n\right\}\) in which each edge, that is, each subset of \(\left[n\right]\) of size \(k\), appears independently with probability \(p\). Although the main focus of this paper is \(G^{k}(n,p)\), the strategies we develop for the cops work in a more general class of \(k\)-graphs, those satisfying certain expansion properties.
Very roughly, if we denote by \(N_{V}^{r}(v)\) the vertices that are at most at a fixed distance \(r\) from \(v\), then in \(G^{k}(n,p)\) we expect this set to be growing exponentially quickly in \(r\), with its size tightly concentrated around its expectation. Furthermore, for different vertices \(v\) and \(w\) we do not expect the neighbourhoods \(N_{V}^{r}(v)\) and \(N_{V}^{r}(w)\) to have a large intersection, and so, for small subsets \(A\subseteq\left[n\right]\) we expect the number of vertices at most at a fixed distance \(r\) from \(A\) to be around \(\left|A\right|\) times the size of \(N_{V}^{r}(v)\). Similarly, we expect the set of edges \(N_{E}^{r}(v)\) at most at a fixed distance \(r-1\) from \(v\) to be growing at some uniform exponential rate, and for ranges of \(p\) where the random hypergraph is sparse enough, and so few pairs of edges have a large intersection, this rate of growth should be roughly \(\frac{1}{k}\) times that of the vertex-neighbourhoods.
Informally, given \(\xi>0\) we say that a graph is \(\xi\)-expanding if the sizes of its vertex and edge-neighbourhoods have this uniform exponential growth, up to some multiplicative error in terms of \(\xi\). See Definition 2.1 for a precise definition of this notion.
Our first result supports Conjecture 1.4 up to a log-factor for \(k\)-graphs that are \(\xi\)-expanding for a fixed expansion constant \(\xi\).
**Theorem 1.5**.: _Let \(k\in\mathbb{N}_{\geq 2}\), let \(\xi>0\) and let \(G\) be a \(\xi\)-expanding \(k\)-graph on \(n\) vertices. Then_
\[c\left(G\right)\leq 20\xi^{-2}\sqrt{\frac{n}{k}}\log n.\]
In fact, depending on the relationship between the average degree \(d\) of the hypergraph, its uniformity \(k\) and its order \(n\), we can give a more refined bound for the cop number of a \(\xi\)-expanding hypergraph.
**Theorem 1.6**.: _Let \(k\in\mathbb{N}_{\geq 2},\xi\in(0,1]\) be fixed and let \(G\) be a \(\xi\)-expanding \(k\)-graph on \(n\) vertices with average vertex degree2\(d=d(G)\). For all \(j\in\mathbb{N}\) the cop number of \(G\) satisfies the following._
Footnote 2: Here the degree of a vertex \(v\) is the number of vertices which share an edge with \(v\), rather than the number of edges containing \(v\).
1. _If_ \(n^{\frac{1}{2j+1}}\leq d\leq\left(\frac{n}{k}\right)^{\frac{1}{2j}}\)_, then with_ \(\lambda=\left\lceil\frac{n}{d^{2j+1}}\log n\right\rceil,\)__ \[c\left(G\right)\leq 20\xi^{-2}d^{j}\lambda.\]
2. _If_ \(\left(\frac{n}{k}\right)^{\frac{1}{2j}}\leq d\leq n^{\frac{1}{2j}}\)_, then_ \[c\left(G\right)\leq 20\xi^{-1}\frac{n}{kd^{j}}\log n.\]
3. _If_ \(n^{\frac{1}{2j}}\leq d\leq\left(nk\right)^{\frac{1}{2j}}\)_, then with_ \(\lambda=\max\left\{\left\lceil\frac{n}{d^{2j}}\log n\right\rceil,\left\lceil \frac{k}{d}\log n\right\rceil\right\}\)_,_ \[c\left(G\right)\leq 20\xi^{-2}\frac{d^{j}}{k}\lambda.\]
_._
4. _If_ \((nk)^{\frac{1}{2j}}\leq d\leq n^{\frac{1}{2j-1}}\)_, then_ \[c\left(G\right)\leq 20\xi^{-1}\frac{n}{d^{j}}\log n.\]
Let us explain in more detail the bounds in Theorem 1.6. In general, the upper bounds in Theorem 1.6 are increasing in \(d\) in the regimes (1) and (3) and decreasing in \(d\) in the regimes (2) and (4). In particular, and perhaps surprisingly, if we fix \(k\) and \(n\) and vary \(d\), in certain regimes increasing the average degree, and hence the number of edges, can help the cops, and in other regimes increasing the number of edges can help the robber.
We note that some of the regimes of Theorem 1.6 can 'collapse' if the left border of a regime is larger than its right border. Specifically, this can happen in regime (1), if we have \(d\leq n^{\frac{1}{2j}}\) and \(n^{\frac{1}{2j+1}}>\left(\frac{n}{\bar{c}}\right)^{\frac{1}{j}}\) or equivalently \(k>n^{\frac{1}{2j+1}}\) and it can happen in regime (4), if \(d\leq n^{\frac{1}{2j-1}}\) and \((nk)^{\frac{1}{2j}}>n^{\frac{1}{2j-1}}\), or equivalently \(k>n^{\frac{1}{2j-1}}\). However, under the reasonable assumption that \(G\) is connected, we have \(k\leq d\), and so this second case does not occur, and the first case only occurs if \(n^{\frac{1}{2j+1}}\leq k\leq d\leq n^{\frac{1}{2j}}\) holds for some \(j\in\mathbb{N}\). In this case, regime (1) collapses and the cop number is bounded as in (2). Furthermore, we note that the second argument of the maximum in the definition of \(\lambda\) in regime (3) is only relevant in the special case where \(k\geq\sqrt{n}\). In that case, the factor takes its largest value of \(\log n\) for the smallest possible value of \(d\), namely \(d=k\). For increasing \(d\), the factor then decreases until \(d\) is larger than \(k\) by a log-factor, at which it attains its smallest value of \(1\). Otherwise, in regimes (1) and (3), the factor \(\lambda\) takes its largest value of \(\log n\) at the left border of the respective regimes of \(d\). Again, it then decreases with increasing \(d\) until it takes the value of \(1\), which happens as soon as \(d\) is bound away from the left border by a sufficiently large multiplicative factor (\(\log^{\frac{1}{2j+1}}n\) in (1) and \(\log^{\frac{1}{2j}}n\) in (3)).
Our final result shows that whp \(G^{k}(n,p)\) satisfies the desired expansion properties as long as \(k\) is growing with \(n\) and \(p\) is not too small.
**Theorem 1.7**.: _There exists a universal constant \(\xi>0\) such that if \(k=k(n),p=p(n)>0\) are such that \(k=\omega(\log n)\) and \(\frac{n}{k}\geq p\binom{n-1}{k-1}=\omega\left(\log^{3}n\right)\), then whp \(G^{k}(n,p)\) is \(\xi\)-expanding._
Let us give some intuition for the conditions on \(k\) and \(p\). The value \(p\binom{n-1}{k-1}\) is roughly the expected number of edges every vertex in \(G^{k}(n,p)\) meets, and the lower bound on this quantity ensures that we can assume this is concentrated around its expectation. On the other hand, \(pk\binom{n-1}{k-1}\) is roughly the expected degree of a vertex in \(G^{k}(n,p)\), and so it is natural to restrict this to be at most \(n\).
Note that it follows from Theorems 1.5 and 1.7 that Conjecture 1.8 holds for the same range of \(n,p\) and \(k\) up to polylogarithmic factors.
**Corollary 1.8**.: _If \(k=k(n),p=p(n)>0\) are such that \(k=\omega(\log n)\) and \(\frac{n}{\bar{c}}\geq p\binom{n-1}{k-1}=\omega(\log^{3}n)\), then whp \(c\left(G^{k}(n,p)\right)=\tilde{O}\left(\sqrt{\frac{n}{k}}\right)\)._
Furthermore, Theorem 1.7 allows us to apply Theorem 1.6 to \(G^{k}(n,p)\) for a broad range of parameters and we can bound \(c\left(G^{k}(n,p)\right)\) more precisely in certain ranges. It turns out that a sensible parameterisation to take is as follows. Let us define \(\hat{d}=\hat{d}(n,p,k)\coloneqq pk\binom{n-1}{k-1}\), which is roughly the expected degree of a vertex in \(G^{k}(n,p)\) and let \(\hat{d}=n^{\alpha}\) and \(k=n^{\beta}\) for some \(\alpha<\beta\leq\alpha\leq 1\). We consider the function \(f_{\beta}\colon(\beta,1)\to\mathbb{R}\) defined as
\[f_{\beta}(\alpha)\coloneqq\frac{\log\left(\tilde{c}\left(G^{k}(n,p)\right) \right)}{\log n},\]
with \(\tilde{c}\) being the upper bound for the cop number obtained from Theorem 1.6. It follows that \(f_{\beta}\) again has a characteristic zigzag shape, see Figure 3. In contrast to the case of \(G(n,p)\) (see Figure 1), the zigzag shape in \(G^{k}(n,p)\) arises as the intersection of two complementary zigzags, coming from two different strategies, and so has twice as many peaks and troughs. In particular, it can be seen that \(f_{\beta}(\alpha)\leq(1+o(1))\frac{1-\beta}{2}\) for all \(\alpha\in(\beta,1)\), corresponding to the bound of \(\tilde{O}\left(\sqrt{\frac{n}{k}}\right)\) on the cop number.
### Techniques
To give a lower bound for the cop number we need to exhibit a strategy for the cops. As in the work of Luczak and Pralat[21], we show the existence of a strategy for the cops to _surround_ the robber using a probabilistic argument. Whilst in [21] the strategies focused solely on surrounding a small _vertex_-neighbourhood of the robber, we also consider a second type of strategy which aims to surround a small _edge_-neighbourhood, and utilise both these strategies in our result.
Assuming the robber starts on a vertex \(v\), after his first \(r\) moves the robber has to be in the \(r\)-th vertex-neighbourhood \(N_{V}^{r}(v)\), and specifically in some edge of the \(r\)-th edge-neighbourhood \(N_{E}^{r}(v)\). The cops aim to occupy each edge in \(N_{E}^{r}(v)\) before the robber has had time to leave this set. Since the cops move first and a cop can catch the robber in a single move once they occupy the same edge, the cops need to occupy each edge in \(N_{E}^{r}(v)\) within their first \(r\) moves (see Figure 4). The strategy of surrounding via vertices works similarly, the only difference being that the cops surround the \(r\)-th vertex-neighbourhood and have \(r+1\) moves before the robber can escape. The pay-off in choosing to surround via vertices or edges can be seen as follows - in the former we can use cops at a larger distance, and so in general we will have more cops to work with, whereas in the latter, since each edge contains many vertices, we will not have to occupy as many edges as we would have vertices, and so perhaps we can catch the robber with fewer cops.
For a fixed vertex \(v\) and a fixed distance \(r\), the existence of such a strategy can then be reduced to a _matching problem_ - for instance in the case of the edge strategy, for each edge \(e\) at distance at most \(r\) from \(v\) we need to assign a unique cop at distance at most \(r\) from \(e\), whose strategy is to occupy \(e\) within the first \(r\) turns of the game. We aim to show (see Claims 1, 2) that such an assignment of cops can be found with _positive_ probability if we choose a _random_ set of cops, assigning a cop to each vertex in the graph independently with some probability \(q\) (see Figure 4).
Assuming that our \(k\)-graph \(G\) is \(\xi\)-expanding, we have quite good control over the sizes of \(N_{V}^{r}(v)\) and \(N_{E}^{r}(v)\), and also over the number of vertices at a fixed distance from each vertex and edge contained in these sets. Using some standard probabilistic and combinatorial tools, we can show that for an appropriate choice of \(q\), with positive probability we can find an appropriate assignment of
Figure 3. Alternating zigzag shape of the function \(f_{\beta}(\alpha)\) for \(\beta=\frac{2}{19}\). The blue (dashed) line is the upper bound coming from the edge strategy, the red (dotted) line is the upper bound coming from the vertex strategy. As can be seen, the two strategies give rise to two alternating zigzag shapes, that together make up the single zigzag with increased frequency. We note the worst bounds occur at the intersection points of the two lines, which all lie on the green (solid) line at \(\frac{1-\beta}{2}\).
cops for _each_ possible starting vertex \(v\), and bound the number of cops \(m\) we use in such a strategy, which in general depends not only on \(r\), but also on the uniformity \(k\) and average degree \(d\) of \(G\).
This leads to a family of bounds on the cop number, one for each \(r\in\mathbb{N}\), for both the vertex and edge surrounding strategy. For a fixed choice of parameters \(k\) and \(d\), we then have to solve an integer optimisation problem to find which choice of \(r\) (and of a vertex or edge surrounding strategy) leads to the best bound on the cop number, from which we can derive the bounds in Theorem 1.6.
### Outline of the paper
The rest of the paper is structured as follows. In Section 2 we introduce some notation and important definitions and state some auxiliary results. In Section 3 we prove Theorems 1.5 and 1.6 and in Section 4 we prove Theorem 1.7. We conclude in Section 5 by discussing some unresolved questions and possible directions for future research.
## 2. Preliminaries
All asymptotics in the paper are taken as \(n\) tends to infinity. We say that a statement \(A(n)\) holds _with high probability_ (_whp_ for short) if \(\lim_{n\to\infty}\mathbb{P}\left[A(n)\right]=1\). We use standard Landau notation for all asymptotics. Furthermore, we omit floors or ceilings in proofs to improve readability. Let \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\) and \(\mathbb{N}_{\geq 2}=\{2,3,\ldots\}\). Throughout the paper we assume \(k\in\mathbb{N}_{\geq 2}\).
Let \(G\) be a \(k\)-uniform hypergraph, or \(k\)-graph, on vertex set \([n]\). The _distance_ between two different vertices \(d_{G}(x,y)\) is equal to the smallest length of a loose path between them, that is, the smallest \(t\in\mathbb{N}\) such that there exists a sequence \(e_{1},e_{2},\ldots,e_{t}\) of edges such that \(x\in e_{1},y\in e_{t}\) and \(e_{i}\cap e_{i+1}\neq\emptyset\) for all \(1\leq i\leq t-1\). Furthermore, we define \(d_{G}(x,x)=0\), for all \(x\in V(G)\). The distance between two sets of vertices \(A\) and \(A^{\prime}\) is defined as the minimum distance between all pairs of vertices from the two sets:
\[d_{G}(A,A^{\prime})\coloneqq\min_{u\in A}\min_{v\in A^{\prime}}d_{G}(u,v).\]
For a set of vertices \(A\subseteq V(G)\) and \(r\in\mathbb{N}_{0}\) we denote the _closed \(r\)-th vertex-neighbourhood of \(A\)_ by \(N_{V}^{r}\left(A\right)\). Formally,
\[N_{V}^{r}\left(A\right)\coloneqq\{v\in V(G):d_{G}(A,v)\leq r\}.\]
For a set of edges \(B\subseteq E(G)\) we write
\[V_{B}\coloneqq\{v\in e\colon e\in B\}\]
Figure 4. A visualisation of the edge-surrounding strategy. The cops try to cover all edges of the third edge-neighbourhood of \(v\) in \(3\) moves, which is possible if there is a matching between \(N_{E}^{3}(v)\) and all vertices occupied by cops within distance three of these edges, which covers all edges of \(N_{E}^{3}(v)\).
for the set of all vertices contained in at least one of the edges in \(B\). We extend the definition of the closed \(r\)-th vertex-neighbourhood to sets of edges in the obvious way by setting
\[N_{V}^{r}(B)\coloneqq N_{V}^{r}(V_{B})=\bigcup_{v\in V_{B}}N_{V}^{r}(v)\,.\]
Furthermore, for a set of vertices \(A\subseteq V(G)\) and \(r\in\mathbb{N}\), we define the _closed \(r\)-th edge-neighbourhood of \(A\)_ as
\[N_{E}^{r}(A)\coloneqq\{e\in E(G):d_{G}(e,A)\leq r-1\}\,.\]
Note that to be included in the \(r\)-th edge-neighbourhood, an edge has to be within distance \(r-1\) of the respective vertex set. The motivation behind this parameter shift is that the amount of 'discovered' vertices is the same for the \(r\)-th vertex- and edge-neighbourhood, or in other words \(N_{V}^{r}(A)=V_{N_{E}^{r}(A)}\). For notational convenience, we will omit the superscript for the _first_ vertex- and edge-neighbourhood and write \(N_{V}(A)\coloneqq N_{V}^{1}(A)\) and \(N_{E}(A)\coloneqq N_{E}^{1}(A)\), respectively.
We denote the _average vertex degree_ of \(G\) by
\[d(G)\coloneqq\frac{1}{|V(G)|}\sum_{v\in V}|N_{V}(v)|\,,\]
where we consider the _degree_ of a vertex \(v\) to be \(|N_{V}(v)|\), and not \(|N_{E}(v)|\) which is also sometimes referred to as the degree of a vertex in a hypergraph.
Note that the average vertex degree of \(G^{k}(n,p)\), i.e., \(d\coloneqq d\left(G^{k}(n,p)\right)\), is a random variable that is concentrated around its expectation, which is given as
\[\mathbb{E}\left[d\right]=(n-1)\left(1-(1-p)^{\binom{n-2}{k-2}}\right). \tag{2.1}\]
For appropriate ranges of \(p\) and \(k\) we can estimate \(\mathbb{E}\left[d\right]\) as follows:
\[\mathbb{E}\left[d\right]=(n-1)\left(1-(1-p)^{\binom{n-2}{k-2}}\right)=np \binom{n-2}{k-2}\left(1+o(1)\right)=pk\binom{n-1}{k-1}\left(1+o(1)\right), \tag{2.2}\]
where the penultimate equation holds when \(p\binom{n-2}{k-2}=o(1)\) and the final equation holds for \(k=o(n)\). In the context of Cops and Robber games on random hypergraphs, these are reasonable assumptions, as otherwise either each edge-neighbourhood (deterministically) or each vertex-neighbourhood (in expectation) covers a constant fraction of the vertex set. If the graph is also connected, it can be shown in both cases that the cop number is then whp at most logarithmic in \(n\). We will also restrict ourselves to the case \(d\geq k\geq 2\), since for \(d<k\) the hypergraph will contain isolated vertices. Since the cop number is additive over disjoint unions, it is natural to restrict our attention to connected hypergraphs.
For convenience, rather than working with \(d\), we will work with the following explicit quantity
\[\hat{d}=\hat{d}(n,p,k)\coloneqq pk\binom{n-1}{k-1}. \tag{2.3}\]
We will show later (see Lemma 4.3) that the size of the first vertex-neighbourhood of every vertex in \(G^{k}(n,p)\) lies close to \(\hat{d}\), from which it then follows, that \(d\) is approximately \(\hat{d}\).
**Definition 2.1**.: Let \(G\) be a \(k\)-graph on \(n\) vertices with average vertex degree \(d\geq k\). Given a positive constant \(0<\xi\leq 1\), which we call the _expanding constant_, we say \(G\) is \(\xi\)-expanding if \(G\) has the following properties.
1. For every vertex \(v\in V(G)\) and \(r\in\mathbb{N}\) satisfying \(d^{r}\leq\sqrt{nk}\), \[\left|N_{E}^{r}(v)\right|\leq\frac{1}{\xi}\frac{d^{r}}{k}.\]
2. For every subset \(A\subseteq V(G)\) of vertices and \(r\in\mathbb{N}\), \[\xi\min\left\{|A|d^{r},n\right\}\leq\left|N_{V}^{r}(A)\right|\leq\frac{1}{\xi} |A|d^{r}.\]
**(A.3)**: For every subset \(B\subseteq E(G)\) of edges and \(r\in\mathbb{N}\),
\[\xi\min\left\{|B|kd^{r},n\right\}\leq\left|N_{V}^{r}\left(B\right)\right|.\]
Throughout the paper we use the following corollaries of the Chernoff bounds (see for example [18, Theorem 2.1, Corollary 2.3]):
**Theorem 2.2**.: _Let \(X\sim Bin(n,p)\). Then for any \(t>0\), we have_
\[\mathbb{P}\left[\left|X-\mathbb{E}\left[X\right]\right|\geq t\right]\leq 2 \exp\left(-\frac{t^{2}}{2\left(\mathbb{E}\left[X\right]+t/3\right)}\right), \tag{2.4}\]
_and_
\[\mathbb{P}\left[\left.X\leq\mathbb{E}\left[X\right]-t\right]\leq\exp\left(- \frac{t^{2}}{2\mathbb{E}\left[X\right]}\right). \tag{2.5}\]
_In particular, if \(a\leq 10\mathbb{E}\left[X\right]\), then_
\[\mathbb{P}\left[\left.X\leq a\right]\leq\exp\left(-4a\right). \tag{2.6}\]
An important step in the proof of the main theorems is constructing matchings in specific bipartite graphs which cover one partition class. To this end we use Hall's marriage theorem, which we state here for the sake of completeness.
**Theorem 2.3** ([16], Theorem 1).: _Let \((A\cup B,E)\) be a bipartite graph. The following two statements are equivalent._
1. _There is a matching covering all vertices of_ \(A\)_._
2. \(|N_{V}(X)\setminus X|\geq|X|\) _for all_ \(X\subseteq A\)_._
## 3. Proofs of Theorems 1.5 and 1.6
We start by proving Theorem 1.6, from which Theorem 1.5 follows as a direct consequence.
Proof of Theorem 1.6.: Let \(G\) be a \(\xi\)-expanding \(k\)-graph on \(n\) vertices. Note that, as the stated bounds on the cop number are all clearly at least \(20\), we can assume w.l.o.g. that \(n\geq 20\).
Our strategy is to choose the initial placement of our cops in such a way that we can catch the robber in \(j\) moves for some \(j\in\mathbb{N}\). In order to show the existence of such a choice of initial positions, we will use the expansion properties of \(G\) to show that a random choice succeeds with positive probability.
In regimes (1) and (4) we will catch the robber by surrounding its \((j-1)\)-st vertex-neighbourhood. In fact, cops that start too far from the robber will not actively participate in the game. The following key claim characterises how many cops we need in the respective regimes to guarantee that sufficiently many cops are close enough to the starting vertex of the robber to make this strategy work. The proof of the claim is deferred to the end of this section.
**Claim 1**.: _Let \(j\in\mathbb{N}\). There exists a subset \(Y\subseteq V(G)\) such that for every vertex \(v\in V(G)\) there exists an injection \(f\colon N_{V}^{j-1}\left(v\right)\to Y\) such that for every vertex \(x\in N_{V}^{j-1}\left(v\right)\), we have \(d_{G}(x,f(x))\leq j\). Furthermore,_
1. _if_ \(j\neq 1\) _and_ \(n^{\frac{1}{j-1}}\leq d\leq\left(\frac{n}{k}\right)^{\frac{1}{2j-2}}\)_, then_ \(|Y|\leq 20\xi^{-2}d^{j-1}\left\lceil\frac{n}{d^{2j-1}}\log n\right\rceil\)_;_
2. _if_ \(\left(nk\right)^{\frac{1}{2j}}\leq d\leq n^{\frac{1}{j-1}}\)_, then_ \(|Y|\leq 20\xi^{-1}\frac{n}{d^{j}}\log n\)_._
Given \(Y\) as in the claim, the cops' strategy is to initially occupy the vertices of \(Y\). The robber starts on some vertex \(v\). By Claim 1 there exists an injection \(f\) such that for every _vertex_\(x\in N_{V}^{j-1}\left(v\right)\), the cop on vertex \(f(x)\) is within distance \(j\) of \(x\) at the start of the game. Each cop which starts on a vertex \(w\) in the image \(f\left(N_{V}^{j-1}\left(v\right)\right)\) moves to the vertex \(f^{-1}(w)\) in the first \(j\) moves. After \(j-1\) moves the robber is positioned at some vertex \(w\in N_{V}^{j-1}\left(v\right)\). Since the cops move first, in the next turn the cop that started on the vertex \(f(w)\) moves to the vertex \(w\) and catches the robber. Note that in regime (1), we use the described strategy after applying the index shift \(j\to j+1\) to obtain the desired result.
In regimes (2) and (3) we instead catch the robber by surrounding his \(j\)-th edge-neighbourhood. Similar to the previous case, it suffices to prove the following claim, which we will do at the end of this section.
**Claim 2**.: _Let \(j\in\mathbb{N}\). There exists a subset \(Z\subseteq V(G)\) such that for all vertices \(v\in V(G)\) there exists an injection \(g\colon N_{E}^{j}(v)\to Z\), such that for every edge \(e\in N_{E}^{j}(v)\), we have \(d_{G}(e,g(e))\leq j\). Furthermore,_
1. _if_ \(\left(\frac{n}{k}\right)^{\frac{1}{2j}}\leq d\leq n^{\frac{1}{2j}}\)_, then_ \(|Z|\leq 20\xi^{-1}\frac{n}{kd^{j}}\log n\)_;_
2. _if_ \(n^{\frac{1}{2j}}\leq d\leq(nk)^{\frac{1}{2j}}\)_, then_ \(|Z|\leq 20\xi^{-2}\frac{d^{j}}{k}\left\lceil\frac{n}{d^{2j}}\log n\right\rceil \left\lceil\frac{k}{d^{j}}\log n\right\rceil.\)__
Indeed, given such a set \(Z\) the cops' strategy is to initially occupy the vertices of \(Z\). The robber starts on some vertex \(v\). By Claim 2 there exists an injection \(g\) such that for every _edge_\(e\in N_{E}^{j}(v)\), we have \(d_{G}(e,g(e))\leq j\). Each cop which starts on a vertex \(w\) in the image \(g\left(N_{E}^{j}(v)\right)\) moves to some vertex \(u\) in the edge \(g^{-1}(w)\) in the first \(j\) moves. After \(j\) moves, the robber is on some vertex \(x\) in some edge \(e\in N_{E}^{j}(v)\). Since the cops move first, the cop that started at \(g(e)\) is currently at some vertex \(u\in e\), and hence can catch the robber in the next move.
It remains to prove the two claims.
Proof of Claim 1.: Clearly such a set \(Y\) exists if we do not make any restrictions on its size, so we may assume that one of (a) or (d) holds. We will choose our set \(Y\) by specifying some probability \(q\) and letting each vertex lie in \(Y\) independently with probability \(q\). We show that with positive probability a random choice of \(Y\) satisfies the conclusions of the claim, and hence there must exist some suitable set \(Y\).
Let us start with case (d), so we are assuming that \((nk)^{\frac{1}{2j}}\leq d\leq n^{\frac{1}{2j-1}}\). In this case we set \(q=10\xi^{-1}d^{-j}\log n\), where we note that our assumptions on \(d\) ensure that \(q\leq 1\). Since \(|Y|\sim\operatorname{Bin}(n,q)\), it follows from the Chernoff bound (2.4) that \(|Y|\leq 20\frac{n}{\xi d^{j}}\log n\) with probability at least \(\frac{2}{3}\). We will show that an injection as stated in the claim exists with probability at least \(\frac{2}{3}\), and hence \(Y\) satisfies the conclusion of the claim with probability at least \(\frac{1}{3}>0\).
Given a fixed vertex \(v\in V\), an injection of the desired form corresponds to a matching in the bipartite graph \(H=\left(N_{V}^{j-1}(v)\cup Y,E\right)\) with edge set \(E=\left\{(a,b)\colon a\in N_{V}^{j-1}(v),b\in Y,d_{G}(a,b)\leq j\right\}\), which covers all vertices of \(N_{V}^{j-1}(v)\). To find a matching as described above, it is enough to check that Hall's condition (see Theorem 2.3) is satisfied for all \(A\subseteq N_{V}^{j-1}(v)\).
We split into two cases. First, suppose that \(A\subseteq N_{V}^{j-1}(v)\) with \(a\colon=|A|\leq\frac{n}{d^{j}}\). Slightly abusing notation, we write \(N_{H}(A)\) for the vertices at _exactly_ distance \(1\) from \(A\) in the auxiliary graph \(H\). By Property **(A.2)** it follows that \(|N_{V}^{j}(A)|\geq\xi ad^{j}\). Thus, by our choice of \(Y\), we have \(|N_{H}(A)|=|Y\cap N_{V}^{j}(A)|\) stochastically dominates a binomial random variable \(\operatorname{Bin}\left(\xi ad^{j},q\right)\), the expectation of which satisfies \(\mathbb{E}\left[\operatorname{Bin}\left(\xi ad^{j},q\right)\right]=\xi ad^{j }q\geq 10a\log n\). Therefore, by the Chernoff bound (2.6) it follows that
\[\mathbb{P}\left[\,|N_{H}(A)|<|A|\right]\leq\mathbb{P}\left[\operatorname{Bin} \left(\xi ad^{j},q\right)<a\right]\leq\mathbb{P}\left[\,\operatorname{Bin} \left(\xi ad^{j},q\right)\leq a\log n\right]\leq\exp(-4\,a\log n)=n^{-4a}.\]
Then, using a union bound over all sets \(A\subseteq N_{V}^{j-1}(v)\) with \(|A|\leq\frac{n}{d^{j}}\) we can bound the probability that there exists such a set \(A\) that violates Hall's condition from above by
\[\sum_{a=1}^{\frac{n}{d^{j}}}\left(\begin{vmatrix}|N_{V}^{j-1}(v)\\ a\end{vmatrix}\right)n^{-4a}\leq\sum_{a=1}^{\frac{n}{d^{j}}}\binom{n}{a}n^{-4a }\leq\sum_{a=1}^{n}n^{-3a}\leq\frac{1}{6n}. \tag{3.1}\]
In the second case, when \(a:=|A|>\frac{n}{d^{j}}\), we note that since \(A\subseteq N_{V}^{j-1}(v)\), by Property **(A.2)** we have \(a\leq\left|N_{V}^{j-1}(v)\right|\leq\xi^{-1}d^{j-1}\) and \(|N_{V}^{j}(A)|\geq\xi n\). Hence, \(|N_{H}(A)|=|Y\cap N_{V}^{j}(A)|\) stochastically dominates a binomial random variable \(\operatorname{Bin}\left(\xi n,q\right)\), the expectation of which satisfies
\[\mathbb{E}\left[\operatorname{Bin}\left(\xi n,q\right)\right]=\xi nq=\frac{10n} {\xi d^{j}}\log n\geq 10\xi^{-1}d^{j-1}\log n.\]
Here we used the fact that in this regime we have \(d\leq n^{\frac{1}{2j-1}}\). From the Chernoff bound (2.6) it follows that
\[\mathbb{P}\left[\,|N_{H}(A)|<|A|\right]\leq\mathbb{P}\left[\operatorname{Bin} \left(\xi n,q\right)<a\right]\leq\mathbb{P}\left[\,\operatorname{Bin}\left(\xi n,q\right)\leq\xi^{-1}d^{j-1}\log n\right]\leq\exp(-4\,d^{j-1}\log n)=n^{-4d^{j -1}}.\]
Again, using the union bound and the fact that \(a\leq\xi^{-1}d^{j-1}\) we can bound the probability that there is such a set \(A\) violating Hall's condition from above by
\[\sum_{a=\frac{n}{d^{j}}+1}^{\xi^{-1}d^{j-1}}\left(\left|N_{V}^{j-1}(v)\right| \atop a\right)n^{-4d^{j-1}}\leq 2^{\left|N_{V}^{j-1}(v)\right|}n^{-4d^{j-1}} \leq 2^{-\xi^{-1}d^{j-1}}n^{-4d^{j-1}}\leq\frac{1}{6n}. \tag{3.2}\]
Thus, for every vertex \(v\) an injection of the desired form exists with probability at least \(1-\frac{1}{3n}\). Using another union bound over all vertices, we can bound the probability that there exists a vertex for which there is no such injection by \(\frac{1}{3}\), concluding the proof in the case (d).
In the case (a), where \(n^{\frac{1}{2j-1}}\leq d\leq\left(\frac{n}{k}\right)^{\frac{1}{2j-2}}\), we proceed similarly as in case (d), but with a slightly different value of \(q\). We let \(q=10\xi^{-2}\frac{d^{j-1}}{n}\left[\frac{n}{d^{j-1}}\log n\right]\), noting again that our assumptions on \(d\) ensure that \(q\leq 1\).
Arguing as before, splitting into cases according to whether or not \(|A|>\frac{n}{d^{j}}\), we see that it suffices to prove the following two inequalities:
\[\sum_{a=1}^{\frac{n}{d^{j}}}\left(\left|N_{V}^{j-1}(v)\right|\atop a\right) \mathbb{P}\left[\left|\mathrm{Bin}\left(\xi ad^{j},q\right)<a\right|\right] \leq\frac{1}{6n}, \tag{3.3}\]
and
\[\sum_{a=\frac{n}{d^{j}}+1}^{\xi^{-1}d^{j-1}}\left(\left|N_{V}^{j-1}(v)\right| \atop a\right)\mathbb{P}\left[\left|\mathrm{Bin}\left(\xi n,q\right)<a\right| \right]\leq\frac{1}{6n}. \tag{3.4}\]
To show (3.3), we note that
\[\mathbb{E}\left[\mathrm{Bin}\left(\xi ad^{j},q\right)\right]=\xi ad^{j}q\geq 1 0a\frac{d^{2j-1}}{n}\left[\frac{n}{d^{2j-1}}\log n\right]\geq 10a\log n.\]
Then, as before, it is clear that (2.6) yields the desired concentration to bound the sum as in (3.1).
Similarly, to show (3.4), we first note that, since \(\left\lceil\frac{n}{d^{2j-1}}\log n\right\rceil\geq 1\), it follows that \(\mathbb{E}\left[\mathrm{Bin}\left(\xi n,q\right)\right]=\xi nq\geq 10\xi^{-1}d^{j-1}\). Hence, by (2.6)
\[\mathbb{P}\left[\mathrm{Bin}\left(\xi n,q\right)>\xi^{-1}d^{j-1}\right]\leq e ^{-4\xi^{-1}d^{j-1}},\]
and since \(a\leq\xi^{-1}d^{j-1}\) by Property **(A.2)**, we can bound the sum as in (3.2), although we have to be more careful in our estimates. Plugging into (3.4) we obtain
\[\sum_{a=\frac{n}{d^{j}}+1}^{\xi^{-1}d^{j-1}}\left(\left|N_{V}^{j-1 }(v)\right|\atop a\right)e^{-4\xi^{-1}d^{j-1}} \leq 2^{\xi^{-1}d^{j-1}}e^{-4\xi^{-1}d^{j-1}} \tag{3.5}\] \[\leq e^{-3\xi^{-1}d^{j-1}}\leq e^{-3n^{\frac{1}{3}}}\leq\frac{1} {6n},\]
where we used the facts that \(d^{j-1}\geq n^{\frac{j-1}{2j-1}}\geq n^{\frac{1}{3}}\), as \(j\geq 2\), and that \(n\geq 20\).
Proof of Claim 2.: As in the previous claim, the existence of such a set \(Y\) is clear if we make no assumptions on its size, so we may assume that one of (b) or (c) holds. Again, we will choose the set \(Y\) by letting each vertex lie in \(Y\) independently with some fixed probability \(q\), and show that with positive probability such a set \(Y\) satisfies the conclusions of the claim.
Let us start with case (c), where \(n^{\frac{1}{2j}}\leq d\leq(nk)^{\frac{1}{2j}}\). Here we set \(q=10\xi^{-2}\frac{d^{j}}{nk}\left\lceil\frac{n}{d^{j}}\log n\right\rceil \left\lceil\frac{n}{d^{2j}}\log n\right\rceil\). As in the previous claim, it follows from (2.4) that \(|Y|\leq 20\frac{d^{j}}{\xi^{-1}k}\left\lceil\frac{n}{d^{2j}}\log n\right\rceil\left \lceil\frac{n}{d^{2j}}\log n\right\rceil\) with probability at least \(\frac{2}{3}\). We will show that an injection as stated in the claim exists with probability at least \(\frac{2}{3}\), and hence \(Y\) satisfies the conclusion of the claim with probability at least \(\frac{1}{3}>0\).
Given a fixed \(v\in V\), an injection of the desired form corresponds to a matching in the bipartite graph \(H=\left(N_{E}^{j}(v)\cup Y,F\right)\) with \(F=\left\{(e,b)\colon e\in N_{E}^{j}(v),b\in Y,d_{G}(e,b)\leq j\right\}\), which covers all of \(N_{E}^{j}(v)\).
To find such a matching, it is enough to check that Hall's condition (see Theorem 2.3) is satisfied for all \(B\subseteq N_{E}^{j}(\nu)\). We again split into two cases, depending on the size of \(B\).
First, suppose that \(b:=|B|\leq\frac{n}{kd^{j}}\). Then, by Property **(A.3)** it follows that \(\left|N_{V}^{j}(B)\right|\geq\xi bkd^{j}\) and so \(|N_{H}(B)|=\left|Y\cap N_{V}^{j}(B)\right|\) stochastically dominates a binomial random variable \(\operatorname{Bin}\left(\xi bkd^{j},q\right)\) with expectation
\[\mathbb{E}\left[\operatorname{Bin}\left(\xi bkd^{j},q\right)\right]=\xi bkd^ {j}q\geq 10b\frac{d^{2j}}{n}\left[\frac{n}{d^{2j}}\log n\right]\geq 10b\log n.\]
Similarly to the previous case, using the Chernoff bound (2.6) we can bound the probability that Hall's condition fails for such a set \(B\) from above by
\[\mathbb{P}\left[|N_{H}(B)|<b\right]\leq\mathbb{P}\left[\operatorname{Bin} \left(\xi bkd^{j},q\right)<b\right]\leq n^{-4b}.\]
Taking a union bound over all sets \(B\subseteq N_{E}^{j}(\nu)\) with \(|B|\leq\frac{n}{kd^{j}}\) as in (3.1), we see that the probability that any such set violates Hall's condition is at most \(\frac{1}{6n}\).
In the second case when \(b:=|B|>\frac{n}{kd^{j}}\), we note that as \(B\subseteq N_{E}^{j}(\nu)\), by Property **(A.1)** we have \(b\leq\frac{d^{j}}{\xi k}\) and by Property **(A.3)** we have \(|N_{V}^{j}(B)|\geq\xi n\). Hence, \(|N_{H}(B)|=\left|Y\cap N_{V}^{j}(B)\right|\) stochastically dominates a binomial random variable \(\operatorname{Bin}\left(\xi n,q\right)\) with expectation \(\mathbb{E}\left[\operatorname{Bin}\left(\xi n,q\right)\right]=\xi nq\geq 10 \frac{d^{j}}{\xi k}\). Hence, from (2.6) it follows that
\[\mathbb{P}\left[\operatorname{Bin}\left(\xi n,q\right)<b\right]\leq\mathbb{P} \left[\operatorname{Bin}\left(\xi n,q\right)\leq\frac{d^{j}}{\xi k}\right] \leq e^{-4d^{j}/(\xi k)}.\]
Again, taking a union bound over all sets \(B\) of size \(\frac{n}{kd^{j}}<|B|\leq\frac{d^{j}}{\xi k}\) we can bound from above the probability that Hall's condition fails for some \(B\) with \(|B|kd^{j}>n\) from above as in (3.5). Note that in the case \(j=1\) and \(k\geq\frac{d}{\log n}\) we need the additional factor \(\left\lceil\frac{n}{d^{2j}}\log n\right\rceil\) for this union bound to work.
Hence, for every vertex \(\nu\), an injection of the desired form exists with probability at least \(1-\frac{1}{3n}\). Using another union bound over all vertices, we can bound the probability that there exists a vertex for which there is no such injection by \(\frac{1}{3}\), concluding the proof in the case (c).
The proof in the case (b) is again analogous, using \(q=\frac{10}{\xi kd^{j}}\log n\). Since the calculations are similar in nature to cases (a), (c) and (d) we omit them.
Theorem 1.5 then follows by checking that in each regime the bounds given by Theorem 1.6 are not significantly larger than \(\sqrt{\frac{n}{k}}\).
Proof of Theorem 1.5.: Let \(j\in\mathbb{N}\) be such that the average degree \(d:=d(G)\) of \(G\) falls into one of the regimes defined in Theorem 1.6. We note that the upper bound in regimes (1) and (3) is increasing in \(d\) and the upper bound in regimes (2) and (4) is decreasing in \(d\). Hence, the bound obtained by applying Theorem 1.6 to \(G\) is at least as good as the bound given at the beginning of regime (2), where \(d=\left(\frac{n}{k}\right)^{\frac{1}{2j}}\), or at the beginning of regime (4), where \(d=\left(nk\right)^{\frac{1}{2j}}\).
In the first case, the cop number is bounded from above by
\[c\left(G\right)\leq 20\xi^{-2}\frac{n}{kd^{j}}\log n=20\xi^{-2}\frac{n}{k} \sqrt{\frac{k}{n}}\log n=20\xi^{-2}\sqrt{\frac{n}{k}}\log n,\]
and in the second case by
\[c\left(G\right)\leq 20\xi^{-2}\frac{n}{d^{j}}\log n=20\xi^{-2}\frac{n}{\sqrt{ nk}}\log n=20\xi^{-2}\sqrt{\frac{n}{k}}\log n.\]
## 4. Proof of Theorem 1.7
To show the existence of a \(\xi\in(0,1]\) such that whp \(G^{k}(n,p)\) is \(\xi\)-expanding we will proceed as follows: First, we will show that \(G^{k}(n,p)\) expands very well (in terms of the expansion constant) in the first edge-neighbourhood and in the first vertex-neighbourhood. Then we will inductively use these results to extend the expansion properties to larger distance neighbourhoods. As might be expected, the expansion constant remains quite good until the neighbourhoods come close to containing the whole graph. We begin by showing that the first edge-neighbourhoods of subsets of \(G^{k}(n,p)\) expand well. In fact, for our inductive step it will be necessary to show a stronger property that unless a subset \(A\) is too large, the size of its first edge-neighbourhood will be quite tightly concentrated around the value \(\frac{d}{k}|A|\), where we recall from (2.3) that \(\hat{d}=pk_{k-1}^{\binom{n-1}{k-1}}\). In order to get good estimates in larger neighbourhoods, we need to calculate the deviation in the first neighbourhood quite precisely. To this end, we set
\[\delta:=\frac{\sqrt{\log\log n}}{\log n}.\]
**Lemma 4.1**.: _Assume the parameters of \(G=G^{k}(n,p)\) are such that \(\frac{\hat{d}}{k}=\omega\left(\log^{3}n\right)\) and \(k\leq\frac{n}{4}\). Then whp the following holds:_
_For every subset \(A\subseteq[n]\) satisfying \(|A|\leq\frac{2n}{\log n}\),_
\[(1-\delta)\,|A|\frac{\hat{d}}{k}\leq|N_{E}\left(A\right)|\leq(1+\delta)\,|A| \frac{\hat{d}}{k}. \tag{4.1}\]
_Moreover, for every subset \(A\subseteq[n]\),_
\[|N_{E}\left(A\right)|\geq\frac{1}{16}\min\left\{|A|\frac{\hat{d}}{k},\frac{n}{ k}\right\}. \tag{4.2}\]
Proof.: We will start by proving the first statement. Let \(A\subseteq[n]\) be given where \(a:=|A|\leq\frac{2n}{k\log n}\) and let \(X=|N_{E}\left(A\right)|\) denote the number of edges in \(G\) that contain at least one vertex of \(A\). Let us write \(M\) for the number of edges meeting \(A\) in the complete \(k\)-uniform hypergraph \(K^{k}(n)\) on \(n\) vertices. A standard inclusion-exclusion type argument implies that
\[M=\sum_{j=1}^{\min\{a,k\}}\binom{a}{j}\binom{n-j}{k-j}(-1)^{j+1}.\]
The (absolute) ratio of consecutive terms in the sum is given as
\[\frac{\binom{a}{j}\binom{n-j}{k-j}}{\binom{a}{j-1}\binom{n-j+1}{k-j+1}}=\frac {(a-j+1)(k-j+1)}{j(n-j+1)}<\frac{ak}{n}\leq\frac{1}{2},\quad\text{for all $2\leq j\leq\min\left\{a,k \right\}$}\,. \tag{4.3}\]
Hence, \(M\) is dominated by the first term of the sum and the total contribution from all latter terms is at most an \(O\left(\frac{ak}{n}\right)\)-fraction of this value. More formally,
\[M=\sum_{j=1}^{\min\{a,k\}}\binom{a}{j}\binom{n-j}{k-j}(-1)^{j+1}=a\binom{n-1} {k-1}\left(1+O\left(\frac{ak}{n}\right)\right).\]
Since \(X\sim\operatorname{Bin}(M,p)\), it follows from \(\hat{d}:=pk_{k-1}^{\binom{n-1}{k-1}}\) (see (2.3)) and the fact that \(\frac{ak}{n}=O\left(\frac{1}{\log n}\right)=o(\delta)\) that
\[\mathbb{E}\left[X\right]=Mp=pa\binom{n-1}{k-1}\left(1+O\left(\frac{1}{\log n }\right)\right)=\frac{a\hat{d}}{k}\left(1+o(\delta)\right).\]
Hence, by the Chernoff bound (2.4) it follows that
\[\mathbb{P}\left[\left|X-\frac{ad}{k}\right|>\frac{ad\delta}{k}\right]\leq 2 \exp\left(-\frac{ad\delta^{2}}{3k}\right).\]
Therefore, by a union bound, the probability that there exists a set \(A\subseteq[n]\) with \(|A|\leq\frac{2n}{\mathrm{E}\log n}\) such that \(|N_{E}\left(A\right)|\) differs from \(|A|\frac{\hat{d}}{k}\) by more than \(\frac{|A|\hat{d}\delta}{k}\) is at most
\[\sum_{a=1}^{\frac{2n}{\mathrm{E}\log n}}2{n\choose a}\exp\left(-\frac{a\hat{d} \delta^{2}}{3k}\right)\leq\sum_{a=1}^{\infty}2n^{a}n^{-a\omega\left(1\right)}= o(1),\]
where we used our assumption that \(\frac{\hat{d}}{k}=\omega\left(\log^{3}n\right)=\omega\left(\delta^{-2}\log n\right)\). Therefore, whp
\[\left(1-\delta\right)\frac{a\hat{d}}{k}\leq|N_{E}\left(A\right)|\leq\left(1+ \delta\right)\frac{a\hat{d}}{k}.\]
To show the second statement we split into two cases. Firstly, let us assume that \(a:=|A|\leq\frac{n}{2k}\). Let \(X\) and \(M\) be defined as above and note that by our assumption on the size of \(A\) (4.3) still holds. Hence, \(M\) is an alternating sum with decreasing terms, and therefore is bounded from below by the difference of the first two terms, and it follows from (4.3) that
\[M\geq\frac{a}{2}{n-1\choose k-1}.\]
Again, since \(X\sim\mathrm{Bin}(M,p)\) and using \(d\coloneqq pk{n-1\choose k-1}\) we have
\[\mathbb{E}\left[X\right]=Mp\geq p\frac{a}{2}{n-1\choose k-1}=\frac{a\hat{d}}{ 2k},\]
and hence, by the Chernoff bound (2.5),
\[\mathbb{P}\left[X\leq\frac{a\hat{d}}{4k}\right]\leq\mathbb{P}\left[X\leq \frac{1}{2}\mathbb{E}\left[X\right]\right]\leq\exp\left(-\frac{\mathbb{E}\left[ X\right]}{8}\right)\leq\exp\left(-\frac{a\hat{d}}{16k}\right).\]
Thus, by a union bound, we can bound the probability that there exists a set \(A\subseteq[n]\) with \(|A|\leq\frac{n}{2k}\) and \(|N_{E}\left(A\right)|\leq\frac{a\hat{d}}{4k}\) from above by
\[\sum_{a=1}^{\frac{n}{2k}}{n\choose a}\exp\left(-\frac{a\hat{d}}{16k}\right) \leq\sum_{a=1}^{\infty}n^{a}n^{-a\omega\left(1\right)}=o(1),\]
where we used our assumption that \(\frac{\hat{d}}{k}=\omega(\log n)\). Therefore, whp for every set \(A\subseteq[n]\) with \(|A|\leq\frac{n}{2k}\),
\[|N_{E}\left(A\right)|\geq\frac{a\hat{d}}{4k}.\]
If \(A\) is such that \(a>\frac{n}{2k}\) we pick the largest possible subset \(A^{\prime}\subseteq A\) such that \(a^{\prime}k:=|A^{\prime}|k\leq n/2\). Therefore, \((a^{\prime}+1)k>n/2\), and so \(a^{\prime}k\geq n/2-k>n/4\). Using the previous argument and the fact that \(\hat{d}\geq k\) we see that
\[|N_{E}\left(A\right)|\geq\left|N_{E}\left(A^{\prime}\right)\right|\geq\frac{ a^{\prime}\hat{d}}{4k}\geq\frac{a^{\prime}k}{4k}\geq\frac{n}{16k}.\]
In total, the previous two cases yield that whp for all subsets \(A\subseteq[n]\),
\[|N_{E}\left(A\right)|\geq\min\left\{\frac{a\hat{d}}{4k},\frac{n}{16k}\right\} \geq\frac{1}{16}\min\left\{\frac{a\hat{d}}{k},\frac{n}{k}\right\}.\]
To show an analogous result for the vertex-neighbourhood, we recall that for a set of edges \(B\) we defined \(V_{B}=\{\nu\in e\colon e\in B\}\) and that the vertex-neighbourhood \(N_{V}\left(A\right)\) of a set \(A\) can be written as \(V_{N_{E}\left(A\right)}\). Thus, to show that the first vertex-neighbourhood expands well, it will suffices to show that in \(G^{k}(n,p)\) sets of edges are unlikely to have large overlaps.
**Lemma 4.2**.: _Assume the parameters of \(G=G^{k}(n,p)\) are such that \(k=\omega(\log n)\), \(k\leq 2^{-11}n\) and \(\hat{d}\leq n\). Then whp the following holds._
_For every subset \(B\subseteq E(G)\) of edges and every \(\epsilon=\epsilon(n)\in\left(0,\frac{1}{2}\right]\) such that \(\left(\frac{|B|k}{n}\right)^{\epsilon}\leq 2^{-5}\),_
\[|V_{B}|\geq(1-\epsilon)\,|B|k. \tag{4.4}\]
_Moreover, for every subset \(B\subseteq E(G)\) of edges,_
\[|V_{B}|\geq 2^{-12}\min\{|B|k,n\}. \tag{4.5}\]
Proof.: We start by showing the first statement. Let \(b=b(n)\in\mathbb{N}\) and \(\epsilon=\epsilon(n)\in\left(0,\frac{1}{2}\right]\) be such that \(\left(\frac{bk}{n}\right)^{\epsilon}\leq 2^{-5}\). We set \(t_{b}:=bk(1-\epsilon)\) and denote by \(X_{b}\) the number of edge sets \(B\) in \(G\) with \(|B|=b\) and \(|V_{B}|\leq t_{b}\).
To bound the expectation of \(X_{b}\), we count the number of possible choices for such an edge set \(B\) as follows: We think of \(B\) as a partition of the multi-set \(\hat{V}_{B}\), which is given by including each element \(x\) of \(V_{B}\) with multiplicity equal to the number of edges in \(B\) which contain \(x\), into \(k\)-sets. In particular, each such edge set \(B\) can be specified by fixing the set \(V_{B}\) of size \(t\leq t_{b}\), the vector \((x_{1},\ldots,x_{t})\) determining the multiplicity of each vertex in \(B\), and a partition of the multi-set \(\hat{V}_{B}\) into \(b\) many \(k\)-sets.
Now, there are at most \(\binom{n}{t}\) many possible sets \(V_{B}\) and, since \(\sum_{i=1}^{t}x_{i}=bk\), it follows that there are at most \(\binom{bk+t-1}{t}\) ways to choose the vector \((x_{1},\ldots,x_{t})\). Finally, a crude upper bound for the number of partitions is \(b^{bk}\), since each of the \(bk\) vertices in \(\hat{V}_{B}\) has to be assigned to one of the \(b\) many \(k\)-sets.
It follows that
\[\mathbb{E}\left[X_{b}\right]\leq\sum_{t=1}^{tb}\binom{n}{t}\binom{bk+t-1}{t}b^ {bk}p^{b}. \tag{4.6}\]
Using our assumption that \(\hat{d}\leq n\) we get that
\[p\leq\frac{n}{k\binom{n-1}{k-1}}\leq\frac{n}{k\left(\frac{n-1}{k-1}\right)^{k -1}}\leq\left(\frac{k}{n}\right)^{k-2},\]
and hence, using the fact that \(\binom{n}{t}\) is increasing for \(1\leq t\leq t_{b}\), we can bound \(\mathbb{E}\left[X_{b}\right]\) from above by
\[\mathbb{E}\left[X_{b}\right] \leq\sum_{t=1}^{t_{b}}\binom{n}{t}\binom{bk+t-1}{t}b^{bk}p^{b}\] \[\leq t_{b}\left(\frac{en}{t_{b}}\right)^{t_{b}}2^{2bk}b^{bk} \left(\frac{k}{n}\right)^{b(k-2)}\] \[=\left(t_{b}^{\frac{1}{bk}}\left(\frac{n}{k}\right)^{\frac{2}{t}} \frac{4e^{1-\epsilon}}{(1-\epsilon)^{1-\epsilon}}\left(\frac{bk}{n}\right)^{ \epsilon}\right)^{bk}.\]
However, since \(k=\omega(\log n)\) and \(bk\leq n\), it follows that \(t_{b}^{\frac{1}{bk}}\left(\frac{n}{k}\right)^{\frac{2}{t}}\leq 2\). Furthermore, it is easy to check that \((1-\epsilon)^{(1-\epsilon)}\) is decreasing on \(\left(0,\frac{1}{2}\right]\) and hence
\[\mathbb{E}\left[X_{b}\right]\leq\left(\frac{8\epsilon}{\sqrt{2}}\left(\frac{ bk}{n}\right)^{\epsilon}\right)^{bk}\leq\left(2^{4}\left(\frac{bk}{n}\right)^{ \epsilon}\right)^{bk}=o\left(\frac{1}{n}\right),\]
where we used our assumption \(2^{4}\left(\frac{bk}{n}\right)^{\epsilon}\leq\frac{1}{2}\) and \(bk=\omega(\log n)\). In particular, since \(b\leq 2^{-\frac{5}{\epsilon}}\frac{n}{k}\leq n\), we can conclude by a union bound over all possible values of \(b\) that whp there are no edge sets violating the first part of the lemma.
To show the second statement, suppose that \(B\subseteq E(G)\) is given. If \(|B|\leq\frac{n}{2^{10}k}\), then
\[\left(\frac{|B|k}{n}\right)^{\frac{1}{2}}\leq 2^{-5},\]
and so by the first part of the lemma with \(\epsilon=\frac{1}{2}\), we have
\[|V_{B}|\geq\frac{1}{2}|B|k\geq 2^{-12}\min\{|B|k,n\}.\]
If \(|B|>\frac{n}{2^{10}k}\) we simply pick a largest subset \(B^{\prime}\subset B\) such that \(b^{\prime}:=|B^{\prime}|\leq\frac{n}{2^{10}k}\). Then, \(b^{\prime}>\frac{n}{2^{10}k}-1\geq\frac{n}{2^{11}k}\), for large enough \(n\). By the previous observation, it follows that
\[|V_{B}|\geq|V_{B^{\prime}}|\geq b^{\prime}k/2\geq\frac{n}{2^{12}}\geq 2^{-12} \min\{|B|k,n\},\]
proving also the second statement.
We note that an immediate corollary of Lemmas 4.1 and 4.2 is that not too large sets in \(G^{k}(n,p)\) have relatively uniform vertex expansion.
**Lemma 4.3**.: _Assume the parameters of \(G=G^{k}(n,p)\) are such that \(\frac{d}{k}=\omega(\log^{3}n)\), \(k=\omega(\log n)\) and \(\hat{d}\leq n\). Then whp the following holds. For every subset \(A\subseteq V(G)\) and every \(\epsilon=\epsilon(n)\in\left(0,\frac{1}{2}\right]\) such that \(\left(\frac{|A|\hat{d}}{n}\right)^{\epsilon}\leq 2^{-6}\),_
\[\left(1-\epsilon\right)\left(1-\delta\right)|A|\hat{d}\leq|N_{V}(A)|\leq(1+ \delta)\,|A|\hat{d}. \tag{4.7}\]
_Moreover, for every subset \(A\subseteq[n]\),_
\[2^{-16}\min\left\{|A|\hat{d},n\right\}\leq|N_{V}(A)|\leq 2^{12}|A|\hat{d}. \tag{4.8}\]
Proof.: We start by showing the first statement. Given a set \(A\) and \(\epsilon\) satisfying the conditions of the corollary, we note that, since \(\hat{d}=\omega\left(k\log^{3}n\right)\) and \(\epsilon\leq\frac{1}{2}\),
\[|A|\leq 2^{-\frac{\epsilon}{2}}\frac{n}{\hat{d}}=o\left(\frac{n}{k\log n} \right).\]
Hence we can apply the first part of Lemma 4.1 to conclude that whp
\[\left(1-\delta\right)|A|\frac{\hat{d}}{k}\leq|N_{E}\left(A\right)|\leq(1+ \delta)\,|A|\frac{\hat{d}}{k}. \tag{4.9}\]
If we let \(B=|N_{E}\left(A\right)|\) then it is immediate that
\[|N_{V}(A)|=|V_{B}|\leq k|B|\leq(1+\delta)\,|A|\hat{d}.\]
On the other hand, by (4.9) and our assumption on \(|A|\)
\[\left(\frac{|B|k}{n}\right)^{\epsilon}\leq\left((1+\delta)\,|A|\frac{\hat{d} }{n}\right)^{\epsilon}\leq 2\left(|A|\frac{\hat{d}}{n}\right)^{\epsilon}\leq 2^{-5}.\]
Hence we can apply Lemma 4.2 to conclude that
\[|N_{V}(A)|=|V_{B}|\geq(1-\epsilon)|B|k\geq(1-\epsilon)\,(1-\delta)\,|A|\hat{d}\]
as claimed.
To show the second statement, let \(A\subseteq[n]\) and assume first that \(|A|\hat{d}\leq 2^{-12}\,n\). Then, by the first statement with \(\epsilon=\frac{1}{2}\),
\[\frac{1}{4}|A|\hat{d}\leq\frac{1}{2}\left(1-\delta\right)|A|\hat{d}\leq|N_{V} (A)|\leq(1+\delta)\,|A|\hat{d}\leq 2|A|\hat{d},\]
showing the second statement in this case. On the other hand, for \(|A|\hat{d}>2^{-12}\,n\), note first the trivial upper bound
\[|N_{V}\left(A\right)|\leq n\leq 2^{12}|A|\hat{d}.\]
For the lower bound, we can apply the second part of Lemma 4.1 to conclude that
\[|N_{E}\left(A\right)|\geq 2^{-4}\min\left\{|A|\frac{\hat{d}}{k},\frac{n}{k} \right\}.\]
Setting \(B=N_{E}\left(A\right)\) it now follows from Lemma 4.2 that
\[|N_{V}(A)|=|V_{B}|\geq 2^{-12}\min\left\{|B|k,n\right\}\geq 2^{-16}\min\left\{ |A|\hat{d},n\right\},\]
concluding the proof.
Note that, in particular, Lemma 4.3 implies that \(|N_{V}\left(v\right)|\) is roughly \(\hat{d}\) for every vertex \(v\) of \(G^{k}(n,p)\). To prove Theorem 1.7 we will first show that neighbourhoods in \(G^{k}(n,p)\) expand well in terms of \(\hat{d}\) and then finally conclude by showing that \(d\) is whp close to \(\hat{d}\).
Proof of Theorem 1.7.: We note first that by our assumptions on \(k\) and \(p\), \(G^{k}(n,p)\) satisfies the conditions of Lemmas 4.1, 4.2 and 4.3. We therefore assume in what follows that the conclusions of these lemmas hold deterministically in \(G^{k}(n,p)\).
For \(i\in\{1,2,3\}\), we say a graph \(G\) satisfies Property (**A.l**)', if there exists a constant \(0<\xi_{i}^{\prime}\leq 1\) such that \(G\) satisfies Property (**A.l**) for this constant, but when \(d\) is replaced by \(\hat{d}\) in Definition 2.1. We will start by showing that \(G^{k}(n,p)\) satisfies the Properties (**A.l**)'. Taking a minimum over all respective constants, we obtain a universal constant \(\xi^{\prime}\) such that \(G\) satisfies all properties (**A.l**)' for this constant. We conclude by showing that \(\hat{d}\) lies sufficiently close to \(d\) and in particular that properties (**A.l**)' imply properties (**A.l**) for a universal constant \(\xi\) that is only a constant factor smaller than \(\xi^{\prime}\).
Our first step will be to show that the size of the vertex-neighbourhoods in \(G^{k}(n,p)\) grow relatively uniformly for small enough sets, by inductively applying Lemma 4.3. However, we will need to carefully pick the parameter \(\epsilon\) in each step so that the cumulative error in these approximations is not too large.
Let \(A\subseteq[n]\) with \(a:=|A|\) and let \(r\in\mathbb{N}\) be such that \(a\hat{d}^{r}\leq\frac{n}{2\log n}\). Note that, since \(\hat{d}=\omega\left(\log^{4}n\right)\), it follows that \(r\leq\frac{\log n}{4\log\log n}\). Our aim will be to show the following
\[2^{-5}a\hat{d}^{r}\leq\left|N_{V}^{r}\left(A\right)\right|\leq 2a\hat{d}^{r}. \tag{4.10}\]
In particular, note that if we take \(A=\{v\}\) to be a single vertex then, since \(\sqrt{n}\leq\sqrt{n\frac{d}{\log^{4}n}}\leq\frac{n}{2\log n}\), it follows from (4.10) that Property (**A.l**)' holds for \(v\) with \(\xi_{1}^{\prime}=2\).
In order to apply Lemma 4.3, let us set \(\epsilon_{0}=0\) and
\[\epsilon_{i}\coloneqq\frac{5}{\log n-\log\left(2a\hat{d}^{i}\right)}\quad \text{for }1\leq i\leq r. \tag{4.11}\]
Note that, since \(a\hat{d}^{i}\leq\frac{n}{2\log n}\), it follows that \(\epsilon_{i}=o(1)\) for each \(i\leq r\).
We claim inductively that the following bound holds for each \(0\leq i\leq r\):
\[\prod_{j=0}^{i}\left(1-\epsilon_{j}\right)\left(1-\delta\right)^{i}a\hat{d}^ {i}\leq|N_{V}^{i}\left(A\right)|\leq\left(1+\delta\right)^{i}a\hat{d}^{i}, \tag{4.12}\]
where the statement is clear for \(i=0\).
Suppose that (4.12) holds for some \(i<r\). Then, since \(i<r=o\left(\frac{1}{\delta}\right)\), we have
\[\frac{1}{2}\prod_{j=0}^{i}\left(1-\epsilon_{j}\right)a\hat{d}^{i}\leq\prod_{j =0}^{i}\left(1-\epsilon_{j}\right)\left(1-\delta\right)^{i}a\hat{d}^{i}\leq \left|N_{V}^{i}\left(A\right)\right|\leq\left(1+\delta\right)^{i}a\hat{d}^{i} \leq 2a\hat{d}^{i} \tag{4.13}\]
and so
\[\left(\frac{\left|N_{V}^{i}\left(A\right)\right|\hat{d}}{n}\right)^{\epsilon_ {i+1}}\leq\left(\frac{2a\hat{d}^{i+1}}{n}\right)^{\frac{5}{\log n-\log\left(2a \hat{d}^{i+1}\right)}}\leq 2^{-6}.\]
Hence we can apply Lemma 4.3 to \(N_{V}^{i}\left(A\right)\) to conclude that
\[\prod_{j=0}^{i+1}\left(1-\epsilon_{j}\right)\left(1-\delta\right)^{i+1}a\hat{d }^{i+1}\leq\left|N_{V}\left(N_{V}^{i}\left(A\right)\right)\right|=\left|N_{V}^ {i+1}\left(A\right)\right|\leq\left(1+\delta\right)^{i+1}a\hat{d}^{i+1},\]
and so the induction step holds.
In particular, taking \(i=r\), it follows from (4.13) that
\[\frac{1}{2}\prod_{j=0}^{r}\left(1-\epsilon_{j}\right)\hat{d}^{r}\leq\left|N_{V}^ {r}\left(A\right)\right|\leq 2\hat{d}^{r}. \tag{4.14}\]
Hence it remains to bound the term \(\prod_{j=0}^{r}\left(1-\epsilon_{j}\right)\). The following claim, whose verification we defer to the end of the proof, provides such a bound.
**Claim 3**.: _Let \(a\leq n\) and \(r\) be such that \(ad^{r}\leq\frac{n}{2\log n}\), and let \(\epsilon_{i}\) be defined as in (4.11). Then_
\[\prod_{j=0}^{r}\left(1-\epsilon_{j}\right)\geq 2^{-4}. \tag{4.15}\]
It is now clear that (4.14) and Claim 3 together imply (4.10).
Now let us turn to Property **(A.2)**'. Let us fix a subset \(A\subseteq[n]\) with \(a:=|A|\) and \(r\in\mathbb{N}\). We let \(r_{0}=\min\left\{r,\max\left\{i:|A|\hat{d}^{i}\leq\frac{n}{2\log n}\right\}\right\}\) and let \(A^{\prime}=N_{V}^{r_{0}}(A)\). Then, by (4.10), we have
\[2^{-5}ad^{r_{0}}\leq\left|A^{\prime}\right|\leq 2ad^{r_{0}}. \tag{4.16}\]
In particular, if \(r=r_{0}\), then (4.16) implies that **(A.2)**' holds for \(A\) with \(\xi_{2}^{\prime}=2^{-5}\).
If \(r=r_{0}+1\), then by (4.16) and the second part of Lemma 4.3
\[2^{-21}\min\left\{ad^{r},n\right\}\leq|N_{V}\left(A^{\prime}\right)|=|N_{V}^{r }\left(A\right)|\leq 2^{13}ad^{r}, \tag{4.17}\]
implying that Property **(A.2)**' holds for \(A\) with \(\xi_{2}^{\prime}=2^{-21}\). Finally, for \(r\geq r_{0}+2\) note that since \(\hat{d}=\omega(\log^{3}n)\), we have \(ad^{r}\geq n\), and so applying the second part of Lemma 4.3 to \(N_{V}\left(A^{\prime}\right)\) yields together with (4.17)
\[2^{-37}n=2^{-37}\min\left\{ad^{r},n\right\}=2^{-37}\min\left\{ad^{r_{0}+2},n \right\}\leq|N_{V}^{2}\left(A^{\prime}\right)|\leq|N_{V}^{r}\left(A\right)| \leq n\leq ad^{r},\]
hence Property **(A.2)**' holds also in this case with \(\xi_{2}^{\prime}=2^{-37}\).
Let us finally turn to Property **(A.3)**'. Let \(B\subseteq E(G)\). By the second part of Lemma 4.2, we conclude that
\[|V_{B}|\geq 2^{-12}\min\left\{|B|k,n\right\}.\]
However, since \(N_{V}^{r}(B)=N_{V}^{r}(V_{B})\), we can apply **(A.2)**' to \(V_{B}\) to conclude that
\[|N_{V}^{r}\left(B\right)| =|N_{V}^{r}\left(V_{B}\right)|\geq 2^{-37}\min\left\{|V_{B}|\hat{d}^{ r},n\right\}\] \[\geq 2^{-37}\min\left\{2^{-12}\min\left\{|B|k,n\right\}\hat{d}^{r},n\right\}\] \[\geq 2^{-49}\min\left\{|B|k\hat{d}^{r},n\right\},\]
and hence \(G^{k}(n,p)\) satisfies Property **(A.3)**' with \(\xi_{3}^{\prime}=2^{-49}\). It remains to show that Properties **(A.l)**' imply Properties **(A.l)**. To this end we note that if \(r_{1}=\max\left\{i:d^{i}\leq n\right\}\) then it is sufficient to show that Properties **(A.l)** hold for all \(r\leq r_{1}\). It is clear then that the result follows from the following claim, which we again verify at the end of the proof.
**Claim 4**.: _For all \(r\leq r_{1}\), we have_
\[\frac{1}{2^{37}}d^{r}\leq\hat{d}^{r}\leq 2^{37}d^{r}. \tag{4.18}\]
This completes the proof.
It remains to prove Claims 3 and 4.
Proof of Claim 3.: Note first that as \(\epsilon_{j}\leq\frac{1}{2}\) for all \(1\leq j\leq r\), we can estimate the product as
\[\prod_{j=0}^{r}\left(1-\epsilon_{j}\right)=\prod_{j=1}^{r}\left(1-\epsilon_{j} \right)\geq\exp\left(-2\sum_{j=1}^{r}\epsilon_{j}\right). \tag{4.19}\]
By reversing the order of summation we can write this sum as
\[\sum_{j=1}^{r}\epsilon_{j}=\sum_{j=1}^{r}\frac{5}{\log n-\log\left(2a\hat{d}^{ j}\right)}=5\sum_{j=1}^{r}\frac{1}{\log n-\log\left(2a\hat{d}^{r-j+1}\right)}. \tag{4.20}\]
To bound the term \(\log\left(2a\hat{d}^{r-j+1}\right)\) from above, we first note that since \(a\hat{d}^{r}\leq\frac{n}{2\log n}\), we have
\[\log\left(2a\hat{d}^{r}\right)\leq\log\left(\frac{n}{\log n}\right)=\log n- \log\log n.\]
Furthermore, since \(\hat{d}=\omega\left(k\log^{3}n\right)=\omega\left(\log^{4}n\right)\), it follows that for all \(j\in\mathbb{N}\),
\[\log\left(\hat{d}^{j}\right)\geq 4j\log\log n,\]
which, together with the previous equation, yields that
\[\log\left(2a\hat{d}^{r_{0}-j+1}\right)\leq\log n-\log\log n-4(j-1)\log\log n.\]
Combining this bound with (4.20) we obtain:
\[\sum_{j=1}^{r_{0}}\epsilon_{j}\leq 5\sum_{j=1}^{r_{0}}\frac{1}{(4j-3)\log\log n }\leq\frac{5}{\log\log n}\left(1+\sum_{j=1}^{r_{0}}\frac{1}{4j}\right)\leq \frac{5}{4}\cdot\frac{5+\log r_{0}}{\log\log n}\leq\frac{5}{4}, \tag{4.21}\]
where the penultimate inequality comes from a standard bound on the harmonic sum and the last inequality uses \(r_{0}\leq\frac{\log n}{e^{\epsilon}}\). Plugging (4.21) into (4.19) we obtain
\[\prod_{j=0}^{r_{0}}\left(1-\epsilon_{j}\right)\geq\exp\left(-\frac{5}{2} \right)\geq\frac{1}{2^{4}},\]
which concludes the proof of (4.15).
Proof of Claim 4.: Clearly, it is sufficient to prove the claim for \(r=r_{1}\), and we note that \(r_{1}\geq 1\), since \(\hat{d}\leq n\). If \(r_{1}=1\), we get from Property **(A.2)**'
\[\frac{1}{2^{37}}\hat{d}\leq|N_{V}\left(v\right)|\leq 2^{37}\hat{d},\]
which immediately implies (4.18).
Otherwise, note that for each vertex \(v\in[n]\) and each \(\epsilon\in\left(0,\frac{1}{2}\right]\) such that \(\left(\frac{\hat{d}}{n}\right)^{\epsilon}\leq 2^{-6}\), we have by Lemma 4.3 that
\[\left(1-\epsilon\right)\left(1-\delta\right)\hat{d}\leq|N_{V}\left(v\right)| \leq\left(1+\delta\right)\hat{d}.\]
Therefore, recalling that \(d=d\left(G^{k}(n,p)\right)=\frac{1}{n}\sum_{v\in[n]}|N_{V}\left(v\right)|\), we have
\[\left(1-\epsilon\right)\left(1-\delta\right)\hat{d}\leq d\leq\left(1+\delta \right)\hat{d}. \tag{4.22}\]
As \(r_{0}\geq 2\), it follows that \(\hat{d}\leq\sqrt{n}\) and so we can take \(\epsilon=\delta\) and by (4.22) obtain the following:
\[\frac{1}{2}\hat{d}^{r}\leq\left(1-\epsilon\right)^{r}\left(1-\delta\right)^{r} \hat{d}^{r}\leq d^{r}\leq\left(1+\delta\right)^{r}\hat{d}^{r}\leq 2\hat{d}^{r}.\]
## 5. Concluding discussion
In Theorem 1.5 we prove Conjecture 1.4 for \(G^{k}(n,p)\) in dense regimes up to a log-factor. It would be interesting to remove this log-factor to match the conjectured bound. We note that in the case of \(G(n,p)\), Pralat and Wormald [23] gave a 2-stage cop strategy with similarities to the vertex surrounding strategy of Luczak and Pratat [21] to show that Meyniel's Conjecture holds whp in \(G(n,p)\) in the dense regime. Similar ideas might be of use in order to remove this log-factor.
Furthermore, our theorems give upper bounds on the cop number. It would be interesting to know if these upper bounds are close to tight. Luczak and Pralat [21] gave an escape strategy for the Robber which matches their upper bound up to logarithmic factors, which also uses in a critical way the regular (vertex-)expansion properties of \(G(n,p)\). However, whilst there are already some technical issues to overcome with extending their analysis of such a strategy to the case of \(G^{k}(n,p)\) with \(k=\omega(1)\), the robber must also have to take into account in some way the 'edge-expansion' of \(G^{k}(n,p)\), as we know there are regimes of \(p\) where the cops can do strictly better than the bounds given by the vertex surrounding strategy. Hence some new ideas will be necessary to find a corresponding robber strategy in the hypergraph game.
More generally, it seems the game of Cops and Robber on hypergraphs has been much less well studied than the graphical counterpart. In particular, it would be interesting to find some natural classes of hypergraphs on which the cop number is bounded. Motivated by the classic result of Aigner and Fromme [2] on the cop number of planar graphs, it is natural to ask such questions in relation to geometric notions of embeddability.
However, even in the case \(k=3\), it is clear that there are \(k\)-graphs which are embeddable without crossings in \(\mathbb{R}^{k}\) but have arbitrary large cop number. Hence, in order to bound the cop number we need to make further assumptions on the structure of the \(k\)-graphs, and a natural one in the case of \(3\)-graphs would be to ask that the \(3\)-graph, when viewed as a \(2\)-dimensional simplicial complex, is simply connected.
**Question 5.1**.: Is there a constant \(K\) such that every simply connected \(3\)-graph \(G\) which can be embedded without crossings in \(\mathbb{R}^{3}\) satisfies \(c(G)\leq K\)?
For simply connected \(3\)-graphs, it is known that embeddability in \(\mathbb{R}^{3}\), as with Kuratowski's theorem, has a characterisation in terms of excluded minors [8, 9], here in terms of _space minors_. In the case of graphs, a result of Andreae [3] shows that excluding a fixed minor bounds the cop number of a graph, and it would be interesting to know if, for an appropriate notion of minor, this also holds for the hypergraph game.
**Question 5.2**.: Let \(H\) be a fixed \(k\)-graph. Does there exist a constant \(K:=K(H)\) such that every \(k\)-graph \(G\) with no \(H\)-minor' satisfies \(c(G)\leq K\)?
Finally, we note that there are perhaps other natural ways to extend the game of Cops and Robber to the hypergraph setting. One particularly natural variant would be to play the game on the _edges_ of the hypergraph, rather than the vertices. That is, the cops and robber live on the edges of the hypergraph and are allowed to move to incident edges, instead of adjacent vertices. Let us denote the minimum number of cops needed to win in the _edge-game_ on \(G\) by \(c_{e}(G)\). In the case of graphs, the edge-game was considered by Dudek, Gordinowicz and Pralat [12], who showed that it is closely related to the vertex-game, in that for any graph \(G\),
\[\left\lceil\frac{c(G)}{2}\right\rceil\leq c_{e}(G)\leq c(G)+1. \tag{5.1}\]
In particular, \(c(G)\) and \(c_{e}(G)\) cannot differ by more than a fixed multiplicative constant. In the hypergraph game, it is less clear whether \(c(G)\) and \(c_{e}(G)\) can have wildly different behaviour.
**Question 5.3**.: Is there a function \(f:\mathbb{N}\to\mathbb{R}\) such that for any \(k\)-graph \(G\),
\[\frac{1}{f(k)}c(G)\leq c_{e}(G)\leq f(k)c(G)?\] |
2305.14075 | Lips: p-adic and singular phase space | I present new features of the open-source Python package lips, which
leverages the newly developed pyadic and syngular libraries. These developments
enable the generation and manipulation of massless phase-space configurations
beyond real kinematics, defined in terms of four-momenta or Weyl spinors, not
only over complex numbers ($\mathbb{C}$), but now also over finite fields
($\mathbb{F}_p$) and p-adic numbers ($\mathbb{Q}_p$). The package also offers
tools to evaluate arbitrary spinor-helicity expressions in any of these fields.
Furthermore, using the algebraic-geometry submodule, which utilizes Singular
[1] through the Python interface syngular, one can define and manipulate ideals
in spinor variables, enabling the identification of irreducible surfaces where
scattering amplitudes have well-defined zeros and poles. As an example
application, I demonstrate how to infer valid partial-fraction decompositions
from numerical evaluations. | Giuseppe De Laurentis | 2023-05-23T13:54:40Z | http://arxiv.org/abs/2305.14075v1 | # Lips: \(p\)-adic and singular phase space
###### Abstract
I present new features of the open-source Python package lips, which leverages the newly developed pyadic and syngular libraries. These developments enable the generation and manipulation of massless phase-space configurations beyond real kinematics, defined in terms of four-momenta or Weyl spinors, not only over complex numbers (\(\mathbb{C}\)), but now also over finite fields (\(\mathbb{F}_{p}\)) and \(p\)-adic numbers (\(\mathbb{Q}_{p}\)). The package also offers tools to evaluate arbitrary spinor-helicity expressions in any of these fields. Furthermore, using the algebraic-geometry submodule, which utilizes Singular[1] through the Python interface syngular, one can define and manipulate ideals in spinor variables, enabling the identification of irreducible surfaces where scattering amplitudes have well-defined zeros and poles. As an example application, I demonstrate how to infer valid partial-fraction decompositions from numerical evaluations.
## 1 Introduction
High-multiplicity loop-level amplitude computations involve significant algebraic complexity, which is usually side-stepped via numerical routines. Yet, when available, compact analytical expressions can display improved numerical stability and reduced evaluation times. Moreover, much of the recent progress in the computation of loop-corrections to scattering amplitudes has been achieved thanks to finite-field methods [2, 3]. As these numerical computations are unsuited for direct phenomenological applications, analytic expressions must be recovered, so that they can then be evaluated with floating-point numbers.
An important role to manifest the analytic properties of scattering amplitudes is played by the spinor-helicity formalism. A classical example is that numerators in gauge-theoretic amplitudes, such as quantum chromodynamics, mitigate the degree of divergences which would naively be expected from Feynman propagators, namely from factors of \(1/s_{ij}\) to \(1/\sqrt{s_{ij}}\). Relaxing the constraint of real kinematics, one realizes that these divergences are in fact either purely holomorphic spinor contractions \(\langle ij\rangle\) or purely anti-holomorphic ones \([ij]\) (with \(s_{ij}=\langle ij\rangle[ji]\)).
It has been shown that significant insights into the analytic structure of amplitudes can be obtained from tailored numerical evaluations in singular limits [4, 5]. For example, the rational prefactors appearing in the planar two-loop amplitude for the process \(0\to q^{+}\bar{q}^{-}\gamma^{+}\gamma^{+}\gamma^{+}\) with a closed fermion loop [6, 7] can be simplified to just the following two functions
\[\frac{\langle 23\rangle[23]\langle 24\rangle[34]}{\langle 15 \rangle\langle 34\rangle\langle 45\rangle\langle 4|1+5|4]}+(45\to 54), \tag{1}\] \[\frac{\langle 13\rangle[13\rangle[24\rangle[45]}{\langle 13 \rangle\langle 34\rangle\langle 45\rangle\langle 4|1+3|4]}+(45\to 54)-\frac{ \langle 12\rangle[13]\langle 23\rangle^{2}}{\langle 13\rangle\langle 24\rangle \langle 25\rangle\langle 34\rangle\langle 35\rangle}, \tag{2}\]
together with those obtained by closing the vector space generated by these two functions under the permutations of the photons (legs 3, 4, and 5). In a soon-to-appear paper, we obtain analogous expressions for the full-color two-loop \(0\to q\bar{q}\gamma\gamma\gamma\) amplitude [8].
## 2 Lips: a phase-space generator for theory computations
Phase-space generators in high-energy physics traditionally describe the kinematics of physical processes at colliders, meaning they provide real-valued phase-space configurations. Yet, from a theoretical standpoint, the analytic properties of scattering amplitudes in perturbative quantum field theory are better understood in the complex plane. This motivates the development of a phase-space generator that exploits the additional freedom of complex kinematics.
The package lips (short for Lorentz invariant phase space) provides a phase-space generator and manipulator that is tailored to the needs of modern theoretical calculations. The package is designed to work for processes of arbitrary multiplicity, although at present it can easily handle massless particles only. Nevertheless, massive particles can already be described in terms of a pair of massless decay products. Use cases include: 1) generation of phase-space points over complex numbers (\(\mathbb{C}\)), finite fields (\(\mathbb{F}_{p}\)), and \(p\)-adic numbers (\(\mathbb{Q}_{p}\)); 2) on-the-fly evaluation of arbitrary spinor-helicity expressions; 3) construction of special kinematic configurations; 4) algebro-geometric analysis of irreducible varieties in kinematic space.
Live examples powered by binder[9] are accessible through the badge on the GitHub page.1
Footnote 1: At the URL github.com/GDeLaurentis/lips
### Installation
The required language is Python 3, with version \(\geq 3.8\) being recommended. The package is available through the Python Package Index,2 and thus can be installed via pip.
Footnote 2: At the URL pypi.org/project/lips/
```
pipinstall-upgradelips
```
Alternatively, the source code can be cloned from GitHub, and then installed with pip.
```
pipinstall-epath/to/repository
```
The option -e ensures that changes to the source code are reflected without having to reinstall the package, for instance after invoking git pull to download an update.
### Dependencies
The Python ecosystem provides a rich variety of third-party, open-source libraries for scientific computing. Among these lips depends on NumPy[10], whose ndarray class is used to describe all Lorentz tensors, mpmath[11], from which multi-precision real and complex numbers are imported, and sympy[12], for symbolic manipulations. Dependencies are declared in the file setup.py and are installed automatically through pip. The only exception is Singular[1], which is optional, and needs to be installed separately. Additionally two new, open-source dependencies are used, namely pyadic and symgular. They can be used independently of lips.
#### 2.2.1 pyadic
Finite fields have become a staple of multi-loop computations. More recently, the idea to use \(p\)-adic numbers was introduced, in order to rescue a non-trivial absolute value while maintaining integer arithmetic [5]. The package pyadic provides classes for finite fields (ModP) and \(p\)-adic numbers (PAdic), instantiated as shown, as well as related algorithms.
```
ModP(number,prime) PAdic(number,prime,digits,valuation=0)
```
Besides standard arithmetic operations, square roots are also available in the functions finite_field_sqrt and padic_sqrt. These are not guaranteed to be in the field and may
return a FieldExtension object. Only field extensions by a single square root are currently implemented. Moreover, the \(p\)-adic logarithm is also implemented in the function padic_log.
For \(p\)-adic numbers, numerical precision is explicitly tracked as an error \(\mathcal{O}(p^{k})\) term, meaning all displayed digits are, by default, significant digits. This allows one to perform computations in singular configurations while keeping track of the numerical uncertainty. Nevertheless, the parameter fixed_relative_precision can be switched to True to emulate the usual floating-point behavior, with numerical noise being appended to numbers in case of precision loss.
Rational reconstruction algorithms from \(\mathbb{F}_{p}\) and \(\mathbb{Q}_{p}\) to \(\mathbb{Q}\) are also provided with the function rationalise, which takes an optional keyword argument algorithm to toggle between maximal quotient reconstruction (MQRR) and lattice reduction (LGRR) [13, 14, 15].
It is also worth mentioning that an alternative implementation of finite fields is available in the package galois[16]. This is particularly useful when dealing with large ndarrays.
#### 2.2.2 Singular/syngular
The submodule algebraic_geometry of lips requires Singular[1]. To facilitate computations, an object-oriented Python interface is provided in the package syngular[17]. Like pyadic, this can be used independently of lips. The main classes currently implemented are Ideal, Ring and QuotientRing. They provide easy access to several functions implemented in Singular in a pythonic way. For instance, we have ideal addition (\(\star\)), product (\(\star\)), quotient (\(/\)), membership (in), intersection (\(\&\)), equality (==), _etc_. More methods are available with self-explanatory names, e.g. primary_decomposition, which calls primdecGTZ.
### Basic usage
The main class provided by lips is the Particles class. It describes the phase space of massless particles with given multiplicity in 4 dimensions. By default, the 4-momenta are taken to be complex valued. A specific choice of number field can be passed as a keyword parameter.
Particles(multiplicity, field=Field(name, prime, digits)) The generated phase-space point will satisfy on-shell relations (\(p_{i}^{\mu}p_{i,\mu}=0\ \forall\ i\)) and momentum conservation (\(\sum_{i}p_{i}^{\mu}=0\)). Valid choices for the field keyword are multi-precision complex numbers (\(\mathbb{C}\)), Gaussian rationals (\(\mathbb{Q}[i]\)), finite fields (\(\mathbb{F}_{p}\)), and \(p\)-adic numbers (\(\mathbb{Q}_{p}\)).
Field("mpc", 0, 300) Field("gaussian rational", 0, 0)
Field("finite field", 2147483647, 1) Field("generic", 2147483629, 3)
Finite fields and \(p\)-adic numbers are taken from a specific slice of complex phase space, namely that with \(E,p_{x},p_{z}\in\mathbb{Q}\) and \(p_{y}\in i\mathbb{Q}\). This is equivalent to a change of metric to \(\mathrm{diag}(1,-1,1,-1)\). Depending on the choice of field, some parameters are discarded (see Table 1).
The Particles class is a 1-indexed subclass of the Python built-in type list, with Particle class entries. While this re-indexing may be unusual in Python, it is the natural choice to match the notation used to write scattering amplitudes. The Particle objects have several attributes, each corresponding to one of the representations of the Lorentz group. For instance, we have right spinors with index up (r_sp_u), left spinors with index down (l_sp_d), rank-two spinors (r2_sp),
\begin{table}
\begin{tabular}{l c c c} \hline \hline & ”mpc” & ”gaussian rational” & ”finite field” & ”padic” \\ \hline prime & ✗ & ✗ & ✓ & ✓ \\ digits & ✓ & ✗ & ✗ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Used (✓) and discarded (✗) arguments for Field.
four momenta (four_mom), _etc._ See Table 2 for a more schematic representation. Updating the values for one of these properties automatically updates the values of the rest. The spinor conventions employed are fairly common in the field, for more details please see ref. [18, Section 2.2].
Another basic functionality provided in lips is the evaluation of arbitrary spinor-helicity expressions. This works seamlessly for all of the above defined fields. Understood symbols include arbitrary spinor strings, with limited support for open-index expressions, Mandelstam variables with any numbers of indices, Gram determinants of three-mass triangle diagrams (\(\Delta\_{\text{ij}}|\texttt{kl}|\texttt{mn}\)),3 and traces involving \(\gamma^{5}\) (tr5_ijkl). This feature can be accessed is via the __call__ magic method of the Particle class, as shown.
Footnote 3: See Kallén function or Heron’s formula.
Regular expressions (with the package re) are used to split the string into individual invariants, which then form an abstract syntactic tree (with the package ast). The individual invariants are computed in the Particles.compute method. For simplicity's sake, greater-than and less-than symbols can be used in lieu of the angle brackets to denote holomorphic spinor contractions.
Another useful method is the Particles.image method, which implements transformations under the symmetries of phase space. These are represented by a tuple whose first entry is a string representing a permutation of the external legs, followed by a Boolean denoting whether a swap of the left and right representations of the Lorentz group is needed (\(\lambda_{\alpha}\leftrightarrow\bar{\lambda}_{\dot{\alpha}}\)). For instance, the symmetry of the expression in Eq. 1 would be denoted as ('12354', False).
## 3 Ideals in spinor space
Before constructing singular phase-space configurations, let us consider how to make these special configurations unambiguous. To this aim, we rely on the algebro-geometric concept of an ideal.
In lips, two classes are used to represent ideals: LipsIdeal and SpinorIdeal. The former represents covariant ideals in the ring of spinor components, while the latter represents invariant ideals in the ring of spinor contractions [5]. Both ideal types are subclasses of the Ideal class of the interface syngular, through which one can access many algorithms implemented in Singular. Despite most applications of interest deal with Lorentz invariants, it is generally convenient to work with spinor components, as the ideals are generated by fewer polynomials.
To instantiate a LipsIdeal object one has to declare the multiplicity of phase space, and a set of generating polynomials. For instance, taking invariants which appear in Eq. (1), we can write
J = LipsIdeal(5, ("<4|1+5|4]", "<5|1+4|5]")) Momentum conservation is added by default. The method make_analytical_d of the Particle class is used to replace lower-index spinors with sympy symbols. Tensor contractions are then computed as in numeric cases. The resulting expressions are then passed to Singular for
\begin{table}
\begin{tabular}{c|c|c||c|c|c} \hline \hline Lorentz repr. & symbol & Particle property & Lorentz repr. & symbol. & Particle property \\ \hline \multirow{2}{*}{\((0,1/2)\)} & \(\lambda^{\alpha}\) & r\_sp\_u & & \(p^{\mu}\) & four\_mom \\ & \(\lambda_{\alpha}\) & r\_sp\_d & & \(p_{\mu}\) & four\_mom\_d \\ \hline \multirow{2}{*}{\((1/2,0)\)} & \(\bar{\lambda}^{\dot{\alpha}}\) & l\_sp\_u & & \(p^{\dot{\alpha}\alpha}\) & r2\_sp \\ & \(\bar{\lambda}_{\dot{\alpha}}\) & l\_sp\_d & & \(\bar{p}_{\alpha\dot{\alpha}}\) & r2\_sp\_b \\ \hline \hline \end{tabular}
\end{table}
Table 2: Representations of the Lorentz group and associated properties of the Particle class.
subsequent manipulations. The default ring is a polynomial ring, while the quotient ring by the momentum-conserving ideal can be accessed via the method to_mom_cons_qring. This modifies the ideal in place. Translation to a SpinorIdeal, i.e. to the Lorentz invariant subring, can be computed with the method invariant_slice. This relies on an elimination theory algorithm.
To understand the geometry of a variety associated to an ideal we need to compute its primary decomposition. Prime ideals will then correspond to irreducible varieties, i.e. phase space configuration where amplitudes have well-defined poles and zeros. To obtain the prime ideals associated with a given ideal, such as the above defined J, one can (try to) compute a primary decomposition via Singular. This approach is in general insufficient to map out the irreducible varieties, due to the large number of variables in the underlying ring. However, we can use physical understanding to gain insights into the primary decomposition. For instance, the ideal J has five branches, but only one is non-trivial.
\begin{tabular}{l} K = LipsIdeal(5, ("<14>", "<15>", "<45>", "[23]" )) \\ L = LipsIdeal(5, ("<12>", "<13>", "<14>", "<15>", "<23>", "<24>", "<25>", "<34>", "<35>", "<45>")) \\ M = LipsIdeal(5, ("<4|1+5|4]", "<5|1+4|5", "|1<14><15>+|4|<14><45>-|5|<45><15>", "|1>[14][15]+|4>[14][45]-|5>[45][15]") \\ \end{tabular}
The ideals K and L are essentially triple-collinear configurations for legs 1, 4 and 5. Once these are known, the ideal M can be easily obtained by computing ideal quotients. To check the decomposition, we can compute an intersection of ideals with the operator &, which calls the command intersect of Singular, and verify the equality, via reduced Grobner bases.
\begin{tabular}{l} assert J == K \& K("12345", True) \& L \& L("12345", True) \& M \\ \end{tabular}
Note the use of another magic method (__call__) to compute the image of an ideal under a symmetry of phase space, similarly to the Particles.image method. By itself, this assertion is not sufficient to prove a primary decomposition of the ideal J. One also has to prove that the ideals K, L, M are prime. An efficient way to do this is via a prime test implement in syngular, which can be accessed via the method test_primality. This assumes equi-dimensional ideals.
## 4 Singular varieties
In lips a singular variety, irrespective of its dimension, is represented by single, generic, phase-space point--i.e. a zero-dimensional variety--embedded in the multi-dimensional variety. In this context, generic means that the result of evaluations at the chosen phase-space point should have an absolute value representative of evaluations at most points on the variety, possibly barring special, higher-codimension embedded varieties. This is guaranteed with high probability by picking the point at random, while satisfying the constraints that define the variety.
Two facilities are provided for the generation of finely-tuned phase-space points on specific varieties. First, in the submodule hardcoded_limits two methods, _set and _set_pair, are implemented. These methods efficiently generate points on varieties of codimension one and two respectively. However, they do not know about primary decompositions. As such, in case a variety is multi-branched, a branch will be chosen at random. Furthermore, since the constraints are solved explicitly, only some configurations can be built this way. The second option is to use the algebraic_geometry submodule, where the method _singular_variety is implemented. This method is significantly more computationally intensive than the former, as it relies on lexicographic Groebner bases, but it allows to specify branches--i.e. irreducible sub-varieties.
For instance, the following code will generate a 3-digits 2147483647-adic phase-space point near the irreducible variety associated to the prime ideal M.
oPsM = Particles(5, field=Field("padic", 2 ** 31 - 1, 3), seed=0) oPsM..singular_variety(("\(\langle\)"4\(|\)1+5\(|\)4\(|\)", "\(\langle\)5\(|\)1+4\(|\)5\(|\)"), (1, 1), generators=M.generators) ```
The first argument specifies orthogonal directions to the variety, the second the valuations of the invariants in the first argument (in this case both proportional to the chosen prime), while the generators keyword argument specifies the branch. Asymmetric limits can also be constructed by providing unequal valuations, see ref. [19, Appendix C].
## 5 Partial fraction decompositions
Partial fraction decompositions play an important role in the computation of scattering amplitudes, both in terms of final expressions, as well as at intermediate stages, e.g. for integration-by-parts identities. Standard methods are based on symbolic computations, including Grobner bases and polynomial reduction. For instance, see ref. [20]. We can use the technology described in the previous sections to infer whether a given partial fraction decomposition is valid, before determining the analytic form of the numerator. Given denominator factors \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\),
1. compute the primary decomposition for the ideal \(\left\langle\mathcal{D}_{1},\mathcal{D}_{2}\right\rangle\);
2. generate a phase-space point near each branch of the variety \(V(\left\langle\mathcal{D}_{1},\mathcal{D}_{2}\right\rangle)\);
3. numerically evaluate the function at these points.
If the numerator vanishes on all of them, then it belongs to (the radical of) the associated ideal (Hilbert's Nullstellensatz). For instance, given the expression of Eq. (1), we can infer that the denominator factors \(\langle 4|1+5|4\rangle\) and \(\langle 5|1+4|5\rangle\) must be separable into different fractions because the numerator, considered in least common denominator form, vanishes on all 5 branches. Further constraints from the degree of vanishing can also be imposed via the Zariski-Nagata theorem [5].
Beyond partial fractionsPartial fraction decompositions deal purely with sets of denominator factors, i.e. the poles of the functions. Yet, even if no partial fraction decomposition is possible, for instance when the denominator is a single irreducible polynomial, the numerator may still have a simple structure, generally in terms of an expanded set of invariants. These new invariants can be systematically identified via primary decompositions, and the same logic described to separate poles in the denominators can also be used to identify factors of the numerators.
|
2304.03620 | Electron confinement in chain-doped TMDs: A platform for spin-orbit
coupled 1D physics | The state-of-the-art defect engineering techniques have paved the way to
realize novel quantum phases out of pristine materials. Here, through
density-functional calculations and model studies, we show that the chain-doped
monolayer transition metal dichalcogenides (TMDs), where M atoms on a single
the zigzag chains are replaced by a higher-valence transition-metal element
M$^\prime$ (MX$_2$/M$^\prime$), exhibit one-dimensional (1D) bands. These 1D
bands, occurring in the fundamental gap of the pristine material, are
dispersive along the doped chain but are strongly confined along the lateral
direction. This confinement occurs as the bare potential of the dopant chain
formed by the positively charged M$^\prime$ ions resembles the potential well
of a uniformly charged wire. These bands could show novel 1D physics, including
a new type of Tomonaga-Luttinger liquid behavior, multi-orbital Mott insulator
physics, and an unusual optical absorption, due to the simultaneous presence of
the spin-orbit coupling, strong correlation, multiple orbitals, Rashba spin
splitting, and broken symmetry. For the half-filled 1D bands, we find, quite
surprisingly, a broadening of the 1D bands due to correlation, as opposed to
the expected band narrowing. This is interpreted to be due to multiple orbitals
forming the single Hubbard band at different points of the Brillouin zone.
Furthermore, due to the presence of an intrinsic electric field along the
lateral direction, the 1D bands are Rashba spin-split and provide a new
mechanism for tuning the valley dependent optical transitions. | Mayank Gupta, Amit Chauhan, S. Satpathy, B. R. K. Nanda | 2023-04-07T12:33:38Z | http://arxiv.org/abs/2304.03620v1 | # Electron confinement in chain-doped TMDs: A platform for spin-orbit coupled 1D physics
###### Abstract
The state-of-the-art defect engineering techniques have paved the way to realize novel quantum phases out of pristine materials. Here, through density-functional calculations and model studies, we show that the chain-doped monolayer transition metal dichalcogenides (TMDs), where M atoms on a single the zigzag chains are replaced by a higher-valence transition-metal element M\({}^{\prime}\) (MX\({}_{2}\)/M\({}^{\prime}\)), exhibit one-dimensional (1D) bands. These 1D bands, occurring in the fundamental gap of the pristine material, are dispersive along the doped chain but are strongly confined along the lateral direction. This confinement occurs as the bare potential of the dopant chain formed by the positively charged M\({}^{\prime}\) ions resembles the potential well of a uniformly charged wire. These bands could show novel 1D physics, including a new type of Tomonaga-Luttinger liquid behavior, multi-orbital Mott insulator physics, and an unusual optical absorption, due to the simultaneous presence of the spin-orbit coupling, strong correlation, multiple orbitals, Rashba spin splitting, and broken symmetry. For the half-filled 1D bands, we find, quite surprisingly, a broadening of the 1D bands due to correlation, as opposed to the expected band narrowing. This is interpreted to be due to multiple orbitals forming the single Hubbard band at different points of the Brillouin zone. Furthermore, due to the presence of an intrinsic electric field along the lateral direction, the 1D bands are Rashba spin-split and provide a new mechanism for tuning the valley dependent optical transitions.
## I Introduction
Successful synthesis of atomistically controlled Van der Waals layered materials in the form of transition metal chalcogenides (TMDs) has given rise to a wide range of mesoscopic non-trivial quantum phases. These include proximity of \(p\)-wave superconductivity and charge density wave as in NbSe\({}_{2}\)[1; 2; 3], topological Weyl semimetallic nature and large magnetoresistance as in WTe\({}_{2}\)[4; 5; 6], and the exotic orbital and quantum spin Hall effects [7; 8; 9].
One of the emerging areas of research on two- dimensional (2D) TMDs is to further reduce the dimensionality and explore sub-nanoscale quantum physics. For example, the Moire bilayers of TMDs and their heterostructures allow twist angle controlled resonant effects to engineer exciton band structure [10; 11]. The lateral superlattices and nanoribbons are synthesized out of TMDs to produce edge and interfacial states [12; 13; 14; 15; 16; 17]. A recent study has proposed a typical Moire lattice out of WTe\({}_{2}\) where an overarching periodicity creates a 1D lattice for electrons residing in collective eigenstates which give rise to rarely observed exotic quantum states of Tomonaga Luttinger liquid (TLL) behavior [12]. The ribbon edge states in 1-T'-WTe\({}_{2}\) exhibiting TLL behavior is a new experimental result in this direction [15]. The TLL state is also experimentally observed in MoS\({}_{2}\) by creating mirror twin boundaries [16]. Electron correlation is an important missing factor in TMDs, and therefore, correlation driven exotic quantum phases in the area of magnetism and superconductivity are less evident in this class of compounds. To achieve the correlated electron phases, narrow bands in the vicinity of Fermi energy need to be created, and one of the ways to make it possible is to confine the electron motion by reducing the dimensionality.
Unlike the semimetallic 1T\({}^{\prime}\) phase [18], the 2H phase of TMDs has wide band gaps and hence novel quantum phases and transport can be envisaged in them by inducing mid-gap states of lower dimensions. With growing interest in doped 2D TMDs, there are experimental proposals on tunable doping mechanisms in these systems [19]. Recently, Lin _et. al._[20] reported excellent controllability for substitutional doping of the foreign atoms in 2D TMDs through low energy ion implantation techniques such as site-selective laser-assisted chemical vapor doping [21]. Furthermore, a very recent experimental work demonstrates a controlled doping strategy for TMDs based on a dislocation climb mechanism [22]. In this study, the authors were successful in forming highly doped nanostripes of Ti, V, Cr, and Fe atoms in WSe\({}_{2}\) and WS\({}_{2}\) monolayers.
In this paper, we have engineered 1D quantum states out of the semiconducting 2H phase of MoX\({}_{2}\) and WX\({}_{2}\) monolayers with X being a chalcogen. This is achieved by replacing a single chain of Mo or W along the zigzag direction with an element M\({}^{\prime}\) with one extra valence electron as shown in Fig. 1. We find these chain-doped systems, henceforth represented as MX\({}_{2}\)/M\({}^{\prime}\) to be dynamically stable. The 1D bands, depending on several other factors, build a perfect platform to induce non-trivial low-dimensional quantum phases, which may include Peierls distortion [23], topological magnons [24], charge
density waves [25], TLL [26], and 1D magnetism.
In the case of MTe\({}_{2}\)/Tc and MTe\({}_{2}\)/Re, we find that the weakly SOC driven degenerate half-filled 1D bands running along the chain make them ideal candidates for exhibiting TLL phenomena. In the magnetic phase, the effect of strong correlation can produce Mott insulating states by breaking the half-filled 1D bands to lower and upper Hubbard subbands with a gap in between. However, a new phenomenon emerges where the onsite Coulomb repulsion instead of localizing, delocalizes the lower Hubbard subbands. Upon practical realization, it can give rise to an unusual 1D quantum transport. The dopant chain breaks the reflection symmetry to introduce an intrinsic electric field along the lateral direction. This in turn, makes the 1D bands Rashba spin-split and introduces valley dependent optical transition in the system.
## II Structural and computational details
The prototype representation of the chain-doped monolayer structures (MX\({}_{2}\)/M\({}^{\prime}=\)M\({}_{n-1}\)M\({}_{1}^{\prime}\)X\({}_{2n}\)) is shown in Fig. 1 (a). In this study, the supercell approach is adopted with \(n=13\), which is found to be sufficient to induce the 1D defect bands. The phonon band structures of such systems do not show imaginary frequencies (see Fig. 2), suggesting dynamical stability. With chain doping along the zigzag direction, the 2D Brillouin zone (BZ) reduces to a 1D BZ as shown by a thick black strip in Fig. 1 (b). The high symmetry points of the 2D BZ are projected onto the reduced BZ, which helps us later in discussing the resonance and bound states.
The density functional theory (DFT) calculations are carried out on optimized M\({}_{n-1}\)M\({}_{1}^{\prime}\)X\({}_{2n}\) structures us
Figure 1: (a) Top view of the chain-doped MX\({}_{2}\)/M\({}^{\prime}\) compound, where a single zigzag chain of the pristine MX\({}_{2}\) structure is replaced with an M\({}^{\prime}\)X\({}_{2}\) chain. (b) The reduced 1D Brillouin zone (line extending between \(-\pi/a<k_{x}<\pi/a\)) for the chain-doped structure and its relation to the original 2D Brillouin zone (hexagon). All three valley points \(K\) of the hexagonal BZ fall onto the same point, marked by \(K\), on the 1D BZ, and the same happens for the \(K^{\prime}\) points. The hexagonal zone collapses vertically onto the \(k_{x}\) axis, and the points lying outside the 1D BZ in the process, are brought inside it via the reciprocal lattice translation of \(2\pi/a\). The “zigzag/armchair” labels in the figure indicate the orientation of the BZ with respect to the crystalline directions in real space. (c) and (d) The DFT+SOC band structure of the pristine MoTe\({}_{2}\) and WTe\({}_{2}\) (shaded grey), projected into the 1D BZ. The red lines indicate the defect bands introduced by the Tc doped chain in the forbidden region. These bands represent 1D propagating states along the chain, while they are confined in the lateral direction. The defect bands are dominated by the orbital characters of Tc. The small splitting of the otherwise degenerate defect bands is because of the Rashba SOC due to a non-zero lateral electric field (see text).
Figure 3: The orbital resolved band structure of the monolayer MoTe\({}_{2}\) obtained using Wannier90 [28]. The valence band maximum occurring at the valley points, \(K\) and \(K^{\prime}\), are dominated by the angular momentum orbitals L\({}^{\pm}=(x^{2}-y^{2}\pm ixy)\), while the conduction band minima are formed by the \(z^{2}\) orbital, all belonging to the Mo atom. The spectrum below the top valence band is formed by the Te-\(p\) states.
Figure 2: The phonon frequencies of the chain-doped MoTe\({}_{2}\)/T\({}_{e}\) (M\({}_{06}\)Tc\({}_{1}\)Te\({}_{14}\)). The force constants obtained from the DFPT method are taken into account through the phonopy code [27] as implemented in VASP. The absence of imaginary modes implies the dynamical stability of the chain-doped structure.
ing the pseudopotential based projector-augmented wave (PAW) method [29; 30] within the framework of PBE-GGA exchange-correlation functional as implemented in Vienna ab initio simulation package (VASP) [31]. The plane wave energy cut-off of 400 eV and a 1\(\times\)8\(\times\)2 \(\Gamma\)-centered k-mesh is used for BZ integration. The Hubbard \(U\) formalism is adopted to study the correlation effect arising due to localized defect states. The \(U\) values are obtained using the linear response theory [32]. The cell averaged bare potential is calculated using the QUANTUM ESPRESSO simulation package [33].
## III Formation of 1D bands
While a range of chain-doped configurations are investigated, here, we will discuss the electronic structure of MoTe\({}_{2}\)/T\({}_{c}\) and WTe\({}_{2}\)/T\({}_{c}\) as prototypes. However, it is useful to first provide a brief overview of the electronic structure of the pristine TMDs so that the formation of the dopant states can be better understood. For this purpose, in Fig. 3, we present the band structure of monolayer 2H-MoTe\({}_{2}\)[34]. The electronic properties of the TMDs have been widely studied [34; 35; 36]. The formation of bands can be described with two step chemical bonding. In the first step, the nearest neighbor Mo-d - Te-\(p\) interactions give rise to lower-lying Te-\(p\) dominated bands and upper-lying Mo-\(d\) dominated bands. Driven by the trigonal prismatic crystal field, the latter is further split into three groups: A\({}^{\prime}_{1}(z^{2})\), E\({}^{\prime}(xy\), \(x^{2}-y^{2})\) and E\({}^{\prime\prime}(xz\), \(yz)\). The second step involves second neighbor interactions where in the monolayer limit the reflection symmetry along \(\hat{z}\) direction permits hybridization among the A\({}^{\prime}_{1}\) and E\({}^{\prime}\) orbitals to create a band gap. The valley points \(K\) and \(K^{\prime}\) host both the valence band maximum (VBM) and the conduction band minimum (CBM). The VBM at \(K\) and \(K^{\prime}\) are found to be formed by \(x^{2}-y^{2}+ixy\) (\(L^{+}\)) and \(x^{2}-y^{2}-ixy\) (\(L^{-}\)) orbitals respectively, giving rise to opposite orbital moments [8] while the CBM is formed by the \(z^{2}\) character and hence with zero orbital moments [8; 34]. The role of the spin-orbit coupling (SOC) in this compound is restricted to splitting the bands dominated by \(L^{+}\) and \(L^{-}\) by a few meV without perturbing the broad band structure. To produce unique quantum transport phenomena, the mid-gap states with varying characters can be generated in these systems through hole or electron doping.
The band structures of the chain-doped systems MoTe\({}_{2}\)/Tc and WTe\({}_{2}\)/Tc are respectively shown in Figs. 1 (c) and (d). The gray shaded region represents the bands of pristine MoTe\({}_{2}\)/(WTe\({}_{2}\)) projected along the k\({}_{y}\) = 0 (see Fig. 1 (c)/(d) ). The red bands belong to the chain-doped compounds. Most of them overlap with the bands of the parent compound, but the rest form defect
Figure 4: The confinement of the chain-doped bands along the lateral direction. (a) Lower panel: The cell-averaged bare potential (red solid line) and the model potential (black solid line) as per Eq. 1. The ground and the first two excited states wave functions (\(\psi_{1}\), \(\psi_{2}\), and \(\psi_{3}\)) are sketched in blue dotted lines. These are obtained by numerically solving the Schrödinger equation for the model potential. Upper panel: The spread of the extra valence electron in this potential. The DFT obtained values are shown in red circles. The cell average of the ground state charge density (\(\left|\psi_{1}(r)\right|^{2}\)) is shown in black squares. (b) Charge-density contours (isosurface value = 0.0001 \(e/\AA^{3}\)) of the partially occupied defect bands as obtained by integrating from the bottom of the defect bands to the Fermi energy (see the red bands in Fig. 1).
bands, creating either bound states by lying in the forbidden region or resonating states by overlapping with the bulk bands. The defect bands lying in the vicinity of the Fermi level (\(\epsilon_{F}\)) are of significant importance as they can introduce new transport behavior in the system. Our orbital projection analysis indeed shows that these bands are formed by the \(xy\), \(x^{2}-y^{2}\), and \(z^{2}\) orbitals of the Tc chain. Furthermore, the defect bands are dispersive along the chain direction while remaining bound perpendicular to it and thereby a platform for 1D quantum physics is created.
The basic electronic configuration enables us to explain the formation of the 1D quantum state. In MoTe\({}_{2}\) and WTe\({}_{2}\), Mo\({}^{4+}\) and W\({}^{4+}\) has \(d^{2}\) configuration while Tc offers one additional electron with \(d^{3}\) electronic configuration. Therefore, when a chain of Tc is placed in the Mo/W matrix the former adopts a \(d^{2}\) + \(d^{1}\) electronic configuration. The \(d^{2}\) primarily contributes to the bulk while the Tc-\(d^{1}\) is responsible for forming the defect bands.
By giving away the additional electron (\(d^{1}\)), the Tc chain behaves like a positively charged wire of radius \(R_{0}\). From Gauss's law, the potential inside and outside the wire can be expressed as:
\[V=\left\{\begin{array}{ll}\frac{\rho}{4e_{r}\epsilon_{0}}r^{2},&r<R_{0}\\ \frac{\rho R_{0}^{2}}{4e_{0}\epsilon_{r}}\Big{(}1+2\log(\frac{r}{R_{0}})\Big{)},&r>R_{0}\end{array}\right. \tag{1}\]
where, \(\rho=e/\pi R_{0}^{2}a\) is the charge density of the wire with \(a\) being the lattice constant. Based on the earlier theoretical studies the dielectric constant (\(\epsilon_{r}\)) is taken to be 20 [37]. We mapped the modeled potential with the cell averaged bare potential obtained from the DFT calculations on a MoTe\({}_{2}\)/Tc superlattice. There is an excellent agreement capturing both \(r^{2}\) and logarithmic behavior inside and outside the wire respectively for \(R_{0}=2.6\)\(\AA\) which is higher than the atomic radii of Tc and lower than the lattice parameter.
The wave functions of the ground and first two excited states (\(\psi_{n}(r)\)) and their corresponding eigenvalues (\(\lambda_{n}\)) are shown in Fig. 4 (a) lower panel, which were obtained from the numerical solution to the one-particle Schrodinger equation with the potential given in Eq. 1. The eigenstates resemble those of the Airy functions which are the solutions of the Schrodinger equation for at linearly varying potential well [38]. The \(|\psi_{1}(r)|^{2}\) plotted in a blue solid line in the upper half of Fig. 4 (a) reflects the charge spread away from the Tc chain. For validation, we computed the cell average of \(|\psi_{1}(r)|^{2}\) (black filled squares) along the direction perpendicular to the chain and compared them with the atom wise contribution of partially occupied lower lying defect band, and values are depicted through the red filled circles and found a very good match among them. The rapid exponential decay of the charge spread is also demonstrated through the logarithmic charge density (\(\rho_{DFT}(r)\)) contours calculated for the 1D band. The \(\rho_{DFT}(r)\) is calculated by integrating the lower lying defect band upto \(\epsilon_{F}\). From Fig. 4 (b), we observe that the spread vanishes after three layers on either side of the Tc chain while accumulating most of the charge on the chain itself. This implies that the defect state is bound laterally and dispersive along the Tc chain.
## IV Platform for 1D physics
### Rashba SOC and valley-dependent optical transitions
The partially filled non-degenerate defect bands can be fitted with a tight-binding model on the doped chain along with a Rashba-like term, viz.,
\[E=(\varepsilon_{0}+2t\cos\,k_{x}+2t^{\prime}\cos\,2k_{x}+2t^{\prime\prime}\cos \,3k_{x})I+\lambda_{R}(\vec{E}\times\vec{k})\cdot\vec{ \sigma}, \tag{2}\]
where the chain runs along \(\hat{x}\), \(\varepsilon_{0}\) is the on-site energy taken to be zero, \(t\), \(t^{\prime}\), and \(t^{\prime\prime}\) are, respectively, the hopping to the first, second, and the third nearest neighbor. Here, \(I\) is 2\(\times\)2 identity matrix and \(\vec{\sigma}\) are Pauli spin matrices. \(\lambda_{R}\) is the Rashba strength. From the symmetry of the structure (Fig. 4 (b)), an electric field exists in the \(y\) direction on the plane, which translates to a magnetic field \(\vec{B}=\vec{v}\times\vec{E}/c^{2}\) in the electron's rest frame that couples to the spin moment. This leads to the spin-split band structure with a linear dispersion described by the last term in Eq. (2).
The TB parameters, obtained by fitting to the DFT results, are listed in Table 1 for a number of chain-doped compounds. We note that there is a substantial 2\({}^{nd}\) neighbor hopping \(t^{\prime}\), but the 3\({}^{rd}\) neighbor hopping is substantial only for the Re chains and negligible for the Tc chains. The TB bands fitted with DFT for WTe\({}_{2}\)/Tc is shown in Fig. S10 of supplementary materials (SM). Similar models can also be developed for the Rashba spin-split defect bands that lie in the forbidden regions other than the fundamental gap.
The schematic band structure along with the Rashba spin splitting is illustrated in Fig. (5), which suggests interesting valley dependent optical properties. In the parent MX\({}_{2}\) material, circularly polarized light with op
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Compound name & \(\lambda_{R}\) & \(t\) & \(t^{\prime}\) & \(t^{\prime\prime}\) \\ \hline MoSe2/Tc & 0.10 & -0.053 & 0.032 & 0.003 \\ \hline MoSe2/Re & 0.76 & -0.049 & 0.035 & 0.019 \\ \hline MoTe2/Tc & 0.10 & -0.070 & 0.026 & 0.007 \\ \hline MoTe2/Re & 1.00 & -0.077 & 0.030 & 0.019 \\ \hline WS2/Tc & 0.00 & -0.041 & 0.039 & 0.002 \\ \hline WSe2/Tc & 0.20 & -0.057 & 0.044 & 0.000 \\ \hline WTe2/Tc & 0.74 & -0.090 & 0.048 & 0.007 \\ \hline WTe2/Re & 0.68 & -0.081 & 0.041 & 0.025 \\ \hline \end{tabular}
\end{table}
Table 1: Rashba strength (in eV \(\cdot\) Å) and the hopping integrals (in eV) for the 1D defect bands in MX\({}_{2}\)/M\({}^{\prime}\).
posite polarization is absorbed at the two different valleys \(K\) and \(K^{\prime}\). Due to the Rashba-like spin splitting, we predict the chain-doped compounds to exhibit additional features in the valley-dependent optical absorption between the bulk to the defect states. The lowest-energy optical transitions at the valley points are forbidden because the lower defect band has spin opposite to that of the bulk valence band edge. Note that the projected bands onto the 1D BZ of the chain-doped compound not only shows the fundamental gap extending through the full BZ, but also gaps that exist at certain regions of the BZ as indicated for the valence bands in Fig. 5. As also indicated in the figure, localized 1D bands can exist within these gaps, and valley-dependent optical transitions would occur between these states including the localized 1D bands lying in the fundamental gap. To illustrate, the dipole allowed lowest-energy optical transitions (\(\sigma^{+}\) and \(\sigma^{-}\)) between partially filled and localized 1D conduction bands in the vicinity of valley points are indicated in the figure.
### Electronic Correlation
Electron correlation effects are expected to be important for the 1D defect bands, since the bandwidth is quite narrow. A key parameter for characterizing the strength of the correlation effects is the on-site Coulomb repulsion \(U\), which we now proceed to compute using the DFT and the linear response approach.
In this method [32], \(U\) is computed by calculating the difference between interacting and non-interacting density response functions:
\[U=\chi_{0}^{-1}-\chi^{-1}=\Big{(}\frac{\partial n_{i}^{KS}}{\partial\alpha_{i }}\Big{)}^{-1}-\Big{(}\frac{\partial n_{i}}{\partial\alpha_{i}}\Big{)}^{-1}, \tag{3}\]
where \(\alpha_{i}\) is a perturbative shift in the single particle potential at site \(i\), for which \(U\) is being computed. Since the bandwidth of the \(d\) bands of the host compound MX\({}_{2}\) are rather large, and the bands are either occupied or unoccupied, the correlation effects are relatively weak. In contrast, the defect bands are one-dimensional, relatively narrow, and half-filled, so that the correlation effects are expected to be important there. Therefore, we compute \(U\) only for the 1D defect bands. To obtain the response functions, the variation in occupation numbers is obtained by performing the DFT calculations in two ways: (i) by allowing the Kohn-Sham potential to adjust self-consistently which optimally screens the perturbation \(\alpha_{i}\) to give \(\chi_{0}\) and (ii) by calculating the Kohn-Sham potential without screening to get \(\chi\). The latter is achieved by a single loop, without enforcing self-consistency. The variation of \(n_{i}^{KS}\) and \(n_{i}\) as a function of \(\alpha_{i}\) at the doped metal site (Tc or Re) is shown in Fig. 6 for MoSe\({}_{2}\)/Tc and MoSe\({}_{2}\)/Re. The onsite Coulomb repulsion \(U_{0}\) calculated with this procedure is listed for various chain-doped compounds in Table 2, together with the Fermi velocity \(v_{F}\) and the bandwidth \(W\). The spin resolved Fermi velocities \(v_{F}^{\uparrow}\) and \(v_{F}^{\downarrow}\) are computed at the Fermi momentum \(k_{F}^{\uparrow}\) and \(k_{F}^{\downarrow}\), which is roughly halfway along the \(\Gamma-X\) line (see Figs. 1 and 7), by taking the derivative of the energy \(\hbar v_{F}^{\uparrow\downarrow}=\left(\partial E^{\uparrow\downarrow}(k)/ \partial k\right)_{k=k_{F}^{\uparrow\downarrow}}\). The \(k_{F}^{\uparrow}\) and \(k_{F}^{\downarrow}\) differ as the SOC makes the spin-resolved bands non-degenerate. As seen from Table 2, \(U/W>>1\) for all compounds studied, which would put these materials in the strong correlation limit.
Figure 5: Schematic 1D defect bands indicating the Rashba spin splitting and valley dependent optical absorption. The shaded bands indicate the band structure of the 2D host material, while the red lines indicate the localized 1D defect bands introduced in the band gap of the host. The red dashed lines in the valence band indicate resonance states. Some dipole-allowed optical transitions for circularly polarized light are indicated by \(\sigma^{-}\) and \(\sigma^{+}\). The orbital projected bands of prototype WTe\({}_{2}\)/Tc are shown in Fig. S11 of SM.
Figure 6: Calculation of the Coulomb repulsion \(U_{0}\) for the metal atom on the doped chain, Tc or Re, using Eq. 3. Plotted are the occupation numbers, \(n_{i}^{KS}\) and \(n_{i}\), as a function of the perturbing potential \(\alpha\) at the doped metal site, obtained using 1\(\times\)4 supercell of MoSe\({}_{2}\)/Tc and MoSe\({}_{2}\)/Re, i.e. (M\({}_{n-1}\)M\({}_{1}^{\prime}\)X\({}_{2n}\), \(n=7\))\(\times\)4, where the perturbing potential was applied to a single Tc or Re atom.
### Tomonaga Luttinger-Liquid Physics
The low-energy behavior of the correlated electrons in 1D is described by the TLL theory, with features generic to many interacting 1D electron systems, such as the spin-charge separation and the anomalous scaling of the correlation functions.[39; 41; 42; 43; 44; 43; 44]. A few years ago, the TLL theory was extended to include SOC. When SOC is introduced, the complete spin-charge separation is destroyed, resulting in mixed bosonic excitations involving these two degrees of freedom. However, the anomalous scaling of the correlation functions remains with modified exponents[45; 46]. The TLL with SOC has been studied experimentally in the context of quasi-1D systems such as the carbon nanotubes, quantum wires, and 2D electron gases confined to a narrow channel by gate electrodes, but much remains to be done both from theoretical and experimental points of view.
We propose the chain-doped TMDs to be an important class of materials to study the TLL physics. As illustrated for a number of cases in Figs. 1 and 7, the defect bands in the chain-doped TMDs are spin split at the Fermi energy to a varying degree due to the SOC. For example, the splitting is near zero for MoTe\({}_{2}\)/Tc, while it is quite large for WS\({}_{2}\)/Tc. The 1D defect bands in the chain-doped TMDs are nominally half-filled, for which the TLL behavior would be absent[47]. However, departure from the half-filling scenario can occur naturally due to the presence of impurities and/or applied gate voltage, which may furthermore be used to alter the spin-splitting at the Fermi energy. Thus the chain-doped TMDs may serve as a rich laboratory for the study of TLL behavior.
The TLL behavior is characterized by several parameters such as the spin/charge velocities for the collective spin and charge excitations and the anomalous dimension \(\alpha\) which is a function of onsite Coulomb repulsion and \(v_{F}\). Explicit expressions for these quantities exist in the weak-coupling limit, but for the strong-coupling limit, numerical results have been obtained for certain models. For the 1D Hubbard model, the TLL parameters have been computed for different values of \(U/t\) and band filling \(n\)[48]. For \(U/t\rightarrow\infty\), which is the case for the chain-doped compounds, one gets \(\alpha=1/8\). When SOC is present, its strength is parameterized by the Fermi velocity difference \((v_{F\uparrow}-v_{F\downarrow})/(v_{F\uparrow}+v_{F\downarrow})\), and the TLL exponent \(\alpha\) is modified[45]. The TLL behavior of the 1D Hubbard model with next-nearest-neighbor hopping but without the SOC has been studied using the density-matrix renormalization group[49]. For the chain-doped TMDs, the second neighbor interaction is substantial and furthermore the strength of the SOC can also be tailored by changing the metal dopant \(M^{\prime}\), so that these materials can serve as a rich laboratory for studying the TLL behavior.
### Half-Filled Defect Bands: Mott-Hubbard Insulating State and Unusual Band Widening
For the half-filled case, the 1D defect bands would not show the TLL behavior due to the presence of a charge gap for all values of \(U\) within the Hubbard model. In the strong-coupling limit (\(U/W\rightarrow\infty\)), which is the case here, the mean-field treatment adequately describes the gross physics of the system, and this is accomplished by the DFT + \(U\) formalism in the density-functional theory.
We find two surprising features for the 1D defect bands: (i) Instead of an AFM ground state expected from the Hubbard model physics at half filling, we find that the ground state is usually ferromagnetic. (ii) With \(U\), the bandwidth of the Hubbard bands increases instead of decreasing which is normally the case for strongly-correlated systems where the motion of the electron is inhibited due to correlated motion, resulting in a larger effective mass. The increase of the bandwidth of the Hubbard bands with \(U\) can be explained in terms of a
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Compound/ & \(U\)(eV) & W(eV) & \(v_{F\uparrow}\)(eVÅ) & \(v_{F\downarrow}\)(eVÅ) \\ doping element & & & & \\ \hline MoTe\({}_{2}\)/Tc & 1.65 & 0.30 & 0.67 & 0.78 \\ \hline MoSe\({}_{2}\)/Tc & 1.63 & 0.26 & 0.41 & 0.50 \\ \hline WTe\({}_{2}\)/Tc & 1.65 & 0.34 & 0.50 & 0.73 \\ \hline WSe\({}_{2}\)/Tc & 1.63 & 0.31 & 0.64 & 0.76 \\ \hline WS\({}_{2}\)/Tc & 1.30 & 0.20 & 0.43 & 0.76 \\ \hline MoSe\({}_{2}\)/Re & 4.99 & 0.24 & 0.82 & 0.58 \\ \hline MoTe\({}_{2}\)/Re & 4.90 & 0.27 & 1.01 & 0.69 \\ \hline WTe\({}_{2}\)/Re & 4.96 & 0.31 & 1.11 & 0.73 \\ \hline \end{tabular}
\end{table}
Table 2: Characteristics of the 1D defect band for various compounds. \(U\), \(W\), and \(v_{F}\) are the onsite Coulomb repulsion, bandwidth, and Fermi velocity, respectively.
Figure 7: The non-magnetic band structures of several chain-doped compounds. The SOC is included.
multi-band model as discussed below in some detail.
A plausible explanation for the ferromagnetism is that we don't have an isolated 1D Hubbard chain, rather that the electrons are coupled via the electrons of the host materials. In fact, our charge density contour plot in Fig. 3 infers that in the case of MoTe\({}_{2}\)/Tc there is a covalent bonding between the 1D states with the nearest-neighbor Te-p states of the host material to favor the ferromagnetic ordering.
The band-widening feature can be explained using a multi-band Hubbard model. In essence, different parts of the BZ consists of different type of orbitals (for instance, \(z^{2}\) at \(K\) and \(L^{-}\) at \(\Gamma\) for the defect bands as seen from Fig. 1), which can shift around in energy depending on the magnitude of \(U\).
To demonstrate the case of Mott insulating phase and band widening, we examine the case of MoSe\({}_{2}\)/Re as an example. The non-magnetic band structure shown in Fig. 7 indicates that the half-filled 1D bands with the bandwidth of about 0.24 eV (see Table 2). From spin-polarized DFT + \(U\) calculations, we find (see Fig. 8) that with increasing \(U\), the degenerate non-magnetic half-filled bands spin separate to form a more-occupied lower Hubbard and a less-occupied upper Hubbard band. With further increase in \(U\), the two Hubbard bands separate completely to form a gap, leading to a Mott insulating state. We see similar effects for other chain-doped compounds, which are not shown here to avoid redundancy.
This is in contrast to the conventional Mott insulator, where increasing \(U\) makes the Hubbard bands narrower, as the electron's motion becomes restricted due to electron correlation, resulting in a larger effective mass. In the present case, there is a rapid increase in the bandwidth with an increase in \(U\). For example, by increasing \(U\) from 0 to 5 eV, the bandwidth has gone from 0.4 eV to 1.1 eV, as can be seen in Fig. 8.
To describe this physics, we have adopted a multi-orbital Hubbard model with three orbital basis (\(z^{2}\), \(xy\), and \(x^{2}-y^{2}\)), with the Hamiltonian
\[\begin{split} H&=\sum_{i\mu}\epsilon_{i\mu}c^{ \dagger}_{i\mu}c_{i\mu}+\sum_{ij;\mu\nu}t_{i\mu j\nu}c^{\dagger}_{i\mu}c_{j\nu }+h.c.+\\ U&\sum_{i\mu}n_{i\mu\uparrow}n_{i\mu\downarrow}+(U^ {\prime}-\frac{J_{\rm H}}{2})\sum_{i\mu<\nu}n_{i\mu}n_{i\nu}\\ &-2J_{\rm H}\sum_{i,\mu<\nu}S^{z}_{i\mu}\cdot S^{z}_{i\nu}.\end{split} \tag{4}\]
Here, the first two terms describe the onsite and the kinetic energy of the electrons, while the third and fourth terms are the energy cost of having the electrons in the same or different orbitals at the same lattice site. The last term defines the Hund's rule coupling. The relation \(U^{\prime}=U\) - 2\(J_{\rm H}\) has been used here with J\({}_{H}\)/\(U\) ratio estimated to be 0.03 based on the agreement between DFT and model band structures. The \(n_{i\mu}=n_{i\mu\uparrow}\) + \(n_{i\mu\downarrow}\) are the occupation numbers which are obtained from the DFT+\(U\) density matrix.
It is too complex to solve the problem for the supercell of MoSe\({}_{2}\)/Re. Since the defect bands originate from the doped FeSe\({}_{2}\) chain, and the bulk FeSe\({}_{2}\) shows a similar band-widening behavior as well (see Fig. 9), we study
Figure 8: The DFT+\(U\) spin polarized band structures of MoSe\({}_{2}\)/Re. The transition to Mott insulating phase occurs through the formation of lower and upper Hubbard bands (LHB and UHB)). The LHB widens with \(U\), which is unusual and is interpreted to be due to its multiple-orbital origin.
Figure 9: The spin-polarized band structure of 2H-ReSe\({}_{2}\) monolayer as obtained from DFT+\(U\) calculations (upper row) and three-orbital based Hubbard model (lower row) showing excellent agreement. The evolution of the bottom of the LHB with \(U\) has been indicated by the dashed line. There is an excellent agreement between the model and DFT results. Like the chain-doped system MoSe\({}_{2}\)/Re (see Fig. 8), the LHB bandwidth as indicated in (d) and (f), increases with increasing strength of the onsite Coulomb repulsion.
the phenomenon for bulk \(\mathrm{ReSe}_{2}\) using the multi-orbital Hubbard model, Eq. (4).
The results, obtained from both the DFT + U calculations as well as the multi-orbital Hubbard model, are shown in Fig. 9. From the top panel in the figure, we observe that for \(U=0\), it has three spin degenerate bands in the vicinity of the \(\epsilon_{F}\), like in all 2H-TMD compounds. Out of the three, one is completely occupied, another is completely unoccupied and the third one is a half-occupied band. With an increase in \(U\), the half-occupied band becomes spin non-degenerate. Also, like the case of \(\mathrm{MoSe}_{2}\)/\(\mathrm{Re}\), the lower Hubbard sub-band becomes more dispersive. The model results, shown in the lower panel of Fig. 9, match very well with the DFT+U results, and the widening of the lower Hubbard band is seen from both results (indicated by the long-dashed lines). As mentioned already, the lowest conduction band running through \(E_{F}\) has a combination of the orbital characters \(L^{\pm}\) (dominant around \(K^{\prime}\)) and \(z^{2}\) (dominated around \(\Gamma\)). The band-widening happens because the inter-orbital interaction term (the fourth term in the Hamiltonian Eq. 4) lowers the onsite energy for \(z^{2}\) orbital (dominant around \(K^{\prime}\)), while increasing the same for the \(L^{+}\) orbitals (dominant around \(\Gamma\)). In fact, when this term was switched off, there is a band narrowing, clearly seen for the LHB (see Fig. S12 in SM).
## V Summary
To summarize, by employing density functional theory calculations and theoretical models, we show that the chain-doped transition metal dichalcogenides (\(\mathrm{MX}_{2}\)/\(\mathrm{M}^{\prime}=\)\(\mathrm{M}_{n-1}\mathrm{M}_{1}^{\prime}\mathrm{X}_{2n}\)), with a \(\mathrm{M}^{\prime}\) dopant chain along the zigzag direction, form a sharply localized one-dimensional (1D) band structure. While the 1D states are strongly confined along the lateral direction, they are highly mobile along the chain direction. The localization in the lateral direction is interpreted in terms of the bound states of the bare potential of the dopant chain. The partially-filled 1D bands provide a platform to explore exotic spin-orbit coupled one-dimensional quantum phases and properties. These include the Tomonaga Luttinger liquid (TLL) behavior, ferromagnetic Mott insulator, Rashba type spin-orbit coupling and valley-dependent optical transition. The half-filled 1D bands are ideal candidates for stabilizing the anti-ferromagnetic Mott insulating phase. However, the interaction between the 1D states via the host X-p states makes it ferromagnetic and insulating. When the 1D bands deviate from half-filling, the substantial second-neighbor interactions between the \(\mathrm{M}^{\prime}\) states make it favorable for practical realization of the TLL behavior. The deviation from half-filling can be achieved via impurities and gate biasing.
The widening of the lower Hubbard subband with increasing onsite Coulomb repulsion strength in these chain-doped systems is a non-trivial outcome of this study. This phenomenon, which is anti-intuitive and goes against the conventional assumption of band narrowing with increasing repulsion strength, has hardly been observed in the literature, which makes this class of materials worth exploring for non-trivial quantum transport and phases. We have explained the cause of band widening by developing a multi-orbital Hubbard model. Another important outcome of the present study is that due to the presence of an intrinsic electric field along the lateral direction, the 1D bands are Rashba spin-split and provide a new mechanism to tune the valley-dependent optical transition in \(\mathrm{MX}_{2}\)/\(\mathrm{M}^{\prime}\).
Our study opens new avenues to tailor 1D quantum physics in 2D TMDs. With the advent of state-of-the-art techniques such as low-energy ion implantation, dislocation climb mechanism, etc., the chain-doped TMDs can be synthesized in a controlled manner. In addition to the electron doping and emergent properties as discussed in this work, hole doping is also equally likely to introduce interesting features such as orbital and spin Hall effects in such chain-doped compounds. As a whole, we believe that the present study will excite experimenters and theoreticians alike to envisage exotic quantum phenomena and applications.
_Acknowledgements:_ This work is funded by the Department of Science and Technology, India, through Grant No. CRG/2020/004330. SS thanks SERB India for the VAJRA fellowship. BRKN acknowledges the support of HPCE, IIT Madras for providing computational facilities.
|
2310.01816 | On the natural nullcones of the symplectic and general linear groups | Consider a group acting on a polynomial ring $S$ over a field $K$ by
degree-preserving $K$-algebra automorphisms. The invariant ring $R$ is a graded
subring of $S$; let $\mathfrak{m}_R$ denote the homogeneous maximal ideal of
$R$. Several key properties of the invariant ring and its embedding in $S$ can
be deduced by studying the nullcone $S/\mathfrak{m}_R S$ of the group action.
This includes, for example, the finite generation of the invariant ring and the
purity of the embedding. In this article, we study the nullcones arising from
the natural actions of the symplectic and general linear groups.
For the natural representation of the symplectic group, the invariant ring is
the ring defined by the principal Pfaffians of a fixed even size of a generic
alternating matrix. We show that the nullcone of this embedding is a strongly
$F$-regular ring in positive characteristic, and hence in characteristic zero,
a ring of strongly $F$-regular type. Independent of characteristic, we give a
complete description of the divisor class group of the nullcone and determine
precisely when it is Gorenstein. We also show that the nullcone ideal has a
squarefree initial ideal.
For the natural representation of the general linear group, the invariant
ring is a generic determinantal ring. The nullcone of this embedding is
typically a non-equidimensional ring. The irreducible components of the
nullcone are the varieties of complexes of length two, as introduced by
Buchsbaum and Eisenbud. We show that each of these irreducible components
define strongly $F$-regular rings in positive characteristic. We also show that
the Frobenius splitting of the varieties of complexes can be chosen compatibly;
it follows that the nullcone is an $F$-pure ring. We also show that the
nullcone ideal and the ideals defining the varieties of complexes have
squarefree initial ideals with respect to the same monomial order. | Vaibhav Pandey, Yevgeniya Tarasova, Uli Walther | 2023-10-03T06:11:12Z | http://arxiv.org/abs/2310.01816v2 | # On the natural nullcones of the symplectic
###### Abstract.
Consider a group acting on a polynomial ring \(S\) over a field \(\mathbb{K}\) by degree-preserving \(\mathbb{K}\)-algebra automorphisms. The invariant ring \(R\) is a graded subring of \(S\); let \(\mathfrak{m}_{R}\) denote the homogeneous maximal ideal of \(R\). Several key properties of the invariant ring and its embedding in \(S\) can be deduced by studying the nullcone \(S/\mathfrak{m}_{R}S\) of the group action. This includes, for example, the finite generation of the invariant ring and the purity of the embedding. In this article, we study the nullcones arising from the natural actions of the symplectic and general linear groups.
For the natural representation of the symplectic group (via copies of the standard representation), the invariant ring is the ring defined by the principal Pfaffians of a fixed even size of a generic alternating matrix. We show that the nullcone of this embedding is a strongly \(F\)-regular ring in positive characteristic, and hence in characteristic zero, a ring of strongly \(F\)-regular type. Independent of characteristic, we give a complete description of the divisor class group of the nullcone and determine precisely when it is Gorenstein. We also show that the nullcone ideal has a squarefree initial ideal.
For the natural representation of the general linear group (via copies of the standard representation and copies of its dual), the invariant ring is a generic determinantal ring. The nullcone of this embedding is typically a non-equidimensional ring. The irreducible components of the nullcone are the varieties of complexes of length two, as introduced by Buchsbaum and Eisenbud. We show that each of these irreducible components define strongly \(F\)-regular rings in positive characteristic. We also show that the Frobenius splitting of the varieties of complexes of length two can be chosen compatibly; it follows that the nullcone is an \(F\)-pure ring. We also show that the nullcone ideal and the ideals defining the varieties of complexes of length two have squarefree initial ideals with respect to the same monomial order of the ambient polynomial ring.
Key words and phrases:nullcone, invariant ring, \(F\)-regular, \(F\)-pure, divisor class group 2010 Mathematics Subject Classification: Primary 13A50, 13A35 ; Secondary 13C40, 20G05. UW was supported by NSF grant DMS-2100288 and by Simons Foundation Collaboration Grant for Mathematicians #580839. PV was partially supported by the AMS-Simons Travel Grant.
For a more illuminating example, let \(Y\) be a \(d\times n\) matrix of indeterminates over \(\mathbb{K}\), and set \(S\) to be the polynomial ring \(\mathbb{K}[Y]\). Let \(R=\mathbb{K}[\{\Delta\}]\) be the \(\mathbb{K}\)-algebra generated by the size \(d\) minors \(\{\Delta\}\) of \(Y\). Then \(R\) is the coordinate ring for the Plucker embedding of the Grassmannian of \(d\)-dimensional subspaces of an \(n\)-dimensional vector space. The special linear group \(\operatorname{SL}_{d}(\mathbb{K})\) acts \(\mathbb{K}\)-linearly on \(S\) via the action
\[M\colon Y\mapsto MY\qquad\text{ for }\ M\in\operatorname{SL}_{d}(\mathbb{K}).\]
When \(\mathbb{K}\) is an infinite field, the invariant ring is precisely \(R\), see [14] or [11, SS2]. Clearly, the nullcone of the natural embedding
\[R=\mathbb{K}[\{\Delta\}]\subseteq\mathbb{K}[Y]=S\]
is the determinantal ring \(S/I_{d}(Y)\) defined by the maximal minors of a generic matrix.
When \(\mathbb{K}\) has characteristic zero, the group \(\operatorname{SL}_{d}(\mathbb{K})\) is linearly reductive. It follows that \(R\) is a direct summand of \(S\) as an \(R\)-module. In contrast, when \(\mathbb{K}\) has positive characteristic, it was recently shown in [11, Theorem 1.1] that the above natural embedding typically does _not_ split. This non-splitting is due to the Cohen-Macaulay property of the nullcone, when combined with the flatness of the Frobenius map on the ambient polynomial ring \(S\).
The Cohen-Macaulay property of determinantal rings of maximal minors was proved using the Eagon-Northcott resolution in [10] and later for minors of all sizes by the technique of principal radical systems in [12]. The divisor class group of determinantal rings is the infinite cyclic group by [13] and they are Gorenstein precisely when the matrix is square by [15, Theorem 5.4.6]. The initial ideals of generic determinantal ideals are squarefree since the natural generators form a Grobner basis by [16] and [17]. Further, determinantal rings are \(F\)-regular in positive characteristic by [10, SS7] (see also [14, Theorem 4.4]) and therefore have log-terminal, [10] and thus rational, singularities in characteristic zero [15, Theorem 4.3].
The objective of this paper is to discuss the corresponding results for the nullcones arising from the natural actions of the symplectic and general linear groups. We now describe these group actions, and our main results.
Let \(Y\) be a \(2t\times n\) matrix of indeterminates for positive integers \(t\) and \(n\). Set \(S\) to be the polynomial ring \(\mathbb{K}[Y]\) and let
\[\Omega:=\begin{pmatrix}0&\mathbb{I}\\ -\mathbb{I}&0\end{pmatrix}\]
be the size \(2t\) standard symplectic block matrix, where \(\mathbb{I}\) is the size \(t\) identity matrix. The \(\mathbb{K}\)-algebra \(R:=\mathbb{K}[Y^{\mathrm{T}}\Omega Y]\) is canonically isomorphic to \(\mathbb{K}[X]/\operatorname{Pf}_{2t+2}(X)\), where \(X\) is an \(n\times n\) alternating matrix of indeterminates, and \(\operatorname{Pf}_{2t+2}(X)\) the ideal generated by its principal size \(2t+2\) Pfaffians. The symplectic group
\[\operatorname{Sp}_{2t}(\mathbb{K}):=\{M\in\operatorname{GL}_{2t}(\mathbb{K}) \ |\ M^{\mathrm{T}}\Omega M=\Omega\}\]
acts \(\mathbb{K}\)-linearly on \(S\) via the action
\[M\colon Y\mapsto MY\qquad\text{ for }\ M\in\operatorname{Sp}_{2t}(\mathbb{K}).\]
The invariant ring is precisely \(R\) when \(\mathbb{K}\) is infinite, see [11, SS6] or [12, Theorem 5.1]. Notice that the ideal
\[\mathfrak{P}(Y):=(Y^{\mathrm{T}}\Omega Y)S\]
generated by the entries of the alternating matrix \(Y^{\mathrm{T}}\Omega Y\) is the nullcone ideal for this group action. In this article, we refer to the nullcone of the natural embedding
\[R=\mathbb{K}[Y^{\mathrm{T}}\Omega Y]\subseteq\mathbb{K}[Y]=S.\]
as the _(natural) Pfaffian nullcone_\(\mathbb{K}[Y]/\mathfrak{P}(Y)\).
When \(\mathbb{K}\) has characteristic zero, the group \(\operatorname{Sp}_{2t}(\mathbb{K})\) is linearly reductive and thus \(R\) is a direct summand of \(S\) as an \(R\)-module. When \(\mathbb{K}\) has positive characteristic, this embedding typically does _not_ split by [11, Theorem 1.1].
As in the case of the Plucker embedding of Grassmannians, this non-splitting is due to the Cohen-Macaulay property of the Pfaffian nullcone, established in [11, Theorem 1.2], in conjunction with the flatness of Frobenius. Earlier, the irreducibility and normality of the Pfaffian nullcone was proved in [16, Theorem 9.1 (3)]. We prove a much stronger result by establishing the \(F\)-regularity of the Pfaffian nullcone. In addition, we also show that the Pfaffian nullcone ideal has a squarefree initial ideal and is hence ameanable to Grobner degeneration techniques, for example, as discussed in [10].
**Theorem** (Theorem 3.6, Corollary 3.7).: _Let \(Y\) be a matrix of indeterminates of size \(2t\times n\) for positive integers \(t\) and \(n\). Let \(\mathbb{K}\) be a field and set \(S:=\mathbb{K}[Y]\)._
1. _The initial ideal of the natural Pfaffian nullcone ideal_ \(\mathfrak{P}(Y)\) _is squarefree (with respect to the monomial order_ \(<_{B}\) _constructed in_ \(\lx@sectionsign\)_3.3)._
2. _If_ \(\mathbb{K}\) _is an_ \(F\)_-finite field of positive characteristic, the natural Pfaffian nullcone_ \(S/\mathfrak{P}(Y)\) _is a strongly_ \(F\)_-regular ring._
3. _In consequence, if_ \(\mathbb{K}\) _has characteristic zero, the natural Pfaffian nullcone_ \(S/\mathfrak{P}(Y)\) _has log-terminal, and hence rational, singularities._
Independent of characteristic, we give a complete description of the divisor class group of the Pfaffian nullcone and determine precisely when it is Gorenstein.
**Theorem** (Theorem 4.1).: _Let \(Y\) be a \(2t\times n\) matrix of indeterminates for positive integers \(t\) and \(n\), and \(\mathbb{K}\) be any field. Let \(R_{t,n}\) denote the Pfaffian nullcone \(\mathbb{K}[Y]/\mathfrak{P}(Y)\)._
1. _If_ \(n\geq t+1\) _then the divisor class group of_ \(R_{t,n}\) _is the group_ \(\mathbb{Z}\) _of integers. Otherwise,_ \(R_{t,n}\) _is a unique factorization domain._
2. _Let_ \(Y|_{t}\) _be the submatrix of_ \(Y\) _consisting of the first_ \(t\) _columns. If_ \(n\geq t+1\)_, the ideal_ \(\mathfrak{p}:=I_{t}(Y|_{t})\) _of_ \(R_{t,n}\) _is prime of height one and generates the divisor class group. Further, the canonical class of_ \(R_{t,n}\) _is given by_ \(\mathfrak{p}^{(n-t-1)}\)_._
3. _The ring_ \(R_{t,n}\) _is Gorenstein precisely if_ \(n\leq t+1\)_._
Next, we discuss the embedding defined by the natural action of the general linear group. For positive integers \(m\), \(n\), and \(t\), let \(Y\) and \(Z\) be \(m\times t\) and \(t\times n\) matrices of indeterminates respectively. Set \(S\) to be the polynomial ring \(\mathbb{K}[Y,Z]\), and take \(R\) to be the \(\mathbb{K}\)-subalgebra generated by the entries of the product matrix \(YZ\). Then \(R\) is canonically isomorphic to the determinantal ring \(\mathbb{K}[X]/I_{t+1}(X)\), where \(X\) is an \(m\times n\) matrix of indeterminates and \(I_{t+1}(X)\) denotes the ideal generated by the size \(t+1\) minors of \(X\). The general linear group \(\operatorname{GL}_{t}(\mathbb{K})\) acts \(\mathbb{K}\)-linearly on \(S\) via
\[M\colon\begin{cases}Y&\mapsto YM^{-1}\\ Z&\mapsto MZ\end{cases}\]
where \(M\in\operatorname{GL}_{t}(\mathbb{K})\). When the field \(\mathbb{K}\) is infinite, \(R\) is precisely the ring of invariants, see [1, SS3] or [14, Theorem 4.1]. The nullcone of the natural embedding
\[R=\mathbb{K}[YZ]\subseteq\mathbb{K}[Y,Z]=S\]
is the typically non-equidimensional reduced ring \(\mathbb{K}[Y,Z]/(YZ)\) defined by the entries of the matrix \(YZ\) (see [16] or [17, Theorem 4.1] for the reducedness of the nullcone). In this article, we
refer to this ring as the _(natural) determinantal nullcone_. The minimal primes of the determinantal nullcone are the ideals
\[\mathfrak{p}_{r,s}(Y,Z):=I_{r+1}(Y)+I_{s+1}(Z)+(YZ)\]
of \(S\), for \(r+s=t\); see SS5.1 for details. These are precisely the ideals defining the varieties of complexes of length two which are exact, as introduced by Buchsbaum-Eisenbud in [1].
When \(\mathbb{K}\) has characteristic zero, the group \(\operatorname{GL}_{t}(\mathbb{K})\) is linearly reductive and thus the determinantal ring \(R\) splits from \(S\) as an \(R\)-module. When \(\mathbb{K}\) has positive characteristic, this embedding typically does _not_ split by [11, Theorem 1.1]. This is due to the flatness of the Frobenius together with the Cohen-Macaulay property of the varieties of complexes which was proved in [10, Theorem 6.2] using principal radical systems (see also [14] and [15]). The divisor class groups of the varieties of complexes are free abelian groups of finite ranks by [11, Theorem 3.1] and independently by [18, Theorem 1.1]. These rings are Gorenstein under certain symmetry conditions on the sizes of the minors involved by [11, Theorem 4.3] and by [18, Theorem 1.2].
Kempf showed in [10] that the varieties of complexes have rational singularities, using [10]. In positive characteristic, they are \(F\)-rational relative to the resolution of Kempf, and they are also \(F\)-split by [17]. We extend these results by showing:
**Theorem** (Theorems 5.6, 5.7).: _Let \(Y\) and \(Z\) be matrices of indeterminates of sizes \(m\times t\) and \(t\times n\) respectively for positive integers \(m\), \(t\), and \(n\). Let \(\mathbb{K}\) be an \(F\)-finite field of positive characteristic; set \(S:=\mathbb{K}[Y,Z]\) and suppose that \(r\) and \(s\) are non-negative integers with \(r+s\leq t\)._
1. _The variety of complexes_ \(S/\mathfrak{p}_{r,s}(Y,Z)\) _is strongly_ \(F\)_-regular._
2. _If_ \(t\leq\min(m,n)\) _then (for this_ \(m,n,t\)_) the splittings of the Frobenius map on the varieties of complexes_ \(S/\mathfrak{p}_{r,s}(Y,Z)\) _can be chosen compatibly._
3. _For any triple_ \((m,n,t)\)_, the natural determinantal nullcone_ \(S/(YZ)S\) _is_ \(F\)_-pure._
_In consequence, if \(\mathbb{K}\) has characteristic zero, the rings \(S/\mathfrak{p}_{r,s}(Y,Z)\) have log-terminal (and hence rational) singularities._
Recently, Lorincz has also proved the \(F\)-regularity of the varieties of complexes using methods from representation theory in [15, Corollary 4.2].
We also show that the natural determinantal nullcone and the varieties of complexes of length two have squarefree initial ideals.
**Theorem** (Corollary 5.8).: _Let \(Y\) and \(Z\) be matrices of indeterminates of sizes \(m\times t\) and \(t\times n\) respectively and \(\mathbb{K}\) a field; set \(S:=\mathbb{K}[Y,Z]\) and assume that \(r\) and \(s\) are non-negative integers. Let \(<_{B}\) be the monomial irder constructed in SS5.3._
1. _If_ \(r+s=t\)_, the ideal_ \(\mathfrak{p}_{r,s}(Y,Z)\) _defining the variety of exact complexes has a squarefree initial ideal (with respect to the monomial order_ \(<_{B}\)_)._
2. _If_ \(r+s<t\) _and_ \(\mathbb{K}\) _has positive characteristic, the ideal_ \(\mathfrak{p}_{r,s}(Y,Z)\) _defining the variety of non-exact complexes and the natural determinantal nullcone ideal_ \((YZ)S\) _have squarefree initial ideals. (with respect to_ \(<_{B}\)_)._
A key tool used in establishing the \(F\)-regularity of the natural Pfaffian nullcone and the varieties of complexes of length two is the construction of certain very subtle monomial orders; these orders select special lead terms for the generators of the respective nullcone ideal in a characteristic-free manner. Additionally, the nullcone ideals considered have squarefree initial ideals with respect to these monomial orders. We explain these monomial orders with examples in SS3.3 and SS5.3.
## 2. Testing for \(F\)-purity and \(F\)-regularity
We briefly recall some details on the types of singularities in positive characteristic that we discuss in this paper, as well as certain tools to test for them. For a detailed exposition on \(F\)-singularities, we refer the reader to the book [10] of Ma and Polstra.
Let \(R\) be a reduced Noetherian ring of positive prime characteristic \(p\). The letter \(e\) denotes a variable nonnegative integer, and \(q=p^{e}\) the \(e\)-th power. Let \(R^{1/q}\) denote the ring obtained by adjoining all \(q\)-th roots of elements of \(R\). The inclusion \(R\hookrightarrow R^{1/q}\) can then be identified with the \(e\)-fold Frobenius endomorphism \(F^{e}\colon R\longrightarrow R\).
The ring \(R\) is _\(F\)-finite_ if it is a finitely generated \(R\)-module via the action of the Frobenius endomorphism \(F:R\longrightarrow R\); or, equivalently, if \(R^{1/p}\) is finitely generated as an \(R\)-module. A finitely generated algebra over a field \(\mathbb{K}\) is \(F\)-finite if and only if \(\mathbb{K}^{1/p}\) is a finite field extension of \(\mathbb{K}\). For an ideal \(I=(z_{1},z_{2},\ldots,z_{t})\) of \(R\), the symbol \(I^{[q]}\) denotes the ideal \(F(I)R=(z_{1}^{q},z_{2}^{q},\ldots,z_{t}^{q})\) of \(R\).
The ring \(R\) is _\(F\)-pure_ if \(F\) is a pure homomorphism, i.e., if the map \(R\otimes_{R}M\longrightarrow R^{1/p}\otimes_{R}M\) induced by the inclusion of \(R\) in \(R^{1/p}\) is injective for each \(R\)-module \(M\). We say that \(R\) is _\(F\)-split_ if \(F\) is a split monomorphism. Clearly, any \(F\)-split ring is \(F\)-pure; furthermore, an algebra over an \(F\)-finite field is \(F\)-pure if and only if it is \(F\)-split.
An \(F\)-finite ring \(R\) is _strongly \(F\)-regular_ if for each \(c\) in \(R\) not in any of its minimal primes, there exists an integer \(q\) such that the \(R\)-linear inclusion \(R\longrightarrow R^{1/q}\) sending \(1\) to \(c^{1/q}\) splits as a map of \(R\)-modules. If the \(F\)-finite ring \(R\) is \(\mathbb{N}\)-graded, strong \(F\)-regularity coincides with other similar notions like \(F\)-regularity and weak \(F\)-regularity.
Before moving forward, we fix some general hypotheses.
**Convention 2.1**.: Unless stated otherwise, all rings in this paper are standard graded algebras over a field. When the field is of positive characteristic, it is assumed to be \(F\)-finite.
Since all rings in this paper are \(F\)-finite, we may use the words \(F\)-pure and \(F\)-split interchangeably. Similarly, since all rings in this paper are standard graded algebras over \(F\)-finite fields, we sometimes loosely refer to a strongly \(F\)-regular ring simply as an \(F\)-regular ring. \(\diamondsuit\)
The following criterion of Fedder is useful for testing \(F\)-purity in the homogeneous setting:
**Theorem 2.2**.: _[_1_, Theorem 1.12]_ _Let \(S=\mathbb{K}[x_{1},\ldots,x_{n}]\) be an \(\mathbb{N}\)-graded polynomial ring over a field \(\mathbb{K}\) of positive characteristic. Let \(I\) be a homogeneous ideal of \(S\) and let \(\mathfrak{m}_{S}\) denote the homogeneous maximal ideal of \(S\). Then \(S/I\) is \(F\)-pure if and only if the ideal \(I^{[p]}:I\) is not contained in \(\mathfrak{m}_{S}{}^{[p]}\)._
The following criterion of Glassbrenner is useful for testing \(F\)-regularity in the graded setting:
**Theorem 2.3**.: _[_1_, Theorem 3.1]_ _Let \(\mathbb{K}\) be a field of positive characteristic \(p\) and set \(S=\mathbb{K}[x_{1},\ldots,x_{n}]\), with homogeneous maximal ideal \(\mathfrak{m}_{S}\). Suppose that \(R=S/I\) is a finitely generated \(\mathbb{N}\)-graded domain. Then \(R\) is strongly \(F\)-regular if and only if_
* _there exists a homogeneous element_ \(s\) _of_ \(S\)_, not in_ \(I\)_, for which the ring_ \(R[1/s]\) _is strongly_ \(F\)_-regular, and_
* _the ideal_ \(s(I^{[p]}:I)\) _is not contained in_ \(\mathfrak{m}_{S}^{[p]}\)_._
The next two facts are helpful in proving the \(F\)-purity of a given ring. We shall use the following as an ingredient in establishing the \(F\)-regularity of the Pfaffian nullcone:
**Corollary 2.4**.: _[_1_, Corollary 3.3]_ _Let \(S\) be a polynomial ring over a field of positive prime characteristic \(p\) and let \(I\) be an equidimensional ideal of \(S\). If \(\mathfrak{a}\subsetneq I\) is an ideal generated by a regular sequence of length equal to the height of \(I\), then the ideal \(\mathfrak{a}^{[p]}:\mathfrak{a}\) is contained in \(I^{[p]}:I\). In particular, if the ring \(S/\mathfrak{a}\) is \(F\)-pure, then so is \(S/I\)._
Proof.: Let \(J\) be the ideal \(\mathfrak{a}:I\) of \(S\). We have the following chain of containments of ideals in \(S\):
\[\mathfrak{a}^{[p]}:\mathfrak{a} \subseteq \mathfrak{a}^{[p]}:IJ\] \[= (\mathfrak{a}^{[p]}:J):I\] \[\subseteq (\mathfrak{a}^{[p]}:J^{[p]}):I\] \[= (\mathfrak{a}:J)^{[p]}:I\] \[= I^{[p]}:I,\]
where the second-to-last equality follows from the flatness of the Frobenius map on the regular ring \(S\) and the last equality follows from the symmetry of links since \(I\) is an equidimensional ideal; see [11]. The result is now immediate from Theorem 2.2.
**Corollary 2.5**.: _Let \(S\) be a polynomial ring over a field of positive characteristic \(p\) and let \(\mathfrak{m}_{S}\) denote its homogeneous maximal ideal. Let \(I\) be a height \(h\) prime ideal of \(S\). Then, the symbolic power \(I^{(h(p-1))}\) is contained in \(I^{[p]}:I\). In particular, if \(I^{(h(p-1))}\) is not contained in \(\mathfrak{m}_{S}^{[p]}\), then \(S/I\) is \(F\)-pure._
Proof.: By the flatness of the Frobenius map on \(S\), the set of associated prime ideals of \(S/I^{[p]}\) equals that of \(S/I\). So the containment of ideals
\[I\cdot I^{(h(p-1))}\subseteq I^{[p]}\]
may be verified locally. In the regular ring \((S_{I},IS_{I})\), the maximal ideal \(IS_{I}\) is generated by \(h\) elements. Recall that the ordinary and symbolic powers of ideals primary to the maximal ideal are equal. Notice that the containment
\[I^{h(p-1)+1}S_{I}\subseteq I^{[p]}S_{I}\]
immediately follows from the pigeonhole principle. In fact, this result holds more generally when \(I\) is a radical ideal and \(h\) is the maximum of the heights of the associated prime ideals of \(I\).
Corollary 2.5 is especially useful when we understand the primary decomposition of the powers of the ideal of interest. We shall need it in proving the compatible \(F\)-splittings of varieties of complexes by making use of the primary decomposition of the powers of determinantal ideals. Corollary 2.5 appears implicitly in the proof that Hankel determinantal rings are \(F\)-pure [15, Theorem 4.1]. In fact, if an ideal \(I\) satisfies the conclusion of Corollary 2.5, then its symbolic powers \(\{I^{(n)}\}_{n\geq 0}\) define an \(F\)-split filtration, i.e., \(I\) is a _symbolic \(F\)-split_ ideal--a notion which is stronger than \(F\)-split, by [14, Corollary 5.10] and [14, Example 5.13].
The following result is used in proving that several ideals considered in this paper have squarefree initial ideals (in any characteristic) with respect to delicate monomial orders \(<_{B}\) constructed in SS3.3 and SS5.3. The following theorem is first proved in positive characteristic by an application of Theorem 2.2; it is then proved in characteristic zero by using reduction mod \(p\) techniques.
**Theorem 2.6**.: _[_14_, Theorem 3.13]_ _Let \(S\) be a polynomial ring over a field (not necessarily of positive characteristic). Let \(I\) be a radical ideal and \(<_{B}\) a monomial order in \(S\). Let \(h\) be the maximum of the heights of the associated prime ideals of \(I\)._
_If the initial ideal \(\mathrm{in}_{B}(I^{(h)})\) contains a squarefree monomial, then \(\mathrm{in}_{B}(I)\) is a squarefree monomial ideal._
## 3. The natural Pfaffian nullcone is strongly \(F\)-regular
The aim of this section is to establish the \(F\)-regularity of the Pfaffian nullcone. We begin with recalling some known facts.
### Generalities
Let \(Y\) be a \(2t\times n\) matrix of indeterminates and set \(S:=\mathbb{K}[Y]\). Let
\[\Omega:=\begin{pmatrix}0&\mathbb{I}\\ -\mathbb{I}&0\end{pmatrix}\]
be the \(2t\times 2t\) standard symplectic block matrix, where \(\mathbb{I}\) is the size \(t\) identity matrix. There is a natural \(\mathbb{K}\)-algebra isomorphism
\[R:=\mathbb{K}[Y^{\mathrm{T}}\Omega Y]\cong\mathbb{K}[X]/\operatorname{Pf}_{2t +2}(X),\]
where \(X\) is an \(n\times n\) alternating matrix of indeterminates, and \(\operatorname{Pf}_{2t+2}(X)\) the ideal generated by its principal Pfaffians of size \(2t+2\). This isomorphism is induced by mapping the entries of the matrix \(X\) to the corresponding entries of the alternating matrix \(Y^{\mathrm{T}}\Omega Y\). The symplectic group
\[\operatorname{Sp}_{2t}(\mathbb{K}):=\{M\in\operatorname{GL}_{2t}(\mathbb{K}) \ |\ M^{\mathrm{T}}\Omega M=\Omega\}\]
acts \(\mathbb{K}\)-linearly on \(S\) via
\[M\colon Y\mapsto MY\qquad\text{ for }\ M\in\operatorname{Sp}_{2t}(\mathbb{K}).\]
The invariant ring is precisely \(R\) when \(\mathbb{K}\) is infinite, see [1, SS6] or [10, Theorem 5.1]. The nullcone ideal of this group action is the ideal
\[\mathfrak{P}=\mathfrak{P}(Y):=(Y^{\mathrm{T}}\Omega Y)S,\]
generated by the entries of the matrix \(Y^{\mathrm{T}}\Omega Y\) in the polynomial ring \(S\). The nullcone for this action of \(\operatorname{Sp}_{2t}(\mathbb{K})\) on \(S\), the _(natural) Pfaffian nullcone_, is the ring \(S/\mathfrak{P}\). The Pfaffian nullcone is a Cohen-Macaulay normal domain with
\[\dim S/\mathfrak{P} = \begin{cases}2nt-\begin{pmatrix}n\\ 2\end{pmatrix}&\text{if }\ n\leq t+1,\\ nt+\begin{pmatrix}t+1\\ 2\end{pmatrix}&\text{if }\ n\geq t,\end{cases}\]
according to [11, Theorem 6.8] and [12, Theorem 9.1(3)]. It is a complete intersection ring precisely if \(n\leq t+1\) by [11, Theorem 6.3]. The standard monomials for the Pfaffian nullcone are studied in [12, SS4.2].
**Remark 3.1**.: Let \(Y\) be a size \(2t\times n\) matrix of indeterminates over a field \(\mathbb{K}\). Notice that when \(t=1\), we have
\[Y^{\mathrm{T}}\Omega Y\ =\ \begin{pmatrix}y_{1,1}&y_{2,1}\\ \vdots&\vdots\\ y_{1,n}&y_{2,n}\end{pmatrix}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\begin{pmatrix}y_{1,1}&\cdots&y_{1,n}\\ y_{2,1}&\cdots&y_{2,n}\end{pmatrix}=\ \begin{pmatrix}0&\Delta_{1,2}&\Delta_{1,3}&\ldots& \Delta_{1,n}\\ -\Delta_{1,2}&0&\Delta_{2,3}&\ldots&\Delta_{2,n}\\ -\Delta_{1,3}&-\Delta_{2,3}&0&\ldots&\Delta_{3,n}\\ \vdots&\vdots&&\ddots&\vdots\\ -\Delta_{1,n}&-\Delta_{2,n}&-\Delta_{3,n}&\ldots&0\end{pmatrix}.\]
In particular, \(Y^{\mathrm{T}}\Omega Y\) is an alternating matrix where, for \(i<j\), the matrix entry \((Y^{\mathrm{T}}\Omega Y)_{ij}\) is
\[\Delta_{i,j}:=y_{1,i}y_{2,j}-y_{1,j}y_{2,i}.\]
It follows that \(\mathfrak{P}\) coincides with the determinantal ideal \(I_{2}(Y)\). So, \(\mathfrak{P}\) has height \(n-1\) and defines an \(F\)-regular ring \(\mathbb{K}[Y]/\mathfrak{P}\). Note that the ring \(\mathbb{K}[Y^{\mathrm{T}}\Omega Y]\) is the homogeneous coordinate ring of the Grassmannian \(\operatorname{Gr}_{\mathbb{K}}(2,n)\) under the Plucker embedding into \(\mathbb{P}_{\mathbb{K}}^{\binom{n}{2}-1}\).
More generally, for \(t\geq 1\), the ring \(\mathbb{K}[Y^{\mathrm{T}}\Omega Y]\) is the homogeneous coordinate ring of the order \(t-1\) secant variety \(\operatorname{Gr}_{\mathbb{K}}^{t-1}(2,n)\), the closure of the union of linear spaces spanned by \(t\) points of \(\operatorname{Gr}_{\mathbb{K}}(2,n)\)
under the Plucker embedding. Moreover, for \(1\leq i<j\leq n\), the entry \((Y^{\mathrm{T}}\Omega Y)_{i,j}\) equals \(B(v_{i},v_{j})\), where \(v_{i}\) and \(v_{j}\) are the \(i\)-th and \(j\)-th columns of \(Y\), and \(B\) is the (nondegenerate) symplectic form
\[(v_{1},v_{2})\mapsto v_{1}^{\mathrm{T}}\Omega v_{2}.\]
It follows that the generators of \(\mathfrak{P}\) are sums of size two minors given by
\[d_{ij}:=(Y^{\mathrm{T}}\Omega Y)_{i,j}\ =\ (y_{1,i}y_{t+1,j}-y_{1,j}y_{t+1,i})\ +\ \cdots\ +\ (y_{t,i}y_{2t,j}-y_{t,j}y_{2t,i}).\]
for \(1\leq i<j\leq n\).
### The localization property
We show that the Pfaffian nullcone has a localization property analogous to that of the generic determinantal ring, as outlined in [3, Proposition 2.4].
**Lemma 3.2**.: _Let \(Y=(y_{i,j})\) be a \(2t\times n\) matrix of indeterminates; set \(S:=\mathbb{Z}[Y]\). Then there exists a \((2t-2)\times(n-1)\) matrix \(Z\) with entries in \(S[\frac{1}{y_{1,1}}]\), and elements \(f_{2},\ldots,f_{n}\) in \(S[\frac{1}{y_{1,1}}]\) such that:_
1. _the entries of_ \(Z\) _and the elements_ \(f_{2},\ldots,f_{n},y_{1,1},\ldots,y_{1,n},y_{2,1},\ldots,y_{2t,1}\) _taken together are algebraically independent over_ \(\mathbb{Z}\)_;_
2. _along with_ \(y_{11}^{-1}\)_, the above elements generate_ \(S[\frac{1}{y_{1,1}}]\) _as a_ \(\mathbb{Z}\)_-algebra;_
3. _with_ \(S^{\prime}:=\mathbb{Z}[Y^{\prime}]\)_, the ideal_ \(\mathfrak{P}(Y)S[\frac{1}{y_{1,1}}]\) _equals_ \(\mathfrak{P}(Z)S[\frac{1}{y_{1,1}}]+(f_{2},\ldots,f_{n})S[\frac{1}{y_{1,1}}]\)_, and we have an isomorphism_ \[\frac{S}{\mathfrak{P}(Y)}[\frac{1}{y_{1,1}}]\cong\frac{S^{\prime}}{\mathfrak{P }(Z)}[y_{1,1},\ldots,y_{1,n},y_{2,1},\ldots,y_{2t,1},\frac{1}{y_{1,1}}].\]
Proof.: Let us map the entries of the matrix \(Y\) to the corresponding entries of \(YM\), where \(M\) is a matrix with \(n\) rows. Clearly, the ideal \((YM)\) generated by the entries of the matrix \(YM\) is contained in \((Y)\), hence the ideal \(\mathfrak{P}(YM)\) is contained in \(\mathfrak{P}(Y)\). It follows that if \(M\) is invertible in \(S\), then the ideals \(\mathfrak{P}(Y)\) and \(\mathfrak{P}(YM)\) are equal. In particular, \(\mathfrak{P}(Y)\) is unaffected by elementary column operations of the matrix \(Y\).
After inverting \(y_{1,1}\), one may perform elementary column operations to transform \(Y\) into a matrix where \(y_{1,1}\) is the only nonzero entry in the first row; the resulting matrix is
\[\widetilde{Y}:=\left(\begin{array}{ccccc}y_{1,1}&0&0&\cdots&0\\ y_{2,1}&z_{2,2}&z_{2,3}&\cdots&z_{2,n}\\ \vdots&\vdots&\vdots&&\vdots\\ y_{t,1}&z_{t,2}&z_{t,3}&\cdots&z_{t,n}\\ y_{t+1,1}&z_{t+1,2}&z_{t+1,3}&\cdots&z_{t+1,n}\\ y_{t+2,1}&z_{t+2,2}&z_{t+2,3}&\cdots&z_{t+2,n}\\ \vdots&\vdots&\vdots&&\vdots\\ y_{2t,1}&z_{2t,2}&z_{2t,3}&\cdots&z_{2t,n}\\ \end{array}\right)\quad\text{where}\quad z_{i,j}=y_{i,j}-\frac{y_{i,1}y_{1,j}} {y_{1,1}}.\]
By construction, the ideals \(\mathfrak{P}(Y)\) and \(\mathfrak{P}(\widetilde{Y})\) are equal in the ring \(S[\frac{1}{y_{1,1}}]\). Set \(Z\) to be the submatrix of \(\widetilde{Y}\) obtained by deleting the first column, and rows \(1\) and \(t+1\). Note that the nonzero entries of the matrix \(\widetilde{Y}^{\mathrm{T}}\Omega\widetilde{Y}\) are those of \(Z^{\mathrm{T}}\overline{\Omega}Z\), where \(\overline{\Omega}\) is the standard symplectic block of size \(2t-2\), along with the polynomials
\[f_{j}:=(\widetilde{Y}^{\mathrm{T}}\Omega\widetilde{Y})_{1,j}\ =\ y_{1,1}z_{t+1,j}+(y_{2,1}z_{t+2,j}-y_{t+2,1}z_{2,j})+\cdots+(y_{t,1}z_{2t,j }-y_{2t,1}z_{t,j})\]
for \(2\leq j\leq n\). This proves assertion (3).
Assertions (1) and (2) are readily verified since the entries of the matrix \(Z\) do not involve the elements \(z_{t+1,j}\) which appear (with a unit coefficient) in \(f_{j}\) for \(2\leq j\leq n\)
### Constructing the monomial order
The aim of this subsection is to describe a recipe for a monomial order \(<_{B}\) that creates special lead terms for the generators of the Pfaffian nullcone ideal. The construction of this monomial order is quite technical; we illustrate it with an example first:
**Example 3.3**.: Let \(Y\) be a \(4\times 4\) matrix of indeterminates and \(\mathbb{K}\) be any field; set \(S=\mathbb{K}[Y]\). To define a monomial order \(<_{B}\) in the polynomial ring \(S\), we first define an order on the variables as follows. Sort the entries of the matrix \(Y\) into blocks \(B_{0},B_{1},B_{2}\), and \(B_{3}\) as suggested in the matrix
\[\begin{pmatrix}1&3&1&0\\ 2&2&0&0\\ 0&1&3&1\\ 0&0&2&2\end{pmatrix},\]
where the \((i,j)\)-entry of the matrix is the block into which \(y_{i,j}\) is sorted. Thus, for instance, \(y_{1,1}\) is in the block \(B_{1}\), \(y_{1,2}\) is in \(B_{3}\), and so on. Now, for \(\gamma\in B_{\ell}\) and \(\delta\in B_{k}\), set \(\gamma<\delta\) if \(\ell<k\). Then, within each set \(B_{\ell}\), fix an arbitrary order among the variables. This gives us a total variable order in the polynomial ring \(S\). Our monomial order \(<_{B}\) is the reverse lexicographical order induced by this variable order.
For a polynomial \(f\), let \(\operatorname{in}_{B}(f)\) denote the initial monomial of \(f\) with respect to our monomial order. Then
\[\mathfrak{P}=\mathfrak{P}(Y)=(d_{i,j}\ |\ 1\leq i<j\leq 4)\]
is an ideal of height \(5\) in \(S\); the generators \(d_{i,j}\) are as displayed in Remark 3.1. One has
\[\operatorname{in}_{B}(d_{1,2})=y_{1,1}y_{3,2}, \operatorname{in}_{B}(d_{2,3})=y_{1,2}y_{3,3}, \operatorname{in}_{B}(d_{3,4})=y_{1,3}y_{3,4},\] \[\operatorname{in}_{B}(d_{1,3})=y_{2,1}y_{4,3}, \operatorname{in}_{B}(d_{2,4})=y_{2,2}y_{4,4}, \operatorname{in}_{B}(d_{1,4})=y_{2,1}y_{4,4}.\]
Let \(\mathfrak{a}\) be the ideal of \(S\) generated by the elements \(d_{1,2},d_{2,3},d_{3,4},d_{1,3}\), and \(d_{2,4}\). Since the initial terms of the generators of \(\mathfrak{a}\) are pairwise coprime, the generators of \(\mathfrak{a}\) form a Grobner basis and it follows that the ideal \(\mathfrak{a}\) is generated by a regular sequence of maximum length in \(\mathfrak{P}\).
The construction of this monomial order is crucial in establishing the \(F\)-regularity of the Pfaffian nullcone \(S/\mathfrak{P}\), as we show next. From now on, asssume that the underlying field \(\mathbb{K}\) has positive characteristic \(p\).
Note that the polynomial
\[f:=d_{1,2}d_{2,3}d_{3,4}d_{1,3}d_{2,4}\]
is such that
\[f^{p-1}\in(\mathfrak{a}^{[p]}:\mathfrak{a})\smallsetminus\mathfrak{m}_{S}^{[p]},\]
where \(\mathfrak{m}_{S}\) denotes the homogeneous maximal ideal of \(S\). This is so because its initial term \(\operatorname{in}_{B}(f)\) is squarefree. It follows from Corollary 2.4 that \(S/\mathfrak{P}\) is \(F\)-pure. In fact, since \(y_{1,4}\) does not divide \(\operatorname{in}_{B}(f)\), we get
\[y_{1,4}f^{p-1}\in y_{1,4}(\mathfrak{a}^{[p]}:\mathfrak{a})\subseteq y_{1,4}( \mathfrak{P}^{[p]}:\mathfrak{P})\quad\text{while}\quad y_{1,4}f^{p-1}\notin \mathfrak{m}_{S}^{[p]}.\]
Since the ring \(\frac{S}{\mathfrak{P}}[\frac{1}{y_{1,4}}]\) is a smooth extension of the determinantal ring defined by the size two minors of a \(2\times 3\) matrix of indeterminates by Lemma 3.2, it follows from Theorem 2.3 that the Pfaffian nullcone \(S/\mathfrak{P}\) is strongly \(F\)-regular.
Finally, since \(f\) lies in the ideal \(\mathfrak{P}^{5}\), it is clear that \(\operatorname{in}_{B}(f)\) lies in \(\operatorname{in}_{B}(\mathfrak{P}^{5})\), and hence in \(\operatorname{in}_{B}(\mathfrak{P}^{(5)})\). By Theorem 2.6, it immediately follows that the Pfaffian nullcone ideal \(\mathfrak{P}\) has a squarefree initial ideal (with respect to the monomial order \(<_{B}\)) in any characteristic. \(\diamondsuit\)
We next construct the monomial order \(<_{B}\) illustrated in the above example. For ease of notation, we relabel the entries of lower half of the \(2t\times n\) matrix \(Y=(y_{i,j})\) as follows: Let \(w_{i,j}=y_{i+t,n-j+1}\) for \(1\leq i\leq t\) and \(1\leq j\leq n\). Then we have
\[Y=\begin{pmatrix}y_{1,1}&y_{1,2}&\cdots&y_{1,n}\\ \vdots&\vdots&&\vdots\\ y_{t,1}&y_{t,2}&\cdots&y_{t,n}\\ w_{1,n}&w_{1,n-1}&\cdots&w_{1,1}\\ \vdots&\vdots&&\vdots\\ w_{t,n}&w_{t,n-1}&\cdots&w_{t,1}\end{pmatrix}.\]
Recall from Remark 3.1 that the entries of the Pfaffian nullcone ideal \((Y^{\mathrm{T}}\Omega Y)\) in \(\mathbb{K}[Y]\) are
\[d_{i,j}=\det\begin{pmatrix}y_{1,i}&y_{1,j}\\ w_{1,n-i+1}&w_{1,n-j+1}\end{pmatrix}+\cdots+\det\begin{pmatrix}y_{t,i}&y_{t,j} \\ w_{t,n-i+1}&w_{t,n-j+1}\end{pmatrix},\]
for \(1\leq i<j\leq n\).
The following computation is the technical heart of this section:
**Lemma 3.4**.: _Let \(\mathbb{K}\) be any field. There exists a monomial order \(<_{B}\) in \(\mathbb{K}[Y]\) such that for all \(1\leq i<j\leq n\) with \(j-i\leq t\), we have_
\[\mathrm{in}_{B}(d_{i,j})=y_{j-i,i}w_{j-i,n-j+1},\]
_where \(\mathrm{in}_{B}(d_{i,j})\) denotes the initial monomial of \(d_{i,j}\) with respect to the order \(<_{B}\)._
Proof.: To define a monomial order \(<_{B}\) in the polynomial ring \(S=\mathbb{K}[Y]\), we first define an order on the variables as follows. Sort the entries of the matrix \(Y\) into blocks \(B_{0},B_{1},\ldots,B_{n-1}\) according to the following formula:
\[y_{i,j},w_{i,j}\text{ are in block }B_{\ell}\text{ where }\ell=\begin{cases} \text{(a)}&2j+i-2&\text{if}&1\leq j<\frac{n-i+1}{2},\\ \text{(b)}&2n-2j-i&\text{if}&\frac{n-i+1}{2}\leq j<n-i+1,\\ \text{(c)}&0&\text{otherwise}.\end{cases} \tag{3.3.1}\]
Now, for \(\gamma\in B_{\ell}\) and \(\delta\in B_{k}\), set \(\gamma<\delta\) if \(\ell<k\). Then, within each set \(B_{\ell}\), fix an arbitrary order among the variables. This gives us a total variable order in \(S\). Our monomial order \(<_{B}\) is the reverse lexicographical order induced by this variable order in \(S\). For a polynomial \(f\), let \(\mathrm{in}_{B}(f)\) denote the initial monomial of \(f\) with respect to our monomial order.
We claim that for \(i<j\) and with
\[d_{i,j}=\sum_{s=1}^{t}y_{s,i}w_{s,n-j+1}-\sum_{s=1}^{t}y_{s,j}w_{s,n-i+1},\]
we have \(\mathrm{in}_{B}(d_{i,j})=y_{j-i,i}w_{j-i,n-j+1}\). To prove our claim, we must show that
\[y_{j-i,i}w_{j-i,n-j+1}\geq y_{s,i}w_{s,n-j+1},\quad\text{and}\quad y_{j-i,i}w _{j-i,n-j+1}\geq y_{s,j}w_{s,n-i+1}\]
for all \(1\leq s\leq t\). As the order is reverse lexicographical, we must show that, for all \(1\leq s\leq t\), \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are greater than or equal to at least one of \(y_{s,i}\) and \(w_{s,n-j+1}\), as well as at least one of \(y_{s,j}\) and \(w_{s,n-i+1}\). This can be done by analyzing all possible cases one by one. We include all the details for the sake of completeness.
Our proof will proceed as follows: First, we will show that \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are in the same block \(B_{\ell}\). Then we will show that for \(s\neq j-i\) at least one of \(y_{s,i}\) and \(w_{s,n-j+1}\) is in \(B_{k}\) for some \(k<\ell\). Lastly, we will show that at least one of \(y_{s,j}\) and \(w_{s,n-i+1}\) is in \(B_{k}\) for some \(k<\ell\).
**Part 1**. We claim that \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are both in \(B_{i+j-2}\) if \(i+j<n+1\), and are in \(B_{2n-i-j}\) otherwise.
To show this, we first show that \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are not in \(B_{0}\). Suppose \(y_{j-i,i}\in B_{0}\). This requires that \(n-(j-i)+1\leq i\), implying \(n+1\leq j\), which is impossible. Suppose \(w_{j-i,n-j+1}\in B_{0}\). This requires that \(n-(j-i)+1\leq n-j+1\), implying \(i\leq 0\), which also impossible.
Now, \(i+j<n+1\) is equivalent to \(i<(n-(j-i)+1)/2\) as well as to \((n-(j-i)+1)/2<n-j+1\). The former means that the block of \(y_{j-i,i}\) is decided via formula (a) in (3.3.1) and equals \(B_{2i+(j-i)-2}=B_{i+j-2}\); the latter that the block of \(w_{j-i,n-j+1}\) comes from formula (b) and is \(B_{2n-2(n+1-j)-(j-i)}=B_{i+j-2}\).
If on the other hand \(i+j>n+1\) then \(y_{j-i,i}\) follows case (b) and \(w_{j-i,n-j+1}\) follows case (a) by the same computation. The corresponding blocks are computed as \(B_{2n-2(i)-(j-i)}=B_{2n-i-j}\) and \(B_{2(n-j+1)+(j-i)-2}=B_{2n-j-i}\).
In the case \(i+j=n+1\), \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) have the same label and are in block \(B_{i+j-2}=B_{2n-i-j}\).
**Part 2**. Here we will show that if \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are in \(B_{\ell}\), then \(s\neq j-i\) implies that at least one of \(y_{s,i}\) and \(w_{s,n-j+1}\) is in \(B_{k}\) where \(k<\ell\).
If \(y_{s,i}\) or \(w_{s,n-j+1}\) is in \(B_{0}\), then we are done. So assume that \(y_{s,i}\) and \(w_{s,n-j+1}\) are not in \(B_{0}\).
* Suppose \(i+j<n+1\) and \(s<j-i\). Then \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are in \(B_{i+j-2}\). By the proof of Part 1, \(i<\dfrac{n-(j-i)+1}{2}<\dfrac{n-s+1}{2}\), and so \(y_{s,i}\in B_{2i+s-2}\). As \(s<j-i\) implies that \(2i+s-2<i+j-2\), \(y_{s,i}\) is in a lower block than \(y_{j-i,i}\).
* Suppose \(i+j<n+1\) and \(s>j-i\). Then, \(y_{j-i,i},w_{j-i,n-j+1}\in B_{i+j-2}\). By the proof of Part 1, \(n-j+1>\dfrac{n-(j-i)+1}{2}>\dfrac{n-s+1}{2}\), and so \(w_{s,n-j+1}\in B_{2j-s-2}\). As \(s>j-i\) implies that \(2j-s-2<i+j-2\), \(w_{s,n-j+1}\) is in a lower block than \(w_{j-i,n-j+1}\).
* Suppose \(i+j\geq n+1\) and \(s<j-i\). Then, \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are in \(B_{2n-i-j}\). By the proof of Part 1, \(n-j+1\leq\dfrac{n-(j-i)+1}{2}<\dfrac{n-s+1}{2}\), and so \(w_{s,n-j+1}\in B_{2n-2j+s}\). As \(s<j-i\) implies that \(2n-2j+s<2n-i-j\), \(w_{s,n-j+1}\) is in a lower block than \(w_{j-i,n-j+1}\).
* Suppose \(i+j\geq n+1\) and \(s>j-i\). Then, \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are in \(B_{2n-i-j}\). By the proof of Part 1, \(i\geq\dfrac{n-(j-i)+1}{2}>\dfrac{n-s+1}{2}\), and so \(y_{s,i}\in B_{2n-2i-s}\). As \(s>j-i\) implies hat \(2n-2i-s<2n-i-j\), \(y_{s,i}\) is in a lower blck than \(y_{j-i,i}\).
**Part 3**. Here we will show that if \(y_{j-i,i},w_{j-i,n-j+1}\in B_{\ell}\), then for \(s\neq j-i\) at least one of \(y_{s,j}\) and \(w_{s,n-i+1}\) is in \(B_{k}\) for some \(k<\ell\).
If \(y_{s,j}\) or \(w_{s,n-i+1}\) is in \(B_{0}\), there is nothing to show by the proof of Part 1. So assume that \(y_{s,j}\) and \(w_{s,n-i+1}\) are not in \(B_{0}\).
* Suppose that \(i+j<n+1\). By Part 1, \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are in \(B_{i+j-2}\). Since \(i+j<n+1\), an inequality \(n-i+1<\frac{n-s+1}{2}\) would imply that \(s<2i-n-1<i-j<0\), which we know to be false. Hence, \(n-i+1\geq\frac{n-s+1}{2}\), and thus the block of \(w_{s,n-i+1}\) is decided by case (b) and equals \(B_{2n-2(n-i+1)-s}=B_{2i-s-2}\). As \(s>0>i-j\) implies that \(2i-s-2<i+j-2\), \(w_{s,n-i+1}\) is in a lower block than \(w_{j-i,n-j+1}\).
* Suppose that \(i+j\geq n+1\).
By Part 2, \(y_{j-i,i}\) and \(w_{j-i,n-j+1}\) are in \(B_{2n-i-j}\). Since \(i+j\geq n+1\), an inequality \(j<\frac{n-s+1}{2}\) would imply that \(s<n-2j+1\leq i-j<0\), which we know to be false. Thus, \(j\geq\frac{n-s+1}{2}\), and thus the block of \(y_{s,j}\) is decided by case (b) and equals \(B_{2n-2j-s}\). As \(s>0>i-j\) implies that \(2n-2j-s<2n-i-j\), and so \(y_{s,j}\) is in a lower block than \(y_{j-i,i}\).
This finishes the analysis of each possible case.
Having established the monomial order \(<_{B}\), we next exhibit a natural choice of a maximal regular sequence in the Pfaffian nullcone ideal.
**Lemma 3.5**.: _Let \(\mathbb{K}\) be any field and \(\mathfrak{a}\subseteq\mathfrak{P}(Y)\) be the ideal of \(\mathbb{K}[Y]\) generated by the set_
\[\alpha=\{d_{i,j}\mid 1\leq i<j\leq n\text{ and }j-i\leq t\}.\]
_Then the ideal \(\mathfrak{a}\) has the same height as \(\mathfrak{P}(Y)\) and \(\alpha\) is a regular sequence._
Proof.: First, we note that
\[|\alpha|=\begin{cases}\binom{n}{2}&\text{if }\;n\leq t+1,\\ nt-\binom{t+1}{2}&\text{if }\;n\geq t,\end{cases}\]
where \(|\alpha|\) denotes the cardinality of \(\alpha\). By [1, Theorem 6.8], we have that \(|\mathfrak{a}|=\operatorname{ht}(\mathfrak{P}(Y))\).
By Lemma 3.4, there exists a monomial order in \(\mathbb{K}[Y]\) such that for all \(1\leq i<j\leq n\) with \(j-i\leq t\), we have
\[\operatorname{in}_{B}(d_{i,j})=y_{j-i,i}w_{j-i,n-j+1}.\]
Notice that if \((i,j)\neq(k,\ell)\) then \(\operatorname{in}_{B}(d_{i,j})\) and \(\operatorname{in}_{B}(d_{l,k})\) are relatively prime. Thus, the elements of \(\alpha\) form a Grobner basis for \(\mathfrak{a}\) (see, for example, [1, Corollary 2.3.4.]), and the initial terms of \(\alpha\) form a regular sequence. Therefore,
\[\dim(R/\mathfrak{a})=\dim(R/\operatorname{in}_{B}(\mathfrak{a}))=\operatorname {depth}(R/\operatorname{in}_{B}(\mathfrak{a}))\leq\operatorname{depth}(R/ \mathfrak{a})\leq\dim(R/\mathfrak{a}),\]
so that we must have equality throughout (see [1, Corollary 3.3.4]). In particular,
\[|\alpha|\geq\operatorname{ht}(\mathfrak{a})=\operatorname{ht}(\operatorname{ in}_{B}(\mathfrak{a}))=|\alpha|,\]
and the lemma follows from [1, Theorem 6.8].
We are now ready to prove the main results of this section.
**Theorem 3.6**.: _Let \(Y\) be a matrix of indeterminates of size \(2t\times n\) for positive integers \(t\) and \(n\). Let \(\mathbb{K}\) be a field and set \(S:=\mathbb{K}[Y]\)._
1. _If_ \(\mathbb{K}\) _is an_ \(F\)_-finite field of positive characteristic, the natural Pfaffian nullcone_ \(S/\mathfrak{P}(Y)\) _is a strongly_ \(F\)_-regular ring._
2. _In consequence, if_ \(\mathbb{K}\) _has characteristic zero, the natural Pfaffian nullcone_ \(S/\mathfrak{P}(Y)\) _has log-terminal, and hence rational, singularities._
Proof.: Assertion (2) follows from (1) since rings of characteristic zero of \(F\)-regular type have log-terminal singularities, which are rational, compare [16, Theorem 4.3] and [11]. We therefore concentrate on the case where the characteristic of \(\mathbb{K}\) is \(p>0\).
We proceed by induction on \(t\). The statement is clear for \(t=1\), since then by Remark 3.1, the ideal \(\mathfrak{P}(Y)\) equals the determinantal ideal \(I_{2}(Y)\) of the size two minors of \(Y\). The corresponding ring is strongly \(F\)-regular as it is a Segre product of standard graded polynomial rings.
Now assume that the assertion holds for some \(t>1\). By Lemma 3.2, we have
\[\frac{S}{\mathfrak{P}(Y)}[\frac{1}{y_{1,n}}]\cong\frac{\mathbb{K}[Z]}{ \mathfrak{P}(Z)}[y_{1,1},\ldots,y_{1,n},y_{2,n},\ldots,y_{2t,n},\frac{1}{y_{1,n}}]\]
where \(Z\) is a matrix of indeterminates of size \((2t-2)\times n\). It follows by induction that the ring \(\frac{S}{\mathfrak{P}(Y)}[\frac{1}{y_{1,n}}]\) is strongly \(F\)-regular.
In order to apply Theorem 2.3, we must show that
\[y_{1,n}(\mathfrak{P}(Y)^{[p]}:\mathfrak{P}(Y))\not\subseteq\mathfrak{m}_{S}^{[p]},\]
where \(\mathfrak{m}_{S}\) is the homogeneous maximal ideal of \(S\). By Corollary 2.4, it suffices to find an ideal \(\mathfrak{a}\) in \(\mathfrak{P}(Y)\) generated by a regular sequence \(\alpha\) of length equal to the height of \(\mathfrak{P}(Y)\) such that
\[y_{1,n}(\mathfrak{a}^{[p]}:\mathfrak{a})\not\subseteq\mathfrak{m}_{S}^{[p]}.\]
Let \(\alpha=\{d_{i,j}\mid 1\leq i<j\leq n\text{ and }j-i\leq t\}\) and \(\mathfrak{a}=\alpha S\). Consider the polynomial
\[f:=\prod_{d_{i,j}\in\alpha}d_{i,j}.\]
Clearly
\[y_{1,n}f^{p-1}\in y_{1,n}(\mathfrak{a}^{[p]}:\mathfrak{a}).\]
Recall that for polynomials \(g\) and \(h\) and a fixed monomial order \(<_{B}\) in \(S\), we have \(\operatorname{in}_{B}(gh)=\operatorname{in}_{B}(g)\operatorname{in}_{B}(h)\). We choose the monomial order \(<_{B}\) as constructed in Lemma 3.4. Then we get:
\[\operatorname{in}_{B}(y_{1,n}f^{p-1}) = y_{1,n}\operatorname{in}_{B}(f)^{p-1}\] \[= y_{1,n}\left(\prod_{\begin{subarray}{c}1\leq i<j\leq n\\ j-i\leq t\end{subarray}}(y_{j-i,i}w_{j-i,n-j+1})\right)^{p-1}\notin\mathfrak{m} _{S}^{[p]}.\]
Since \(\mathfrak{m}_{S}^{[p]}\) is a monomial ideal, an element lies in \(\mathfrak{m}_{S}^{[p]}\) if and only if each of its terms does, and so
\[y_{1,n}f^{p-1}\notin\mathfrak{m}_{S}^{[p]}.\]
We are done by Theorem 2.3.
**Corollary 3.7**.: _Let \(Y\) be a matrix of indeterminates of size \(2t\times n\) for positive integers \(t\) and \(n\). Let \(\mathbb{K}\) be a field (of any characteristic) and set \(S:=\mathbb{K}[Y]\). The Pfaffian nullcone ideal \(\mathfrak{P}(Y)\) has a squarefree initial ideal._
Proof.: Let \(h\) be the height of \(\mathfrak{P}=\mathfrak{P}(Y)\) and the polynomial \(f\) be as constructed in the proof of Theorem 3.6, i.e.,
\[f:=\prod_{d_{i,j}\in\alpha}d_{i,j}.\]
Then since \(f\) lies in the ideal \(\mathfrak{P}^{h}\), it is clear that \(\operatorname{in}_{B}(f)\) lies in \(\operatorname{in}_{B}(\mathfrak{P}^{h})\), and hence in \(\operatorname{in}_{B}(\mathfrak{P}^{(h)})\). By Theorem 2.6, it immediately follows that the Pfaffian nullcone ideal \(\mathfrak{P}\) has a squarefree initial ideal (with respect to the monomial order \(<_{B}\)).
We end this section with the following:
**Question 3.8**.: Let \(Y\) be a \(2t\times n\) matrix of indeterminates for positive integers \(t\) and \(n\), and let \(\mathfrak{P}\) denote the natural Pfaffian nullcone ideal in the polynomial ring \(\mathbb{K}[Y]\). Denote by
\[\mathcal{R}^{S}(\mathfrak{P}):=\bigoplus_{k\geq 0}\mathfrak{P}^{(k)}\quad \text{and}\quad G^{S}(\mathfrak{P}):=\bigoplus_{k\geq 0}\mathfrak{P}^{(k)}/ \mathfrak{P}^{(k+1)}\]
the _symbolic Rees algebra_ and the _symbolic associated graded algebra_ of \(\mathfrak{P}\) respectively. Are these rings Noetherian? \(\diamondsuit\)
The proof of Theorem 3.6 shows that the Pfaffian nullcone ring is _symbolic \(F\)-split_ ([DSMnNnB21, Corollary 5.10]). It immediately follows by [DSMnNnB21, Theorem 4.7] that the symbolic Rees algebra and the symbolic associated graded algebra of the ideal \(\mathfrak{P}\) are \(F\)-split (hence reduced). However, we do not know if either of these blowup algebras are Noetherian.
## 4. The divisor class group of the Pfaffian nullcone
In this section, we give a characteristic-free description of the divisor class group and the Gorenstein property of the Pfaffian nullcone. This section may be viewed as an application of the technique of principal radical systems introduced by Hochster and Eagon in [11] and studied for the Pfaffian nullcone in [10, Theorem 6.7].
**Theorem 4.1**.: _Let \(Y\) be a \(2t\times n\) matrix of indeterminates for positive integers \(t\) and \(n\), and \(\mathbb{K}\) be any field. Let \(R_{t,n}\) denote the Pfaffian nullcone \(\mathbb{K}[Y]/\mathfrak{P}(Y)\)._
1. _If_ \(n\geq t+1\) _then the divisor class group of_ \(R_{t,n}\) _is the group_ \(\mathbb{Z}\) _of integers. Otherwise,_ \(R_{t,n}\) _is a unique factorization domain._
2. _Let_ \(Y|_{t}\) _be the submatrix of_ \(Y\) _consisting of the first_ \(t\) _columns. If_ \(n\geq t+1\)_, the ideal_ \(\mathfrak{p}:=I_{t}(Y|_{t})\) _of_ \(R_{t,n}\) _is prime of height one and generates the divisor class group. Further, the canonical class of_ \(R_{t,n}\) _is given by_ \(\mathfrak{p}^{(n-t-1)}\)_._
3. _The ring_ \(R_{t,n}\) _is Gorenstein precisely if_ \(n\leq t+1\)_._
Proof.: Let \(W\) be a multiplicatively closed set of \(R_{t,n}\). Recall the Nagata exact sequence (see [23]) of divisor class groups
\[0\longrightarrow U\longrightarrow\operatorname{Cl}(R_{t,n})\longrightarrow \operatorname{Cl}(W^{-1}R_{t,n})\longrightarrow 0,\]
where \(U\) is the subgroup of \(\operatorname{Cl}(R_{t,n})\) consisting of the classes of pure height one ideals which have a nonempty intersection with \(W\). Let \(W\) be the multiplicatively closed set of \(R_{t,n}\) consisting of the powers of \(y_{1,1}\).
For the remainder of the proof, fix \(t\geq 2\). The principal ideal \(y_{1,1}R_{t,n}\) is prime by [10, Theorem 6.7(2)]. Thus, the only pure height one ideal containing \(y_{1,1}\) is principal and \(U\) is the trivial group \(\{[y_{1,1}]=0\}\). Since the class group is unaffected by a polynomial extension or by inverting at a prime element, it follows from Lemma 3.2(3) that the class groups \(\operatorname{Cl}(W^{-1}R_{t,n})\) and \(\operatorname{Cl}(R_{t-1,n-1})\) are isomorphic. This gives us an explicit inductive isomorphism of class groups
\[\operatorname{Cl}(R_{t,n})\cong\operatorname{Cl}(R_{t-1,n-1})\text{ with }[\mathfrak{q}]\mapsto[S^{-1}\mathfrak{q}]. \tag{4.0.1}\]
By Remark 3.1, \(R_{1,n-t+1}\) is the generic determinantal ring \(\mathbb{K}[Y_{2\times(n-t+1)}]/I_{2}(Y)\) and its class group can be computed directly using the Kunneth formula for local cohomology modules [11, Theorem 4.1.5] since it is a Segre product of standard graded polynomial rings. Inductively, we get
\[\operatorname{Cl}(R_{t,n})\cong\operatorname{Cl}(R_{1,n-t+1})=\begin{cases} \mathbb{Z}&\text{if }\ n-t+1\geq 2,\\ 0&\text{otherwise}.\end{cases}\]
Assertion (1) immediately follows. Since the canonical module localizes, Equation (4.0.1) also proves assertion (3).
We now prove assertion (2). Let the \((2t-2)\times n\) matrix \(Z\) and the elements \(f_{2},\dots,f_{n}\) be as in Lemma 3.2. We have an isomorphism of rings
\[\mathbb{K}[Y][\frac{1}{y_{1,1}}]\overset{\simeq}{\longrightarrow}\mathbb{K}[Z ][y_{1,1},\dots,y_{1,n},y_{2,1},\dots,y_{2t,n},f_{2},\dots,f_{n},\frac{1}{y_{1, 1}}]\]
induced by elementary column operations of the matrix \(Y\) that sends
\[\mathfrak{P}(Y)+I_{t}(Y|_{t})\mapsto\mathfrak{P}(Z)+I_{t-1}(Z|_{t-1})+(f_{2}, \dots,f_{n}).\]
By Lemma 3.2(3), this map descends to the isomorphism
\[R_{t,n}[\frac{1}{y_{1,1}}]\overset{\simeq}{\longrightarrow}R_{t-1,n-1}[y_{1, 1},\dots,y_{1,n},y_{2,1},\dots,y_{2t,n},\frac{1}{y_{1,1}}]\]
with
\[\mathfrak{p}:=I_{t}(Y|_{t})\mapsto I_{t-1}(Z|_{t-1}).\]
Put \(t=2\) in the above isomorphism. Clearly the ideal \(\mathfrak{q}:=I_{1}(Z|_{1})\) is prime of height one in the ring \(R_{1,n-1}\) defined by the size two minors of the \(2\times(n-1)\) generic matrix \(Z\). Further, the prime ideal \(\mathfrak{q}\) generates the class group of \(R_{1,n-1}\) and the canonical class is given by \(\mathfrak{q}^{(n-2)}\). This can be computed directly from the fact that \(R_{1,n-1}\) is a Segre product of standard graded polynomial rings; alternatively see [10]. Assertion (2) now follows by induction along the above isomorphism.
**Remark 4.2**.: Let \(Y\) be a \(2t\times n\) matrix of indeterminates for positive integers \(t\) and \(n\). It follows from Theorem 4.1 and [11, Theorem 6.3] that the Pfaffian nullcone \(\mathbb{K}[Y]/\mathfrak{P}(Y)\) is Gorenstein if and only if it is \(\mathbb{Q}\)-Gorenstein if and only if it is a complete intersection ring if and only if \(n\leq t+1\). Further, note that \(n=t+1\) gives us the only Gorenstein Pfaffian nullcone which is not a unique factorization domain, and that \(n=t+2\) gives us the only (non-UFD) Pfaffian nullcone whose class group is generated by the canonical class. \(\diamondsuit\)
## 5. Compatible \(F\)-splitting of the natural determinantal nullcone,
and the varieties of complexes
In this section, we prove that the determinantal nullcone is \(F\)-pure and that each of its irreducible components are \(F\)-regular. We begin with recalling some known facts about the determinantal nullcone and the varieties of complexes.
### Generalities
For positive integers \(m\), \(n\), and \(t\), let \(Y\) and \(Z\) be \(m\times t\) and \(t\times n\) matrices of indeterminates respectively. Set \(S\) to be the polynomial ring \(\mathbb{K}[Y,Z]\), and take \(R\) to be the \(\mathbb{K}\)-subalgebra generated by the entries of the product matrix \(YZ\). There is a natural \(\mathbb{K}\)-algebra isomorphism
\[R:=\mathbb{K}[YZ]\cong\mathbb{K}[X]/I_{t+1}(X)\]
where \(X\) is an \(n\times n\) matrix of indeterminates, and \(I_{t+1}(X)\) is the ideal generated by its size \(t+1\) minors. This isomorphism is induced by mapping the entries of the matrix \(X\) to the corresponding entries of the matrix \(YZ\). The general linear group \(\operatorname{GL}_{t}(\mathbb{K})\) acts \(\mathbb{K}\)-linearly on \(S\) via
\[M\colon\begin{cases}Y&\mapsto YM^{-1}\\ Z&\mapsto MZ\end{cases}\]
where \(M\in\operatorname{GL}_{t}(\mathbb{K})\). When the field \(\mathbb{K}\) is infinite, \(R\) is precisely the ring of invariants, see [1, SS3] or [14, Theorem 4.1]. The nullcone of the natural embedding
\[R=\mathbb{K}[YZ]\subseteq\mathbb{K}[Y,Z]=S\]
is the typically non-equidimensional ring \(\mathbb{K}[Y,Z]/(YZ)\). In this article, we refer to this ring as the _(natural) determinantal nullcone_. When \(\mathbb{K}\) has characteristic zero, the group \(\operatorname{GL}_{t}(\mathbb{K})\) is linearly reductive and thus the determinantal ring \(R\) splits from \(S\) as an \(R\)-module. When \(\mathbb{K}\) has positive characteristic, this embedding typically does _not_ split by [11, Theorem 1.1]. This is due to the Cohen-Macaulay property of the minimal prime ideals of the nullcone ideal ([12, Theorem 6.2]), in conjunction with the flatness of Frobenius. We next discuss the primary decomposition of the determinantal nullcone ideal.
Observe that any point \(P\) in the zero set of the ideal \((YZ)S\) in \(\mathbb{K}^{mt+tn}\) may be regarded as an ordered pair of matrices \(P=(A,B)\), where \(A\) and \(B\) are points in \(\mathbb{K}^{mt}\) and \(\mathbb{K}^{tn}\) respectively, such that the product matrix \(AB\) is zero. Let \(r\) and \(s\) denote the ranks of the matrices \(A\) and \(B\) respectively. A point of the zero set of \((YZ)S\) maybe regarded as a complex (of length two)
\[\mathbb{K}^{n}\xrightarrow{B}\mathbb{K}^{t}\xrightarrow{A}\mathbb{K}^{m}.\]
Consequently, we must have that \(r+s\leq t\). Consider the ideals
\[\mathfrak{p}_{r,s}=\mathfrak{p}_{r,s}(Y,Z):=I_{r+1}(Y)+I_{s+1}(Z)+(YZ)\]
of \(S\), where \(r\leq\min\{m,t\}\), \(s\leq\min\{t,n\}\), and \(r+s\leq t\). These are precisely the ideals defining the varieties of complexes (of length two) as introduced by Buchsbaum-Eisenbud in [1]. The above discussion gives us a one-to-one correspondence between the points of the zero set of \((YZ)S\) with the varieties of complexes of length two. Notice that the ideal \(\mathfrak{p}_{r,s}\) contains \(\mathfrak{p}_{r^{\prime},s^{\prime}}\) whenever \(r\leq r^{\prime}\) and \(s\leq s^{\prime}\). Therefore, the minimal prime ideals of the determinantallunc are the ideals defining the varieties of exact complexes:
\[(YZ)S=\bigcap_{r+s=t}\mathfrak{p}_{r,s}(Y,Z).\]
The determinantall nullcone \((YZ)S\) is a radical ideal by [11]; see also [14, Theorem 4.1]. It is typically non-equidimensional since, by [10, Lemma 2.3], we have
\[\dim(S/\mathfrak{p}_{r,s})=r(m-r+t)+s(n-s+t)-rs.\]
The varieties of complexes are Cohen-Macaulay normal domains in any characteristic by [12, Theorems 6.2, 7.1] and [10, Theorem 2.7]. Their divisor class groups and Gorenstein property are determined independently in [12] and [11]. Kempf showed that the varieties of complexes \(S/\mathfrak{p}_{r,s}\) have rational singularities in characteristic zero ([13], [14]). In [14] it is shown that in positive characteristic, they are \(F\)-rational relative to the resolution given by Kempf, and that they are also \(F\)-split.
### The localization property
We start with a localization property for the varieties of complexes analogous to that of the Pfaffian nullcone in Lemma 3.2.
**Lemma 5.1**.: _Let \(Y=(y_{i,j})\) and \(Z=(z_{i,j})\) be matrices of indeterminates of sizes \(m\times t\) and \(t\times n\) respectively; set \(S:=\mathbb{Z}[Y,Z]\). Let \(Z^{\prime}\) be the submatrix of \(Z\) obtained by deleting the first row. Then there exists a matrix \(Y^{\prime}\) of size \((m-1)\times(t-1)\) and elements \(f_{1},\ldots f_{n}\) in \(S[\frac{1}{y_{1,1}}]\) such that:_
1. _the entries of_ \(Y^{\prime}\)_, the entries of_ \(Z^{\prime}\)_, and the elements_ \(f_{1},\ldots,f_{n}\) _taken together are algebraically independent over_ \(\mathbb{Z}\)_;_
2. _along with_ \(y_{1,1}\) _and_ \(y_{1,1}^{-1}\)_, the above elements generate_ \(S[\frac{1}{y_{1,1}}]\) _as a_ \(\mathbb{Z}\)_-algebra;_
3. _with_ \(S^{\prime}:=\mathbb{Z}[Y^{\prime},Z^{\prime}]\)_, the ideal_ \(\mathfrak{p}_{r,s}(Y,Z)S[\frac{1}{y_{1,1}}]\) _equals_ \(\mathfrak{p}_{r-1,s}(Y^{\prime},Z^{\prime})S[\frac{1}{y_{1,1}}]+(f_{1},\ldots,f_{n})S[\frac{1}{y_{1,1}}]\)_, and we have an isomorphism_ \[\frac{S}{\mathfrak{p}_{r,s}(Y,Z)}[\frac{1}{y_{1,1}}]\cong\frac{S^{\prime}}{ \mathfrak{p}_{r-1,s}(Y^{\prime},Z^{\prime})}[y_{1,1},\ldots,y_{m,1},y_{1,2}, \ldots,y_{1,t},\frac{1}{y_{1,1}}].\]
Proof.: Let us map the entries of the matrix \(Y\) to the corresponding entries of \(MY\), where \(M\) is a matrix with \(m\) columns. Clearly, the ideal \((MY)\) generated by the entries of the matrix \(MY\) is contained in \((Y)\). This maps the ideal \(I_{i+1}(Y)\) to \(I_{i+1}(MY)\) and the ideal \((YZ)\) to \((MYZ)\). It follows that if \(M\) is invertible then the ideals \(\mathfrak{p}_{r,s}(Y,Z)\) and \(\mathfrak{p}_{r,s}(MY,Z)\) are equal. In particular, the ideal \(\mathfrak{p}_{r,s}(Y,Z)\) is unaffected by elementary row operations of the matrix \(Y\).
After inverting \(y_{1,1}\), one may perform elementary row operations to transform \(Y\) into a matrix where \(y_{1,1}\) is the only nonzero entry in the first column; the resulting matrix then is
\[\widetilde{Y}=\begin{pmatrix}y_{1,1}&y_{1,2}&\cdots&y_{1,t}\\ 0&y^{\prime}_{2,2}&\cdots&y^{\prime}_{2,t}\\ \vdots&\vdots&&\vdots\\ 0&y^{\prime}_{m,2}&\cdots&y^{\prime}_{m,t}\end{pmatrix}\quad\text{where} \quad y^{\prime}_{i,j}=y_{i,j}-\frac{y_{i,1}y_{1,j}}{y_{1,1}}.\]
Let \(Y^{\prime}\) be the submatrix of \(\widetilde{Y}\) obtained by deleting the first row and column. Note that the ideal \(I_{r+1}(Y)S[\frac{1}{y_{1,1}}]\) is generated by the size \(r+1\) minors of the matrix \(\widetilde{Y}\), and hence equals \(I_{r}(Y^{\prime})S[\frac{1}{y_{1,1}}]\). As discussed above, in the ring \(S[\frac{1}{y_{1,1}}]\), the ideals \((YZ)\) and \((\widetilde{Y}Z)\) are equal. Let \(Z^{\prime}\) be the submatrix of \(Z\) obtained by deleting the first row. Note that the entries of the matrix
\[\widetilde{Y}Z=\begin{pmatrix}y_{1,1}&y_{1,2}&\cdots&y_{1,t}\\ 0&y_{2,2}^{\prime}&\cdots&y_{2,t}^{\prime}\\ \vdots&\vdots&&\vdots\\ 0&y_{m,2}&\cdots&y_{m,t}^{\prime}\end{pmatrix}\begin{pmatrix}z_{1,1}&z_{1,2} &\cdots&z_{1,n}\\ z_{2,1}&z_{2,2}&\cdots&z_{2,n}\\ \vdots&\vdots&&\vdots\\ z_{t,1}&z_{t,2}&\cdots&z_{t,n}\end{pmatrix}\]
are exactly those of the matrix \(Y^{\prime}Z^{\prime}\) along with the elements \(f_{1},\ldots f_{n}\), where
\[f_{i}:=y_{1,1}z_{1,i}+\sum_{j=2}^{t}y_{1,j}z_{j,i}\]
is the dot product of the first row of \(\widetilde{Y}\) with the \(i\)-th column of \(Z\). Thus, in the ring \(S[\frac{1}{y_{11}}]\), we have
\[(YZ)=(\widetilde{Y}Z)=(Y^{\prime}Z^{\prime})+(f_{1},\ldots,f_{n}),\]
and the matrix \(Z\) can be rewritten as
\[Z=\begin{pmatrix}\frac{f_{1}}{y_{1,1}}-\sum_{j=2}^{t}\frac{y_{1,j}z_{j,1}}{y_{1,1}}&\cdots&\frac{f_{n}}{y_{1,1}}-\sum_{j=2}^{t}\frac{y_{1,j}z_{j,n}}{y_{1,1}} \\ z_{2,1}&\cdots&z_{2,n}\\ \vdots&&\vdots\\ z_{t,1}&\cdots&z_{t,n}\end{pmatrix}.\]
By the additivity of the determinant in any fixed row, a minor of \(Z\) of size \(s+1\) is exactly the corresponding minor of
\[\begin{pmatrix}\frac{f_{1}}{y_{1,1}}&\cdots&\frac{f_{t}}{y_{1,1}}\\ z_{2,1}&\cdots&z_{2,n}\\ \vdots&&\vdots\\ z_{t,1}&\cdots&z_{t,n}\end{pmatrix}.\]
Therefore, we get
\[\mathfrak{p}_{r,s}(Y,Z)S[\frac{1}{y_{1,1}}]=\mathfrak{p}_{r,s}(\widetilde{Y}, Z)S[\frac{1}{y_{1,1}}]=\mathfrak{p}_{r-1,s}(Y^{\prime},Z^{\prime})S[\frac{1}{y_ {1,1}}]+(f_{1},\ldots,f_{n})S[\frac{1}{y_{1,1}}].\]
The second part of assertion (3) immediately follows. Assertions (1) and (2) are readily verified since the matrices \(Y^{\prime}\) and \(Z^{\prime}\) do not involve the elements \(z_{1,j}\) which appear (with unit coefficients) in \(f_{j}\) for \(1\leq j\leq n\).
### Constructing the monomial order
In this subsection, we use the following:
**Notation 5.2**.: Let \(A\) be an \(m\times n\) matrix, and \(i,j,k,\ell\) be integers such that \(1\leq i\leq j\leq m\) and \(1\leq k\leq\ell\leq n\). We use \(A_{[k,\ell]}^{[i,j]}\) to denote the \((j-i+1)\times(\ell-k+1)\) submatrix of \(A\) with row indices \(i,\ldots,j\) and column indices \(k,\ldots,\ell\).
The aim of this subsection is to describe a recipe for a monomial order \(<_{B}\) in the polynomial ring \(S\) that creates special lead terms for the generators of the ideals \(\mathfrak{p}_{r,s}\) defining the varieties of complexes. The construction of this monomial order is quite technical; we illustrate it with an example first.
**Example 5.3**.: Let \(Y\) and \(Z\) be matrices of indeterminates of sizes \(5\times 3\) and \(3\times 5\) respectively and let \(\mathbb{K}\) be any field; set \(S:=\mathbb{K}[Y,Z]\). To define a monomial order \(<_{B}\) in the polynomial ring \(S\), we first define an order on the variables of \(S\). Sort the entries of the matrices \(Y\) and \(Z\) into blocks \(B_{1},B_{2},\ldots,B_{15}\) as displayed in the respective matrices
\[\begin{pmatrix}12&11&10\\ 9&14&13\\ 6&8&15\\ 3&5&7\\ 1&2&4\end{pmatrix},\begin{pmatrix}12&9&6&3&1\\ 14&11&8&5&2\\ 15&13&10&7&4\end{pmatrix}.\]
Thus, for instance, \(y_{1,1}\) is in the block \(B_{12}\), \(y_{1,2}\) is in \(B_{11}\), \(z_{1,1}\) is in the block \(B_{12}\), and so on. Now, for \(\gamma\in B_{\ell}\) and \(\delta\in B_{k}\), set \(\gamma<\delta\) if \(\ell<k\). Then, within each set \(B_{\ell}\), fix an arbitrary order among the variables. This gives us a total variable order in \(S\). Our monomial order \(<_{B}\) is the reverse lexicographical order induced by this variable order in \(S\).
For a polynomial \(f\), let \(\operatorname{in}_{B}(f)\) denote the initial monomial of \(f\) with respect to our monomial order. Set \(c_{i,j}:=(YZ)_{i,j}\) and let \(\alpha\) be the set consisting of the elements \(c_{i,j}\) in \(YZ\) with \(i+j\leq 4\), together with the following minors of \(Y\) and \(Z\):
\[\det\Big{(}Y_{[1,2]}^{[4,5]}\Big{)},\quad\det\Big{(}Y_{[1,3]}^{[3,5]}\Big{)}, \quad\det\Big{(}Y_{[1,3]}^{[2,4]}\Big{)},\quad\det\Big{(}Z_{[4,5]}^{[1,2]} \Big{)},\quad\det\Big{(}Z_{[3,5]}^{[1,3]}\Big{)},\quad\det\Big{(}Z_{[2,4]}^{[1, 3]}\Big{)}.\]
Notice that
\[\operatorname{in}_{B}(c_{1,1})=y_{1,1}z_{1,1}, \quad\operatorname{in}_{B}(c_{1,2})=y_{1,2}z_{2,2}, \quad\operatorname{in}_{B}(c_{1,3})=y_{1,3}z_{3,3},\] \[\operatorname{in}_{B}(c_{2,1})=y_{2,2}z_{2,1}, \quad\operatorname{in}_{B}(c_{2,2})=y_{2,3}z_{3,2}, \quad\operatorname{in}_{B}(c_{3,1})=y_{3,3}z_{3,1},\]
and
\[\operatorname{in}_{B}(\det\Big{(}Y_{[1,2]}^{[4,5]}\Big{)})=y_{4,1}y _{5,2}, \quad\operatorname{in}_{B}(\det\Big{(}Y_{[1,3]}^{[3,5]}\Big{)})=y_{3,1}y_{4, 2}y_{5,3}, \quad\operatorname{in}_{B}(\det\Big{(}Y_{[1,3]}^{[2,4]}\Big{)})=y_{2,1}y_{3,2}y_{4,3},\] \[\operatorname{in}_{B}(\det\Big{(}Z_{[4,5]}^{[1,2]}\Big{)})=z_{1,4} z_{2,5}, \quad\operatorname{in}_{B}(\det\Big{(}Z_{[3,5]}^{[1,3]}\Big{)})=z_{1,3}z_{2,4}z_{3,5}, \quad\operatorname{in}_{B}(\det\Big{(}Z_{[2,4]}^{[1,3]}\Big{)})=z_{1,2}z_{2, 3}z_{3,4}.\]
Let \(f\) be the product of the elements of \(\alpha\). Since the initial terms of the elements in \(\alpha\) are squarefree and pairwise coprime, \(f\) has a squarefree initial term. Moreover, \(y_{5,1}\) and \(z_{1,5}\) do not divide \(f\).
The construction of this monomial order is crucial in establishing the \(F\)-regularity of the variety of complex \(S/\mathfrak{p}_{r,s}(Y,Z)\), as well as the \(F\)-purity of the determinantal nullcone \(S/(YZ)S\), as we show next. From now on, assume that the underlying field \(\mathbb{K}\) has positive characteristic \(p\).
We begin with the case of the varieties of complexes \(S/\mathfrak{p}_{r,s}\). If \(r=0\) or \(s=0\), then \(S/\mathfrak{p}_{r,s}\) is a determinantal ring, and thus we may focus on the case where \((r,s)=(1,2)\) or \((r,s)=(2,1)\).
Let \(h\) be the height of \(\mathfrak{p}_{r,s}\); we shall show in the proof of Theorem 5.6 that \(f\) lies in the symbolic power \(\mathfrak{p}_{r,s}^{(h)}\) whenever \(r,s\neq 0\); hence we also have \(f^{p-1}\in\mathfrak{p}_{r,s}^{(h(p-1))}\) for those \(r,s\). As the initial term \(\operatorname{in}_{B}(f)\) is squarefree, it follows from Corollary 2.5 that the ideal \(\mathfrak{p}_{r,s}\) defines an \(F\)-pure ring for \(r,s\neq 0\). In fact, since \(y_{5,1}\) does not divide \(\operatorname{in}_{B}(f)\), we get
\[y_{5,1}f^{p-1}\in y_{5,1}(\mathfrak{p}_{r,s}^{[p]}:\mathfrak{p}_{r,s})\quad \text{while}\quad y_{5,1}f^{p-1}\notin\mathfrak{m}_{S}^{[p]}.\]
The ring \(\frac{S}{\mathfrak{p}_{r,s}}[\frac{1}{y_{5,1}}]\) is a smooth extension of the determinantal ring defined by the size two minors of a \(2\times 5\) matrix of indeterminates by Lemma 5.1, therefore it is \(F\)-regular. It follows from Theorem 2.3 that the varieties of complexes \(S/\mathfrak{p}_{r,s}\) are \(F\)-regular.
Now consider the determinantal nullcone \(S/(YZ)S\). As \((YZ)S\) is radical, with minimal primes \(\mathfrak{p}_{r,s}\) with \(r+s=3\), we have
\[(YZ)S=\bigcap_{r+s=3}\mathfrak{p}_{r,s}.\]
Thus, by Lemma 5.5 and Theorem 2.2, it suffices to find, for \(r+s=3\), a polynomial \(g\in\mathfrak{p}_{r,s}^{[p]}:\mathfrak{p}_{r,s}\) such that \(g\notin\mathfrak{m}_{S}^{[p]}\). Let
\[g=(y_{5,1}z_{1,5}f)^{p-1}.\]
We will show in the proof of Theorem 5.7 that \(g\in\mathfrak{p}_{r,s}^{[p]}:\mathfrak{p}_{r,s}\) for _all_\(r+s=3\). As \(y_{5,1}\) and \(z_{1,5}\) do not divide \(f\), we get \(g\notin\mathfrak{m}_{S}^{[p]}\), and so we are done by Theorem 2.2.
We now construct the monomial order \(<_{B}\) illustrated in the above example. We start with attaching a natural number to each entry of the matrices \(Y\) and \(Z\). For \(Z\), this process uses exactly the numbers \(1,\ldots,t\times n\), starts with \(1\) in the upper right corner of \(Z\), and proceeds along upwards oriented diagonals, in descending order. For \(Y\), on and below the main diagonal, and starting at the lower left corner, proceed along upwards oriented diagonals in ascending order. Above the main diagonal of \(Y\), associate to \(y_{i,j}\) with \(i\leq j\) the block that belongs to \(z_{j,j-i+1}\). In particular, only if \(m=n\) will the blocks used for \(Y\) be exactly \(1,\ldots,m\times t\); otherwise there could be repetitions as well as omissions.
The explicit formulae for the general block numbers are as follows: For \(1\leq i\leq m\) and \(1\leq j\leq n\), the variable \(y_{i,j}\) is in block \(B_{\ell}\), where
\[\ell=\left\{\begin{array}{r@{\qquad}l}\binom{m-i+j+1}{2}-j+1\qquad&\text{if }\quad i\geq m-t+1,j\leq i-m+t,\\ \binom{t+1}{2}+(t-j)+t(m-t+i-j-1)+1\quad&\text{if }\qquad i-m+t+1\leq j \leq i-1,\\ t\cdot n+t-j-\binom{t-i+2}{2}+1\qquad&\text{if }\qquad 1\leq i\leq j\leq t. \end{array}\right.\]
The variable \(z_{i,j}\) is in block \(B_{\ell}\), where
\[\ell=\left\{\begin{array}{r@{\qquad}l}i-2j+2n+\binom{i-j+n-1}{2}\qquad&\text {if }\quad 1-n\leq i-j\leq t-n-1,\\ \binom{t}{2}+t(i-j+n-t+1)-i+1\quad&\text{if }\qquad t-n\leq i-j\leq-1,\\ t\cdot n-\binom{t-i+j+1}{2}+t-i+1\qquad&\text{if }\qquad 0\leq i-j\leq t-1. \end{array}\right.\]
As in the example, our monomial order \(<_{B}\) is the reverse lexicographical order induced by this variable order in \(S\). For a polynomial \(f\), let \(\operatorname{in}_{B}(f)\) denote the initial monomial of \(f\) with respect to our monomial order.
We now consider lead terms of certain elements of the ideal \((YZ)\).
**Lemma 5.4**.: _Let \(\alpha\) be the set consisting of the elements \(c_{i,j}:=(YZ)_{i,j}\) with \(i+j\leq t+1\), along with the following minors of \(Y\) and \(Z\):_
* \(\det\left(Y_{[1,i]}^{[m-i+1,m]}\right)\) _for_ \(2\leq i\leq t-1\)_,_
* \(\det\left(Y_{[1,t]}^{[i+1,t+i]}\right)\) _for_ \(1\leq i\leq m-t\)_,_
* \(\det\left(Z_{[n-i+1,n]}^{[1,i]}\right)\) _for_ \(2\leq i\leq t-1\)_, and_
* \(\det\left(Z_{[i+1,t+i]}^{[1,t]}\right)\) _for_ \(1\leq i\leq n-t\)_._
_The initial terms of the elements of \(\alpha\) are squarefree and pairwise coprime with respect to the monomial order \(<_{B}\)._
Proof.: As all elements considered are sums of square free monomials, their respective lead terms must also be square free. In order to show that the lead terms are also coprime, we first prove the following claim:
If \(2\leq i+j\leq t+1\), then
* the entry \(c_{i,j}=\sum_{k=1}^{t}y_{i,k}z_{k,j}\) contains a term for which \(y_{i,k}\) and \(z_{k,j}\) are both in \(B_{\ell}\) for the same \(\ell\). In fact, this happens for \(k=i+j-1\);
* the term \(y_{i,i+j-1}z_{i+j-1,j}\) is the lead term of \(c_{i,j}\) under \(\leq_{B}\).
The first item is clear, since for \(i<k\) we have defined the block numbers of \(y_{i,k}\) to agree with the block numbers of \(z_{k,k-i+1}\) and so \(y_{i,i+j-1}\) and \(z_{i+j-1,j}\) are both in \(B_{\ell}\) for the same \(\ell\). For \(i+j-1\leq t\), compare \(y_{i,i+j-1}z_{i+j-1,j}\) to the other entries \(y_{i,k}z_{k,j}\) of \(c_{i,j}\). If \(k>i+j-1\) then \(y_{i,k}\) is in the same row and to the right of \(y_{i,i+j-1}\), which in turn is above the main diagonal of \(Y\). As the block numbers on and above the diagonal in \(Y\) decrease from left to right, \(y_{i,k}\) is in \(B_{\ell^{\prime}}\) for \(\ell^{\prime}<\ell\). Since we use revlex on the block value, \(y_{i,k}z_{k,j}<_{B}y_{i,i+j-1}z_{i+j-1,j}\). On the other hand, if \(k<i+j-1\), then \(z_{k,j}\) is in the same column as, and above, \(z_{i+j-1,j}\). As block numbers in \(Z\) increase with the row index, \(z_{k,j}\) is in \(B_{\ell^{\prime}}\) for \(\ell^{\prime}<\ell\). Since we use revlex on the block value, \(y_{i,k}z_{k,j}<_{B}y_{i,i+j-1}z_{i+j-1,j}\).
As the lead terms of the \(c_{i,j}\) with \(i+j\leq t+1\) are the elements \(y_{i,i+j-1}z_{i+j-1,j}\) for that range of \(i,j\), each \(y_{i,j}\) with \(i\leq j\) and each \(z_{k,\ell}\) with \(k\geq\ell\) appear exactly once in a lead term of \(c_{i,j}\).
Now observe that, in \(Y\) and \(Z\), the lead terms of the relevant minors listed in Lemma 5.4 are the main diagonal terms, since any other term of this determinant involves a variable that is under, and one variable that is above its main diagonal, and (depending on whether one looks at minors in \(Y\) or \(Z\)) one or the other of these has smaller block number than any element on the main diagonal of the minor. As the lead terms of the relevant minors of \(Y\) and \(Z\) are in disjoint diagonals that lie under the diagonal in \(Y\), and above the diagonal in \(Z\), they are coprime to one another and to the lead terms of the relevant \(c_{i,j}\).
### Main Results
We will require the following lemma to prove the main results of this section:
**Lemma 5.5**.: _Let \(I\) and \(J\) be ideals in a regular ring of prime characteristic \(p>0\). Then_
* \((I^{[p]}:I)\cap(J^{[p]}:J)\subseteq(I\cap J)^{[p]}:(I\cap J)\)_,_
* \((I^{[p]}:I)\cap(J^{[p]}:J)\subseteq(I+J)^{[p]}:(I+J).\)__
Proof.: By the flatness of the Frobenius map, we have \((I\cap J)^{[p]}=I^{[p]}\cap J^{[p]}\). Thus we get:
\[(I\cap J)^{[p]}:(I\cap J) = \left(I^{[p]}:(I\cap J)\right)\cap\left(J^{[p]}:(I\cap J)\right)\] \[\supseteq (I^{[p]}:I)\cap(J^{[p]}:J).\]
The proof of the second item is immediate (and does not require the flatness of Frobenius).
We are now ready to prove:
**Theorem 5.6**.: _Let \(Y\) and \(Z\) be matrices of indeterminates of sizes \(m\times t\) and \(t\times n\) respectively for positive integers \(m\), \(t\), and \(n\). Let \(\mathbb{K}\) be a field; set \(S:=\mathbb{K}[Y,Z]\) and suppose that \(r\) and \(s\) are non-negative integers with \(r+s\leq t\)._
1. _If_ \(\mathbb{K}\) _is an_ \(F\)_-finite field of positive characteristic, the variety of complexes_ \(S/\mathfrak{p}_{r,s}(Y,Z)\) _is strongly_ \(F\)_-regular._
2. _In consequence, if_ \(\mathbb{K}\) _has characteristic zero, the variety of complexes_ \(S/\mathfrak{p}_{r,s}(Y,Z)\) _has log-terminal, and in particular rational, singularities._
Proof.: Assertion (2) follows from (1) since rings of characteristic zero of \(F\)-regular type have log-terminal singularities, which are rational, compare [15, Theorem 4.3] and [14]. We therefore concentrate on the case where the characteristic of \(\mathbb{K}\) is \(p>0\).
We proceed by induction on \(r\). The statement is clear for \(r=0\), since then the ring \(S/\mathfrak{p}_{r,s}(Y,Z)\) is isomorphic to the determinantal ring \(\mathbb{K}[Z]/I_{s}(Z)\). This ring is strongly \(F\)-regular by [14, SS7].
Now assume that the assertion holds for some \(r\geq 1\). By suitably restating Lemma 5.1, we infer
\[\frac{S}{\mathfrak{p}_{r,s}(Y,Z)}[\frac{1}{y_{m,1}}]\cong\frac{S^{\prime}}{ \mathfrak{p}_{r-1,s}(Y^{\prime},Z^{\prime})}[y_{1,1},\ldots,y_{m,1},y_{m,2}, \ldots,y_{m,t},\frac{1}{y_{m,1}}].\]
where \(Y^{\prime}\) and \(Z^{\prime}\) are matrices of indeterminates of sizes \((m-1)\times(t-1)\) and \((t-1)\times n\) respectively. It follows from induction that the ring \(\frac{S}{\mathfrak{p}_{r,s}(Y,Z)}[\frac{1}{y_{m,1}}]\) is strongly \(F\)-regular.
If \(s=0\), then the ring \(S/\mathfrak{p}_{r,s}(Y,Z)\) is isomorphic to the determinantal ring \(\mathbb{K}[Y]/I_{r}(Y)\), and we are done. So assume that \(s\geq 1\). In order to apply Theorem 2.3, we must show that
\[y_{m,1}(\mathfrak{p}_{r,s}(Y,Z)^{[p]}:\mathfrak{p}_{r,s}(Y,Z))\not\subseteq \mathfrak{m}_{S}^{[p]},\]
where \(\mathfrak{m}_{S}\) is the homogeneous maximal ideal of \(S\). By Corollary 2.5, it suffices to find a polynomial \(f\) contained in \(\mathfrak{p}_{r,s}(Y,Z)^{(h)}\), where \(h\) is the height of \(\mathfrak{p}_{r,s}(Y,Z)\), such that \(y_{m,1}f^{p-1}\notin\mathfrak{m}_{S}^{[p]}\). We will proceed to find such a polynomial; crucially this polynomial will be _the same_ for all \(r\) and \(s\). However, before finding this polynomial, it is useful to make the following reductions:
Firstly, since \(F\)-regularity is preserved by direct summands, we may assume that \(t\leq\min\{m,n\}\) as the following argument shows: Choose two integers \(m^{\prime}\geq m\) and \(n^{\prime}\geq n\). Let \(\overline{Y}\) be a generic \(m^{\prime}\times t\) matrix that contains \(Y\). Similarly, let \(\overline{Z}\) be a generic \(t\times n^{\prime}\) matrix that contains \(Z\). Clearly \(YZ\) is a submatrix of \(\overline{YZ}\). Consider the maps
\[\mathbb{K}[Y,Z]\longrightarrow\mathbb{K}[\overline{Y},\overline{Z}] \longrightarrow\mathbb{K}[Y,Z]\]
where the first is the inclusion, and the second is the projection that sends all variables in \(\overline{Y}\smallsetminus Y\) and \(\overline{Z}\smallsetminus Z\) to zero. The inclusion sends \(\mathfrak{p}_{r,s}(Y,Z)\) into \(\mathfrak{p}_{r,s}(\overline{Y},\overline{Z})\) and the projection sends \(\mathfrak{p}_{r,s}(\overline{Y},\overline{Z})\) onto \(\mathfrak{p}_{r,s}(Y,Z)\). Since the composition is the identity map, the ring \(\mathbb{K}[Y,Z]/\mathfrak{p}_{r,s}(Y,Z)\) is a direct-summand of \(\mathbb{K}[\overline{Y},\overline{Z}]/\mathfrak{p}_{r,s}(\overline{Y}, \overline{Z})\). In consequence, \(F\)-regularity of the ring \(\mathbb{K}[\overline{Y},\overline{Z}]/\mathfrak{p}_{r,s}(\overline{Y}, \overline{Z})\) implies that of \(\mathbb{K}[Y,Z]/\mathfrak{p}_{r,s}(Y,Z)\).
Secondly, it suffices to consider varieties of complexes that are exact; i.e., we may assume that \(r+s=t\). Indeed,
\[\mathfrak{p}_{r,s}(Y,Z)+\mathfrak{p}_{r^{\prime},s^{\prime}}(Y,Z)=\mathfrak{ p}_{\min(r,r^{\prime}),\min(s,s^{\prime})}(Y,Z).\]
Thus, any ideal defining a variety of complexes may be written as the sum of ideals defining varieties of exact complexes with the same \(Y\) and \(Z\). Thus, by Lemma 5.5, if we can find a polynomial \(f\) such that
\[f^{p-1}\in\mathfrak{p}_{r,s}(Y,Z)^{[p]}:\mathfrak{p}_{r,s}(Y,Z)\text{ and }f^{p-1}\in\mathfrak{p}_{r^{\prime},s^{\prime}}(Y,Z)^{[p]}:\mathfrak{p}_{r^{ \prime},s^{\prime}}(Y,Z),\]
then
\[f^{p-1}\in\mathfrak{p}_{\min(r,r^{\prime}),\min(s,s^{\prime})}(Y,Z)^{[p]}: \mathfrak{p}_{\min(r,r^{\prime}),\min(s,s^{\prime})}(Y,Z).\]
Having made these reductions, we exhibit the desired polynomial \(f\) in the remainder of the proof.
Choose \(f\) to be the product of the elements contained in the set \(\alpha\) as in Lemma 5.4. As the lead terms of the elements contained in \(\alpha\) are squarefree and coprime with respect to the monomial order constructed in Lemma 5.4, and as \(y_{m,1}\) is not a factor of the said lead terms, \(y_{m,1}f^{p-1}\) is not contained in \(\mathfrak{m}_{S}^{[p]}\). We now show that \(f\) is contained in the symbolic power \(\mathfrak{p}_{r,s}(Y,Z)^{(h)}\).
Recall that for \(r+s=t\), the height of \(\mathfrak{p}_{r,s}(Y,Z)\) is
\[h=(m-r)(t-r)+(n-s)(t-s)+rs=ms-nr-rs.\]
The set \(\alpha\) as defined in Lemma 5.4 contains \(\binom{t+1}{2}\) elements of the form \(c_{i,j}\), and so \(f\in(YZ)^{\binom{t+1}{2}}\).
Before proceeding further, recall two useful facts about symbolic powers:
* Given an \(m\times n\) matrix \(A\) of indeterminates with \(m\leq n\), we have \(I_{\ell+k-1}(A)\subseteq I_{\ell}(A)^{(k)}\) whenever \(1\leq k\leq m-\ell+1\) by [1, Proposition 10.2].
* Given prime ideals \(\mathfrak{p}\subseteq\mathfrak{q}\) in a polynomial ring, we have \(\mathfrak{p}^{(k)}\subseteq\mathfrak{q}^{(k)}\) for any \(k\geq 0\).
Suppose first that \(r,s>1\). Then we have
\[f \in (YZ)^{\binom{t+1}{2}}\Bigg{(}\prod_{k=r+1}^{t-1}I_{k}(Y)\Bigg{)}I_{t }(Y)^{m-t}\Bigg{(}\prod_{\ell=s+1}^{t-1}I_{\ell}(Z)\Bigg{)}I_{t}(Z)^{n-t}\] \[\subseteq (YZ)^{\binom{t+1}{2}}\Bigg{(}\prod_{k=r+1}^{t-1}I_{r+1}(Y)^{(k-r) }\Bigg{)}\Big{(}I_{r+1}(Y)^{(t-r)}\Big{)}^{m-t}\Bigg{(}\prod_{\ell=s+1}^{t-1}I_ {s+1}(Z)^{(\ell-s)}\Bigg{)}\Big{(}I_{s+1}(Z)^{(t-s)}\Big{)}^{n-t}\] \[\subseteq \mathfrak{p}_{r,s}(Y,Z)^{\binom{t+1}{2}}\Big{(}I_{r+1}(Y)^{\binom {\binom{t-r}{2}+(m-t)(t-r)}{2}}\Big{)}\Big{(}I_{s+1}(Z)^{\binom{\binom{t-s}{2} +(n-t)(t-s)}{2}}\Big{)}\] \[\subseteq \mathfrak{p}_{r,s}(Y,Z)^{\binom{\binom{t+1}{2}+\binom{t-r}{2}+(m -t)(t-r)+\binom{t-s}{2}+(n-t)(t-s)}}\]
Since \(r+s=t\), an elementary computation shows that
\[\binom{t+1}{2}+\binom{t-r}{2}+(m-t)(t-r)+\binom{t-s}{2}+(n-t)(t-s) = ms+nr-rs=h.\]
So, \(f\) is contained in \(\mathfrak{p}_{r,s}(Y,Z)^{(h)}\), as desired.
Now assume that \(r=1\), and so \(h=m(t-1)+n-t+1\) while \(s+1>t-1\). Then we have
\[f \in (YZ)^{\binom{t+1}{2}}\Bigg{(}\prod_{k=2}^{t-1}I_{k}(Y)\Bigg{)}I_{t }(Y)^{m-t}I_{t}(Z)^{n-t}\] \[\subseteq (YZ)^{\binom{t+1}{2}}\Bigg{(}\prod_{k=2}^{t-1}I_{2}(Y)^{(k-1)} \Bigg{)}\Big{(}I_{2}(Y)^{(t-1)}\Big{)}^{m-t}I_{t}(Z)^{n-t}\] \[\subseteq \mathfrak{p}_{r,s}(Y,Z)^{\binom{t+1}{2}}\Big{(}I_{2}(Y)^{\binom {\binom{t-1}{2}+(m-t)(t-1)}{2}}\Big{)}\Big{(}I_{t}(Z)^{n-t}\Big{)}\] \[\subseteq \mathfrak{p}_{r,s}(Y,Z)^{\binom{\binom{t+1}{2}+\binom{t-1}{2}+(m -t)(t-1)+(n-t)}{2}}.\]
Notice that
\[\binom{t+1}{2}+\binom{t-1}{2}+(m-t)(t-1)+(n-t) = m(t-1)+n-t+1=h.\]
So, again, \(f\) is contained in \(\mathfrak{p}_{r,s}(Y,Z)^{(h)}\), as desired. The case \(s=1\) is analogous to the case \(r=1\), requiring only that we switch the roles of the minors of \(Y\) and \(Z\). We are done by Theorem 2.3.
**Theorem 5.7**.: _Let \(Y\) and \(Z\) be matrices of indeterminates of sizes \(m\times t\) and \(t\times n\) respectively and \(\mathbb{K}\) an \(F\)-finite field of positive characteristic; set \(S:=\mathbb{K}[Y,Z]\) and assume that \(r\) and \(s\) are non-negative integers with \(r+s\leq t\)._
1. _If_ \(t\leq\min(m,n)\) _then the splittings for the Frobenius map on the varieties of complexes_ \(S/\mathfrak{p}_{r,s}(Y,Z)\) _can be chosen compatibly._
2. _For any triple_ \((m,n,t)\)_, the natural determinantal nullcone_ \(S/(YZ)S\) _is_ \(F\)_-pure._
Proof.: The ring \(S/(YZ)S\) with \(t>\min(m,n)\) is a direct summand of the ring \(S/(\overline{YZ})S\), where \(\overline{Y}\) is an \(m^{\prime}\times t\) matrix which contains \(Y\) and \(\overline{Z}\) is a \(t\times n^{\prime}\) matrix which contains \(Z\) with \(m^{\prime}\geq\max(m,t)\) and \(n^{\prime}\geq\max(n,t)\). Therefore, as in the proof of Theorem 5.6, Part (2) of the present theorem then follows from Part (1) and the fact that \(F\)-purity is inherited by direct summands. Furthermore, for \(t\leq\min(m,n)\), the \(F\)-purity of \(S/(YZ)S\) will follow once we have shown that the \(F\)-splittings of each \(S/\mathfrak{p}_{r,s}(Y,Z)\) are compatible. So, for the remainder of the proof, assume that \(t\leq\min(m,n)\).
Recall that
\[(YZ)=\bigcap_{r+s=t}\mathfrak{p}_{r,s}(Y,Z).\]
Thus, by Lemma 5.5 and Theorem 2.2, it suffices to find a polynomial \(g\in\mathfrak{p}_{r,s}(Y,Z)^{[p]}:\mathfrak{p}_{r,s}(Y,Z)\) for all \(r+s=t\) such that \(g\notin\mathfrak{m}_{S}^{[p]}\).
Let \(f\) be the product of the elements of the set \(\alpha\) as in Definition 5.4. In the proof of Theorem 5.6 we showed that, for \(r\) and \(s\) both nonzero, \(f\) is contained in \(\mathfrak{p}_{r,s}(Y,Z)^{(h)}\) where \(h\) is the height of \(\mathfrak{p}_{r,s}(Y,Z)\), and thus \(f^{p-1}\in\mathfrak{p}_{r,s}(Y,Z)^{[p]}:\mathfrak{p}_{r,s}(Y,Z)\) by Corollary 2.5.
In order to account for the cases where \(r\) or \(s\) are zero, let
\[g:=y_{m,1}z_{1,n}f.\]
Clearly \(g^{p-1}\in\mathfrak{p}_{r,s}(Y,Z)^{[p]}:\mathfrak{p}_{r,s}(Y,Z)\) for \(r\) and \(s\) both nonzero. We claim that
\[g\in\mathfrak{p}_{r,s}(Y,Z)^{[p]}:\mathfrak{p}_{r,s}(Y,Z),\]
when \(r\) or \(s\) is zero.
Assume first that \(r=0\), and so \(h=mt\). We have
\[y_{m,1}z_{1,n}f \in (YZ)^{\binom{t+1}{2}}\Bigg{(}\prod_{k=1}^{t-1}I_{k}(Y)\Bigg{)}I_{ t}(Y)^{m-t}\] \[\subseteq (YZ)^{\binom{t+1}{2}}\Bigg{(}\prod_{k=1}^{t-1}I_{1}(Y)^{k}\Bigg{)} \Big{(}I_{1}(Y)^{t}\Big{)}^{m-t}\] \[\subseteq \mathfrak{p}_{r,s}(Y,Z)^{\binom{t+1}{2}}\Big{(}I_{1}(Y)^{\binom{ t}{2}+(m-t)t}\Big{)}\] \[\subseteq \mathfrak{p}_{r,s}(Y,Z)^{\binom{t+1}{2}+\binom{t}{2}+(m-t)t}.\]
Notice that
\[\binom{t+1}{2}+\binom{t}{2}+(m-t)t\ =\ mt\ =\ h.\]
Therefore \(g\) lies in \(\mathfrak{p}_{r,s}(Y,Z)^{h}\subseteq\mathfrak{p}_{r,s}(Y,Z)^{(h)}\), and thus Corollary 2.5 implies the claim. The case \(s=0\) is analogous.
We conclude the proof of the theorem by observing that the lead terms of the elements of \(\alpha\) are squarefree and coprime by Lemma 5.4, and that \(y_{m,1}\) and \(z_{1,n}\) are not factors of said lead terms. Thus, \(g^{p-1}\) is not contained in \(\mathfrak{m}_{S}^{[p]}\), and we are done by Corollary 2.5.
**Corollary 5.8**.: _Let \(Y\) and \(Z\) be matrices of indeterminates of sizes \(m\times t\) and \(t\times n\) respectively and \(\mathbb{K}\) a field; set \(S:=\mathbb{K}[Y,Z]\) and assume that \(r\) and \(s\) are non-negative integers._
1. _If_ \(r+s=t\)_, the ideal_ \(\mathfrak{p}_{r,s}(Y,Z)\) _defining the variety of exact complexes has a squarefree initial ideal._
2. _If_ \(r+s<t\) _and_ \(\mathbb{K}\) _has positive characteristic, the ideal_ \(\mathfrak{p}_{r,s}(Y,Z)\) _defining the variety of non-exact complexes and the natural determinantal nullcone ideal_ \((YZ)S\) _have squarefree initial ideal._
Proof.: Let \(<_{B}\) be the monomial order constructed in SS5.3. Choose \(f\) to be the product of the elements contained in the set \(\alpha\) as in Lemma 5.4, and set
\[g:=y_{m,1}z_{1,n}f\]
as in the proof of Theorem 5.7. By the proof of Theorem 5.7, \(\operatorname{in}_{B}(g)\) lies in the initial ideal \(\operatorname{in}_{B}(\mathfrak{p}_{r,s}^{(h)})\) for \(r+s=t\), where \(h\) is the height of \(\mathfrak{p}_{r,s}(Y,Z)\). We are done by Theorem 2.6. For (2), choose the same polynomial \(g\); we are done by [13, Theorem 3.12], where we additionally need the field to be of positive characteristic.
We end with the following:
**Question 5.9**.: Let \(Y\) and \(Z\) be matrices of indeterminates of sizes \(m\times t\) and \(t\times n\) respectively for positive integers \(m\), \(t\), and \(n\). Let \(r\) and \(s\) be non-negative integers with \(r+s\leq t\) and let \(\mathfrak{p}_{r,s}\) denote the ideal defining a variety of complexes in the polynomial ring \(\mathbb{K}[Y,Z]\). Denote by
\[\mathcal{R}^{S}(\mathfrak{p}_{r,s}):=\bigoplus_{k\geq 0}\mathfrak{p}_{r,s}^{(k)} \quad\text{and}\quad G^{S}(\mathfrak{p}_{r,s}):=\bigoplus_{k\geq 0}\mathfrak{p}_{r,s}^{(k)}/\mathfrak{p}_{r,s}^{(k+1)}\]
the _symbolic Rees algebra_ and the _symbolic associated graded algebra_ of \(\mathfrak{p}_{r,s}\) respectively. Are these rings Noetherian? \(\diamondsuit\)
The proof of Theorem 5.6 shows that the ideals defining the varieties of complexes are _symbolic \(F\)-split_ (see [5, Corollary 5.10]). It immediately follows by [5, Theorem 4.7] that the symbolic Rees algebra and the symbolic associated graded algebra of the ideal \(\mathfrak{p}_{r,s}\) are \(F\)-split (hence reduced). However, we do not know if either of these blowup algebras are Noetherian.
## Acknowledgements
We would like to thank Anurag Singh, Bernd Ulrich, Jack Jeffries, and Mel Hochster for several valuable discussions. We thank Matt Weaver for sharing useful study material with us.
Vaibhav Pandey and Yevgeniya Tarasova thank Alessandro De Stefani, Jonathan Montano, Lisa Seccia, and Luis Nunez-Betancourt for their advice and encouragement. Vaibhav Pandey is especially thankful to Matteo Varbaro for the invitation to the University of Genova, where a part of this work was carried out.
|
2306.12632 | The semi-inclusive deeply inelastic scattering in the eN collinear frame | The deeply inelastic scattering is one of the most important processes in
studying the nucleon structure. Theoretical calculations for both the inclusive
one and the semi-inclusive one are generally carried out in the virtual
photon-nucleon collinear frame in which virtual photon does not have the
transverse components. Expressions in this frame are written in relatively
simple forms. Nevertheless, it is also meaningful to calculate the scattering
process in the electron-nucleon collinear frame where new measurement schemes
are obtained. In the present paper, we reconsider the semi-inclusive deeply
inelastic scattering process in the electron-nucleon collinear frame and
present the results of azimuthal asymmetries and quark intrinsic asymmetries.
We find that the differential cross sections in these two frames are the same
at leading twist level but different at higher twist level. Azimuthal
asymmetries and intrinsic asymmetries in these two frames have the same forms
but different kinematic factors. For the sake of completeness, both the
electromagnetic and weak interactions are considered in our calculations. The
neutral current measurements in the scattering process could be used as
electroweak precision tests which can provide new accurate determinations of
the electroweak couplings. | W. Yang | 2023-06-22T01:59:13Z | http://arxiv.org/abs/2306.12632v2 | # The semi-inclusive deeply inelastic scattering in the \(eN\) collinear frame
###### Abstract
The deeply inelastic scattering is one of the most important processes in studying the nucleon structure. Theoretical calculations for both the inclusive one and the semi-inclusive one are generally carried out in the virtual photon-nucleon collinear frame in which virtual photon does not have the transverse components. Expressions in this frame are written in relatively simple forms. Nevertheless, it is also meaningful to calculate the scattering process in the electron-nucleon collinear frame where new measurement schemes are obtained. In the present paper, we reconsider the semi-inclusive deeply inelastic scattering process in the electron-nucleon collinear frame and present the results of azimuthal asymmetries and quark intrinsic asymmetries. We find that the differential cross sections in these two frames are the same at leading twist level but different at higher twist level. Azimuthal asymmetries and intrinsic asymmetries in these two frames have the same forms but different kinematic factors. For the sake of completeness, both the electromagnetic and weak interactions are considered in our calculations. The neutral current measurements in the scattering process could be used as electroweak precision tests which can provide new accurate determinations of the electroweak couplings.
## I Introduction
The deeply inelastic scattering (DIS) experiments provided a unique window in studying the nucleon structure in the past decades via the lepton-nucleon reactions. It will still play an important role in the future Electron-Ion Collider (EIC) [1; 2; 3] experiments. The inclusive DIS process can only access the longitudinal motion of partons in a fast moving nucleon or the longitudinal momentum distributions along the light-cone direction determined by the nucleon. In order to resolve the transverse momentum distributions, one is supposed to consider the semi-inclusive DIS (SIDIS) where a final state hadron or a jet is also measured in addition to the scattered lepton. Under the one-photon exchange approximation, theoretical calculation are generally carried out in the virtual photon-nucleon (\(\gamma^{*}N\)) collinear frame in which virtual photon does not have the transverse components. Systematic calculations for the hadron-production SIDIS process at leading order twist-3 level can be found in Ref. [4; 5]. In these reactions, the transverse momentum dependent parton distribution functions (TMD PDFs) and fragmentation functions (FFs) have to be considered simultaneously. Uncertainties from TMD FFs are inevitably introduced. To avoid this problem, one considers the jet-production SIDIS process where jets are generally taken as fermions (quarks) in the calculation. Here the jet and the scattered lepton are measured simultaneously. Comparing to the production of hadrons in the SIDIS process, the production of jets in a reaction takes on simpler forms that allows to calculate higher twist effects. Nevertheless, there is a shortcoming of this process that can not be used to explore the chiral-odd quantities, e.g., Boer-Mulders function (\(h_{1}^{+}\)) [6]. Because no spin flip occurs in such a reaction that only the chiral-even contributions exist. However, proposals for exploring the chiral-odd quantities through the jet fragmentation functions have been studied recently in Refs. [7; 8; 9; 10].
Calculations for the jet-production SIDIS process in the \(\gamma^{*}N\) collinear frame had been studied extensively [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Recently, the jet-production SIDIS process considered in the electron-nucleon (\(eN\)) collinear attracted much attentions [22; 23; 24; 25; 26]. A number of quantities were reconsidered, such as single spin asymmetry [22; 23; 24; 25], parity violating asymmetries and charge asymmetries [27]. Previous discussions of the jet-production SIDIS process in the \(eN\) collinear frame were limited at leading twist level. In this paper, we reconsider this process and extend the calculation to the twist-3 level. This is not a naive extension because the current conversation law of the hadronic tensor (\(q_{\mu}W^{\mu\nu}=0\)) should be dealt with carefully. In the \(\gamma^{*}N\) collinear frame, the current conservation at twist-3 is satisfied by the relationship \(q\cdot\bar{q}=q\cdot(q+2xp)=0\), see Ref. [19]. However, in the \(eN\) collinear frame, the virtual-photon gains the transverse momentum component \(q_{\perp}\). The previous relationship no longer holds, instead, a more complicated relationship is obtained.
In addition to the electromagnetic contributions, the (SI)DIS process also receives contributions from the weak interaction from the exchange of the \(Z^{0}\) boson. According to our numerical estimates, weak contributions will reach a few percent when \(Q>10\) GeV in the (SI)DIS process. The neutral current measurements in the scattering process could be used as electroweak precision tests [28] which can provide new accurate determinations of the electroweak couplings or new signals beyond standard model. Furthermore, the charged current measurements can be used in the flavor decomposition in PDF global analyses, specially for the determination of the parton distributions of the strange quark. In this paper, we limit ourselves by only calculate the neutral current SIDIS process in the \(eN\) collinear frame. After obtain the hadronic tensor, we calculate the differential cross section, azimuthal asymmetries and intrinsic asymmetries [21]. We find that the differential cross sections in these two frames are the same at leading twist level but different at higher twist level. Azimuthal asymmetries and intrinsic asymmetries in these two frames have the same forms but different kinematic factors.
To be explicit, we organize this paper as follows. In Sec. II, we present the formalism of the jet-production SIDIS process. Conventions used in this paper are also given. In Sec. III, we calculate the hadronic tensor up to twist-3 level in the \(eN\) collinear frame. In Sec. IV we present our calculation results, including differential cross section, azimuthal asym
metries and intrinsic asymmetries. Numerical estimates of the intrinsic asymmetry are also presented. Finally, a brief summary is given in Sec. V.
## II The formalism
As mentioned in the introduction, we consider both the EM and weak interactions in the jet production SIDIS. Therefore, the exchange of a \(Z^{0}\) boson between the electron and the nucleon can be relevant. We label this reaction in the following form,
\[e^{-}(l,\lambda_{e})+N(p,S)\to e^{-}(l^{\prime})+q(k^{\prime})+X, \tag{1}\]
where \(\lambda_{e}\) is the helicity of the electron with momentum \(l\), \(N\) is a nucleon with momentum \(p\) and spin 1/2. \(q\) denotes a quark which corresponds to a jet of hadrons observed in experiments. We require that the jet is in the current fragmentation region.
The differential cross section of the SIDIS can be written as a product of the leptonic tensor and the hadronic tensor,
\[d\sigma=\frac{\alpha_{\rm em}^{2}}{sQ^{4}}A_{r}L_{\mu\nu}^{r}(l,\lambda_{e},l^ {\prime})W_{r}^{\mu\nu}(q,p,S,k^{\prime})\frac{d^{3}l^{\prime}d^{3}k}{E_{r}(2 \pi)^{2}2E_{k^{\prime}}}. \tag{2}\]
The symbol \(r\) can be \(\gamma\gamma\), \(ZZ\) and \(\gamma Z\), for EM, weak and interference terms, respectively. A summation over \(r\) in Eq. (2) is understood, i.e., the total cross section is given by
\[d\sigma=d\sigma^{ZZ}+d\sigma^{\gamma Z}+d\sigma^{\gamma\gamma}. \tag{3}\]
\(A_{r}\)'s are defined as
\[A_{\gamma\gamma} =e_{q}^{2},\] \[A_{ZZ} =\frac{Q^{4}}{\left[(Q^{2}+M_{Z}^{2})^{2}+\Gamma_{Z}^{2}M_{Z}^{2} \right]\sin^{4}2\theta_{W}}\equiv\chi,\] \[A_{\gamma Z} =\frac{-2e_{q}Q^{2}(Q^{2}+M_{Z}^{2})}{\left[(Q^{2}+M_{Z}^{2})^{2} +\Gamma_{Z}^{2}M_{Z}^{2}\right]\sin^{2}2\theta_{W}}\equiv\chi_{int}, \tag{4}\]
where \(e_{q}\) is the electric charge of quark \(q\). \(M_{Z},\Gamma_{Z}\) are the mass and width of \(Z^{0}\) boson, \(\theta_{W}\) is the weak mixing angle. \(Q^{2}=-q^{2}=-(l-l^{\prime})^{2}\). The leptonic tensors are respectively given by
\[L_{\mu\nu}^{\gamma\gamma}(l,\lambda_{e},l^{\prime}) =2\left[l_{\mu}l^{\prime}_{\nu}+l_{\mu}l^{\prime}_{\mu}-(l\cdot l ^{\prime})g_{\mu\nu}\right]+2i\lambda_{e}\varepsilon_{\mu\nu ll^{\prime}}, \tag{5}\] \[L_{\mu\nu}^{\gamma Z}(l,\lambda_{e},l^{\prime}) =(c^{\prime}_{\nu}-c^{\prime}_{A}\lambda_{e})L_{\mu\nu}^{\gamma \gamma}(l,\lambda_{e},l^{\prime}),\] (6) \[L_{\mu\nu}^{ZZ}(l,\lambda_{e},l^{\prime}) =(c^{\prime}_{1}-c^{\prime}_{3}\lambda_{e})L_{\mu\nu}^{\gamma \gamma}(l,\lambda_{e},l^{\prime}), \tag{7}\]
where \(\lambda_{e}\) is the helicity of the electron. \(c^{\prime}_{1}=(c^{\prime}_{V})^{2}+(c^{\prime}_{A})^{2}\) and \(c^{\prime}_{3}=2c^{\prime}_{V}c^{\prime}_{A}\). \(c^{\prime}_{v}\) and \(c^{\prime}_{A}\) are defined in the weak interaction current \(J^{\mu}(x)=\bar{\psi}(x)\Gamma^{\mu}\psi(x)\) with \(\Gamma^{\mu}=\gamma^{\mu}(c^{\prime}_{V}-c^{\prime}_{A}\gamma^{5})\).
The hadronic tensors are given by
\[W_{\gamma\gamma}^{\mu\nu}(q,p,k^{\prime}) =\sum_{X}(2\pi)^{3}\delta^{4}(p+q-k^{\prime}-p_{X})\] \[\times\langle p,S|J_{\gamma\gamma}^{\mu}(0)|k^{\prime};X\rangle \langle k^{\prime};X|J_{\gamma\gamma}^{\nu}(0)|p,S\rangle, \tag{8}\] \[W_{\gamma Z}^{\mu\nu}(q,p,k^{\prime}) =\sum_{X}(2\pi)^{3}\delta^{4}(p+q-k^{\prime}-p_{X})\] \[\times\langle p,S|J_{ZZ}^{\mu}(0)|k^{\prime};X\rangle\langle k^{ \prime};X|J_{\gamma\gamma}^{\nu}(0)|p,S\rangle,\] (9) \[W_{ZZ}^{\mu\nu}(q,p,k^{\prime}) =\sum_{X}(2\pi)^{3}\delta^{4}(p+q-k^{\prime}-p_{X})\] \[\times\langle p,S|J_{ZZ}^{\mu}(0)|k^{\prime};X\rangle\langle k^{ \prime};X|J_{ZZ}^{\nu}(0)|p,S\rangle, \tag{10}\]
where \(J_{\gamma\gamma}^{\mu}(0)=\bar{\psi}(0)\gamma^{\mu}\psi(0)\), \(J_{ZZ}^{\mu}(0)=\bar{\psi}(0)\Gamma_{q}^{\mu}\psi(0)\) with \(\Gamma_{q}^{\mu}=\gamma^{\mu}(c^{\prime}_{V}-c^{\prime}_{A}\gamma^{5})\). It is convenient to consider the \(k^{\prime}_{\perp}\)-dependent cross section, i.e.,
\[d\sigma=\frac{\alpha_{\rm em}^{2}}{sQ^{4}}A_{r}L_{\mu\nu}^{r}(l,\lambda_{e},l^ {\prime})W_{r}^{\mu\nu}(q,p,S,k^{\prime}_{\perp})\frac{d^{3}l^{\prime}d^{2}k^{ \prime}_{\perp}}{E_{l^{\prime}}}, \tag{11}\]
where the \(k^{\prime}_{\perp}\) integrated hadronic tensor is given by
\[W_{r}^{\mu\nu}(q,p,S,k^{\prime}_{\perp})=\int\frac{dk^{\prime}_{z}}{(2\pi)^{3}2 E_{k^{\prime}}}W_{r}^{\mu\nu}(q,p,S,k^{\prime}). \tag{12}\]
With the convention of exploring the lepton-jet correlation [22], we define \(j=l^{\prime}+k^{\prime}\), the sum of the momenta of the scattered lepton and the final jet. In the \(eN\) collinear frame,
\[\vec{J}_{\perp}=\vec{l}_{\perp}+\vec{k}^{\prime}_{\perp}=\vec{l}_{\perp}^{\eta}+ \vec{k}_{\perp}+\vec{q}_{\perp}=\vec{k}_{\perp} \tag{13}\]
if the higher order radiation of gluon is neglected. In other words, the transverse momentum \(\vec{J}_{\perp}\) equals to the intrinsic transverse momentum \(\vec{k}_{\perp}\) of a quark in the nucleon which can induce the intrinsic asymmetry [21]. Therefore, the hadronic tensor and/or cross section can be defined in terms of the new momentum \(j\)[27], i.e.,
\[\frac{d\sigma}{d\eta d^{2}l^{\prime}_{\perp}d^{2}j_{\perp}}=\frac{\alpha_{\rm em }^{2}}{sQ^{4}}A_{r}L_{\mu\nu}^{r}(l,\lambda_{e},l^{\prime})W_{r}^{\mu\nu}(q,p, S,j_{\perp}), \tag{14}\]
where \(\eta\) is the rapidity of the scattered lepton. We here have used \(d\eta=dl^{\prime}_{z}/E_{l^{\prime}}\).
In the \(eN\) collinear frame, momenta of these particle are parametrized as
\[p^{\mu} =\left(p^{+},0,\vec{0}_{\perp}\right), \tag{15}\] \[l^{\prime} =\left(0,\frac{Q^{2}}{2xyp^{+}},\vec{0}_{\perp}\right),\] (16) \[q^{\mu} =\left(-xyp^{+},\frac{Q^{2}}{2xyp^{+}},\sqrt{1-y}Q,0\right), \tag{17}\]
where \(x=Q^{2}/2p\cdot q\), \(y=p\cdot q/p\cdot l\). Since the transverse momentum \(\vec{J}_{\perp}\) equals to the intrinsic transverse momentum \(\vec{k}_{\perp}\) of a quark in the nucleon in the \(eN\) collinear frame, we can define
\[\vec{J}_{\perp}^{\mu}=k_{\perp}^{\mu}=|k_{\perp}|(0,0,\cos\varphi,\sin\varphi). \tag{18}\]
We also parametrize the transverse polarization vector as,
\[S_{T}^{\mu}=|S_{T}|\left(0,0,\cos\varphi_{S},\sin\varphi_{S}\right). \tag{19}\]
## III The hadronic tensor
In order to calculate the differential cross section, Eq. (14), we need to obtain the hadronic tensor. To be explicit, we divide the hadronic tensor into a leading twist part and a twist-3 part. Leading twist hadronic tensor had been obtained in Ref. [27], we repeat here explicitly.
### The leading twist hadronic tensor
Equations (8)-(10) show respectively the operator definitions of hadronic tensors for the EM, interference and weak interaction. One can choose any one of them for example for illustration. Generally, we choose \(W^{\mu\nu}_{ZZ}\). After simple algebraic calculation, the hadronic tensor can be written as,
\[W^{\mu\nu}=\frac{2E_{k^{\prime}}}{2p\cdot q}\text{Tr}\left[\hat{ \Phi}^{(0)}(x,k_{\perp})\hat{H}^{\mu\nu}_{ZZ}(q,k)\right](2\pi)^{3}\delta(q_{z }+k_{z}-k^{\prime}_{z}). \tag{20}\]
Hereafter, we neglect the subscript \(ZZ\) of \(W^{\mu\nu}\) for simplicity. Integrating over \(k_{z}\), see Eq. (12), we have the \(k_{\perp}\)-dependent hadronic tensor. Changing the variable \(k_{\perp}\) into \(j_{\perp}\) gives \(W^{\mu\nu}(q,p,S,j_{\perp})\),
\[\hat{W}^{\mu\nu}=\frac{1}{2p\cdot q}\text{Tr}\left[\hat{\Phi}^{(0) }(x,k_{\perp})\hat{H}^{\mu\nu}_{ZZ}(q,k)\right], \tag{21}\]
where the quark-quark correlator is
\[\hat{\Phi}^{(0)}(x,k_{\perp}) =\int\frac{p^{+}dy^{-}d^{2}\vec{y}_{\perp}}{(2\pi)^{3}}e^{ixp^{+} y^{-}-\hat{k}_{\perp}\vec{y}_{\perp}}\] \[\times\langle p,S|\bar{\psi}(0)\mathcal{L}(0,y)\psi(y)|p,S\rangle. \tag{22}\]
The gauge link has been inserted into the quark-quark correlator Eq. (22) to keep the gauge invariance. In the jet production SIDIS process, where the fragmentation is not considered, only the chiral even PDFs are involved. Since there is no spin flip, we only need to consider the \(\gamma^{\mu}\)- and the \(\gamma^{\mu}\gamma^{5}\)-terms in the decomposition of these correlators.
\[\hat{\Phi}^{(0)} =\frac{1}{2}\left[\gamma^{\mu}\Phi^{(0)}_{\alpha}+\gamma^{\mu} \gamma_{5}\hat{\Phi}^{(0)}_{\alpha}\right], \tag{23}\] \[\hat{\Phi}^{(1)}_{\rho} =\frac{1}{2}\left[\gamma^{\mu}\Phi^{(1)}_{\rho\alpha}+\gamma^{ \mu}\gamma_{5}\hat{\Phi}^{(1)}_{\rho\alpha}\right], \tag{24}\]
where \(\hat{\Phi}^{(1)}_{\rho}\) is the quark-gluon-quark correlator which will be introduced in the next part. The TMD PDFs are defined through the decomposition of the correlators or the coefficient functions,
\[\Phi^{(0)}_{\alpha}=p^{+}\bar{n}_{\alpha}\Big{(}f_{1}-\frac{k_{ \perp}\cdot\bar{S}_{T}}{M}f_{1T}^{\pm}\Big{)}+k_{\perp\alpha}f^{\perp}\] \[-M\bar{S}_{T\alpha}f_{T}-\lambda_{h}\bar{k}_{\perp\alpha}f_{L}^{ \perp}-\frac{k_{\perp\alpha}k_{\perp\beta}}{M}\bar{S}^{\beta}_{T}f_{T}^{\pm}, \tag{25}\] \[\tilde{\Phi}^{(0)}_{\alpha} =p^{+}\bar{n}_{\alpha}\Big{(}-\lambda_{h}g_{1L}+\frac{k_{\perp} \cdot S_{T}}{M}g_{1T}^{\pm}\Big{)}-\bar{k}_{\perp\alpha}g^{\perp}\] \[-MS\tau_{\alpha}g_{T}-\lambda_{h}k_{\perp\alpha}g_{L}^{\perp}+ \frac{k_{\perp\alpha}k_{\perp\beta}}{M}\bar{S}^{\beta}_{T}g_{L}^{\pm}. \tag{26}\]
Here \(\bar{A}^{\alpha}=\varepsilon^{\alpha A}_{\perp}=\varepsilon^{\alpha B}_{\perp} A_{\perp\beta}\), \(A\) can be \(k_{\perp}\) or \(S_{T}\). \(k_{\perp\alpha}k_{\perp\beta}=k_{\perp\alpha}k_{\perp\beta}-g_{\perp\alpha} \varepsilon^{\lambda}_{\perp}/2\). For the antiquark distribution functions defined via the antiquark correlator \(\overline{\Phi}(x,k_{\perp})\)[4; 29], we have relations \(\overline{\Phi}^{(\gamma)}=+\Phi^{(\gamma)}\) and \(\overline{\Phi}^{(\gamma\gamma^{5})}=-\Phi^{(\gamma\gamma^{5})}\), where \(\Phi^{(\gamma]}\) and \(\Phi^{(\gamma\gamma^{5})}\) denote respectively distribution functions given in Eq. (25) and (26).
The hard part in Eq. (21) is abbreviated as
\[\hat{H}^{\mu\nu}_{ZZ}(q,k)=\Gamma^{\mu\alpha}(q+\not{k})\Gamma^{ \nu\alpha}. \tag{27}\]
In the \(eN\) collinear frame we have obtain the following relationships
\[k^{-}<<k_{\perp}<<q_{\perp}\sim q^{-}, \tag{28}\]
and \(q^{+}+k^{+}=(1-y)xp^{+}\). Neglecting the small components of \(k\), we have
\[(\not{q}+\not{k})=(1-y)xp^{+}\not{\bar{n}}+q^{-}\not{\bar{n}}+\not{q}_{\perp}. \tag{29}\]
In addition to the minus component of \(q\), we see, the transverse and the plus components also contribute in the \(eN\) collinear frame. This is important to the requirement of the current conservation of the hadronic tensor. We can obtain the result is the \(\gamma^{*}N\) frame by neglecting the first and third terms in Eq. (29) gives the [27],
\[(\not{q}+\not{k})=q^{-}\not{\bar{n}}. \tag{30}\]
According to the above discussion, the hadronic tensor is calculated as
\[W^{\mu\nu}_{i2} =-\left(c_{1}^{q}\varepsilon^{\mu\nu}_{\perp}+i\varepsilon^{q}_{3} \varepsilon^{\mu\nu}_{\perp}\right)\Big{(}f_{1}-\frac{k_{\perp}\cdot\bar{S}_{T }}{M}f_{1T}^{\perp}\Big{)}\] \[-\left(c_{3}^{q}\varepsilon^{\mu\nu}_{3}+i\varepsilon^{q}_{1} \varepsilon^{\mu\nu}_{\perp}\right)\Big{(}-\lambda_{h}g_{1L}+\frac{k_{\perp} \cdot S_{T}}{M}g_{1T}^{\perp}\Big{)}, \tag{31}\]
where subscript \(t2\) denotes leading twist. Dimensionless tensors are defined as
\[\vec{g}^{\mu\nu}_{\perp} =g^{\mu\nu}_{\perp}-\frac{\vec{q}^{2}_{\perp}}{(q^{-})^{2}}\bar{n} ^{\mu}\bar{n}^{\nu}-\frac{1}{q^{-}}q^{\parallel\mu}_{\perp}\bar{n}^{\nu}, \tag{32}\] \[\vec{g}^{\mu\nu}_{\perp} =\varepsilon^{\mu\nu}_{\perp}+\frac{1}{q^{-}}e^{\mu\bar{n}q_{ \perp}}. \tag{33}\]
In the \(\gamma^{*}N\) collinear frame, only \(g^{\mu\nu}_{\perp}\) and \(\varepsilon^{\mu\nu}_{\perp}\) are involved on the right-hand side in Eqs. (32) and (33). It is easily to check that \(q_{\mu}\vec{g}^{\mu\nu}_{\perp}=q_{y}\vec{g}^{\mu\nu}_{\perp}=0\) and \(q_{\mu}\vec{g}^{\mu\nu}_{\perp}=q_{y}\vec{g}^{\mu\nu}_{\perp}=0\). This relationships imply that the current conservation of the hadronic tensor is satisfied.
### The twist-3 hadronic tensor
The twist-3 hadronic tensor has two parts of contributions, one from the quark-quark correlator given in Eq. (3.3), the other from the quark-gluon-quark correlator,
\[\hat{\Phi}^{(1)}_{\rho}\left(x,k_{\perp}\right) =\int\frac{p^{+}dy^{-}d^{2}y_{\perp}}{(2\pi)^{3}}e^{ixp^{+}y^{-}- \vec{k}_{\perp}\cdot\vec{y}_{\perp}}\] \[\times\langle p,S|\hat{\psi}(0)D_{z\rho}(0)\mathcal{L}(0,y)\psi(y )|p,S\rangle, \tag{3.15}\]
where \(D_{\rho}(y)=-i\partial_{\rho}+gA_{\rho}(y)\) is the covariant derivative. Similar to Eqs. (3.16) and (3.17), we can decompose the quark-gluon-quark correlator as we did for the quark-quark correlator,
\[\Phi^{(1)}_{\rho\sigma} =p^{+}\bar{n}_{\alpha}\Big{[}k_{\perp\rho}f_{d}^{\perp}-M\bar{S}_ {T\rho}f_{dT}\] \[-\lambda_{d}\bar{k}_{\perp\rho}f_{dL}^{\perp}-\frac{k_{\perp \rho}k_{\perp\rho}}{M}\bar{S}_{T}^{\beta}f_{dT}^{\perp}\Big{]}, \tag{3.16}\]
\[\hat{\Phi}^{(1)}_{\rho\sigma} =ip^{+}\bar{n}_{\alpha}\Big{[}\bar{k}_{\perp\rho}g_{d}^{\perp}+ MS_{T\rho}g_{dT}\] \[+\lambda_{d}k_{\perp\rho}g_{dL}^{\perp}-\frac{k_{\perp\rho}\langle k _{\perp\rho}\rangle}{M}S_{T}^{\beta}g_{dT}^{\perp}\Big{]}, \tag{3.17}\]
where a subscript \(d\) is used to denote TMD PDFs defined via the quark-gluon-quark correlator or coefficient functions.
For the sake of simplicity, we show the complete twist-3 hadronic tensor here and give the calculation procedure in the appendix A. Then the complete twist-3 hadronic tensor which satisfies the current conservation is written as
\[W^{\mu\nu}_{\prime 3} =\frac{f^{\perp}}{p\cdot q}h_{1}^{\mu\nu}+\frac{\lambda_{d}f_{L}^{ \perp}}{p\cdot q}h_{2}^{\mu\nu}+\frac{g^{\perp}}{p\cdot q}h_{3}^{\mu\nu}+\frac {3g_{L}^{\perp}}{p\cdot q}h_{4}^{\mu\nu}\] \[+\frac{Mf_{T}}{p\cdot q}h_{5}^{\mu\nu}+\frac{f_{T}^{\perp}}{p\cdot q }h_{6}^{\mu\nu}+\frac{Mg_{T}}{p\cdot q}h_{7}^{\mu\nu}+\frac{g_{T}^{\perp}}{p \cdot q}h_{8}^{\mu\nu}, \tag{3.18}\]
where \(h_{1-8}^{\mu\nu}\) are defined as
\[h_{1}^{\mu\nu} =+c_{1}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}+k_{\perp}^{[\mu} \bar{n}^{\nu]}(2-y)xp^{+}+k_{\perp}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g^{\mu\nu}+ \bar{n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}k_{\perp}\cdot q_{\perp}\] \[+ic_{3}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}-\bar{k}_{\perp}^{[ \mu}\bar{n}^{\nu]}q^{+}-\bar{n}^{[\mu}\bar{n}^{\nu]}k_{\perp}\cdot q_{\perp} \Big{]},\] (3.19) \[h_{3}^{\mu\nu} =-c_{3}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}+\bar{k}_{\perp}^{[ \mu}\bar{n}^{\nu]}(2-y)xp^{+}+\bar{k}_{\perp}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g ^{\mu\nu}+\bar{n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}k_{\perp}^{ \mu}\Big{]}\] \[+ic_{3}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}-k_{\perp}^{[\mu} \bar{n}^{\nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}k_{\perp}\cdot q_{\perp}\Big{]},\] (3.20) \[h_{3}^{\mu\nu} =-c_{3}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}+\bar{k}_{\perp}^{[ \mu}\bar{n}^{\nu]}(2-y)xp^{+}+\bar{k}_{\perp}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g ^{\mu\nu}+\bar{n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}k_{\perp}^{ \mu}\Bigg{]}\] \[+ic_{3}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}+\bar{S}_{T}^{[\mu} \bar{n}^{\nu]}(2-y)xp^{+}+\bar{S}_{T}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g^{\mu\nu} +\bar{n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}k_{\perp}^{\mu S}\, \Big{]}\] \[+ic_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}-S_{T}^{[\mu}\bar{n}^{ \nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}S_{T}\cdot q_{\perp}\Big{]}\Bigg{\}},\] (3.23) \[h_{6}^{\mu\nu} =-\Bigg{\{}+c_{1}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}+k_{\perp }^{[\mu}\bar{n}^{\nu]}(2-y)xp^{+}+k_{\perp}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g ^{\mu\nu}+\bar{n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}k_{\perp}\cdot q _{\perp}\Big{]}\] \[+ic_{3}^{q}\Big{[}\bar{k}_{\perp}^{[\mu}n^{\nu]}-\bar{k}_{\perp}^{ [\mu}\bar{n}^{\nu]}q^{-}+\bar{n}^{[\mu}n^{\nu]}S_{T}\cdot q_{\perp}\Big{]} \Bigg{\}},\] (3.24) \[h_{7}^{\mu\nu} =+\Bigg{\{}-c_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}+S_{T}^{[\mu }\bar{n}^{\nu]}(2-y)xp^{+}+S_{T}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g^{\mu\nu}+\bar{ n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}S_{T}\cdot q_{\perp}\Big{]}\] \[+ic_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}-S_{T}^{[\mu}\bar{n}^{ \nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}S_{T}\cdot q_{\perp}\Big{]}\Bigg{\}}\frac{k_{ \perp}^{2}}{2M},\] (3.25) \[h_{7}^{\mu\nu} =+\Bigg{\{}-c_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}+S_{T}^{[\mu} n^{\nu]}(2-y)xp^{+}+S_{T}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g^{\mu\nu}+\bar{n}^{\mu}\bar{n}^{ \nu}\frac{2xp^{+}}{q^{-}}\Big{)}S_{T}\cdot q_{\perp}\Big{]}\] \[+ic_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}-S_{T}^{[\mu}\bar{n}^{ \nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}S_{T}\cdot q_{\perp}\Big{]}\Bigg{\}}\frac{k_{ \perp}^{2}}{2M},\] (3.26) \[h_{7}^{\mu\nu} =+\Bigg{\{}-c_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}+S_{T}^{[\mu }n^{\nu]}(2-y)xp^{+}+S_{T}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g^{\mu\nu}+\bar{n}^{ \mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}S_{T}\cdot q_{\perp}\Big{]}\] \[+ic_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}-S_{T}^{[\mu}\bar{n}^{ \nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}S_{T}\cdot q_{\perp}\Big{]}\Bigg{\}}\frac{k_{ \perp}^{2}}{2M},\] (3.27) \[h_{7}^{\mu\nu} =+\Bigg{\{}-c_{3}^{q
\[-ic_{1}^{q}\Big{[}\bar{S}_{T}^{[\mu}n^{\nu]}q^{-}-\bar{S}_{T}^{[\mu}n^{ \nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}\varepsilon_{\perp}^{qS}\Big{]}\Bigg{\}}, \tag{3.26}\] \[\bar{I}_{8}^{\mu\nu}=+\Bigg{\{}+c_{3}^{q}\Big{[}k_{\perp}^{[\mu}n^ {\nu]}q^{-}+k_{\perp}^{[\mu}\bar{n}^{\nu]}(2-y)xp^{+}+k_{\perp}^{[\mu}q_{\perp} ^{\nu]}-\Big{(}g^{\mu\nu}+\bar{n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)} k_{\perp}\cdot q_{\perp}\Big{]}\] \[\qquad\qquad+ic_{1}^{q}\Big{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}-\bar{ k}_{\perp}^{[\mu}\bar{n}^{\nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}\varepsilon_{ \perp}^{qS}\Big{]}\Bigg{\}}\frac{k_{\perp}\cdot S_{T}}{M}\] (3.27) \[-\Bigg{\{}+c_{3}^{q}\Big{[}S_{T}^{[\mu}n^{\nu]}q^{-}+S_{T}^{[\mu} \bar{n}^{\nu]}(2-y)xp^{+}+S_{T}^{[\mu}q_{\perp}^{\nu]}-\Big{(}g^{\mu\nu}+\bar{ n}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{-}}\Big{)}S_{T}\cdot q_{\perp}\Big{]}\] \[\qquad\qquad+ic_{1}^{q}\Big{[}\bar{S}_{T}^{[\mu}n^{\nu]}q^{-}- \bar{S}_{T}^{[\mu}\bar{n}^{\nu]}q^{+}-\bar{n}^{[\mu}n^{\nu]}\varepsilon_{ \perp}^{qS}\Big{]}\Bigg{\}}\frac{k_{\perp}^{2}}{2M}. \tag{3.28}\]
We see clearly that the full twist-3 hadronic tensor satisfies current conservation, \(q_{\mu}\bar{W}_{t3}^{\mu\nu}=q_{\nu}\bar{W}_{t3}^{\mu\nu}=0\). Although, the \(h\)-tensors are relative complicated, they have the similar forms which would lead to the simple expression of the differential cross section.
## IV The results
### The differential cross section
The differential cross section can be obtained by using the contraction of the leptonic tensor and hadronic tensor. With variables shown in Eqs. (2.15)-(2.19), we use Eqs. (2.7) and (3.13), (3.14) and obtain
\[L_{\mu\nu}^{ZZ}\left(c_{1}^{q}\bar{s}_{\perp}^{\mu\nu}+ic_{3}^{q }\varepsilon_{\perp}^{\mu\nu}\right)=-\frac{2Q^{2}}{y^{2}}\left[T_{0}^{q}(y)- \lambda_{e}\bar{T}_{0}^{q}(y)\right], \tag{4.1}\] \[L_{\mu\nu}^{ZZ}\left(c_{3}^{q}\bar{s}_{\perp}^{\mu\nu}+ic_{1}^{q }\varepsilon_{\perp}^{\mu\nu}\right)=-\frac{2Q^{2}}{y^{2}}\left[\bar{T}_{1}^{q }(y)-\lambda_{e}T_{1}^{q}(y)\right], \tag{4.2}\]
where \(T\)-functions are defined as
\[T_{0}^{q}(y)=c_{1}^{r}c_{1}^{q}A(y)+c_{3}^{r}c_{3}^{q}C(y),\] \[\bar{T}_{0}^{q}(y)=c_{3}^{r}c_{1}^{q}A(y)+c_{1}^{r}c_{3}^{q}C(y),\] \[T_{1}^{q}(y)=c_{3}^{r}c_{3}^{q}A(y)+c_{1}^{r}c_{1}^{q}C(y),\] \[\bar{T}_{1}^{q}(y)=c_{1}^{r}c_{3}^{q}A(y)+c_{3}^{r}c_{1}^{q}C(y). \tag{4.3}\]
Here \(A(y)=y^{2}-2y+2,C(y)=y(2-y)\). A simple algebraic calculation gives the leading twist cross section of the jet production SIDIS in the \(eN\) collinear frame
\[d\tilde{\sigma}_{t2}=\frac{\alpha_{\rm em}^{2}\chi}{yQ^{4}}2x \bigg{\{}\Big{(}T_{0}^{q}(y)-\lambda_{e}\bar{T}_{0}^{q}(y)\Big{)}\,f_{1}\] \[\qquad\qquad-\Big{(}\bar{T}_{1}^{q}(y)-\lambda_{e}T_{1}^{q}(y) \Big{)}\,\lambda_{h}g_{1L}\] \[\qquad\qquad+|S_{T}|k_{\perp M}\Big{[}\sin(\varphi-\varphi_{S}) \left(T_{0}^{q}(y)-\lambda_{e}\bar{T}_{0}^{q}(y)\right)f_{1T}^{\perp}\] \[\qquad\qquad-\cos(\varphi-\varphi_{S})\left(\bar{T}_{1}^{q}(y)- \lambda_{e}T_{1}^{q}(y)\right)g_{1T}^{\perp}\Big{]}\Bigg{\}}, \tag{4.4}\]
where \(d\tilde{\sigma}_{t2}=d\sigma_{t2}/(d\eta d^{2}l_{\perp}d^{2}j_{\perp})\), \(k_{\perp M}=|k_{\perp}|/M\). Subscript \(t2\) denotes leading twist. One can check that \(d\sigma_{t2}^{\gamma N}=d\sigma_{t2}^{\kappa N}\), i.e., the differential cross sections in these two frames are the same at leading twist level.
We can calculate the twist-3 differential cross section similarly. For the sake of simplicity, we only show contractions of the leptonic tensor and \(h_{1,3}^{\mu\nu}\) here,
\[L_{\mu\nu}^{ZZ}\cdot h_{1}^{\mu\nu}=\frac{2Q^{3}}{y^{2}}|k_{\perp }|\left[T_{2}^{q}(y)-\lambda_{e}\bar{T}_{2}^{q}(y)\right]\cos\varphi, \tag{4.5}\] \[L_{\mu\nu}^{ZZ}\cdot h_{3}^{\mu\nu}=\frac{2Q^{3}}{y^{2}}|k_{\perp }|\left[\bar{T}_{3}^{q}(y)-\lambda_{e}T_{3}^{q}(y)\right]\sin\varphi. \tag{4.6}\]
Other contractions have the same forms. The \(T\)-functions are defined as
\[T_{2}^{q}(y)=c_{1}^{r}c_{1}^{q}B(y)+c_{3}^{r}c_{3}^{q}D(y),\] \[\bar{T}_{2}^{q}(y)=c_{3}^{r}c_{1}^{q}B(y)+c_{1}^{r}c_{3}^{q}D(y),\] \[T_{3}^{q}(y)=c_{3}^{r}c_{3}^{q}B(y)+c_{1}^{r}c_{1}^{q}D(y),\] \[\bar{T}_{3}^{q}(y)=c_{1}^{r}c_{3}^{q}B(y)+c_{3}^{r}c_{1}^{q}D(y). \tag{4.7}\]
with \(B(y)=2-y^{2},D(y)=y^{2}\sqrt{1-y}\). After simple calculations, we write down the differential cross section at twist-3,
\[d\tilde{\sigma}_{t3}=\frac{\alpha_{\rm em}^{2}\chi}{yQ^{4}}4x^{2 }\kappa_{M}\bigg{\{}k_{\perp M}\cos\varphi\left(T_{2}^{q}(y)-\lambda_{e}\bar{T}_ {2}^{q}(y)\right)f^{\perp}\] \[\qquad\qquad-k_{\perp M}\sin\varphi\left(\bar{T}_{3}^{q}(y)- \lambda_{e}T_{3}^{q}(y)\right)g^{\perp}\] \[\qquad\qquad-\lambda_{h}\Big{[}k_{\perp M}\cos\varphi\left(\bar{T}_ {3}^{q}(y)-\lambda_{e}T_{3}^{q}(y)\right)g_{L}^{\perp}\] \[\qquad\qquad-k_{\perp M}\sin\varphi\left(T_{2}^{q}(y)-\lambda_{e} \bar{T}_{2}^{q}(y)\right)f_{L}^{\perp}\Big{]}\] \[\qquad\qquad+|S_{T}|\Big{[}\sin\varphi_{S}\left(T_{2}^{q}(y)- \lambda_{e}\bar{T}_{2}^{q}(y)\right)f_{T}\] \[\qquad\qquad+\cos\varphi_{S}\left(T_{3}^{q}(y)-\lambda_{e}T_{3}^{q}(y) \right)g_{T}\] \[\qquad\qquad+\sin(2\varphi-\varphi_{S})\left(T_{2}^{q}(y)-\lambda_{e} \bar{T}_{2}^{q}(y)\right)\frac{k_{\perp M}^{2}}{2}f_{T}^{\perp}\] \[\qquad\qquad+\cos(2\varphi-\varphi_{S})\left(\bar{T}_{3}^{q}(y)- \lambda_{e}T_{3}^{q}(y)\right)\frac{k_{\perp M}^{2}}{2}g_{T}^{\perp}\Bigg{]}, \tag{4.8}\]
where \(d\tilde{\sigma}_{t3}=d\sigma_{t3}/(d\eta d^{2}l_{\perp}^{\prime}d^{2}j_{\perp})\), \(\kappa_{M}=M/Q\) is the twist sup
pression factor. In the \(\gamma^{*}N\) collinear frame, \(k_{\perp}=k^{\prime}_{\perp}\). In the \(eN\) collinear frame, \(k_{\perp}=k^{\prime}_{\perp}+l^{\prime}_{\perp}\). The difference of the transverse momentum of the incident quark leading to the difference of the cross section at twist-3 level. To be precise, \(B(y)\) and \(D(y)\) are different in these two frame. For example, for the \(f^{\perp}\) term, we have \(-2Q^{3}|k_{\perp}|\cos\varphi 2(2-y)\sqrt{1-y/y^{2}}\) in the \(\gamma^{*}N\) collinear frame and \(2Q^{3}|k_{\perp}|\cos\varphi(2-y^{2})/y^{2}\) in the \(eN\) collinear frame. The minus sign comes from the product of \(l\) and \(k\), the momenta of the electron and the incident quark.
### The azimuthal asymmetries
Azimuthal symmetries are measurable quantities which are generally used to extract (TMD) PDFs. In the jets production reaction, the soft parts are only TMD PDFs. Uncertainties from FFs vanish. Under this circumstance, jet production SIDIS process can be good a reaction in determining the TMD PDFs.
In this part, we consider both the unpolarized beam (\(\lambda_{e}=0\)) and the polarized beam (\(\lambda_{e}=\pm 1\)) cases. They contribute to different azimuthal asymmetries results. The azimuthal asymmetry has a definite definition, e.g.,
\[\langle\sin\varphi\rangle_{U,U}=\frac{\int d\bar{\sigma}\sin\varphi d\varphi}{ \int d\bar{\sigma}d\varphi}, \tag{11}\]
for the unpolarized or longitudinally polarized target case, and
\[\langle\sin(\varphi-\varphi_{S})\rangle_{U,T}=\frac{\int d\bar{\sigma}\sin( \varphi-\varphi_{S})d\varphi d\varphi_{S}}{\int d\bar{\sigma}d\varphi d\varphi _{S}}, \tag{12}\]
for the transversely polarized target case. The subscripts such as \((U,T)\) denote the polarizations of the lepton beam and the target, respectively. At the leading twist, there are two polarization dependent azimuthal asymmetries which are given by (the sum over \(r=ZZ\), \(\gamma Z\) and \(\gamma\gamma\) is implicit in the numerator and the denominator respectively)
\[\langle\sin(\varphi-\varphi_{S})\rangle_{U,T}=k_{\perp M}\frac{ \chi T_{0}^{q}(y)f_{1T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}}, \tag{13}\] \[\langle\cos(\varphi-\varphi_{S})\rangle_{U,T}=-k_{\perp M}\frac{ \chi\tilde{T}_{1}^{q}(y)g_{1T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}}. \tag{14}\]
\(f_{1T}^{\perp}\) is the famous Sivers function[30; 31] which has been studied widely. At twist-3, we have 8 azimuthal asymmetries. They are given by
\[\langle\cos\varphi\rangle_{U,U}=xx_{M}k_{\perp M}\frac{\chi T_{2}^ {q}(y)}{\chi T_{0}^{q}(y)}\frac{f^{\perp}}{f_{1}}, \tag{15}\] \[\langle\sin\varphi\rangle_{U,U}=xx_{M}k_{\perp M}\frac{\chi \tilde{T}_{3}^{q}(y)g^{\perp}+\lambda_{b}\chi T_{2}^{q}(y)f_{1}^{\perp}}{\chi T _{0}^{q}(y)f_{1}},\] (16) \[\langle\cos\varphi_{S}\rangle_{U,T}=xx_{M}k_{\perp M}\frac{\chi T _{2}^{q}(y)f^{\perp}-\lambda_{b}\chi T_{3}^{q}(y)g_{L}^{\perp}}{\chi T_{0}^{q }(y)f_{1}},\] (17) \[\langle\sin\varphi\rangle_{U,T}=xx_{M}k_{\perp M}\frac{\chi T_{2 }^{q}(y)f_{T}}{\chi T_{0}^{q}(y)f_{1}},\] (18) \[\langle\cos(2\varphi-\varphi_{S})\rangle_{U,T}=-x_{M}k_{\perp M}^ {2}\frac{\chi\tilde{T}_{3}^{q}(y)g_{T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}},\] (19) \[\langle\sin(2\varphi-\varphi_{S})\rangle_{U,T}=x_{M}k_{\perp M}^ {2}\frac{\chi T_{2}^{q}(y)f_{T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}}. \tag{20}\]
For the case of the polarized electron beam, we obtain similar results as the unpolarized case. They have one-to-one correspondence. The two kinds of asymmetries at leading twist are given by,
\[\langle\sin(\varphi-\varphi_{S})\rangle_{L,T}=-\lambda_{e}k_{\perp M }\frac{\chi T_{0}^{q}(y)f_{1T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}}, \tag{21}\] \[\langle\cos(\varphi-\varphi_{S})\rangle_{L,T}=\lambda_{e}k_{\perp M }\frac{\chi T_{1}^{q}(y)g_{1T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}}. \tag{22}\]
At twist-3, we have 8 azimuthal asymmetries. They are given by
\[\langle\cos\varphi\rangle_{L,U}=-\lambda_{e}x_{M}k_{\perp M} \frac{\chi\tilde{T}_{2}^{q}(y)f^{\perp}}{\chi T_{0}^{q}(y)f_{1}}, \tag{23}\] \[\langle\sin\varphi\rangle_{L,U}=-\lambda_{e}x_{M}k_{\perp M} \frac{\chi T_{3}^{q}(y)g^{\perp}}{\chi T_{0}^{q}(y)f_{1}},\] (24) \[\langle\cos\varphi\rangle_{L,L}=-\lambda_{e}x_{M}k_{\perp M} \frac{\chi\tilde{T}_{2}^{q}(y)f^{\perp}-\lambda_{b}\chi T_{3}^{q}(y)g_{L}^{ \perp}}{\chi T_{0}^{q}(y)f_{1}},\] (25) \[\langle\sin\varphi\rangle_{L,L}=-\lambda_{e}x_{M}k_{\perp M} \frac{\chi T_{3}^{q}(y)g^{\perp}+\chi\tilde{T}_{2}^{q}(y)f_{L}^{\perp}}{\chi T_{0 }^{q}(y)f_{1}},\] (26) \[\langle\cos\varphi_{S}\rangle_{L,T}=\lambda_{e}x_{M}\frac{\chi T_{3 }^{q}(y)g_{T}}{\chi T_{0}^{q}(y)f_{1}},\] (27) \[\langle\sin\varphi_{S}\rangle_{L,T}=\lambda_{e}x_{M}\frac{\chi \tilde{T}_{2}^{q}(y)f_{T}}{\chi T_{0}^{q}(y)f_{1}},\] (28) \[\langle\cos(2\varphi-\varphi_{S})\rangle_{L,T}=-\lambda_{e}x_{M}k _{\perp M}^{2}\frac{\chi T_{3}^{q}(y)g_{T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}},\] (29) \[\langle\sin(2\varphi-\varphi_{S})\rangle_{L,T}=\lambda_{e}x_{M}k_{ \perp M}^{2}\frac{\chi\tilde{T}_{2}^{q}(y)f_{T}^{\perp}}{2\chi T_{0}^{q}(y)f_{1}}. \tag{30}\]
In the neutral current SIDIS process, weak contributions can not be separated from the EM contribution. We have calculated the contribution from the EM interaction. Numerical estimates shows that weak contributions will reach a few percent when \(Q>10\) GeV. However, the precise values depend on the fraction \(x\) and \(y\). Under this circumstance, weak contributions should be taken into account in measurements of these asymmetries in the SIDIS process.
### The intrinsic asymmetries
In the \(eN\) collinear frame, \(\vec{j}_{\perp}=\vec{k}_{\perp}\) if the higher order radiation of gluon is neglected, see Eq. (13). In other words, the transverse momentum \(\vec{j}_{\perp}\) equals to the intrinsic transverse momentum \(\vec{k}_{\perp}\) of a quark in the nucleon. To explore the imbalance of the transverse momentum of the incident quark in a nucleon, we introduce the intrinsic asymmetry [21]. According to the definition, the transverse momentum of the incident quark (jet) is in the x-y plane. It can be decomposed as
\[k_{\perp}^{x} =k_{\perp}\cos\varphi, \tag{41}\] \[k_{\perp}^{y} =k_{\perp}\sin\varphi. \tag{42}\]
Therefore, we can define \(k_{\perp}^{x,y}(+x)-k_{\perp}^{x,y}(-x)\) to quantify the difference of the transverse momentum between the positive \(x,y\) and negative \(x,y\) directions. To be explicit, we present the general definition of the intrinsic asymmetry,
\[A^{x} =\frac{\int_{-\pi/2}^{\pi/2}d\varphi\ d\bar{\sigma}-\int_{\pi/2}^ {3\pi/2}d\varphi\ d\bar{\sigma}}{\int_{-\pi/2}^{\pi/2}d\varphi\ d\bar{\sigma} +\int_{\pi/2}^{5\pi/2}d\varphi\ d\bar{\sigma}}, \tag{43}\] \[A^{y} =\frac{\int_{0}^{\pi}d\varphi\ d\bar{\sigma}-\int_{\pi}^{2\pi}d \varphi\ d\bar{\sigma}}{\int_{0}^{\pi}d\varphi\ d\bar{\sigma}+\int_{\pi}^{2 \pi}d\varphi\ d\bar{\sigma}}. \tag{44}\]
The sum of the differential cross section for EM, weak and interference terms is understood. Equations (43) and (44) lead to asymmetries in the \(x\)-direction and \(y\)-direction, respectively.
According to our definition, the twist-3 intrinsic asymmetries are obtained as
\[A^{x}_{U,U} =\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi T_{2}^{q}(y) f^{\perp}}{\chi T_{0}^{q}(y)f_{1}}, \tag{45}\] \[A^{y}_{U,U} =\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi\bar{T}_{0}^ {q}(y)g^{\perp}}{\chi T_{0}^{q}(y)f_{1}},\] (46) \[A^{x}_{UL} =-\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi\bar{T}_{3} ^{q}(y)g_{\perp}^{\perp}}{\chi T_{0}^{q}(y)f_{1}},\] (47) \[A^{y}_{UL} =\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi T_{0}^{q}(y )f_{L}^{\perp}}{\chi T_{0}^{q}(y)f_{1}},\] (48) \[A^{x}_{L,U} =-\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi\bar{T}_{2} ^{q}(y)f^{\perp}}{\chi T_{0}^{q}(y)f_{1}},\] (49) \[A^{y}_{L,U} =-\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi T_{3}^{q}( y)g^{\perp}}{\chi T_{0}^{q}(y)f_{1}}, \tag{50}\]
\[A^{x}_{LL} =\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi T_{3}^{q}( y)g_{L}^{\perp}}{\chi T_{0}^{q}(y)f_{1}}, \tag{51}\] \[A^{y}_{LL} =-\frac{4\imath x\kappa_{M}k_{\perp M}}{\pi}\frac{\chi\bar{T}_{2} ^{q}(y)f_{L}^{\perp}}{\chi T_{0}^{q}(y)f_{1}}. \tag{52}\]
We note again that only weak interaction results are shown in Eqs. (45)-(52). For the complete results, EM and interference interactions should be included. Furthermore, the sum of the quark flavor in the numerators and denominators is also understood. At leading twist, the intrinsic asymmetries can also be introduced. But they do not have physical interpretations. We do not consider them here.
In order to illustrate the intrinsic asymmetries shown above, we present the numerical values of \(A^{x}_{U,U}\) and \(A^{x}_{L,U}\) in Figs. 1 and 2. Without proper parametrizations, our estimations are based on the Gaussian ansatz for TMD PDFs, i.e.,
\[f_{1}(x,k_{\perp}) =\frac{1}{\pi{\Lambda^{\prime}}^{2}}f_{1}(x)e^{-\vec{k}_{\perp}^{ 2}/\Delta^{2}}, \tag{53}\] \[f^{\perp}(x,k_{\perp}) =\frac{1}{\pi\Delta^{2}x}f_{1}(x)e^{-\vec{k}_{\perp}^{2}/\Delta^{ 2}}, \tag{54}\]
where \(f_{1}(x)\) are taken from CT14 [32] and the factor is taken as \(x=0.3\) for illustration. We have used the Wandzura-Wilczek approximation, i.e., neglecting quark-gluon-quark correlation function (\(g=0\)) [4; 5], to determine \(f^{\perp}(x,k_{\perp})\).
In the numerical estimates, only the valence quarks are taken into account. Because other contributions are small. For the Gaussian ansatz the widths of the unpolarized TMD PDF \(f_{1}(x,k_{\perp})\) are taken as \({\Lambda^{\prime}}^{2}={\Lambda^{\prime}}^{2}_{d}=0.53\) GeV\({}^{2}\)[33; 34; 35; 36; 37]. However, the widths of the unpolarized TMD PDF \(f^{\perp}(x,k_{\perp})\) run from 0.3 to 0.6 GeV\({}^{2}\), see Figs. 1 and 2. Figure 1 show the results at \(\Delta_{u}^{2}=0.5\) GeV\({}^{2}\) while Fig. 2 show the results at \(\Delta_{d}^{2}=0.5\) GeV\({}^{2}\). In both figures, we choose \(y=0.5\).
According to the numerical estimates, we find that asymmetry \(A^{x}_{L,U}\) is two or three orders of magnitude smaller than \(A^{x}_{UL}\). Because \(A^{x}_{LU}\) is a parity violating effect or an effect of the weak interaction. It should be the same order of magnitude as parity violation in standard model. In addition, asymmetry \(A^{x}_{UL}\) decreases with respect to the energy, while \(A^{x}_{L,U}\) increases with the energy. Furthermore, we find the intrinsic asymmetry is more sensitive to \(\Delta_{u}^{2}\) than \(\Delta_{d}^{2}\). We attribute it to the fact that \(f^{\perp}(x,k_{\perp})\) for \(u\) quark is larger than that for the \(d\) quark in the Gaussian ansatz and the small variation of \(\Delta_{u}^{2}\) will be magnified to the intrinsic asymmetry due to the larger distribution function.
## V Summary
In this paper, we consider the neutral current jet production SIDIS process and calculate the differential cross section of this process at tree level twist-3 in the \(eN\) collinear frame. In this frame, the virtual-photon gains the transverse momentum \(q_{\perp}\) and the current conservation law becomes complicated. Our calculation includes the EM, weak and inference interactions. The initial electron is assumed to be polarized and then scattered off by a target particle with spin-1/2.
After obtaining the differential cross section, we calculate azimuthal asymmetries and intrinsic asymmetries. They provide more measurable quantities for extracting (TMD) PDFs. Two leading twist and eight twist-3 azimuthal asymmetries are obtained for the case of the unpolarized electron beam. Similar results for the case of the polarized electron beam are also obtained. Intrinsic asymmetries indicate the imbalance of the transverse momentum of the incident quark in a nucleon. From the numerical estimates, we find that asymmetry \(A^{x}_{LU}\) is two or three orders of magnitude smaller than \(A^{x}_{LU}\) because of the parity violating effect. In addition, asymmetry \(A^{x}_{LU,U}\) decreases with respect to the energy, while \(A^{x}_{LU}\) increases with the energy. Furthermore, the intrinsic asymmetry is more sensitive to \(\Delta^{2}_{u}\) than \(\Delta^{2}_{d}\).
## Acknowledgements
The author thanks X. H. Yang very much for his kind help. This work was supported by the Natural Science Foundation of Shandong Province (Grant No. ZR2021QA015).
## Appendix A Twist-3 hadronic tensor
There are two origins of the twist-3 hadronic tensor, one from the quark-quark correlator and the other from the quark-gluon-quark correlator. We first consider contributions from the quark-quark correlator. Inserting the twist-3 TMD PDFs in Eqs. (3.6), (3.7) and hadron part given in Eq. (3.8) into Eq. (3.2), we obtain
\[W^{\mu\nu}_{B,q} =\frac{f^{\perp}}{p\cdot q}\bigg{[}+c_{1}^{q}\left(k_{\perp}^{[ \mu}n^{\nu]}q^{-}+k_{\perp}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+k_{\perp}^{[\mu}q _{\perp}^{\nu]}-g^{\mu\nu}k_{\perp}\cdot q_{\perp}\right)\] \[\qquad\qquad+ic_{3}^{q}\left(\bar{k}_{\perp}^{[\mu}n^{\nu]}q^{-}- \bar{k}_{\perp}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\eta]}k_{ \perp}^{\nu]}\right)\bigg{]}\] \[+\frac{\lambda_{h}f_{L}^{\perp}}{p\cdot q}\bigg{[}-c_{1}^{q} \left(\bar{k}_{\perp}^{[\mu}n^{\nu]}q^{-}+\bar{k}_{\perp}^{[\mu}\bar{n}^{\nu]} (1-y)xp^{+}+\bar{k}_{\perp}^{[\mu}q_{\perp}^{\nu]}-g^{\mu\nu}k_{\perp}^{\mu}\right)\]
\[+ic_{3}^{q}\left(k_{\perp}^{[\mu}n^{\nu]}q^{-}-k_{\perp}^{[\mu} \bar{n}^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}k_{\perp}\cdot q_{\perp}\right)\] \[+\frac{g^{\perp}}{p\cdot q}\bigg{[}-c_{3}^{q}\left(\bar{k}_{\perp} ^{[\mu}n^{\nu]}q^{-}+\bar{k}_{\perp}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+\bar{k}_{ \perp}^{[\mu}q_{\perp}^{\nu]}-g^{\mu\nu}k_{\perp}\right)\] \[+ic_{1}^{q}\left(k_{\perp}^{[\mu}n^{\nu]}q^{-}-k_{\perp}^{[\mu} \bar{n}^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}k_{\perp}\cdot q_{\perp}\right) \bigg{]}\] \[+\frac{\lambda g_{L}^{\perp}}{p\cdot q}\bigg{[}-c_{3}^{q}\left(k_ {\perp}^{[\mu}n^{\nu]}q^{-}+k_{\perp}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+k_{ \perp}^{[\mu}q_{\perp}^{\nu]}-g^{\mu\nu}k_{\perp}\cdot q_{\perp}\right)\] \[-ic_{1}^{q}\left(\bar{k}_{\perp}^{[\mu}n^{\nu]}q^{-}-\bar{k}_{ \perp}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}k_{\perp}^{[\mu} \right)\bigg{]},\] \[+\frac{M_{T}}{p\cdot q}\bigg{\{}-c_{1}^{q}\bigg{[}S_{T}^{q}n^{ \nu]}q^{-}+\bar{S}_{T}^{q}\bar{n}^{\nu]}(1-y)xp^{+}+S_{T}^{[\mu}q_{\perp}^{\nu] }-g^{\mu\nu}\varepsilon_{\perp}^{q5}\bigg{]}\] \[+ic_{3}^{q}\bigg{[}S_{T}^{[\mu}n^{\nu]}q^{-}-S_{T}^{[\mu}\bar{n}^{ \nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}S_{T}\cdot q_{\perp}\bigg{]}\bigg{\}}\] \[-\frac{f_{T}^{\perp}}{p\cdot q}\bigg{\{}+c_{1}^{q}\bigg{[}k_{\perp }^{[\mu}n^{\nu]}q^{-}+k_{\perp}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+k_{\perp}^{[\mu }q_{\perp}^{\nu]}-g^{\mu\nu}k_{\perp}\cdot q_{\perp}\bigg{]}\] \[+ic_{3}^{q}\bigg{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}-\bar{k}_{\perp}^ {[\mu}\bar{n}^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}\varepsilon_{\perp}^{q \parallel}\bigg{]}\bigg{\}}\frac{\varepsilon_{\perp}^{k_{\perp}^{\nu}}}{M}\] \[-\frac{f_{T}^{\perp}}{p\cdot q}\bigg{\{}-c_{1}^{q}\bigg{[}\bar{S} _{T}^{[\mu}n^{\nu]}q^{-}+\bar{S}_{T}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+\bar{S}_{ T}^{[\mu}q_{\perp}^{\nu]}-g^{\mu\nu}\varepsilon_{\perp}^{q5}\bigg{]}\] \[+ic_{3}^{q}\bigg{[}S_{T}^{[\mu}n^{\nu]}q^{-}-S_{T}^{[\mu}\bar{n}^{ \nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}S_{T}\cdot q_{\perp}\bigg{]}\bigg{\}} \frac{k_{\perp}^{2}}{2M}\] \[+\frac{Mg_{T}}{p\cdot q}\bigg{\{}-c_{3}^{q}\bigg{[}S_{T}^{[\mu}n^ {\nu]}q^{-}+S_{T}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+S_{T}^{[\mu}q_{\perp}^{\nu]} -g^{\mu\nu}S_{T}\cdot q_{\perp}\bigg{]}\] \[-ic_{1}^{q}\bigg{[}\bar{S}_{T}^{[\mu}n^{\nu]}q^{-}-\bar{S}_{T}^{[ \mu}\bar{n}^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}\varepsilon_{\perp}^{q5} \bigg{]}\bigg{\}}\] \[+\frac{g_{T}^{\perp}}{p\cdot q}\bigg{\{}+c_{3}^{q}\bigg{[}k_{\perp }^{[\mu}n^{\nu]}q^{-}+k_{\perp}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+k_{\perp}^{[\mu }q_{\perp}^{\nu]}-g^{\mu\nu}k_{\perp}\cdot q_{\perp}\bigg{]}\] \[+ic_{1}^{q}\bigg{[}k_{\perp}^{[\mu}n^{\nu]}q^{-}-\bar{k}_{\perp}^ {[\mu}\bar{n}^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}\varepsilon_{\perp}^{q \parallel}\bigg{]}\bigg{\}}\frac{k_{\perp}\cdot S_{T}}{M}\] \[-\frac{g_{T}^{\perp}}{p\cdot q}\bigg{\{}+c_{3}^{q}\bigg{[}S_{T}^{[ \mu}n^{\nu]}q^{-}+S_{T}^{[\mu}\bar{n}^{\nu]}(1-y)xp^{+}+S_{T}^{[\mu}q_{\perp}^{ \nu]}-g^{\mu\nu}S_{T}\cdot q_{\perp}\bigg{]}\] \[+ic_{1}^{q}\bigg{[}S_{T}^{[\mu}n^{\nu]}q^{-}-\bar{S}_{T}^{[\mu}\bar{n }^{\nu]}(1-y)xp^{+}-\bar{n}^{[\mu}n^{\nu]}\varepsilon_{\perp}^{q5}\bigg{]} \bigg{\}}\frac{k_{\perp}^{2}}{2M}. \tag{15}\]
The subscript \(q\) denotes hadronic tensor from the quark-quark correlator. This expression does not satisfy the current conservation law, i. e., \(q_{\mu}W_{\alpha,q}^{\mu\nu}\neq 0\) due to the incompleteness of the twist-3 hadronic tensor.
The quark-gluon-quark correlator comes from the one gluon-exchanging process. From the operator definition of the hadronic tensor we have
\[W_{\alpha,L}^{\mu\nu}=\frac{1}{2p\cdot q}\text{Tr}\left[\hat{\Phi}_{P}^{(1)}(x,k_ {\perp})\hat{H}_{ZZ}^{\mu\nu,\rho}(q,k_{1},k_{2})\right], \tag{16}\]
where \(L\) denotes the left-cut [11], \(\hat{\Phi}_{\rho}^{(1)}\) is the quark-gluon-quark correlator given in Eq. (3.15). \(\hat{H}_{ZZ}^{\mu\nu,\rho}\) is the hard scattering amplitude,
\[\hat{H}_{ZZ}^{\mu\nu\rho}=\Gamma^{\mu,q}\frac{\bar{k}_{2}+q}{(k_{2}+q)^{2}} \gamma^{\rho}(\bar{k}_{1}+q)\Gamma^{\nu,q}. \tag{17}\]
To simplify the hard scattering amplitude, here we use the approximation that only the plus component exist in the propagator.
Under this approximation, we can rewritten the hard scattering amplitude as
\[\hat{H}^{\mu\nu n\nu}_{ZZ}=\frac{1}{2q^{-}}\Gamma^{\mu,q}\sharp\gamma^{\rho}( \not{k}_{1}+\not{q})\Gamma^{\nu,q}. \tag{100}\]
Inserting Eqs. (23), (24) and (100) into (101) and using Eq. (22), we have
\[W^{\mu\nu}_{\Omega,L}=-\left[c_{1}^{q}\left(\vec{n}^{\mu}\vec{n} ^{\gamma}\frac{k_{\perp}\cdot q_{\perp}}{q^{-}}-k_{\perp}^{\gamma}\vec{n}^{\mu }\right)-ic_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp} ^{\theta k}}{q^{-}}-\tilde{k}_{\perp}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{ xp^{+}}{p\cdot q}f^{\perp}\] \[+\left[ic_{1}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{k_{ \perp}\cdot q_{\perp}}{q^{-}}-k_{\perp}^{\gamma}\vec{n}^{\mu}\right)+c_{3}^{q} \left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta k}}{q^{- }}-\tilde{k}_{\perp}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{xp^{+}}{p\cdot q}g ^{\perp}\] \[+\left[ic_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{k_{ \perp}\cdot q_{\perp}}{q^{-}}-k_{\perp}^{\gamma}\vec{n}^{\mu}\right)+c_{1}^{q} \left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta k}}{q^{- }}-\tilde{k}_{\perp}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{xp^{+}}{p\cdot q} \lambda_{h}f_{L}^{\perp}\] \[+\left[c_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{k_{ \perp}\cdot q_{\perp}}{q^{-}}-k_{\perp}^{\gamma}\vec{n}^{\mu}\right)-ic_{1}^{q} \left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta k}}{q^{- }}-\tilde{k}_{\perp}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{xp^{+}}{p\cdot q }\lambda_{h}g_{L}^{\perp}\] \[+\left[ic_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{S_{T} \cdot q_{\perp}}{q^{-}}-S_{T}^{\gamma}\vec{n}^{\mu}\right)+c_{1}^{q}\left( \vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta S}}{q^{-}}- \tilde{S}_{T}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{xp^{+}}{p\cdot q}Mf_{T}\] \[-\left\{-\left[c_{1}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac {k_{\perp}\cdot q_{\perp}}{q^{-}}-k_{\perp}^{\gamma}\vec{n}^{\mu}\right)-ic_{3} ^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta k}}{q ^{-}}-\tilde{k}_{\perp}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{\varepsilon_ {\perp}^{LS}}{M}\right.\] \[\left.\qquad+\left[ic_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma} \frac{S_{T}\cdot q_{\perp}}{q^{-}}-S_{T}^{\gamma}\vec{n}^{\mu}\right)+c_{1}^{ q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta S}}{q^{- }}-\tilde{S}_{T}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{k_{\perp,m}^{2}}{2M} \right\}\frac{xp^{+}}{p\cdot q}f_{T}^{\perp}\] \[+\left\{-\left[c_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma} \frac{k_{\perp}\cdot q_{\perp}}{q^{-}}-k_{\perp}^{\gamma}\vec{n}^{\mu}\right)- ic_{1}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta k}}{q ^{-}}-\tilde{k}_{\perp}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{k_{\perp} \cdot S_{T}}{M}\right.\] \[\left.\qquad+\left[c_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma} \frac{S_{T}\cdot q_{\perp}}{q^{-}}-S_{T}^{\gamma}\vec{n}^{\mu}\right)-ic_{1}^{ q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{\varepsilon_{\perp}^{\theta S}}{q^{- }}-\tilde{S}_{T}^{\gamma}\vec{n}^{\mu}\right)\right]\frac{k_{\perp}^{2}}{2M} \right\}\frac{xp^{+}}{p\cdot q}g_{T}^{\perp}. \tag{101}\]
Subscript \(L\) denotes the left-cut tensor. We note that TMD PDFs marked with subscript \(d\) have been reexpressed in terms of TMD PDFs without subscript by using relation [19]
\[f_{dS}^{K}-g_{dS}^{K}=-x\left(f_{S}^{K}-ig_{S}^{K}\right), \tag{102}\]
where \(K\) can be \(\perp\) and \(S\) can be \(L\) and \(T\). We sum the left-cut and the right-cut terms together and obtain
\[W^{\mu\nu}_{\Omega,L}+W^{\mu\nu}_{\Omega,R}=-\left[c_{1}^{q}\left( \vec{n}^{\mu}\vec{n}^{\gamma}\frac{2xp^{+}}{q^{-}}k_{\perp}\cdot q_{\perp}-xp^{+ }k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right)-ic_{3}^{q}xp^{+}k_{\perp}^{[\mu} \vec{n}^{\gamma]}\right]\frac{1}{p\cdot q}f^{\perp}\] \[+\left[c_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{2xp^{+}}{ q^{-}}\varepsilon_{\perp}^{\mu}-xp^{+}k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right)+ic_{1}^{q}xp^{+ }k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right]\frac{1}{p\cdot q}g^{\perp}\] \[+\left[c_{1}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{2xp^{+}}{ q^{-}}\varepsilon_{\perp}^{\mu}-xp^{+}k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right)+ic_{3}^{q}xp^{+ }k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right]\frac{1}{p\cdot q}\lambda_{h}f_{L}^{\perp}\] \[+\left[c_{3}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{2xp^{+}}{ q^{-}}k_{\perp}\cdot q_{\perp}-xp^{+}k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right)-ic_{1}^{q}xp^{+ }k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right]\frac{1}{p\cdot q}\lambda_{h}g_{L}^{\perp}\] \[+\left[c_{1}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{2xp^{+}}{ q}\varepsilon_{\perp}^{\theta S}-\tilde{S}_{T}^{\mu}\vec{n}^{\gamma]}\right)+ic_{3}^{q}xp^{+ }S_{T}^{[\mu}\vec{n}^{\gamma]}\right]\frac{p\cdot q}{p\cdot q}Mf_{T}\] \[-\left\{-c_{1}^{q}\left(\vec{n}^{\mu}\vec{n}^{\gamma}\frac{2xp^{+}}{ q^{-}}k_{\perp}\cdot q_{\perp}-xp^{+}k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right)-ic_{3}^{q}xp^{+ }k_{\perp}^{[\mu}\vec{n}^{\gamma]}\right]\frac{\varepsilon_{\perp}^{LS}}{M}\]
\[+\left[c_{3}^{q}\left(\bar{m}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^{- }}S_{T}\cdot q_{\perp}-xp^{+}S_{T}^{[\mu}\bar{n}^{\nu]}\right)-ic_{1}^{q}xp^{+}S_ {T}^{[\mu}\bar{n}^{\nu]}\right]\frac{1}{p\cdot q}M_{T}\] \[+\left\{-\left[c_{3}^{q}\left(\bar{m}^{\mu}\bar{n}^{\nu}\frac{2xp^ {+}}{q^{-}}k_{\perp}\cdot q_{\perp}-k_{\perp}^{[\mu}\bar{n}^{\mu]}\right)-ic_{1 }^{q}\bar{k}_{\perp}^{[\mu}\bar{n}^{\nu]}\right]\frac{k_{\perp}\cdot S_{T}}{M}\] \[+\left[c_{3}^{q}\left(\bar{m}^{\mu}\bar{n}^{\nu}\frac{2xp^{+}}{q^ {-}}S_{T}\cdot q_{\perp}-xp^{+}S_{T}^{[\mu}\bar{n}^{\nu]}\right)-ic_{1}^{q}xp^ {+}S_{T}^{[\mu}\bar{n}^{\nu]}\right]\frac{k_{\perp}^{2}}{2M}\right\}\frac{1}{ p\cdot q}g_{T}^{\pm}. \tag{47}\]
Finally, we sum Eqs. (41) and (47) to obtain the complete hadronic tensor given in Eq. (3.18).
|
2302.11814 | FTM: A Frame-level Timeline Modeling Method for Temporal Graph
Representation Learning | Learning representations for graph-structured data is essential for graph
analytical tasks. While remarkable progress has been made on static graphs,
researches on temporal graphs are still in its beginning stage. The bottleneck
of the temporal graph representation learning approach is the neighborhood
aggregation strategy, based on which graph attributes share and gather
information explicitly. Existing neighborhood aggregation strategies fail to
capture either the short-term features or the long-term features of temporal
graph attributes, leading to unsatisfactory model performance and even poor
robustness and domain generality of the representation learning method. To
address this problem, we propose a Frame-level Timeline Modeling (FTM) method
that helps to capture both short-term and long-term features and thus learns
more informative representations on temporal graphs. In particular, we present
a novel link-based framing technique to preserve the short-term features and
then incorporate a timeline aggregator module to capture the intrinsic dynamics
of graph evolution as long-term features. Our method can be easily assembled
with most temporal GNNs. Extensive experiments on common datasets show that our
method brings great improvements to the capability, robustness, and domain
generality of backbone methods in downstream tasks. Our code can be found at
https://github.com/yeeeqichen/FTM. | Bowen Cao, Qichen Ye, Weiyuan Xu, Yuexian Zou | 2023-02-23T06:53:16Z | http://arxiv.org/abs/2302.11814v2 | # FTM: A Frame-level Timeline Modeling Method for Temporal Graph Representation Learning
###### Abstract
Learning representations for graph-structured data is essential for graph analytical tasks. While remarkable progress has been made on static graphs, researches on temporal graphs are still in its beginning stage. The bottleneck of the temporal graph representation learning approach is the neighborhood aggregation strategy, based on which graph attributes share and gather information explicitly. Existing neighborhood aggregation strategies fail to capture either the short-term features or the long-term features of temporal graph attributes, leading to unsatisfactory model performance and even poor robustness and domain generality of the representation learning method. To address this problem, we propose a Frame-level Timeline Modeling (**FTM**) method that helps to capture both short-term and long-term features and thus learns more informative representations on temporal graphs. In particular, we present a novel link-based framing technique to preserve the short-term features and then incorporate a timeline aggregation module to capture the intrinsic dynamics of graph evolution as long-term features. Our method can be easily assembled with most temporal GNNs. Extensive experiments on common datasets show that our method brings great improvements to the capability, robustness, and domain generality of backbone methods in downstream tasks. Our code can be found at [https://github.com/yeeeqichen/FTM](https://github.com/yeeeqichen/FTM).
## Introduction
Graph representation learning intends to transform nodes and links on the graph into lower-dimensional vector embeddings, which can be quite challenging due to the complex graph topological structures and node/link attributes. While approaches on **static graphs** have made breakthroughs and demonstrated distinguishable applicability in various fields [1, 1, 13, 14], those on **temporal graphs** are just getting started. Modeling a temporal graph (which may evolve over time with the addition, deletion, and changing of its attributes) is a core problem in developing real-world industrial systems (_e.g._, social network, citation network, recommendation systems) where many data are time-dependent, and is much more difficult because of the temporal factors. Figure 1 gives an example of temporal graph modeling.
In learning representations on temporal graphs, a key point is the **neighborhood aggregation strategy**, which allows information passing and gathering among graph attributes, so that nodes learn their representations from their neighbors. For static graphs, directly linked nodes are neighbors to each other because they all appear in the one and only topology. In contrast, temporal graph attributes scatter sparsely across the timeline, leading to temporal-structure inconsistency. For any node in a temporal graph, a node connected to it is not necessarily a neighbor, for this node may appear a long time ago or disappear soon. Each node in a temporal graph may also have several temporal neighborhoods, posing a challenge for information aggregation. Therefore, how to design the neighborhood aggregation strategy on temporal graphs remains an open question.
Recent works introduce snapshot-based methods [10, 13, 12] and temporal random walk-based methods [22, 23] for neighborhood aggregation, but are often too simple to capture the evolution of temporal graphs over time. The comparison of the above two methods and our method is shown in Figure 2. In particular, snapshot-based methods equally slice the timeline into a sequence of snapshots, each of which contains nodes and links that occurred within its time span. This kind of method treats a snapshot as a static graph and fails to model the temporal properties within a snapshot, losing short-term features of graph attributes. On
Figure 1: An example of temporal graph modeling. Given a model that has learnt the dynamics of a large number of users’ shopping behaviors in high-dimensional space, what the man in green tends to buy in the future is predictable.
the other hand, temporal random walk-based methods do not impose restrictions on the time range, but select temporal neighbors from the past according to a certain rule (most often randomly) and learn representations based on the neighborhood attributes and their time information. However, the problem is that the randomly constructed temporal neighborhood cannot ensure a balance between short-term features and long-term features.
To develop a representation learning method on temporal graphs that adequately captures both short-term and long-term features, we propose a simple but effective Frame-level Timeline Modeling method (**FTM** for short), at the heart of which is the innovation of the temporal neighborhood aggregation strategy: first, we refer to the concept of frame1 in signal processing, and put forward a novel method called **link-based framing technique**, where we separate most recent links into several frames (_i.e._, temporal neighborhoods) to emphasize short-term features; then, we extract frame features with a **frame aggregator**, which can be easily replaced by most GNN methods; finally, we design a **timeline aggregator** for learning the intrinsic dynamics of successive frames across the timeline to capture long-term features.
Footnote 1: A fundamental technique to decompose raw signal into multiple ranges according to frame length and hop length.
We conduct experiments on several widely-used benchmarks in both transductive and inductive settings, and the results demonstrate the effectiveness of our proposed method. Moreover, the robustness and domain generality of baselines and our method are also evaluated through quantitative and qualitative experiments, which further suggest the insights of **FTM**. Our main contributions are summarized as follows:
* We propose a simple but effective frame-level timeline modeling method for temporal graph representation learning, namely **FTM**, which makes contributions to the neighborhood aggregation strategy, and can be easily assembled with most GNN methods.
* We conduct comprehensive experiments to show that models assembled with **FTM** achieve better performance on common benchmarks, and we further evaluate its effectiveness through quantitative and qualitative analyses.
* We point out the robustness and domain generality issues of several state-of-the-art GNN-based temporal graph representation learning methods, and demonstrate that FTM could greatly alleviate these issues.
## Related Work
Learning representations with GNNs has become a popular research area for graph modeling. Earlier works explore learning representations of topological structures Kipf and Welling (2016); Grover and Leskovec (2016), extending GNN to inductive learning Hamilton et al. (2017), and integrating attention mechanisms Velickovic et al. (2018). In all these works, however, the time information of graph attributes are discarded.
Recent approaches take advantage of the temporal property. Certain approaches learn to access time-aware knowledge by equally slicing the timeline into a sequence of **snapshots**Trivedi et al. (2019); Singer et al. (2019). They aggregate the topological features in a snapshot and combine time-dependent features with sequence-modeling techniques to learn temporal graph embeddings. However, they ignore the sequential nature of nodes and links within the same snapshot, losing short-term features that can guide learning. Meanwhile, the amount of nodes and links within each snapshot is inconsistent, leading to great data biases in learning topological features.
More recently, TGAT Xu et al. (2020) leverages a time encoding function to learn time-aware representations in continuous time. TGN Rossi et al. (2020), as a variant of TGAT, integrates a memory module to keep track of the evolution of node-level features. These methods make progress in capturing short-term features since the time encoding makes it possible to model the temporal properties of a neighborhood. However, in most cases, they randomly sample neighbors from the past to form a temporal neighborhood for a target node, which means that they cannot ensure a balance between short-term features and long-term features.
Our work adopt the idea of time encoding, but make contributions to the way that temporal neighborhoods are constructed and information is aggregated, so that the model learn more informative representations.
## Proposed Method: FTM
### Problem Formalization
Graph representation learning aims to obtain node or link representations based on their own properties and their interactions with neighbors. Let \(E^{T-}=\left\{e_{i,j,t}|1\leq i,j\leq n,0\leq t<T\right\}\) and \(V^{T-}=\left\{v_{s}|s=1\dots n\right\}\) denote the set of links and the set of nodes observed before time \(T\), respectively, where \(n\) is the amount of nodes, \(v_{s}\) is the \(s\)-th node (\(s\) is only used to distinguish nodes), and \(e_{i,j,t}\) is an link between \(v_{i}\) and \(v_{j}\) emerged at time \(t\in\mathbb{R}^{+}\). Let \(E_{s}^{T-}=\left\{e_{s,i,t}|1\leq i\leq n,0\leq t<T\right\}\cup\left\{e_{j,s,t}|1\leq j\leq n,0\leq t<T\right\}\) denotes the subset of \(E^{T-}\) containing links that link to node \(v_{s}\) and satisfy the time constraint (we mainly consider undirected graphs, where the two parts of \(E_{s}^{T-}\) are equivalent). Supposing that \(G^{T-}=(V^{T-},E^{T-})\) denotes the final state of a temporal graph before time \(T\), learning representations on it is mainly to obtain the node and link representations at time \(t\) based on \(G^{T-}\).
Figure 2: An example illustrating prior techniques and our link-based framing technique (where frame length is 2 and hop length is 1) for neighborhood construction.
#### Input Representation
Graph attributes can be recorded in various ways. For instance, online reviews are in text format, and citations are in triplet format. We encode text with BERT-base Devlin et al. (2019), and other records with TransE Bordes et al. (2013), to initialize node and link features. Then, we split links into frames, and feed the features of successive frames into FTM.
**Link-based Framing Technique.** The process of splitting links into temporal frames is controlled by two parameters:
* **Frame length** defines how many links are included in a frame. For example, at timestamp \(t\), to construct a frame of length \(k\) for node \(v_{s}\), we take the most recent \(k\) links from \(E_{s}^{t-}\) to form this frame and denote it as \(f_{s,k}^{t-}\).
* **Hop length** defines how many links to skip when taking the next frame. In practise, we set it to be \(\frac{frame\ length}{2}\) (which is empirically the best and is also a convention in signal processing) to stabilize the training process. An example is provided in Figure 2.
### Frame-level Timeline Modeling
The main idea of FTM is to preserve both the short-term and long-term features of graph attributes through a **frame aggregator** and a **timeline aggregator**. The role of the frame aggregator is to model each neighborhood that generated by the link-based framing technique, so **it can be replaced by most GNN methods**. For example, the overall framework of the model assembling FTM with TGAT Xu et al. (2020), _i.e._, taking TGAT as the frame aggregator, is shown in Figure 3. Since TGAT is composed of a stack of identical layers (with shared parameters), the calculation process of each layer is similar. Assuming that we want node \(v_{i}\)'s embedding at timestamp \(t\), the calculation process in layer \(l\) can be described as the following two parts:
**Temporally Attentive Frame Aggregator.** While TGAT randomly samples links from the past to form temporal neighborhoods, we integrate \(k\)**most recent** links to construct a frame in order to preserve short-term features. Meanwhile, the reason we add links by number rather than by time (as snapshot-based methods) is to guide the model to learn the common evolution of links, instead of time-interval-related knowledge. Given a frame \(f_{i,k}^{t-}\) of \(v_{i}\) that contains links \(\left\{e_{i,j_{t},t_{1}},\ldots,e_{i,j_{k},t_{k}}\right\}\), we obtain a temporal neighborhood feature matrix \(\mathbf{Z}(t)\) as:
\[\mathbf{Z}(t)=\left[\mathbf{z}_{t}(i,t),\mathbf{z}_{t}(j_{1},t_{1}),\ldots, \mathbf{z}_{t}(j_{k},t_{k})\right], \tag{1}\]
\[\mathbf{z}_{t}(j_{k},t_{k})=\left[\mathbf{h}_{j_{k}}^{(l-1)}(t_{k})\;||\; \varphi(t-t_{k})\;||\;\mathbf{e}_{j_{k}}\right], \tag{2}\]
where \(\mathbf{h}_{j_{k}}^{(l-1)}(t_{k})\) is the previous layer's output for \(v_{j_{k}}\), \(\varphi(\cdot)\) is a time encoding function, \(\mathbf{e}_{j_{k}}\) is the feature vector of \(e_{i,j_{k},t_{k}}\), and \(\mathbf{z}_{t}(j_{k},t_{k})\) maps the information of \(v_{j_{k}}\) into a
Figure 3: The architecture of the model assembling FTM with a backbone network. Assuming that our goal is to compute node \(v_{0}\)’s representation at timestamp \(t_{5}\), we first construct a timeline consists of 3 frames \(\left\{f_{v_{0},2}^{t_{3}-2},f_{v_{0},2}^{t_{4}-2},f_{v_{0},2}^{t_{5}-2}\right\}\) as each layer’s input. At each layer - **Stage 1**, we compute each frame’s representation \(\hat{\mathbf{h}}_{0}^{l}(t_{j})\) in parallel through the backbone network (works as the frame aggregator). **Stage 2**, we aggregate all frames’ representations to get the node representation via the timeline aggregator.
time-aware representation. Then, we attentively aggregate \(Z_{t}\) with the multi-head self-attention mechanism:
\[\textbf{q}^{r}(t)=[\textbf{Z}(t)]_{0}\textbf{W}_{Q}^{r}, \tag{3}\]
\[\textbf{K}^{r}(t)=[\textbf{Z}(t)]_{1:N}\textbf{W}_{K}^{r},\;\textbf{V}^{r}(t)=[ \textbf{Z}(t)]_{1:N}\textbf{W}_{V}^{r} \tag{4}\]
\[\alpha_{j}^{r}=\frac{\exp\left(\textbf{q}^{r\top}\textbf{K}_{j}^{r}\right)}{ \Sigma_{q}\exp\left(\textbf{q}^{r\top}\textbf{K}_{q}^{r}\right)},\;\tilde{ \textbf{h}}_{i}^{l,r}(t)=\sum\nolimits_{j}\alpha_{j}^{r}\textbf{y}_{j}^{r}, \tag{5}\]
where \(\textbf{W}_{Q}^{r},\textbf{W}_{K}^{r},\textbf{W}_{V}^{r}\) are query, key and value matrix, respectively, \(\alpha_{j}^{r}\) denotes the attention weight, and \(\tilde{\textbf{h}}_{i}^{l,r}(t)\) is the output of the \(r\)-th attention head. Assuming that we have \(n_{h}\) attention heads, the frame representation \(\hat{\textbf{h}}_{i}^{l}(t)\) will be:
\[\hat{\textbf{h}}_{i}^{l}(t)=\mathrm{ReLU}(\textbf{y}\textbf{W}_{0}+\textbf{ b}_{0})\textbf{W}_{1}+\textbf{b}_{1}, \tag{6}\]
\[\textbf{y}=\left[\textbf{z}_{t}(i,t)|\tilde{\textbf{h}}_{i}^{l,1}(t)||\ldots |\tilde{\textbf{h}}_{i}^{l,n_{h}}(t)\right], \tag{7}\]
where \(\textbf{W}_{0},\textbf{W}_{1}\) are weights and \(\textbf{b}_{0},\textbf{b}_{1}\) are biases.
**Attentively Frame-level Timeline Aggregator.** In the prior part, we get the representation \(\hat{\textbf{h}}_{i}^{l}(t)\) of frame \(f_{i,k}^{t-}\). Now, we consider how to aggregate the information of multiple frames. Empirically, we set the hop length to half of the frame length to retain redundant information between frames. By doing so, _(i)_ short-term features are further highlighted; and _(ii)_ framing serves as a **scrubbing technique** because irregular links (with abnormal time interval/content) will not play a leading role and the commonalities in the evolution of links will be emphasized.
Let \(F_{i,k}^{t-}=\left\{f_{i,k}^{t_{j}-}|1\leq j\leq n,t_{n}=t\right\}\) denotes a set of frames of node \(v_{i}\), in which the timestamps satisfy:
\[t_{j-1}=\mathbb{T}_{\frac{k}{2}}(E_{i}^{t_{j}-}),2\leq j\leq n \tag{8}\]
where \(n\) is the size of this set, \(\mathbb{T}_{\frac{k}{2}}\) maps a set of links to its \(\frac{k}{2}\)-th (_i.e._, half of the frame length) recent element's timestamp. We call this set a \(n\)-length **timeline** of node \(v_{i}\) at timestamp \(t\), and we get the final node representation \(\textbf{h}_{i}^{l}(t)\) as:
\[\textbf{h}_{i}^{l}(t)=\left[\hat{\textbf{h}}_{i}^{l}(t_{1})||\ldots||\hat{ \textbf{h}}_{i}^{l}(t_{n})\right]^{T}\textbf{W}_{2}+\textbf{b}_{2}, \tag{9}\]
where \(\textbf{W}_{2}\) and **b** are weights and bias. Here we take \(1\)-layer MLP as an example for simplicity, but it could be effortlessly extended to RNN-based or attention-based methods, _etc._
\(\textbf{h}_{i}^{l}(t)\) generated by the last layer is just what we want - node \(v_{i}\)'s embedding at timestamp \(t\), \(\textbf{h}_{i}(t)\).
**Learning & Inference.** Since the temporal information is mostly reflected in the time-sensitive interactions among nodes, we choose to use the future link prediction setup for training. The goal of future link prediction is to predict the probability that an link will exist between a target node \(v_{i}\) and another node \(v_{j}\) at a specific future time, _i.e._, given the set of previous links of \(v_{i}\), we compute the probability of a future link \(e_{i,j,t_{i,j}}\) between \(v_{i}\) and \(v_{j}\). To train the model, we sample a set of negative links (\(\neq e_{i,j,t_{i,j}}\)) and optimize the per-node objective:
\[L=\sum\limits_{v_{i},v_{j},t_{i,j}}\mathrm{Pos}\left(i,j,t_{i,j}\right)+Q\cdot E _{v_{q}\sim P}\mathrm{Neg}\left(i,q,t_{i,j}\right) \tag{10}\]
where \(P\) is the negative link sampling distribution, \(Q\) denotes the negative sampling size, \(\mathrm{Pos}(\cdot,\cdot,\cdot)\) and \(\mathrm{Neg}(\cdot,\cdot,\cdot)\) denote the positive and negative scoring functions:
\[\mathrm{Pos}\left(i,j,t_{i,j}\right)=-\log\left(\sigma\left(-\textbf{h}_{i}(t _{i,j})^{\top}\textbf{h}_{j}(t_{i,j})\right)\right) \tag{11}\]
\[\mathrm{Neg}\left(i,q,t_{i,j}\right)=-\log\left(\sigma\left(\textbf{h}_{i}(t _{i,j})^{\top}\textbf{h}_{q}(t_{i,j})\right)\right) \tag{12}\]
where \(\sigma(\cdot)\) is an activation function, \(\textbf{h}_{i}(t)\) is the representation of node \(v_{i}\) at timestamp \(t\). For inference, the output of \(\mathrm{Pos}(i,j,t_{i,j})\) is used as the logits.
## Experimental Setups
We evaluate our method against strong baselines (adapted to temporal settings when possible). Note that **assembling FTM with a baseline method** means that we take the baseline method as the frame aggregator of FTM.
### Tasks and Metrics
We perform future link prediction to evaluate the quality of the generated graph representations. We use average precision (AP) as the evaluation metric and consider this task in two settings: _(i)_**Transductive Task.** We predict future links among nodes that have been observed during training. _(ii)_**Inductive Task.** We perform future link prediction among nodes that have not been observed in the training phase.
### Datasets
We choose seven datasets that contain time-sensitive node interactions: **Reddit2** is created from posts between active users and subreddits, where users and subreddits are nodes, and posts are links. **Wikipedia3** is created by taking top edited pages in Wikipedia and active users as nodes, and the corresponding edits as links. **Icews144**, **Icews05-155** contain political events and the corresponding timestamps. All nodes are real-world entities (e.g. countries) and links are event types. **Bitcoin-ot6**, **Bitcoin-alpha**7 are who-trusts-whom networks of people who trade with Bitcoin, where nodes are people and links are the credit evaluation. **Mooe8** dataset contains user actions on a popular MOOC platform, where nodes represent users and course activities, and links represent user actions. Dataset scales are listed in Table 1.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Dataset** & **Node** & **Link** \\ \hline Reddit [28] & 11,000 & 672,000 \\ Wikipedia [28] & 9,000 & 157,000 \\ Icews14 [1] & 7,000 & 91,000 \\ Icews05-15 [1] & 10,000 & 461,000 \\ Bitcoin-otc [28] & 6,000 & 36,000 \\ Bitcoin-alpha [28] & 4,000 & 24,000 \\ Mooc [28] & 7,000 & 412,000 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The node and link statics for each dataset.
### Baselines
**GAE**, **VAGE**[14], **DeepWalk**[15] and **Node2vec**[16] are models for static graphs.
**CTDNE**, **DyRep**, **Jodie**, **GraphSAGE**, **GAT**, **TGAT** and **TGN** are baselines for temporal graphs. We do not ensemble FTM with CTDNE, DyRep and Jodie due to the conflicting schemes9. For other methods, we test the original version and the FTM-assembled version. There may be slight differences between our implementation and others, but it is fair for comparison.
Footnote 9: These methods have their own custom temporal neighborhood construction strategies. If we apply our action-based framing technique to these methods, we are only assembling FTM with their feature extraction modules.
## Results and Analysis
**Transductive & Inductive Future Link Prediction.** As shown in Table 2, (1) temporal methods surpass static ones, suggesting the importance of temporal properties in modeling temporal graphs; (2) models assembled with FTM consistently outperform the originals on all benchmarks, demonstrating the effectiveness of FTM. For instance, on Wikipedia, FTM brings an average gain of **2.98** in AP under inductive setting. Meanwhile, TGN+FTM achieves new state-of-the-art performance on both Wikipedia and Reddit. The overall performance on this task indicates that FTM guides the learning of the evolution of temporal graphs and helps to generate more informative representations.
### Quantitative Analysis
Given these overall performance improvements, we investigate how FTM's improvements are reflected in the learnt node representations. Because we have the gold label of node type in Wikipedia, we conduct a downstream task of future link prediction, **node classification**, in two settings: _(i)_**Fine-tuning.** We fine-tune a MLP layer to classify nodes based on the learnt node embeddings. As the result in the second column of Table 3 (attack intensity is 0) shows, FTM brings about 1%-4% absolute gain in AUC to backbone methods, which reveals that models assembled with FTM generate more reasonable node embeddings. It also demonstrates the insights of our method in temporal graph representation learning; _(ii)_**Adversarial Attack.** The ability to resist Gaussian noise-perturbated examples is important because noisy data is inevitable under most circumstances [13]. We add random Gaussian noise to the original data to generate adversarial examples for five times,
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Reddit**} & \multicolumn{2}{c}{**Wikipedia**} \\ \cline{2-5} & Transductive & Inductive & Transductive & Inductive \\ \hline GAE [14] & 93.23 & - & 91.44 & - \\ VAGE [14] & 92.92 & - & 91.34 & - \\ DeepWalk [15] & 83.10 & - & 90.71 & - \\ Node2vec [16] & 84.56 & - & 91.48 & - \\ CTDNE [14] & 91.41 & - & 92.17 & - \\ DyRep [16] & 98.25 & 96.11 & 94.76 & 92.11 \\ Jodie [17] & 97.02 & 94.46 & 92.75 & 93.13 \\ \hline GraphSAGE [1] & 97.20 & 94.68 & 91.09 & 86.08 \\
**w/ FTM** & 98.01\(\uparrow\) & 96.28\(\uparrow\) & 92.91\(\uparrow\) & 91.93\(\uparrow\) \\ GAT [15] & 97.33 & 95.37 & 94.73 & 91.27 \\
**w/ FTM** & 98.21\(\uparrow\) & 96.75\(\uparrow\) & 95.03\(\uparrow\) & 93.54\(\uparrow\) \\ TGAT [15] & 98.27 & 96.73 & 95.13 & 93.97 \\
**w/ FTM** & 98.41\(\uparrow\) & 96.82\(\uparrow\) & 97.82\(\uparrow\) & 97.14\(\uparrow\) \\ TGN [14] & 98.78 & 97.77 & 98.28 & 97.69 \\
**w/ FTM** & **98.88\(\uparrow\)** & **97.96\(\uparrow\)** & **98.82\(\uparrow\)** & **98.33\(\uparrow\)** \\ \cline{2-5} Average Gain & 0.48 & 0.82 & 1.34 & 2.98 \\ \hline \hline \end{tabular}
\end{table}
Table 2: AP(%) for future link prediction tasks. \(\uparrow\) means that FTM brings an improvement to the baseline method. The best results in each column are highlighted in **bold** font. ’-’ denotes incapability.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{6}{c}{**Attack Intensity(\%)**} \\ \cline{2-7} & **0** & **1** & **10** & **20** & **30** & **40** & **50** \\ \hline GraphSAGE (2017) & 85.46(+1.58) & 62.24(+19.52) & 34.11(+28.14) & 51.78(+3.56) & 45.89(+5.85) & 40.81(+3.04) & 48.84(+9.15) \\ \cline{2-7} GAT (2018) & 83.75(+4.31) & 79.16(+5.11) & 49.56(+17.66) & 40.47(+19.32) & 41.89(+16.11) & 36.70(+9.39) & 41.38(+15.06) \\ \cline{2-7} TGAT (2020) & 87.36(+1.37) & 87.67(+1.16) & 59.99(+2.63) & 56.59(+29.85) & 47.39(+38.39) & 34.61(+51.56) & 38.57(+47.29) \\ \cline{2-7} TGN (2020) & 88.19(+1.82) & 86.68(+1.99) & 80.80(+3.18) & 81.93(+2.63) & 81.81(+4.96) & 83.39(+2.09) & 83.17(+2.40) \\ \hline Average Gain & 2.27 & 6.95 & 18.95 & 13.84 & 16.33 & 16.52 & 18.48 \\ \hline \hline \end{tabular}
\end{table}
Table 3: AUC (%) for node classification tasks on Wikipedia. Attack intensity controls the ratio of (the norm of) the added noise to (the maximum norm of) the link features in the dataset. **x(+y)** indicates that the baseline method achieves **x%** in AUC, and FTM brings an improvement of **y%** to it, _i.e._, the model assembling FTM with this method achieves **x+y%**.
and record the average performance of each model. The results are reported in the last six columns of Table 3 (with attack intensity from 1% to 50%). The average gains that FTM brings to the baseline methods demonstrate that FTM can handle data noise (and maybe data biases) better, which is an important capability that guarantees the applicability of the proposed method.
### Qualitative Analysis
In this section, we examine our model's ability to generate more informative representations on the wikipedia dataset qualitatively. As Figure 4(a) shows, FTM helps to distinguish atypical users, whereas baselines are often misled; it reflects the potential of FTM in addressing data biases, since the data bias issues in data collected from platforms like Wikipedia are mainly caused by atypical users who often perform irregular/abnormal actions. Moreover, we hypothesize that the evolution of user actions has short-term stationary features, because people's personality will not change rapidly. We take the most popular snapshot-based modeling method as the opponent to demonstrate that FTM makes it possible to capture short-term stationary features over time. First, we modify the neighborhood sampling strategy of the original TGAT to be snapshot-based, namely TGAT+Snapshot. Specifically, for each node we take its neighbors within an hour to form a temporal neighborhood. Then, we compute the cosine similarity of successive temporal node embeddings for TGAT+Snapshot and our TGAT+FTM respectively. As shown in Figure 4(b), the temporal node embeddings generated by TGAT+FTM show higher consistency. It demonstrates that TGAT+FTM learns more stable representations of users and we believe that the main reason lies in capturing short-term stationary features. Intuitively, this ability helps to stabilize the training process and capture the dynamics of user actions.
### Domain Generality
Our reported results thus far demonstrate the effectiveness of FTM in improving the capability and robustness of temporal GNNs. In this section, we explore whether FTM could help
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} **Training** \\ **Dataset** \\ \end{tabular} } & \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Test Dataset**} \\ \cline{3-6} & & **Icevs14** & **Icevs05-15** & **Bitcoin-otc** & **Bitcoin-alpha** & **Moose** \\ \hline \multirow{4}{*}{**Reddit**} & GraphsSAGE (2017) & 46.89(+35.32) & 61.48(+23.08) & 70.36(+7.59) & 54.44(+16.09) & 49.86(+3.38) \\ \cline{2-6} & GAT (2018) & 63.45(+24.32) & 64.44(+20.81) & 70.66(+6.30) & 61.35(+9.49) & 47.28(+7.25) \\ \cline{2-6} & TGAT (2020) & 76.26(+9.82) & 72.80(+15.47) & 70.19(+10.81) & 65.46(+8.11) & 57.01(+16.98) \\ \cline{2-6} & TGN (2020) & 68.63(+12.20) & 70.57(+15.72) & 72.86(+6.48) & 64.55(+6.04) & 67.23(+2.48) \\ \cline{2-6} & Average Gain & 20.42 & 18.77 & 7.80 & 9.93 & 7.52 \\ \hline \multirow{4}{*}{**Wikipedia**} & GraphsSAGE (2017) & 71.88(+7.59) & 77.49(+3.46) & 58.88(+12.44) & 53.81(+18.16) & 49.11(+4.85) \\ \cline{2-6} & GAT (2018) & 67.19(+12.29) & 69.32(+15.20) & 67.20(+0.34) & 61.48(+6.71) & 49.42(+7.19) \\ \cline{1-1} \cline{2-6} & TGAT (2020) & 80.27(+6.94) & 82.03(+10.82) & 71.38(+12.16) & 71.01(+2.18) & 53.98(+22.54) \\ \cline{1-1} \cline{2-6} & TGN (2020) & 66.40(+15.73) & 67.77(+16.36) & 83.76(+0.41) & 64.69(+7.29) & 73.20(+1.86) \\ \cline{1-1} \cline{2-6} & Average Gain & 10.64 & 11.46 & 6.34 & 8.59 & 9.11 \\ \hline \hline \end{tabular}
\end{table}
Table 4: AP (%) of future link prediction tasks. **x(+y)** indicates that the baseline method achieves **x%** in AP, and FTM brings an improvement of **y%** to it, _i.e._, the model assembling FTM with this method achieves **x+y%**.
Figure 4: (a) x-axis/y-axis represents the average/standard deviation of the time intervals of a user’s actions. The green parts denotes user distribution. The darker the color, the greater the number of users. Red points denote atypical users that have misled TGAT but are correctly classified by TGAT+FTM. (b) The cosine similarity of successive temporal node embeddings generated by TGAT+FTM and TGAT+Snapshot, respectively. The consistency of the embeddings generated by TGAT+FTM proves that FTM helps to learn stable temporal representations.
improve the domain generality of baseline methods. From the results shown in Table 4, we can observe that (1) these baseline methods suffer from severe domain generality issues, _e.g._, GraphSAGE trained on Reddit only get **46.89** in AP on Icews14; and (2) assembling FTM with these baseline methods greatly improves their domain generality, _e.g._, when applying models trained on Reddit to Icews14, FTM brings an average gain of **20.42** in AP to them. It illustrates the efficacy of FTM in deriving generalizable knowledge of graph evolution. Furthermore, we test the capability of our method in handling domain gaps from a new perspective - we subsample user-action data from the wikipedia dataset with different time interval distribution and evaluate our method on it. The result shows that assembling FTM with baseline methods improves their AP by **1.5** in average, but is not listed here for space-saving issues.
### Case Studies
In normal experiments, we set the number of model layers to be 2 and the length of frames to be 20 to form a node's temporal neighborhood. In this section, we record the performance of aforementioned methods under different neighborhood scales and data sizes. Note that the test data is the same as aforementioned experiments.
In studying the influence of neighborhood scale, we separately let (the number of model layers, the length of frames) be \((1,10)\), \((1,20)\), \((2,10)\), \((2,20)\) to form a S-scale, M-scale, L-scale, XL-scale neighborhood respectively. The results are provided in the left part of Table 5. In all cases, models assembled with FTM outperform the originals. It illustrates that, even under low-resource settings, assembling FTM with backbone methods can enhance the capability, the robustness, and the domain generality of these models.
In studying the influence of data size, we sample \(x\)-percent of the training/validation set to form new training/validation sets. As the results in the right part of Table 5 illustrate, models assembled with FTM outperform the originals in most cases. It indicates that FTM is not totally data-driven, but superior in understanding the evolution of the temporal graph. This ability is of practical importance.
### Implementation & Training Details
#### Hyper-parameters.
We do the chronological train-validation-test split with 70%-15%-15% according to the timestamps of links. In the test set, we randomly sample 10\(\%\) nodes as 'new nodes' for inductive tasks, and mask down all their links in the training set. Both the number of self-attention layers and the number of heads in each layer of the backbone network are 2. The length of timeline is chosen from [2, 3, 4] (we only report the best result). During training, we use Adam optimizer with learning rate 1e-4. The dimension of time encoding vectors is set to 172, which is same to the dimension of link feature vectors. We have conducted experiments to verify the effect of different aggregate functions in the Timeline Aggregator module. The result is shown in Table 6 (timeline length is 2 and all experiments are conducted on a RTX 2080Ti GPU). Taking both the performance and efficiency into consideration, we decide to deploy a 1-layer MLP as the timeline aggregate function because it achieves comparable performance while having faster convergence rate and smaller parameter size than other aggregate functions. Readers can implement the self-attention mechanism for better performance.
## Conclusion
In this paper, we propose a simple but effective frame-level timeline modeling method for temporal graph representation learning, where the main contributions are made to the way that temporal neighborhoods are constructed and neighboring information is aggregated. Technically, we break down
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Aggregation** & \multicolumn{2}{c}{**AP on Wikipedia**} & \multicolumn{1}{c}{**Convergence**} & **Parameter** \\ \cline{2-4}
**Function** & **Transductive** & **Inductive** & **Time** & **Size** \\ \hline
1-layer MLP & 97.68 & 97.19 & \(\mathbf{8.5\times 10^{\circ}s}\) & **100\%** \\
2-layer MLP & 97.26 & 96.79 & \(1.3\times 10^{\circ}s\) & \(105\%\) \\ LSTM & 97.64 & 96.99 & \(2.7\times 10^{\circ}s\) & \(112\%\) \\ Self-attention & **97.93** & **97.44** & \(1.1\times 10^{\circ}s\) & \(110\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of different aggregate functions in Timeline Aggregator module.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{**Neighborhood Scale**} & \multicolumn{8}{c}{**Percentage of Training Data**} \\ \multicolumn{1}{c}{\multirow{-2}{*}{**Model**}} & \multicolumn{8}{c}{**Generalization**} & \multicolumn{8}{c}{**Inductive**} & \multicolumn{8}{c}{**Generalization**} \\ \cline{2-13} \cline{2-13} & **S** & **M** & **L** & **XL** & **S** & **M** & **L** & **XL** & **1\%** & **\%** & **10\%** & **50\%** & **1\%** & **5\%** & **10\%** & **50\%** \\ \hline GraphSAGE & 86.31 & 88.96 & 94.19 & 94.68 & 70.87 & 70.83 & 78.74 & 83.59 & 65.31 & 85.39 & 91.17 & 95.64 & 57.99 & 62.79 & 73.04 & 80.15 \\
**w/ FTM** & 92.247 & 92.317 & 95.531 & 96.287 & 79.37 & 77.267 & 86.30† & 86.53† & 70.40† & 87.85† & 91.95† & 96.65† & 74.92† & 82.71† & 85.34† \\
**GAT** & 91.11 & 93.15 & 95.56 & 95.37 & 69.88 & 74.96 & 83.76 & 85.84 & 68.99 & 90.81 & 93.93 & 95.10 & 59.53 & 76.44 & 79.70 & 85.80 \\
**w/ FTM** & 91.857 & 93.40† & 95.84† & 96.75† & 82.38† & 81.75† & 86.20† & 88.97† & 73.13† & 91.02† & 93.70† & 96.68† & 68.45† & 81.99† & 85.91† & 90.30† \\ \cline{2-13} TGAT & 91.12 & 92.63 & 95.95 & 96.73 & 69.22 & 71.76 & 85.64 & 87.34 & 65.65 & 88.92 & 92.67 & 96.25 & 74.51 & 77.16 & 81.27 & 86.38 \\
**w/ FTM** & 94.08† & 94.32† & 97.26† & 96.82† & 91.08† & 89.52† & 95.82† & 91.06† & 80.76† & 92.32† & 93.45† & 96.25 & 81.84† & 87.88† & 87.22† & 88.53† \\ \hline Average Gain & 3.21 & 1.76 & 0.98 & 1.02 & 14.29 & 10.33 & 6.73 & 3.26 & 8.10 & 1.67 & 0.64 & 0.69 & 5.00 & 8.25 & 5.69 & 2.87 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Case studies on (1) neighborhood scale, where neighborhood scale expands from S to XL; and (2) the percentage of training data, where models are trained on limited training data of Reddit, _e.g._, 1% means models are trained/validated on one-percent of the original training/validation data. We do not take TGN into consideration, because the way TGN updates node-wise memory has little to do with the neighborhood scale and the percentage of training data. We report AP(%) of future link prediction on Reddit (inductive; generalize from Wiki).
a temporal sequence of graph-structured data into individual frames, and model the evolution of successive frames to mine deeper into the dynamics of nodes and links. Experimental results demonstrate the effectiveness of FTM. Meanwhile, our experiments empirically reveal that even state-of-the-art GNNs have critical weakness in modeling temporal graphs; but FTM helps to derive generalizable knowledge during training and thus greatly improves both the robustness and the domain generality of baseline methods, especially when there are outliers/noise in the data (_cf._ Figure 4(a), Table 3), or the amount of data and computational resources are insufficient (_cf._ Table 5). The efficacy of FTM may provide insights that could facilitate the design of more advanced representation learning methods on temporal graphs.
## Acknowledgement
This paper was partially supported by Shenzhen Science & Technology Research Program (No: GXWD202012311658-07007-20200814115301001) and NSFC (No: 62176008)
|
2307.05067 | Exploiting Asymmetry in Logic Puzzles: Using ZDDs for Symbolic Model
Checking Dynamic Epistemic Logic | Binary decision diagrams (BDDs) are widely used to mitigate the
state-explosion problem in model checking. A variation of BDDs are
Zero-suppressed Decision Diagrams (ZDDs) which omit variables that must be
false, instead of omitting variables that do not matter. We use ZDDs to
symbolically encode Kripke models used in Dynamic Epistemic Logic, a framework
to reason about knowledge and information dynamics in multi-agent systems. We
compare the memory usage of different ZDD variants for three well-known
examples from the literature: the Muddy Children, the Sum and Product puzzle
and the Dining Cryptographers. Our implementation is based on the existing
model checker SMCDEL and the CUDD library. Our results show that replacing BDDs
with the right variant of ZDDs can significantly reduce memory usage. This
suggests that ZDDs are a useful tool for model checking multi-agent systems. | Daniel Miedema, Malvin Gattinger | 2023-07-11T07:13:09Z | http://arxiv.org/abs/2307.05067v1 | Exploiting Asymmetry in Logic Puzzles: Using ZDDs for Symbolic Model Checking Dynamic Epistemic Logic
###### Abstract
Binary decision diagrams (BDDs) are widely used to mitigate the state-explosion problem in model checking. A variation of BDDs are Zero-suppressed Decision Diagrams (ZDDs) which omit variables that must be false, instead of omitting variables that do not matter.
We use ZDDs to symbolically encode Kripke models used in Dynamic Epistemic Logic, a framework to reason about knowledge and information dynamics in multi-agent systems. We compare the memory usage of different ZDD variants for three well-known examples from the literature: the Muddy Children, the Sum and Product puzzle and the Dining Cryptographers. Our implementation is based on the existing model checker SMCDEL and the CUDD library.
Our results show that replacing BDDs with the right variant of ZDDs can significantly reduce memory usage. This suggests that ZDDs are a useful tool for model checking multi-agent systems.
## 1 Introduction
There are several formal frameworks for reasoning about knowledge in multi-agent systems, and many are implemented in the form of epistemic model checkers. Here we are concerned with the _data structures_ used in automated epistemic reasoning. This is a non-issue in theoretical work, where Kripke models are an elegant mathematical tools. But they are not very efficient: models where agents know little tend to be the largest. More efficient representations are often based on Binary Decision Diagrams (BDDs), which use the idea that a representation of a function not depending on \(p\) can simply ignore that variable \(p\). This fits nicely to the models encountered in epistemic scenarios, such as the famous example of the Muddy Children: If child 2 does not observe whether it is muddy, i.e. whether \(p_{2}\) is true or false, then we can save memory by omitting \(p_{2}\) in the encoding of the knowledge of child 2. However, which variables matter may change, and in many examples the claim that "many variables do not matter" only holds in the initial model. This motivates us to look at Zero-suppressed Decision Diagrams (ZDDs) which use an asymmetric reduction rule to omit variables that _must_ be _false_, instead of the symmetric reduction rule targeting variables that _do not matter_.
Our informal research question is thus: Is it more memory efficient to have a default assumption that "anything we do not mention does not matter" or, for example "anything we do not mention must be false"? Obviously, the answer will depend on many aspects. Here we make the question precise for the case of Dynamic Epistemic Logic, and consider three well-known examples from the literature.
The article is structured as follows. We discuss related work in the rest of this section, then we provide the relevant background in Sections 2 and 3. Section 4 describes our experiment design and the formal models used. We present our results in Section 5 and conclude in Section 6.
Related workModel checking aims to verify properties of formally specified systems. Standard model checking methods search through a whole state transition graph and thus suffer from the state explosion problem: the number of states grows exponentially with the number of components or agents. To tackle this problem _symbolic_ methods were developed [5]. These reduce the amount of resources needed, by reasoning about sets instead of individual states. Starting with SMV from [17], most approaches use Binary Decision Diagrams (BDDs) [3] to encode Boolean functions. Zero-suppressed Decision Diagrams (ZDDs) are an adaption of BDDs, introduced by Minato [19]. ZDDs naturally fit combinatorial problems and many comparisons between BDDs and ZDDs have been done. For both an elegant introduction into the topic of BDDs and many more references we refer to [14]. Symbolic model checking using ZDDs has not been studied much, partly due to underdeveloped construction methods [20].
Most existing symbolic model checkers use temporal logics such as LTL or CTL. Yet problems come in many forms and for examples typically described using epistemic operators (e.g. in multi-agent systems), Dynamic Epistemic Logic (DEL) is an established framework [9]. Also DEL model checking can be done symbolically [2], by encoding Kripke models as so-called knowledge structures. This lead to its implementation, SMCDEL, which is extended in this work. Another encoding, sometimes also called "symbolic models", is based on mental programs [7]. In concrete applications such as "Hinitkka's World" these also get encoded as BDDs [6]. To our knowledge no previous work used ZDDs or other BDD variants for DEL model checking, with the exception of [13] where Algebraic Decision Diagrams (ADDs) are used for probabilistic DEL.
Here our main research questions is: Can ZDDs be more compact than BDDs when encoding the Kripke models for classical logic puzzles? We answer this question by adding ZDD functionality to SMCDEL and then comparing the sizes for three well-known examples from the literature.
## 2 Theory: Decision Diagrams
Symbolic model checkers, including SMCDEL, rely on efficient representations of Boolean functions. The most widely used data structure for this are Binary Decision Diagrams (BDDs). In this section we recall their definition and explain the difference between standard BDDs and ZDDs. How Boolean functions are then used for model checking DEL will be explained in the next section. Before we get to decision diagrams we define Boolean formulas and functions.
**Definition 1**.: _The Boolean formulas over a set of variables \(P\) (also called vocabulary) are given by \(\varphi::=\top\mid p\mid\neg\varphi\mid\varphi\land\varphi\) where \(p\in P\). We define \(\bot:=\neg\top\), \(\varphi\lor\psi:=\neg(\neg\varphi\land\neg\psi)\) and \(\varphi\to\psi:=\neg(\varphi\land\neg\psi)\)._
_We write \(\models\) for the usual Boolean semantics using assignments of type \(P\to\{0,1\}\). When \(P\) is given we identify an assignment (also called state) with the set of variables it maps to \(1\). A Boolean function is any \(f\colon\mathcal{P}(P)\to\{0,1\}\). For any \(\varphi\) we define the Boolean function \(f_{\varphi}(s):=\{\text{if }s\models\varphi\text{ then }1\text{ else }0\}\)._
For example, if our vocabulary is \(P=\{p,q,r\}\) and \(s(p)=0\), \(s(q)=1\) and \(s(r)=0\) then we identify \(s\) with \(\{q\}\) and we have \(s\models(\neg p\wedge q)\lor r\). In the following we will also just write \(\varphi\) for \(f_{\varphi}\). Notably, two different formulas can correspond to the same Boolean function, but not vice versa.
**Definition 2**.: _For any \(\varphi\), \(\psi\), and \(p\), let \(\varphi(\frac{p}{\psi})\) be the result of replacing every occurrence of \(p\) in \(\varphi\) by \(\psi\). For any \(A=\{p_{1},\ldots,p_{n}\}\), let \(\varphi(\frac{A}{\psi}):=\psi(\frac{p_{1}}{\psi})(\frac{p_{2}}{\psi})\ldots( \frac{p_{n}}{\psi})\). We use \(\forall p\varphi\) to denote \(\varphi\left(\frac{p}{\tau}\right)\land\varphi\left(\frac{p}{\bot}\right)\). For any \(A=\{p_{1},\ldots,p_{n}\}\), let \(\forall A\varphi:=\forall p_{1}\forall p_{2}\ldots\forall p_{n}\varphi\)._
Decision DiagramsA decision diagram is a rooted directed acyclic graph, used to encode a Boolean function. Any terminal node (i.e. leaf) is labelled with \(0\) or \(1\), corresponding to the result of the function.
Any internal node \(n\) is labelled with a variable and has two outgoing edges to successors denoted by \(\textsc{Then}(n)\) and \(\textsc{Else}(n)\) -- each representing a possible value for the variable. A path from the root to a leaf in a decision diagram corresponds to an evaluation of the encoded function. A decision diagram is called _ordered_ if the variables are encountered in the same order on all its paths.
**Example 3**.: _The first (left-most) decision diagram in Figure 1 is a full decision tree for \(q\wedge\neg r\). To evaluate it at state \(\{p,q\}\) we start at the root and then go along the solid Then-edge because \(p\) is true, then again along a Then-edge as \(q\) is true and then along the dashed Else-edge as \(r\) is false. We get \(1\) as a result, reflecting the fact that \(\{p,q\}\vDash q\wedge\neg r\). Similarly we can use the second and third diagram._
_Binary Decision Diagrams_ (BDDs) were introduced by [3] and are particularly compact decision diagrams, obtained using two reduction rules. The first rule identifies isomorphic subgraphs, i.e. we merge nodes that have the same label and the same children. In Figure 1 we get from the first to the second diagram. The second rule eliminates redundant nodes. A node is considered redundant if both its Then- and Else-edge go to the same child. In Figure 1 this gets us from the second to the third diagram.
_Zero-suppressed Decision Diagrams_ (ZDDs) were introduced by [20] and use a different second rule than BDDs. While in BDDs a node \(n\) is eliminated when \(\textsc{Then}(n)=\textsc{Else}(n)\), in ZDDs a node is eliminated when \(\textsc{Then}(n)=0\). In Figure 1 this rule gets us from the second to the fourth diagram called \(ZDD_{T0}(f)\). The idea is to not ignore the variables that "do not matter" (as \(p\) in \(q\wedge\neg r\)), but to remove the nodes of variables that must be false (as \(r\) in \(q\wedge\neg r\)). To evaluate \(ZDD_{T0}(f)\) at state \(\{p,q\}\) we again start at the root and twice follow a solid edge because \(p\) and \(q\) are true, but then we notice that the solid edge goes from \(q\) to \(1\), without asking for the remaining variable \(r\). When evaluating a \(ZDD_{T0}\) such a transition demands that the variable we "jump over" must be false -- hence the name "zero-suppressed". Indeed \(r\) is false in our state, so we do reach \(1\). If \(r\) would have been true, the result would have been \(0\).
Generalizing Elimination RulesThe elimination rule "remove nodes that have a Then-edge leading to \(0\)" can be modified in two obvious ways: instead of Then- we could consider Else-edges, and instead of \(0\) we could consider \(1\). This leads us to three additional elimination rules.
**Definition 4**.: _We denote five different node elimination rules as follows. A node \(n\) with pairs of children \((\textsc{Then}(n),\textsc{Else}(n))\) is eliminated if it matches the left side of the rule, and any edges leading to \(n\) are diverted to the successor \(s\) on the right side of the rule._
\[\begin{array}{llll}EQ:&(s,s)\Rightarrow s&T0:&(0,s)\Rightarrow s&E0:&(s,0) \Rightarrow s\\ &&T1:&(1,s)\Rightarrow s&E1:&(s,1)\Rightarrow s\end{array}\]
Figure 1: Seven decision diagrams for \(f:=q\wedge\neg r\), assuming vocabulary \(\{p,q,r\}\).
Here \(EQ\) is the rule for BDDs, while \(T0\) (for "Then 0") is the traditional ZDD rule. The remaining three are variations. For example, \(E0\) says that any node with an Else-edge to 0 is removed, and any edge that led to the removed node should be diverted to where the Then-edge of the removed node led.
In Figure 1 the \(E0\) rule gets us from the second to the sixth diagram \(ZDD_{E0}(f)\). Note that we used the rule twice: After deleting an \(r\) node the \(q\) node has an Else-branch to 0, so it is also eliminated. All diagrams encode the same function \(f\), but when evaluating them we must interpret "jumps" differently.
A crucial feature of BDDs and ZDDs is that they are _canonical_ representations: given a fixed variable order there is a unique BDD and a unique ZDD for each variant. It also becomes clear that for different Boolean functions a different kind of diagram can be more or less compact.
**Definition 5**.: _For any Boolean function \(f\), recall that \(\neg f\) denotes its complement. Let \(\neg f\) denote the result of complementing all atomic propositions inside \(f\). (For example, \(\neg(q\wedge\neg r)=\neg q\wedge r\).) For any decision diagram \(d\), let \(\mathsf{flipLeaf}(d)\) be the result of changing the labels of all leaves from 0 to 1 and vice versa; and let \(\mathsf{flipEdge}(d)\) be the result of changing the labels of all edges from Then to Else and vice versa._
There is a correspondence between \(\neg\) and \(\mathsf{flipLeaf}\), and between \(\neg\) and \(\mathsf{flipEdge}\). Moreover, we can use these operations to relate the four different variants of ZDDs as follows.
**Fact 6**.: _For any Boolean function \(f\) we have:_
\[\begin{array}{rcl}DD_{T1}(f)&=&\mathsf{flipLeaf}\,DD_{T0}(\neg f)\\ DD_{E0}(f)&=&\mathsf{flipEdge}\,DD_{T0}(\neg f)\\ DD_{E1}(f)&=&\mathsf{flipEdge}\,\mathsf{flipLeaf}\,DD_{T0}(\neg\neg f)\end{array}\]
**Example 7**.: _We illustrate Fact 6 using our running example \(f:=q\wedge\neg r\) with vocabulary \(\{p,q,r\}\). Figure 2 shows the \(T0\) decision diagrams mentioned in Fact 6. We see that for example \(DD_{T1}(f)\) shown in Figure 1 is the same graph as \(DD_{T0}(\neg f)\) with only the labels of the leaf nodes exchanged. Similarly, \(DD_{E1}(f)\) in Figure 1 is the same graph as \(DD_{T0}(\neg\neg f)\) with flipped edges and leaves._
Fact 6 is crucial for our implementation, because the CUDD library we use does not support \(T1\), \(E0\) and \(E1\) explicitly. Hence instead we always work with \(T0\) diagrams of the negated or flipped functions.
## 3 Theory: Symbolic Model Checking DEL
Kripke ModelsWe recap the standard syntax and semantics of Public Announcement Logic (PAL), the most basic version of Dynamic Epistemic Logic (DEL).
Figure 2: ZDDs with the same shape as the variants for \(f:=p\wedge\neg q\).
**Definition 8**.: _Fix a vocabulary \(V\) and a finite set of agents \(I\). The DEL language \(\mathcal{L}(V)\) is given by \(\varphi::=p\mid\neg\varphi\mid\varphi\wedge\varphi\mid K_{i}\varphi\mid[\varphi]\varphi\) where \(p\in V\), \(i\in I\)._
As usual, \(K_{i}\varphi\) is read as _"agent \(i\) knows that \(\varphi\)"_. The formula \([\psi]\varphi\) says that after a _public announcement_ of \(\psi\), \(\varphi\) holds. The standard semantics for \(\mathcal{L}(V)\) on Kripke models are as follows.
**Definition 9**.: _A Kripke model for a set of agents \(I=\{1,\ldots,n\}\) is a tuple \(\mathcal{M}=(W,\pi,\mathcal{K}_{1},\ldots,\mathcal{K}_{n})\), where \(W\) is a set of worlds, \(\pi\) associates with each world a state \(\pi(w)\), and \(\mathcal{K}_{1},\ldots,\mathcal{K}_{n}\) are equivalence relations on \(W\). A pointed Kripke model is a pair \((\mathcal{M},w)\) consisting of a model and a world \(w\in W\)._
**Definition 10**.: _Semantics for \(\mathcal{L}(V)\) on pointed Kripke models are given inductively as follows._
* \((\mathcal{M},w)\models p\) _iff_ \(\pi^{M}(w)(p)=\top\)_._
* \((\mathcal{M},w)\models\neg\varphi\) _iff not_ \((\mathcal{M},w)\models\varphi\)__
* \((\mathcal{M},w)\models\varphi\wedge\psi\) _iff_ \((\mathcal{M},w)\models\varphi\) _and_ \((\mathcal{M},w)\models\psi\)__
* \((\mathcal{M},w)\models K_{i}\varphi\) _iff for all_ \(w^{\prime}\in W\)_, if_ \(w\mathcal{K}_{i}^{M}w^{\prime}\)_, then_ \((\mathcal{M},w^{\prime})\models\varphi\)_._
* \((\mathcal{M},w)\models[\psi]\varphi\) _iff_ \((\mathcal{M},w)\models\psi\) _implies_ \((\mathcal{M}^{\psi},w)\models\varphi\) _where_ \(\mathcal{M}^{\psi}\) _is a new model based on the set_ \(W^{\mathcal{M}^{\psi}}:=\{w\in W^{\mathcal{M}}\mid(\mathcal{M},w)\models\psi\}\) _and appropriate restrictions of_ \(\mathcal{K}_{i}\) _and_ \(\pi\) _to_ \(W^{\mathcal{M}^{\psi}}\)_._
More expressive versions of DEL also include common knowledge and complex epistemic or ontic actions such as private communication, interception, spying and factual change. Moreover, DEL can work both with S5 models and with arbitrary Kripke models. All of this is compatible with the symbolic semantics we recall in the next section, but for our purposes in this article the restricted language above is sufficient, and we only consider S5 models.
Knowledge StructuresWhile the semantics described above is standard, it has the disadvantage that models are represented explicitly, i.e. the number of worlds also determines the amount of memory needed to represent a model. To combat this well-known state-explosion problem we can replace Kripke models with symbolic knowledge structures. Their main advantage is that knowledge and results of announcements can be computed via purely Boolean operations, as shown in [2].
**Definition 11**.: _Suppose we have \(n\) agents. A knowledge structure is a tuple \(\mathcal{F}=(V,\theta,O_{1},\ldots,O_{n})\) where \(V\) is a finite set of atomic variables, \(\theta\) is a Boolean formula over \(V\) and for each agent \(i\), \(O_{i}\subseteq V\). The set \(V\) is the vocabulary and the formula \(\theta\) is the state law of \(\mathcal{F}\). The \(O_{i}\) are called observational variables. An assignment over \(V\) that satisfies \(\theta\) is a state of \(\mathcal{F}\). A scene is a pair \((\mathcal{F},s)\) where \(s\) is a state of \(\mathcal{F}\)._
**Example 12**.: _Consider the knowledge structure \(\mathcal{F}:=(V=\{p,q\},\theta=p\to q,O_{1}=\{p\},O_{2}=\{q\})\). The states of \(\mathcal{F}\) are the three assignments \(\varnothing\), \(\{q\}\) and \(\{p,q\}\). Moreover, \(\mathcal{F}\) has two agents who each observe one of the propositions: agent \(1\) knows whether \(p\) is true and agent \(2\) knows whether \(q\) is true._
We now give semantics for \(\mathcal{L}(V)\) on knowledge structures.
**Definition 13**.: _Semantics for \(\mathcal{L}(V)\) on scenes are defined as follows._
* \((\mathcal{F},s)\models p\) _iff_ \(s\models p\)_._
* \((\mathcal{F},s)\models\neg\varphi\) _iff not_ \((\mathcal{F},s)\models\varphi\)__
* \((\mathcal{F},s)\models\varphi\wedge\psi\) _iff_ \((\mathcal{F},s)\models\varphi\) _and_ \((\mathcal{F},s)\models\psi\)__
* \((\mathcal{F},s)\models K_{i}\varphi\) _iff for all_ \(t\) _of_ \(\mathcal{F}\)_, if_ \(s\cap O_{i}=t\cap O_{i}\)_, then_ \((\mathcal{F},t)\models\varphi\)_._
* \((\mathcal{F},s)\models[\psi]\varphi\) _iff_ \((\mathcal{F},s)\models\psi\) _implies_ \((\mathcal{F}^{\psi},s)\models\varphi\) _where_ \(\mathcal{F}^{\psi}:=(V,\theta\wedge\|\psi\|_{\mathcal{F}},O_{1},\ldots,O_{n})\)_._
_where \(\|\cdot\|_{\mathcal{F}}\) is defined in parallel in the following definition._
**Definition 14**.: _For any knowledge structure \(\mathcal{F}=(V,\theta,O_{1},\ldots,O_{n})\) and any formula \(\varphi\) we define its local Boolean translation \(\|\varphi\|_{\mathcal{F}}\) as follows._
\[\begin{array}{lclclclcl}\|p\|_{\mathcal{F}}&:=&p&\|K_{i}\psi\|_{\mathcal{F}}&:=& \forall(V\setminus O_{i})(\theta\to\|\psi\|_{\mathcal{F}})\\ \|\neg\psi\|_{\mathcal{F}}&:=&\neg\|\psi\|_{\mathcal{F}}&\|[\psi]\xi\|_{ \mathcal{F}}&:=&\|\psi\|_{\mathcal{F}}\to\|\xi\|_{\mathcal{F}\psi}\\ \|\psi_{1}\wedge\psi_{2}\|_{\mathcal{F}}&:=&\|\psi_{1}\|_{\mathcal{F}}\wedge\| \psi_{2}\|_{\mathcal{F}}&&\end{array}\]
_where the case for \(K_{i}\psi\) quantifies over the variables not observed by agent \(i\), using Boolean quantification as defined in Definition 2 above._
A main result from [2] based on [22] is that for any finite Kripke model there is an equivalent knowledge structure and vice versa. This means we can see knowledge structures as just another, hopefully more memory-efficient, data structure to store a Kripke model. An additional twist is that we usually store the state law \(\theta\) not as a formula but only the corresponding Boolean function -- which can be represented using a decision diagram as discussed in Section 2.
## 4 Methods: Logic Puzzles as Benchmarks
Our leading question is whether ZDDs provide a more compact encoding than BDDs for models encountered in epistemic model checking. To answer it we will work with three logic puzzles from the literature. All examples start with an initial model which we encode as a knowledge structure with the state law as a decision diagram. Then we make updates in the form of public announcements, changing the state law. We record the size of the decision diagrams for each update step.
As a basis for our implementation and experiments we use _SMCDEL_, the symbolic model checker for DEL from [2]. SMCDEL normally uses the BDD library CacBDD [16] which does not support ZDDs. Hence we also use the library CUDD [21] which does support ZDDs. However, also CUDD does not support the generalized elimination rules from Definition 4. Therefore we use Fact 6 to simulate the \(T1\), \(E0\) and \(E1\) variants. Our new code -- now merged into SMCDEL -- provides easy ways to create and update knowledge structures where the state law is represented using any of the four ZDD variants.
An additional detail is that CUDD always uses so-called complement edges to optimize BDDs, but not for ZDDs. To compare the sizes of ZDDs to BDDs without complement edges we still use CacBDD. Altogether in our data set we thus record the sizes of six decision diagrams for each state law: the EQ rule with and without complement edges (called BDD and BDDc) and the four ZDD variants from Definition 4. We stress that by size of a diagram we mean the node count and not memory in bytes, because the former is independent of what libraries are used, whereas the latter depends on additional optimisations.
It now remains to choose examples. We picked three well-known logic puzzles from the literature with different kinds of state laws, such that we also expect the advantage of ZDDs to vary between them.
Muddy ChildrenThe Muddy Children are probably the best-known example in epistemic reasoning, hence we skip the explanation here and refer to the literature starting with [15]. A formalisation of the puzzle can be found in [9, Section 4.10] and the symbolic encoding in [2, Section 4].
Dining CryptographersThis problem and the protocol to solve it was first presented by [8]:
"Three cryptographers gather around a table for dinner. The waiter informs them that the meal has been paid for by someone, who could be one of the cryptographers or the National Security Agency (NSA). The cryptographers respect each other's right to make an anonymous payment, but want to find out whether the NSA paid."
The solution uses random coin flips under the table, each observed by two neighbouring cryptographers but not visible to the third one. A formalisation and solution using Kripke models can be found in [12]. To encode the problem in a knowledge structure we let \(p_{0}\) mean that the NSA paid, \(p_{i}\) for \(i\in\{1,2,3\}\) that \(i\) paid. Moreover, \(p_{k}\) for \(k\in\{4,5,6\}\) represents a coin. The initial scenario is then \((V=\{p_{0},\ldots,p_{6}\},\theta=\otimes_{1}\{p_{0},p_{1},p_{2},p_{3}\},O_{1} =\{p_{1},p_{4},p_{5}\},O_{2}=\{p_{2},p_{4},p_{6}\},O_{3}=\{p_{3},p_{5},p_{6}\})\) where the state law \(\theta\) says that exactly one cryptographer or the NSA must have paid. In the solution then each cryptographer announces the XOR (\(\otimes\)) of all bits they observe, with the exception that the payer should invert their publicly announced bit. Formally, we get a sequence of three public announcements \([?!(\otimes p_{1},p_{4},p_{5})][?!(\otimes p_{2},p_{4},p_{6})][?!(\otimes p_{3 },p_{5},p_{6})]\) where \([?!\psi]\varphi:=[!\psi]\varphi\wedge[\neg!\psi]\varphi\) abbreviates announcing whether. The protocol can be generalised to any odd number \(n\) instead of three participants.
Sum and ProductThe following puzzle was originally introduced in 1969 by H. Freudenthal. The translation is from [10] where the puzzle is also formalised in DEL:
A says to S and P: I have chosen two integers \(x,y\) such that \(1<x<y\) and \(x+y\leq 100\). In a moment, I will inform S only of \(s=x+y\), and P only of \(p=xy\). These announcements remain private. You are required to determine the pair \((x,y)\). He acts as said. The following conversation now takes place: P says: "I do not know it." -- S says: "I knew you didn't." -- P says: "I now know it." -- S says: "I now also know it." -- Determine the pair (x, y).
Solving the puzzle using explicit model checking is discussed in [11]. To represent the four variables and their values in propositional logic we need a binary encoding, using \(\lceil\log_{2}N\rceil\) propositions for each variable that take values up to \(N\). For example, to represent \(x\leq 100\) we use \(p_{1},\ldots,p_{7}\) and encode the statement \(x=5\) as \(p_{1}\wedge p_{2}\wedge p_{3}\wedge p_{4}\wedge\neg p_{5}\wedge p_{6}\wedge \neg p_{7}\), corresponding to the bit-string \(0000101\) for \(5\).
The initial state law for Sum and Product is a big disjunction over all possible pairs of \(x\) and \(y\) with the given restrictions, and the observational variables ensure that agents \(S\) and \(P\) know the values of \(s\) and \(p\) respectively. For a detailed definition of the knowledge structure, see [2, Section 5].
The announcements in the dialogue are formalised as follows, combining the first two into one: First \(S\) says \(K_{S}\neg\bigvee_{i+j\leq 100}K_{P}(x=i\wedge y=j)\), then \(P\) says \(\bigvee_{i+j\leq 100}K_{P}(x=i\wedge y=j)\) and finally \(S\) says \(\bigvee_{i+j\leq 100}K_{S}(x=i\wedge y=j)\). Solutions to the puzzle are states where these three formulas can be truthfully announced after each other. A common variation on the problem is to change the upper bound for \(x+y\). We use this to turn obtain a scalable benchmark, starting with \(65\) to ensure there exists at least one answer.
It is well known that ZDDs perform better on sparse sets [4]. In our case, sparsity is the number of states in the model divided by the total number of possible states for the given vocabulary. Our three examples vary a lot in their sparsity: Muddy Children's sparsity is \(0.5\) on average (going from \(0.875\) to \(0.125\), for \(3\) agents), Dining Cryptographers is fairly sparse from start to finish (\(0.25\) to \(0.0625\), for \(3\) agents), and Sum and Product is extremely sparse (e.g. starting with \(<1.369\cdot 10^{-7}\) for \(x+y\leq 100\)).
## 5 Results
For each example we present a selection of results we deem most interesting, showing differences between BDD and ZDD sizes. The full data set for two examples can be found in the appendix where we also
include instructions how all of the results can be reproduced.
Muddy childrenWe vary the number of children \(n\) from 5 to 40, in steps of 5. We can also vary the number of muddy children \(m\leq n\), but mostly report results here where \(m=n\). Given any number of children, we record the size of the decision diagrams of the state law after the \(k\)th announcement, where \(k\) ranges from 0 (no announcements made yet) to \(m-1\) (after which all children know their own state).
As an example, let us fix \(n=m=20\). Figure 2(a) shows the size of the decision diagrams after each announcement. The lines all follow a similar curve, with the largest relative differences in the initial and final states. Initially the most compact variant is T1 whereas at the end E0 is the most compact. This matches the asymmetry in the Muddy Children story: at the start the state law is \(p_{1}\vee\ldots\lor p_{n}\), hence all Then edges lead to 1 and \(T1\) removes all nodes. In contrast, at the end the state law is \(p_{1}\wedge\ldots\wedge p_{n}\) which means that all Else edges lead to 0 and thus \(E0\) eliminates all nodes.
Hence at different stages different variants are more compact. But we want a representation that is compact throughout the whole process. We thus consider the average size over all announcements, varying \(n\) from 5 to 40. Figure 2(b) shows the relative size differences, with standard BDDs as 100%. The \(T0\)/\(E1\) and the BDDc/\(E0\)/\(T1\) lines overlap. We see that \(T1\) and \(E0\) are more compact for small models, but not better than BDDs with complement edges and this advantage shrinks with a larger number of agents.
We also computed sizes for \(m<n\), i.e. not all children being muddy. In this case the sizes for each update step stay the same but there are fewer update steps because the last truthful announcement is in round \(m-1\). As expected this is in favour of the \(T1\) variant.
Dining cryptographersFor 13 agents we show the sizes after each announcement in Figure 3(a). It becomes clear that there is little difference between the variants, which can be explained by the sparsity of the model throughout the whole story. Still, the \(T0\)/\(E0\) variants slightly outperform the BDD(c) and the \(T1\)/\(E1\) variants. This makes sense as most variables saying that agent \(i\) paid will be false. For lower numbers of agents the difference is larger, as visible in Figure 3(b) where we vary the number of agents from 3 to 13. Note that \(T1\) and \(E1\) overlap here, and \(T0\) provides the best advantage.
Figure 3: Results for Muddy Children.
Sum and ProductIn this last example we can vary the upper bound of \(x+y\) from 50 to 350, but not the number of agents and announcements. Figure 4(a) shows the sizes averaged over all four stages. We note that the BDD(c), \(T1\) and \(E1\) lines all overlap (with insignificant differences), and that T0 and E0 perform the best here. In contrast to the first two examples, this advantage does not disappear for larger instances of the puzzle, as can be seen in Figure 4(b) where we show the relative differences. Interestingly, we see that \(T0\) and \(E0\) meet up and diverge again wherever the bound for \(x+y\) is a power of 2 (i.e. 64, 128 or 256) which we mark by vertical dashed lines. This is due to the bit-wise encoding where just above powers of two an additional bit is needed, but it must be false for almost all values.
Figure 4: Results for Dining Cryptographers.
Figure 5: Results for Sum and Product.
## 6 Conclusion
In all experiments we find a ZDD elimination rule that can reduce the number of nodes compared to BDDs, with the exception that in the Muddy Children example complement edges provide the same advantage. This leads us to conclude that ZDDs are a promising tool for DEL model checking. Specifically, if domain knowledge about the particular model allows one to predict which ZDD variant will be more compact, ZDDs can outcompete BDDs.
The BDD elimination rule treats true and false atomic propositions symmetrically, whereas ZDD rules are asymmetric. This means their success depends on asymmetry in the model.
When translating an example from natural language to a formal models we usually try to avoid redundant variables, which already reduces the number of BDD-eliminable nodes. This is likely the reason why using ZDDs provides an advantage or, for examples with a sparsity around 0.5 like the Muddy Children, at least the same performance as BDDs with complement edges.
Specifically for logic puzzles, usually all variables are needed, and models become asymmetric and sparse as information is revealed and possible answers are ruled out. Our results confirm that sparsity and the kind of asymmetry prevalent in the model can predict which ZDD variant is most beneficial.
In this article we only considered S5. SMCDEL also provides modules for K and in further experiments we compared the sizes of ZDDs and BDDs of the state law of _belief_ structures. As an example we used the famous Sally-Anne false belief task. The results were similar to those here and can be found in [18].
Future workAn obvious limitation is that we only compared memory and not computation time. The size of a decision diagram correlates with the computation time needed to build it. But the step-wise construction techniques in SMCDEL are slower for ZDDs than for BDDs. For example, to compute the Sum and Product result we rather convert each state law BDD to ZDDs instead of computing ZDDs directly. Before a meaningful comparison of computation time can be done, the construction methods for ZDDs need to be further optimized.
We found some indicators which elimination rule is most compact in which case, but a more general approach to formalise domain knowledge and use it to make a correct prediction would be a powerful tool.
AcknowledgementsThis work is based on the master thesis [18] by the first author, written at the University of Groningen and co-supervised by Rineke Verbrugge and the second author.
We thank the TARK reviewers for their careful reading and helpful comments on this article.
|
2305.04500 | Well-being policy evaluation methodology based on WE pluralism | Methodologies for evaluating and selecting policies that contribute to the
well-being of diverse populations need clarification. To bridge the gap between
objective indicators and policies related to well-being, this study shifts from
constitutive pluralism based on objective indicators to conceptual pluralism
that emphasizes subjective context, develops from subject-object pluralism
through individual-group pluralism to WE pluralism, and presents a new policy
evaluation method that combines joint fact-finding based on policy plurality.
First, to evaluate policies involving diverse stakeholders, I develop from
individual subjectivity-objectivity to individual subjectivity and group
intersubjectivity, and then move to a narrow-wide WE pluralism in the gradation
of I-family-community-municipality-nation-world. Additionally, by referring to
some functional forms of well-being, I formulate the dependence of well-being
on narrow-wide WE. Finally, given that policies themselves have a plurality of
social, ecological, and economic values, I define a set of policies for each of
the narrow-wide WE and consider a mapping between the two to provide an
evaluation basis. Furthermore, by combining well-being and joint fact-finding
on the narrow-wide WE consensus, the policy evaluation method is formulated.
The fact-value combined parameter system, combined policy-making approach, and
combined impact evaluation are disclosed as examples of implementation. This
paper contributes to the realization of a well-being society by bridging
philosophical theory and policies based on WE pluralism and presenting a new
method of policy evaluation based on subjective context and consensus building. | Takeshi Kato | 2023-05-08T06:51:43Z | http://arxiv.org/abs/2305.04500v1 | # Well-being policy evaluation methodology based on WE pluralism
###### Abstract
Methodologies for evaluating and selecting policies that contribute to the well-being of diverse populations need clarification. To bridge the gap between objective indicators and policies related to well-being, this study shifts from constitutive pluralism based on objective indicators to conceptual pluralism that emphasizes subjective context, develops from subject-object pluralism through individual-group pluralism to WE pluralism, and presents a new policy evaluation method that combines joint fact-finding based on policy plurality. First, to evaluate policies involving diverse stakeholders, I develop from individual subjectivity-objectivity to individual subjectivity and group intersubjectivity, and then move to a narrow-wide WE pluralism in the gradation of I-family-community-municipality-nation-world. Additionally, by referring to some functional forms of well-being, I formulate the dependence of well-being on narrow-wide WE. Finally, given that policies themselves have a plurality of social, ecological, and economic values, I define a set of policies for each of the narrow-wide WE and consider a mapping between the two to provide an evaluation basis. Furthermore, by combining well-being and joint fact-finding on the narrow-wide WE consensus, the policy evaluation method is formulated. The fact-value combined parameter system, combined policy-making approach, and combined impact evaluation are disclosed as examples of implementation. This paper contributes to the realization of a well-being society by bridging philosophical theory and policies based on WE pluralism and presenting a new method of policy evaluation based on subjective context and consensus building.
Well-being, policy evaluation, WE pluralism, Consensus building, Joint fact-finding.
Introduction
**Policies** that contribute to **well-being** are needed worldwide [1, 2]. The Organisation for Economic Cooperation and Development (OECD) and the Wellbeing Economy Alliance (WEAll) present a well-being-oriented economy [3, 4]. The United Nations Sustainable Development Goals [5] include "health and well-being for all" as Goal 3. Japan's Digital Garden City Nation Initiative [6] and Smart Wellness City Initiative [7] aim to improve well-being. In the early stages of economic development, economic indicators such as production and income were important because the satisfaction of basic needs was the main issue. However, as basic needs are satisfied, it has become necessary to consider well-being related to non-economic factors such as life feelings and social relationships [1].
Various indicators have been developed to evaluate policies that contribute to well-being, including the UN's Human Development Index, the OECD's Better Life Index, and WEAll's Happy Planet Index [8]. However, in the Well-being Indicators that comprise the OECD's Better Life Index, objective indicators for income, housing, health, education, environment, and safety do not necessarily correspond to nation rankings of subjective life satisfaction [9]. The Liveable Well-Being City Indicators [10], developed by Japan for its Digital Garden City Nation Initiative, found little correlation between 22 categories of objective indicators and 33 categories of subjective indicators for 1021 municipalities [11]. In other words, there is a gap between subjective and objective indicators, and it is necessary to reexamine how policies should be evaluated.
Mitchell et al. discuss the science and philosophy of well-being [12]. The science of well-being includes **measurement pluralism**: a position that seeks to measure constructs of well-being by selecting a variety of indicators, and **methodological pluralism**: a position that seeks to parallel or combine various approaches such as economics, sociology, and psychology, as well as construct selection and measures. Measurement pluralism argues that the Well-being Indicators and Liveable Well-Being City Indicators described above cannot be a generic gold standard, and methodological pluralism requires not only a diverse selection of indicators but also a combination of various quantitative and qualitative methodologies at the psychological, cultural, and social levels.
Compared with these sciences, philosophies of well-being include **constitutive pluralism**: a position that well-being is composed of multiple elements that are not reducible to each other, and **conceptual pluralism**: a position that the appropriate concept of well-being depends on the **subjective and social context** of people's facts, environment, and goals. Constitutive pluralism attempts to explain well-being comprehensively based on a list of goods or indicators, whereas conceptual pluralism assumes that the constructs differ from person to person or from period to period of life. If a constitutivist modify constitutive pluralism to take the position that different constructs are involved according to diverse contexts, it is little different from conceptual pluralism. Mitchell et al. then argue that the methodological pluralism of science
recognizes the constructive concepts of philosophy, while the conceptual pluralism of philosophy must take on the methodology of science for social practice and combine the two.
Even limiting the constructs of well-being to subject and object for simplicity, according to Ishida, there are **subjectivism**, **objectivism**, and **subject-object pluralism**[13]. Subject-object pluralism states that a hybrid of individual's subjective attitudes and objective values that are independent of them is conducive to well-being. This can be described as a hybrid of constitutive pluralism and conceptual pluralism. It is further classified into symmetrical and asymmetrical pluralism depending on whether subjectivity and objectivity are treated symmetrically or not. In symmetric pluralism, split cases may occur where subjective and objective evaluations are divided. But in the asymmetric pluralism proposed by Ishida, subjective attitude is a monotonically increasing function with positive and negative values, objective value is a function with positive values, and well-being is formulated as the product of both functions to avoid split cases.
According to Mitchell et al. and Ishida's classification, the OECD's Well-being Indicators and Japan's Liveable Well-Being City Indicators, mentioned above, belong to measurement pluralism and methodological pluralism in the scientific sense, and constitutive pluralism and subject-object pluralism in the philosophical sense, in that they compare the well-being of multiple countries and municipalities using multiple objective and subjective indicators and findings from economics, sociology, and psychology. In these indicators, the context of people in conceptual pluralism is either not taken into account or it is implicitly assumed that the users of the indicators use them with context in mind.
Objective indicators have a certain significance in comparing the sufficiency of material conditions and social infrastructure, but the problem is that, as already mentioned, there is a gap between objective and subjective indicators. Policies involve the investment of objective goods such as funds and personnel, but even if policies are implemented based on objective indicators, they do not necessarily lead to subjective well-being. In other words, in order to implement policies that contribute to well-being, policies need to be pre-evaluated and selected based on indicators linked to subjectivity. This is a position that leans toward subjectivism and, if context is taken into account, a shift from constitutive pluralism to conceptual pluralism.
Another problem is that policy involves **consensus building** not by a single individual but by a group consisting of many stakeholders, and on top of that, fairness, rationality with respect to situation analysis and future projections, efficiency with respect to cost and time, and stability after agreement are also required [14]. In other words, even if policies are evaluated based on indicators linked to subjectivity, it is necessary to fairly take diverse subjectivities into account, which leads to a new consideration of **individual-group pluralism**. Moreover, given that the policy itself encompasses a variety of elements such as funds, workforce, time, effects, and side effects, it is imperative to consider the **policy plurality**. The combination of parameters among the constructive elements will generate a plethora of policy proposals. It is difficult to select a policy from among them based solely on subjectivity; therefore, an objective indicator must be introduced once
more. Furthermore, if rationality and efficiency are sought to convince a large number of stakeholders, **joint fact-finding** is required [15]. Therefore, it is necessary to evaluate policies that contribute to well-being based on consensus by incorporating not only subjective indicators but also objective joint facts again, and combining that with joint fact-finding while respecting subjective context.
The purpose of this paper is to bridge the gap between indicators and policies on well-being and provide a methodology for evaluating policies conducive to well-being. To this end, I first take the positions of methodological pluralism and subject-object pluralism. I derive individual-group pluralism by replacing objectivity with group intersubjectivity to incorporate conceptual pluralism (subjective context). I then further develop individual subjectivity and group intersubjectivity into a new **WE pluralism** based on the view of "self as WE." I then attempt to formulate a methodology for evaluating well-being policies by reintroducing intersubjective and objective joint fact-finding into WE pluralism while taking into account policy plurality in addition to WE plurality.
This paper proceeds as follows. In Chapter 2, I first refer to Ishida's subject-object pluralism formula and expand it through individual-group pluralism to WE pluralism, which incorporates a conceptual pluralism that leans toward subjectivism. In Chapter 3, I extend WE pluralism by incorporating policy plurality. I formulate a consensus in "narrow-wide WE" and present a method for evaluating policies by combining consensus building and joint fact-finding. Then, I disclose three practical examples based on this policy evaluation method. Finally, in Chapter 4, I summarize the conclusions of this paper and discuss future issues.
## 2 WE pluralism
### From subject-object to individual-group pluralism
First, I refer to subject-object pluralism. This is the position that subjectivity and objectivity are independent of each other, and that a hybrid of an individual \(p\)'s subjective attitude for policy \(x\) and objective values independent of it contributes to well-being. Based on Woodard's classification [16], Ishida expresses subject-object pluralism as follows [13].
**Subject-object pluralism**: (1) If an individual \(p\) has a positive attitude for \(x\) and \(x\) has goodness independent of the \(p\)'s positive attitude, then \(x\) contributes to the \(p\)'s well-being. (2) If an \(p\) has a negative attitude for \(x\) and \(x\) has badness independent of the \(p\)'s negative attitude, then \(x\) impairs the \(p\)'s well-being.
Let \(W_{s}(x,p)\) be the subjective attitude of \(p\) for \(x\) and \(W_{o}(x)\) be the objective value independent of it. Then, well-being \(W(x,p)\) can be expressed as the simplest expression as follows [13].
\[W(x,p)=W_{s}(x,p)+W_{o}(x) \tag{1}\]
I would now like to reconsider objective value, which does not depend on subjective attitudes. As discussed in Chapter 1, indicators of objective value are not necessarily tied to subjective well-being. Let me also reconsider that objective value is calculated from some data. According to Deguchi, even the physical constants that can be viewed as the most objective come from a contest of certainty that combines various physical measurements and statistical treatments, with no basis for the assumptions underlying the statistical processing, which were accepted as the result of a collaborative effort [17, 18]. According to Otsuka, in data processing, there is an assumption of the ontology of the world of probability models in its premises. There are different semantic interpretations of the probability model, subjectivism and frequentism, and the decisions that justify the results are based on epistemological beliefs [19].
Given these considerations, an objective value is a value that is determined to be correct through the joint work and interpretation of a large number of people, which can also be thought of as the intersubjectivity of a group. In other words, subjectivity and objectivity are positioned as individual subjectivity and intersubjectivity of a larger group, and subject-object pluralism is replaced by individual-group pluralism.
**Individual-group pluralism**: If an individual \(p\) has subjective value for \(x\) and a group \(g\) has intersubjective value for \(x\), then \(x\) contributes to the well-being of \(p\) and \(g\). (Disjunction regarding negative values is omitted.)
Replacing the objective value \(W_{o}(x)\) in Equation (1) by the intersubjectivity \(W_{is}(x,g)\) of group \(g\), well-being \(W(x)\) in individual-group pluralism can be expressed as follows.
\[W(x)=W_{s}(x,p)+W_{is}(x,g) \tag{2}\]
Here, \(W_{s}(x,p)\) and \(W_{is}(x,g)\) do not necessarily have to be composed of the same elements or methodologies. Even if they are the same elements, they can be discarded in consideration of the context of individual \(p\) and group \(g\). From this perspective, Equation (2) can be viewed as methodological pluralism and conceptual pluralism. In addition, the fact that the subjectivity of individual \(p\) and the intersubjectivity of group \(g\) differ is a situation that can actually occur, and I do not dare here to formulate a formulation that avoids split cases between the two.
### From individual-group to WE pluralism
Next, I will examine the individual and the group. Deguchi describes the "self as WE," an East Asian view of the self that is connected to the lineage of Lao Zhuang and Zen thought, as opposed to the Western view
of the "self as I" [20, 21]. The "self as WE" is a multi-agent network system of entrustment that includes "I" and other people and things around "I."
From this perspective, the dichotomies of subjectivity-objectivity and individual-group no longer exist, and these are positioned relative to the gradations between, for example, I-family-community-municipality-nation-world. Individual-group pluralism is replaced by "WE pluralism" as the relative openness and spread of WE, and for the sake of simplicity, the two can be taken out as a "narrow-wide WE" pluralism. In Deguchi's words, this can be called the "WE turn" of individual-group pluralism.
**WE pluralism**: If a "narrow WE" has subjective value for \(x\), and a "wide WE" has subjective value for \(x\), then \(x\) contributes to the well-being of "narrow WE" and "wide WE." (Disjunction regarding negative values is omitted.)
Replacing the subjective value \(W_{s}(x,p)\) of individual \(p\) and the intersubjective value \(W_{is}(x,g)\) of group \(g\) in Equation (2) by the subjective value \(W_{n}(x)\) of "narrow WE" and \(W_{w}(x)\) of "wide WE," respectively, WE pluralism is expressed as follows.
\[W(x)=W_{n}(x)+W_{w}(x) \tag{3}\]
Note that \(W_{n}(x)\) and \(W_{w}(x)\) do not necessarily consist of the same elements as described in the individual-group pluralism, but include the context of conceptual pluralism. Even if the "narrow WE" is included within the "wide WE," just as the subjective value \(W_{s}(x,p)\) of individual \(p\) and the intersubjective value \(W_{is}(x,g)\) of group \(g\) may differ, the subjective value \(W_{n}(x)\) that the "narrow WE" has for \(x\) in its context and the subjective value \(W_{w}(x)\) that the "wide WE" has for \(x\) in its context may differ, so each is taken into account to represent the overall well-being \(W(x).\) Also, both are only two representatives taken out of the spread of WEs, and by considering a variety of WEs, Equation (3) can be expanded into a polynomial equation. Each term is a function with positive or negative values, and the overall well-being \(W(x)\) will vary depending on what weights are used to add up the value of policy \(x\) for each WE.
I now consider specific functional forms of \(W_{n}(x)\) and \(W_{w}(x)\) in order to picture how \(W(x)\) changes. Figure 1 shows three graphs drawing typical functional forms proposed for well-being.
Kagan uses a saturated form of the function to represent the gradually decreasing increment of well-being \(W(x)\) relative to the increment of pleasure \(x\) for objective goods, as shown in Figure 1 (a) [22]. Sarch expresses the discounting/inflating of the increment of well-being \(W(x)\) relative to the increment of enjoyment of life, as shown in Figure 1 (b), with the net enjoyment \(x\) at base and the conditional function \(f(a)\) of achievement \(a\) as power exponent as follows [23]. Whether it is in discounted or inflated form depends on whether \(f(a)\) exceeds the threshold value of 1 or not.
\[W(x)=\begin{cases}x^{f(a)}&if\ x\geq 0\\ -|x|^{f(a)}&if\ x<0\end{cases} \tag{4}\]
As functional forms of well-being, the utility function in economics and the value function in behavioral economics prospect theory may be helpful. Autility function is a function of the magnitude of utility relative to preference. By reading the preference as a positive attitude or subjective value for \(x\) and the utility as well-being, the utility function can be referred as a function of well-being. In prospect theory, the value function is multiplied by a probability weighting function to represent expected utility, while the value function corresponds to the utility function in general economics. Prospect theory uses an asymmetric functional form in which the value function is expressed in saturated or discounted form, and in which ill-being is perceived as greater than well-being, as shown in Figure 1 (c). Specifically, several functions have been proposed as follows [24].
\[W(x)=\begin{cases}x&\text{Linear}\\ \ln(a+x)&\text{Logarithmic}\\ x^{a}&\text{Power}\\ ax-x^{2}&\text{Quadratic}\\ 1-e^{-ax}&\text{Exponential}\\ bx-e^{-ax}&\text{Lin.}+\text{Exp.}\end{cases}\qquad if\ x\geq 0 \tag{5}\]
Figure 1: (a) Kagan, (b) Sarch, and (c) prospect theory.
Among these functions, the Power form (corresponding to Sarch) or the Exponential form (corresponding to Kagan) is generally used [24, 25]. Here, given that the function of well-being with respect to income (a preference) is in saturated form [1], and using the exponential form following Kagan, the asymmetric function for positive and negative \(x\) is expressed as follows.
\[W(x)=\begin{cases}\begin{array}{cc}1-e^{-\alpha x}&\text{if }x\geq 0\\ -\lambda\big{(}1-e^{\beta x}\big{)}&\text{if }x<0\end{array}\end{cases} \tag{6}\]
Considering the different context and subjective value of "narrow WE" and "wide WE" as WE pluralism, \(W_{n}(x)\) and \(W_{w}(x)\) in Equation (3) are expressed as follows: \(x_{n}\) and \(x_{w}\) are variables, \(\alpha_{n},\beta_{n},\lambda_{n},\alpha_{w},\beta_{w}\) and \(\lambda_{w}\) are coefficients, and \(r\) (\(0\leq r\leq 1\)) is the weight for normalization.
\[W=r\cdot W_{n}(x_{n})+(1-r)\cdot W_{w}(x_{w}) \tag{7}\]
\[W_{n}(x_{n})=\begin{cases}\begin{array}{cc}1-e^{-\alpha_{n}x_{n}}&\text{if }x_{n}\geq 0\\ -\lambda_{n}\big{(}1-e^{\beta_{n}x_{n}}\big{)}&\text{if }x_{n}<0\end{array} \end{cases} \tag{8}\]
Thus, the variation of \(W\) with respect to the "narrow-wide WE" can be seen. \(W_{n}(x_{n})\) and \(W_{w}(x_{w})\) are each composed of several elements, but it is assumed that they are aggregated into a single variable \(x_{n}\) and \(x_{w}\), respectively. For simplicity, the coefficients of both are \(\alpha_{n}=\beta_{n}=\alpha_{w}=\beta_{w}=1\) and \(\lambda_{n}=\lambda_{w}=2\). The reason for setting \(\lambda_{n}=\lambda_{w}=2\) is to represent asymmetry between well-being and ill-being. Figure 2 shows a graph of the calculation results.
In Figure 2, the graph asymptotes to \(W\to 1\) when \(x_{n}\to\infty\) and \(x_{w}\to\infty,\) and to \(W\to-2\) when \(x_{n}\to-\infty\) and \(x_{w}\to-\infty\). As shown in Figure 2 (a)-(c), the shape of the 3-dimensional surface changes depending on whether "narrow WE" or "wide WE" is weighted (value of \(r\)). That is, in (a), well-being is determined by both "narrow WE" and "wide WE," while in (b) and (c) it is determined almost exclusively by one or the other. The (d) shows the case where both are equal, i.e., both reach a consensus, and the change in well-being is represented by a curve rather than a 3-dimensional surface.
## 3 Policy evaluation method
### Policy evaluation based on policy plurality and WE consensus
From the standpoint of conceptual pluralism in WE pluralism, the "narrow WE" \(W_{n}(x_{n})\) and "wide WE" \(W_{w}(x_{w})\) elements for a given policy are pluralistic and differ for each context.
**Case A**: Policies to introduce natural energy may contribute to wellbeing, but this does not necessarily coincide with reduced energy costs, reduced carbon footprint, protection of natural landscapes, increased local economic circulation, etc. [26], and the context varies by the spread of WE: I-family-community -municipality-nation-world.
**Case B**: Policies to enhance social security may contribute to the well-being of the covered population, but this may lead to higher taxes and administrative costs, suppression of other budgets and personnel (education, industrial development, environmental protection, etc.), distortion of social justice, and reduced work incentives [27].
Taking into account the policy plurality and context, Equation (7) can be rewritten and expressed as follows: Let \(X_{n}\) and \(X_{w}\) be the sets of elements and \(x_{n}\) and \(x_{w}\) be vectors consisting of elements with respect to "narrow WE" and "wide WE."
\[W=r\cdot W_{n}(x_{n})+(1-r)\cdot W_{w}(x_{w}) \tag{9}\]
Figure 2: Well-being calculation results for “narrow–wide WE:”(a) \(r=0.5\) (“narrow WE” and “wide WE” are symmetrical), (b) \(r=0.8\) (weighting “narrow WE”), (c) \(r=0.2\) (weighting “wide WE”), and (d) \(W_{n}(x_{n})\equiv W_{w}(x_{w})\) (“narrow WE” and “wide WE” are equal).
\[\begin{array}{l}X_{n}=\{x_{n1},\,x_{n2},\,\,\cdots,\,x_{ni},\,\,\cdots\}\\ X_{w}=\{x_{w1},\,x_{w2},\,\,\cdots,\,x_{wj},\,\,\cdots\}\end{array}\]
Here, I assume that there is a relationship between the components of the "narrow WE" and the "wide WE" for a given policy. As shown in Figure 3 (a), given the mapping \(f\) from set \(X_{w}\) to set \(X_{n}\) (the inverse mapping \(f^{-1}\) from set \(X_{n}\) to set \(X_{w}\)), Equation (9) can be expressed using the common variable vector \(x_{w}\) as follows.
\[\begin{array}{l}x_{n}=f(x_{w})\\ W=r\cdot W_{n}\circ f(x_{w})+(1-r)\cdot W_{w}(x_{w})\end{array} \tag{10}\]
Furthermore, if there is consensus between "narrow WE" and "wide WE" for a given policy, i.e., if their contexts match, then they both have the same function for the variable vector \(x_{w}\) and their weights \(r\) will disappear. Equation (10) can be rewritten as the following concise function, which corresponds to Figure 2 (d).
\[\begin{array}{l}W_{n}\circ f\equiv W_{w}\\ W=W_{w}(x_{w})\end{array} \tag{11}\]
Now, I have moved from subject-object pluralism, individual subjectivity-group intersubjectivity pluralism, and "narrow-wide WE" pluralism, policy and context plurality (conceptual pluralism), to "narrow-wide WE" consensus. However, as discussed in Chapter 1, consensus building requires rationality, efficiency, and stability along with fairness, and joint fact-finding is required [14, 15]. As discussed in Chapter 1, it is difficult for people to judge the vast number of policy proposals resulting from the combination of
elements solely on the basis of their subjectivity, and as demonstrated in Case A and Case B, it is impossible for people to grasp the complex relationships among the various elements solely on the basis of their subjectivity. While Section 2.1 mentioned that objective values are subjective values determined to be correct through people's collaboration and interpretation, it is useful to reiterate the intersubjectivity that all of the various stakeholders would be able to accept as correct, again as objective joint facts.
Therefore, rather than replacing "narrow-wide WE" consensus for joint facts, I assume that the joint facts perturbatively affect the consensus. As shown in Figure 3 (b), considering the mapping g from the direct product \(X_{w}\times X_{c}\) of the common set \(X_{w}\) of the "narrow-wide WE" and the set \(X_{c}\) of joint facts to the set \(X_{w}\)' incorporating the influence of \(X_{c}\), Equation (11) can be rewritten using vectors \(\mathbf{x_{w}},\mathbf{x_{c}}\) and \(\mathbf{x_{w}}^{\prime}\) as follows.
\[X_{c}=\{x_{c1},\ x_{c2},\ \cdots,\ x_{ck},\ \cdots\} \tag{12}\]
\[\mathbf{x_{w}}^{\prime}=g(\mathbf{x_{w}},\mathbf{x_{c}}) \tag{13}\]
\[W^{\prime}=W_{w}(\mathbf{x_{w}}^{\prime})\]
The meaning of Equation (13) is not to enumerate a list of objective indicators, as is the case with constitutive pluralism, but to explain well-being by combining joint fact-finding for policies into it, while emphasizing the subjective context in conceptual pluralism. In other words, as shown in Figure 4, well-being \(W^{\prime}=W_{w}(\mathbf{x_{w}}^{\prime})\) is expressed with the agreed subjective value \(\mathbf{x_{w}}\) as the main term and the objective indicators based on joint fact-finding as the perturbation term \(\mathbf{x_{c}}\). In this way, it is possible to assess the effects of policies on well-being under a "narrow-wide WE" consensus, while avoiding the gap between subjective and objective indicators as described in Chapter 1.
Figure 4: Well-being policy evaluation method based on joint fact-finding under WE consensus.
Note that Equations (9)-(13) were expressed in terms of "narrow WE" and "wide WE," but when a variety of WEs are involved, they can be expanded to polynomials and formulated using the same approach as before. Although Equation (11) shows the case where there is only one consensus, it is also possible, for example, to evaluate policies for the consensus of each stakeholder group and compare them to consider better well-being policies for the whole.
### Practical examples of policy evaluation method
This section discloses a method for evaluating policies under WE consensus based on Equations (9)-(13). Figure 5 illustrates the three practical examples.
In the fact-value combined parameter system shown in Figure 5 (a), the value parameter \(\mathbf{x_{w}}\) and the fact parameter \(\mathbf{x_{c}}\) that reflects it are connected by a network to infer the change in the value parameter \(\mathbf{x_{w}}\) in response to a policy, or manipulation of the fact parameter \(\mathbf{x_{c}}\)[11]. The value parameter \(\mathbf{x_{w}}\) is obtained by mapping \(f^{-1}\): \(X_{n}\to X_{w}\) from the subjective questionnaire (set \(X_{n}\)) to the aggregate results (set \(X_{w}\)) in response to Equations (9) and (10), and \(\mathbf{x_{w}}\) representing well-being \(W=W_{w}(\mathbf{x_{w}})\) in response to Equation (11). It is safe to consider that the questionnaire implicitly assumes the consensus \(W_{n}\circ f\equiv W_{w}\) that the aggregate results will be used.
The fact parameter \(\mathbf{x_{c}}\) is objective joint facts (set \(X_{c}\)) for joint fact-finding, and the parameter is chosen to be relational to the value parameter \(\mathbf{x_{w}}\). While in constitutive pluralism, the list of generic objective indicators is enumerated, resulting in a gap from the subjective indicators, the fact parameter \(\mathbf{x_{c}}\) used here reflects the context of WE in conceptual pluralism. There is a tradeoff between versatility and reflectivity, and \(\mathbf{x_{c}}\) here takes the latter position heavily. In other words, it is important how the fact parameter \(\mathbf{x_{c}}\) is
Figure 5: Policy evaluation method based on WE pluralism: (a) fact–value combined parameter system, (b) combined policy-making approach, and (c) combined impact evaluation.
set up to reflect the value parameter \(\mathbf{x_{w}}\). The policy is evaluated by \(W^{\prime}=W_{w}(\mathbf{x_{w}}^{\prime})\) using the relationship \(\mathbf{x_{w}}^{\prime}=g(\mathbf{x_{w}},\mathbf{x_{c}})\) between the value parameter \(\mathbf{x_{w}}\) and the fact parameter \(\mathbf{x_{c}}\), corresponding to Equation (13), by performing the mapping \(g\): \(X_{w}\times X_{c}\to X_{w}^{\prime}\).
In the combined policy-making approach shown in Figure 5 (b), the target function \(W=W_{w}(\mathbf{x_{w}})\) representing well-being is formulated, the fact parameter \(\mathbf{x_{c}}\) representing the policy is combined with the target function, and the policy that maximizes the revised target function \(W^{\prime}=W_{w}(\mathbf{x_{w}}^{\prime})\) is selected [28]. The target function \(W=W_{w}(\mathbf{x_{w}})\) has well-being as the target variable \(W\) and factors such as social, environmental and economic as explanatory variables \(\mathbf{x_{w}}\), and it is obtained by multiple regression analysis of the aggregate results (set \(X_{w}\)) of the subjective questionnaire (set \(X_{n}\)) (corresponding to Equations (10) and (11): \(f^{-1}\): \(X_{n}\to X_{w}\) and \(W_{n}\circ f\equiv W_{w}\)).
The fact parameter \(\mathbf{x_{c}}\) is the result (set \(X_{c}\)) of the multi-agent simulation for the various policies' operating parameters. Depending on the number of combinations of operating parameters, tens of thousands of different results are obtained, as shown in the white circle plots on the ternary graph in Figure 5 (b). Then, by formulating the relational expression \(\mathbf{x_{w}}^{\prime}=g(\mathbf{x_{w}},\mathbf{x_{c}})\) that combines these results \(\mathbf{x_{c}}\) and the explanatory variable \(\mathbf{x_{w}}\) in response to Equation (13), the policy that maximizes the revised target function \(W^{\prime}=W_{w}(\mathbf{x_{w}}^{\prime})\) is selected out of tens of thousands of possible results. The red (Type A), blue (Type B), and green (Type C) circles in Figure 5 (b) show that the desired policies differ depending on how the relational expression \(\mathbf{x_{w}}^{\prime}=g(\mathbf{x_{w}},\mathbf{x_{c}})\) is formulated to express what balance of social, environmental, and economic values is to be emphasized.
In the combined impact evaluation shown in Figure 5 (c), a logic model \(W=W_{w}(\mathbf{x_{w}})\) is established to achieve well-being, the fact parameter \(\mathbf{x_{c}}\) representing the policies is combined with the logic model, and the output of the logic model, impact \(W^{\prime}=W_{w}(\mathbf{x_{w}}^{\prime})\) is evaluated [29]. A logic model, as shown in the upper part of Figure 5 (c), generally describes and illustrates the cause-and-effect relationship of inputs (investments such as funds and personnel)\(\rightarrow\)activities (organizational activities)\(\rightarrow\)outputs (products and services)\(\rightarrow\) outcomes (results produced)\(\rightarrow\)impacts (social, environmental, and economic changes aimed at) [30]. Since the logic model is created by investors, local governments, nonprofit organizations, and corporations etc. under the agreement of stakeholders when they promote social businesses, it can be said to correspond to \(W=W_{w}(\mathbf{x_{w}})\) in Equation (11).
For the fact parameters (set \(X_{c}\)) corresponding to the policy, it can be used that quantitative measurement data, forecast data based on statistical processing, and the results of multi-agent simulations similar to the combined policy-making approach in Figure 5 (b), and evaluate the impact \(W^{\prime}=W_{w}(\mathbf{x_{w}}^{\prime})\) of the policy corresponding to Equation (13) by establishing a relationship \(\mathbf{x_{w}}^{\prime}=g(\mathbf{x_{w}},\mathbf{x_{c}})\) connecting these data \(\mathbf{x_{c}}\) and the variable \(\mathbf{x_{w}}\) in the logic model. Note that the fact parameter \(\mathbf{x_{c}}\) is often combined to the left side of the logic model, since logic models generally tend to become more subjective as one moves from left to
right. In addition, depending on the policy, there may be a negative as well as positive impact, or positive or negative for multiple impacts, and the final evaluation of the policy is left to the stakeholders.
Through the above three examples, I was able to demonstrate the reality and usefulness of the policy evaluation methods presented in this paper. However, the setting of fact parameters that reflect value parameters in the fact-value combined parameter system, the coupling of explanatory variables of the target function and simulation results in the combined policy-making approach, and the coupling of the logic model and fact parameters in the combined impact evaluation are the keys to whether policies can be better evaluated. The techniques and skills involved are beyond the scope of this paper, but will be examined in future fieldwork in communities and municipalities.
## 4 Conclusion
In this paper, to address the issue of how to evaluate policies that contribute to well-being, I first started from subject-object pluralism and developed it into individual-group pluralism and WE pluralism. Furthermore, by incorporating the policy plurality into WE pluralism and combining it with joint fact-finding under the consensus of WE, a method for evaluating the effect of policies on well-being was presented. For reference, Figure 6 (a) and (b) again show schematically the policy evaluation methods based on subject (conceptual)-object (constitutive) pluralism and WE pluralism, respectively. In the former, it was difficult for objective indicators to reflect subjective context, and subjective indicators differed from individual to individual, while in the latter, WE reached a consensus (subjectively) on well-being, and then combined joint facts that WE could accept as possibly objective.
Figure 6: Comparison of policy evaluation methods: (a) subject (conceptual)–object (constitutive) pluralism, and (b) WE pluralism.
Finally, I will briefly discuss the significance of this paper and future issues. First, by developing from subject-object pluralism to individual-group pluralism and then to WE pluralism, and by including the context of conceptual pluralism in them, I avoided the problem of gap between subjective and objective indicators in constitutive pluralism. Second, instead of the dichotomy of individual subjectivity-objectivity, or individual subjectivity and group intersubjectivity, the "narrow-wide WE" concept was developed within the gradation of I-family-community-municipality-nation-world, a new groundwork for evaluating policies involving diverse stakeholders was provided. Third, I formulated a policy evaluation method that combines WE consensus and joint fact-finding, and demonstrated its usefulness by presenting examples of implementation. Future issues are to verify the policy evaluation method of this paper through fieldwork and to study the coupling method of consensus-based subjective value and joint fact-finding. However, the coupling method will be supported by statistical approaches such as behavioral economics and social psychology, and will again continue to pursue intersubjective and objective certainty.
## Additional Notes
For convenience, I summarize here a series of equations according to the synopsis of this paper.
\begin{tabular}{|c|c|c|} \hline Argument & Equation & \(\#\) \\ \hline \hline Subject-object pluralism & \(W(x,p)=W_{s}(x,p)+W_{o}(x)\) & (1) \\ \hline Individual-group pluralism & \(W(x)=W_{s}(x,p)+W_{is}(x,g)\) & (2) \\ \hline WE pluralism & \(W(x)=W_{n}(x)+W_{w}(x)\) & (3) \\ \hline Normalization & \(W=r\cdot W_{n}(x_{n})+(1-r)\cdot W_{w}(x_{w})\) & (7) \\ \hline Policy plurality & \(W=r\cdot W_{n}(x_{n})+(1-r)\cdot W_{w}(x_{w})\) & \\ \(X_{n}=\{x_{n1},\ x_{n2},\ \cdots,\ x_{nli},\ \cdots\}\) & (9) \\ & \(X_{w}=\{x_{w1},\ x_{w2},\ \cdots,\ x_{wj},\ \cdots\}\) & \\ \hline Mapping & \(f^{-1}\colon X_{n}\to X_{w}\) & \\ \(x_{n}=f(x_{w})\) & (10) \\ & \(W=r\cdot W_{n}\circ f(x_{w})+(1-r)\cdot W_{w}(x_{w})\) & \\ \hline Consensus & \(W_{n}\circ f\equiv W_{w}\) & (11) \\ & \(W=W_{w}(x_{w})\) & \\ \hline Joint fact-finding & \(X_{c}=\{x_{c1},\ x_{c2},\ \cdots,\ x_{ck},\ \cdots\}\) & (12) \\ \hline \end{tabular}
## Acknowledgement
In the project "Practical Examination of ELSI brought by Smartization of Community and Four-Dimensional Co-Creation Model" of JST "Responsible Innovation with Conscience and Agility," I would like to thank Professor Yasuo Deguchi of the Graduate School of Letters, Kyoto University for his guidance as principal investigator, Associate Professor Shunsuke Sugimoto of the Faculty of Business and Commerce, Keio University for his valuable and numerous suggestions on well-being and subjectivity as leader of the Mobile WE group, and Hitachi Kyoto University Laboratory for discussions in charge of the Community Evaluation Parameter Group. I also thank those who discussed this paper during the workshop at the 15th Annual Meeting of the Japan Association of Contemporary and Applied Philosophy. This work was supported by JST RISTEX Grant Number JPMJRS22J5, Japan.
|
2306.04050 | LLMZip: Lossless Text Compression using Large Language Models | We provide new estimates of an asymptotic upper bound on the entropy of
English using the large language model LLaMA-7B as a predictor for the next
token given a window of past tokens. This estimate is significantly smaller
than currently available estimates in \cite{cover1978convergent},
\cite{lutati2023focus}. A natural byproduct is an algorithm for lossless
compression of English text which combines the prediction from the large
language model with a lossless compression scheme. Preliminary results from
limited experiments suggest that our scheme outperforms state-of-the-art text
compression schemes such as BSC, ZPAQ, and paq8h. | Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Dileep Kalathil, Jean-Francois Chamberland, Srinivas Shakkottai | 2023-06-06T22:42:00Z | http://arxiv.org/abs/2306.04050v2 | # LLMZip: Lossless Text Compression using Large Language Models
###### Abstract
We provide new estimates of an asymptotic upper bound on the entropy of English using the large language model LLaMA-7B as a predictor for the next token given a window of past tokens. This estimate is significantly smaller than currently available estimates in [1, 2]. A natural byproduct is an algorithm for lossless compression of English text which combines the prediction from the large language model with a lossless compression scheme. Preliminary results from limited experiments suggest that our scheme outperforms state-of-the-art text compression schemes such as BSC, ZPAQ, and paq8h.
## I Introduction
There are close connections between learning, prediction, and compression. The success of ChatGPT has captured the fascination of general public and brought the connection between learning and prediction to the fore. The main advance brought about by large language models such as LLaMA and GPT-4 is that they excel at predicting the next word (token) in a paragraph based on knowing the past several words (tokens).
The connection between prediction and compression was explored as early as 1951 by Shannon in order to estimate the entropy of the English language [3]. The idea that a good predictor for the \(i\)th value in a time series based on the past values can be effectively converted to a good compression algorithm has played a prominent role in information theory. Many algorithms for speech, image, and video compression exploit this idea either explicitly or implicitly. Within the context of lossless compression of English text, the idea of combining a language model with arithmetic coding has emerged as a very effective paradigm [4]. The performance of such a compression scheme depends substantially on the efficacy of the predictor and every time there is a major advance in the prediction capability, it behooves us to study its effect on the compression performance. Indeed, in 2018, the authors of [5] used recurrent neural networks (RNN) as the predictor and reported improved results for certain kinds of sources. Their scheme still did not outperform state-of-the-art algorithms such as BSC and ZPAQ for text compression.
It is therefore natural at this time to study whether we can obtain better compression results and sharper estimates of the entropy of the English language using recent large language models such as LLaMA-7B [6]. This is the main goal of this paper. We show that when the LLaMA-7B large language model is used as the predictor, the asymptotic upper bound on the entropy is 0.709 bits/character when estimated using a 1MB section of the text8 dataset. This is smaller than earlier estimates provided in [1] and [2, Table 4]. The estimate of the upper bound increases to 0.85 bits/character for a 100 KB section of the text from [7], which is still lower than the estimates in [2]. When LLaMA-7B is combined with an Arithmetic coder for compression, we obtain a compression ratio of 0.7101 bits/character on a 1MB section of the text8 dataset and a compression ratio of 0.8426 bits/character on a 100KB section of a text from [7], which are significantly better than the compression ratio obtained using BSC, ZPAQ and pq8h on the full 100MB of the text8 dataset.
## II Intuitive explanation of the main idea
We will use the following example to describe the main idea, which is nearly identical to that proposed by Shannon in [3] for estimating the entropy of English. The main difference is in the use of tokens which represent groups of letters of variable length and in the use of a large language model instead of a human to predict the next token. Consider a part of the sentence that reads as
\[\texttt{My first attempt at writing a book}\]
Our goal is to convert this sentence into a sequence of bits with the least possible length such that the original sequence can be reconstructed from the sequence of bits. This sentence can first be split into a sequence of words (tokens)
\[\texttt{'My'},\ \texttt{'first'},\ \texttt{'attempt'},\ \texttt{'at'},\ \texttt{'writing'},\ \texttt{'a'},\ \texttt{'book'}\]
A language model with memory \(M\) (for example, say \(M=4\)) predicts the next word in the sentence based on observing the past \(M\) words. Specifically, it produces a rank-ordered list of choices for the next word and their probabilities. As shown in
Figure 1, at epoch 5, the model accepts the first 4 words as input and predicts that the next word in the sentence could be words such as'reading\({}^{\prime}\), writing\({}^{\prime}\), driving\({}^{\prime}\), cooking\({}^{\prime}\) etc. The main idea is to compute the rank of the actual word in our sentence ('writing\({}^{\prime}\)) in this list and call it \(R_{5}\). We will assume that the ranks start at 0 i.e., the most likely word has rank 0, the second most likely word has rank 1, and so on. In this example, the rank for 'writing' is \(R_{5}=1\).
Then, we move forward by one word in the sentence, and at epoch 6, we try to predict the 6th word based on words 2 through 5 as shown in Figure 2. In this example, given words 2 through 5, the most likely 6th word would indeed be the same word in the sentence that we wish to encode, 'a\({}^{\prime}\), and hence, the rank \(R_{6}\) would be 0.
If the language model is good, the word that we wish to encode would often appear at the top of the list and hence, the rank would be 0. Thus, if we look at the sequence of ranks, it is likely to have many \(0\)s with decreasing probabilities for the rank being \(1,2,\ldots\). In this example, it is foreseeable that the ranks will be
\[1,0,0,\ldots\]
A sequence with many '0's is typically compressible since it has structured patterns. Thus, the key idea is to compress the ranks using a standard lossless compression algorithm such as zip, arithmetic coding, or Huffman coding which converts the ranks to bits. This is shown in Fig. 3.
When we wish to reconstruct the sequence, we first decompress and unzip the bits to get the ranks, use the same language model one epoch at a time to produce a rank ordered list of possibilities for the next word, and pick the word in the list at rank \(R_{i}\) during the \(i\)th epoch. We use that as input for determining the next word and so on. Note that this requires that the same LLM is used at both the encoder and the decoder.
The idea of encoding ranks was discussed to build intuition, but better compression can be achieved by directly using the probabilities produced by the LLM along with arithmetic coding as discussed in Section III-B3.
Fig. 1: Schematic showing the prediction at epoch 5 for a language model with memory 4.
Fig. 3: Schematic showing the compression of the sequence of ranks to a bit sequence.
Fig. 2: Schematic showing the prediction at epoch 6 for a language model with memory 4.
## III Compression using LLMs
Let \(\mathbf{s}\) denote a sentence from the English language composed of \(N_{c}\) letters, where each letter is assumed to be from the alphabet \(\mathcal{S}\). We assume that we have a dictionary \(\mathcal{X}=[1,D]\) of \(D\) tokens. We first parse \(\mathbf{s}\) into a sequence of \(N_{T}\) tokens denoted by \(\mathbf{x}=x_{1},x_{2},\ldots,x_{i-1},x_{i},x_{i+1},\ldots x_{N_{T}}\), where \(x_{i}\in\mathcal{X}\). There is a one-to-one mapping between \(\mathbf{s}\) and \(\mathbf{x}\) and hence, compressing \(\underline{s}\) is the same as compressing \(\mathbf{x}\). \(x_{i}\)'s can be thought of as realizations of the random variable denoted by the upper case letter \(X_{i}\).
A language model with memory \(M\) is a predictor that operates as follows. At epoch \(i\), it accepts tokens \(x_{i-M},x_{i-M+1},\ldots,x_{i-1}\) and produces a probability mass function for the next token in the sequence conditioned on the past \(M\) tokens given by \(q_{i}(x_{i}):=\Pr(X_{i}=x_{i}|x_{i-1},x_{i-2},\ldots,x_{i-M}),\forall x_{i}\in \mathcal{X}\). The PMF vector \(\mathbf{q}_{i}:=[q_{i}(1),q_{i}(2),\ldots,q_{i}(D)]^{\mathsf{T}}\) is sorted in descending order and let the sorted PMF vector be denoted by \(\tilde{\mathbf{q}}_{i}\). Let \(\gamma_{i}:\mathcal{X}\rightarrow\mathcal{X}\) be a permutation on the integers from 1 to \(D\) such that
\[\tilde{q}_{i}(\gamma_{i}(j))=q_{i}(j),\forall j\in\mathcal{X}.\]
That is, \(\gamma_{i}(j)\) is the rank of the token \(j\) at epoch \(i\). We define the rank of the input sequence at epoch \(i\) as the rank of the token \(x_{i}\) at epoch \(i\), \(r_{i}:=\gamma_{i}(x_{i})\). The sequence \(\{r_{i}\}_{i=1}^{N_{T}}\) is compressed by a lossless compression algorithm (such as zlib) to produce \(N_{b}\) bits which are the final bit representation of the source. A schematic of this scheme is shown in Fig. 4. In general, the lossless compression algorithm may use the sequence of PMF vectors \(\mathbf{q}_{i}\)'s in addition to the sequence of ranks.
The main metric of interest is the compression ratio \(\rho\) defined as
\[\rho:=\frac{N_{b}}{N_{c}}\text{bits/character}.\]
### _Entropy bounds_
Let \(\mathbf{S}\in\mathcal{S}^{\infty}\) be a random process that represents language input. The \(n\)th character in the sequence is denoted by \(S_{n}\), whereas the string of characters from the beginning to the \(n\)th character is expressed as \(\mathbf{S}_{n}\). The tokenizer parses the input string and maps it to a sequence of tokens \(\mathbf{X}=X_{1},X_{2},\ldots\) using a variable-length mapping. In this sequence, \(X_{i}\) is the \(i\)th token. The number of characters employed to generate \(X_{i}\) depends on the realization of the random process and, as such, we introduce random variable \(B_{i}\) to identify the number of characters contained in the \(i\)th token. Motivated by practical considerations, we only admit tokenizers for which \(B_{i}\geq 1\) and \(B_{i}\) is uniformly bounded, with \(B_{i}<\overline{B}<\infty\); these are characteristics of commonly used tokenizers. An immediate consequence of this framework is that, as the number of tokens grows unbounded \(N_{T}\rightarrow\infty\), the number of characters must also approach infinity \(N_{c}\rightarrow\infty\). Formally, consider the tokenizer function \(T:\mathcal{S}^{\mathbb{N}}\rightarrow\mathcal{X}^{\mathbb{N}}\) operating on infinite symbol sequences; that is, \(T(\mathbf{s})=\mathbf{x}\) where \(\mathbf{s}\) is an infinite sequence in \(\mathcal{S}^{\infty}\). For natural number, \(i\in\mathbb{N}\), define \(m_{i}:\mathcal{S}^{\mathbb{N}}\rightarrow\mathbb{N}\) to be the (time) index during which the tokenizer working sequentially on an input sequence \(\mathbf{s}\) outputs its \(i\)th token. Specifically, suppose \(\mathbf{s}\) is given, then
\[m_{i}(\mathbf{s})=\min_{n}\left\{\operatorname{length}\left(T(\mathbf{s}_{n}) \right)\geq i\right\}. \tag{1}\]
We note that, by construction, \(\lim_{n\rightarrow\infty}\operatorname{length}\left(T(\mathbf{s}_{n}) \right)=\infty\) and, as such, \(m_{i}(\cdot)\) is well-defined. It may be pertinent to stress that the tokenizer function applied to truncated sequences is not necessarily injective because multiple finite input series can map to the same output. This phenomenon is a consequence of the fact that, at any point in time, a tokenizer working sequentially may be waiting for an additional symbol before it can unambiguously select the next output token, i.e., there may be instances where \(T(\mathbf{s}_{n})=T(\mathbf{s}_{n+1})\). However, if we restrict the input series to input indices when a new token is produced, then the restricted mapping becomes injective. That is, suppose \(T(\mathbf{s})=\mathbf{x}\), then the only (finite) series of input symbols in the restricted set for which \(T(\mathbf{y}_{n})=\mathbf{x}_{i}\) is \(\mathbf{s}_{m_{i}(\mathbf{s})}\). Given a fixed sequence \(\mathbf{s}\), we can express the number of characters contained in a token as
\[b_{i}=m_{i}(\mathbf{s})-m_{i-1}(\mathbf{s})\]
Fig. 4: Schematic showing the prediction at epoch \(i\).
with initial condition \(m_{-1}=0\). Consequently, the number of characters embedded in the first \(N_{T}\) tokens for a random input becomes \(N_{c}=\sum_{i=1}^{N_{T}}B_{i}\).
Having established these properties, we turn to the relation between \(H(\mathbf{S})\) and \(H(\mathbf{X})\). We make the assumption that \(\{S_{k}\}_{k=1}^{\infty}\), \(\{B_{i}\}_{i=1}^{\infty}\), and \(\{X_{i}\}_{i=1}^{\infty}\) are stationary and ergodic processes. We know from the Shannon-McMillan-Breiman Theorem [8] that
\[-\frac{1}{n}\log_{2}p_{\mathbf{S}_{n}}(S_{1},\ldots,S_{n})=-\frac{1}{n}\log_{2 }p_{\mathbf{S}_{n}}(\mathbf{S}_{n})\to H(\mathbf{S})\quad\text{almost surely}. \tag{2}\]
Let \(\Omega_{\mathbf{S}}\) be the collection of \(\omega\in\Omega\) for which this limit holds. In an analogous manner, the Shannon-McMillan-Breiman theorem implies
\[-\frac{1}{i}\log_{2}p_{\mathbf{X}_{i}}(X_{1},\ldots,X_{i})=-\frac{1}{i}\log_{ 2}p_{\mathbf{X}_{i}}(\mathbf{X}_{i})\to H(\mathbf{X})\quad\text{almost surely}. \tag{3}\]
Define \(\Omega_{\mathbf{X}}\) as the collection of \(\omega\in\Omega\) for which this limit holds. Finally, by construction, we have
\[\lim_{i\to\infty}\frac{m_{i}(\mathbf{S})}{i}=\mathbb{E}\left[B\right]\quad \text{almost surely}. \tag{4}\]
Set \(\Omega_{B}\) to be the set of \(\omega\in\Omega\) for which this limit holds. For any \(\omega\in\Omega_{\mathbf{S}}\cap\Omega_{\mathbf{X}}\cap\Omega_{B}\), we deduce that
\[H(\mathbf{S}) =\lim_{k\to\infty}-\frac{1}{k}\log_{2}p_{\mathbf{S}_{k}}( \mathbf{S}_{k}(\omega))\] \[=\lim_{i\to\infty}-\frac{1}{l_{i}}\log_{2}p_{\mathbf{S}_{l_{i}}} (\mathbf{S}_{l_{i}}(\omega))\] \[=\lim_{i\to\infty}-\frac{1}{l_{i}}\log_{2}\Pr\left(\mathbf{X}_{i} =T(\mathbf{S}_{l_{i}}(\omega))\right)\] \[=-\frac{1}{\mathbb{E}[B]}\lim_{i\to\infty}\frac{1}{i}\log_{2}\Pr \left(\mathbf{X}_{i}=\mathbf{x}_{i}\right)=\frac{H(\mathbf{X})}{\mathbb{E}[B]}.\]
The first equality follows from (2). The second equality is a consequence of the fact that \(\{l_{i}=m_{i}(\mathbf{S}(\omega))|i\in\mathbb{N}\}\) is an infinite subset of the natural numbers. Since a subsequence of a convergent sequence must converge to the same limit, we immediately gather that this alternate form approaches \(H(\mathbf{S})\). The third equality is a consequence of the equivalence between the following two events,
\[\{\omega\in\Omega|\mathbf{X}_{i}(\omega)=\mathbf{x}_{i}\}=\{\omega\in\Omega|T (\mathbf{S}_{m_{i}(\mathbf{S}(\omega))})=\mathbf{x}_{i}\}.\]
This is characteristic of the tokenization process, and it is a consequence of the correspondence described above. The last step holds because we are considering an \(\omega\in\Omega_{B}\). The sets \(\Omega_{\mathbf{S}}\), \(\Omega_{\mathbf{X}}\), and \(\Omega_{B}\) each have probability one; this implies that their intersection also has probability one, Thus, we must conclude that
\[H(\mathbf{S})=\frac{H(\mathbf{X})}{\mathbb{E}[B]}\quad\text{almost surely}.\]
As a corollary to this result, any upper bound on \(H(\mathbf{X})\) produces an upper bound on \(H(\mathbf{S})\). This is the property we wish to exploit.
Then, from the results of [1], we can see that
\[\Pr\Biggl{\{}H(\mathbf{X})\leq\lim_{N_{T}\to\infty}-\frac{1}{N_{T}}\sum_{i=1} ^{N_{T}}\log_{2}q_{i}(X_{i})\Biggr{\}}=1, \tag{5}\]
where \(q_{i}(\cdot)\) is the output PMF from the language model. Therefore, an asymptotic upper bound on the entropy rate \(H(\mathbf{S})\) is given by
\[H(\mathbf{S})\leq\frac{\lim_{N_{T}\to\infty}-\frac{1}{N_{T}}\sum_{i=1}^{N_{T}} \log_{2}q_{i}(X_{i})}{\mathbb{E}[B]}. \tag{6}\]
We refer to the expression in the right hand side of (6) as the asymptotic upper bound on \(H(\mathbf{S})\) and denote it by \(H_{ub}\). The numerator in (6) represents the average number of bits required to represent the tokens \(\mathbf{X}_{N_{T}}\) and the denominator in (6) is the average number of characters per token. Hence, the unit for \(H(\mathbf{S})\) is bits/character. In [1], Cover and King provide 1.3 bits/character as an estimate of the asymptotic upper bound on \(H(\mathbf{S})\). They also provide an extensive list of references and discussion of the literature on estimating the entropy of English prior to 1976. Very recently, in [2, Table 4], the performance of several language models have evaluated on the text8 dataset using a metric called bits per character (bpc). We believe bpc is the same as the asymptotic upper bound in this paper.
### _Encoding schemes_
We consider three schemes for the lossless compression block in Fig. 3.
#### Iii-B1 Compressing the ranks using zlib
The first scheme uses the zlib compression algorithm to encode the sequence of ranks. We refer to this scheme as LLaMA+zlib and denote the compression ratio of this scheme by \(\rho_{\text{LLaMA+zlib}}\).
#### Iii-B2 Token-by-Token Compression
The second scheme uses a token-by-token lossless compression scheme which uses a time-varying codebook to encode the token \(x_{i}\) at epoch \(i\) by using a prefix-free code assuming \(q_{i}\) to be the true distribution of the tokens. A natural choice for a prefix-free code is a Huffman code. Instead, for simplicity, we use a prefix-free code where the codeword for the token \(x_{i}\) is of length \(l_{i}=\lceil\log_{2}\frac{1}{q_{i}(x_{i})}\rceil\). A prefix-free code with this length for \(x_{i}\) is guaranteed to exist since this choice of lengths satisfies the Kraft inequality [8]. The compression ratio for this scheme, denoted by \(\rho_{\text{LLaMA+TbyT}}\), is given by
\[\rho_{\text{LLaMA+TbyT}}=\frac{\sum_{i=1}^{N_{T}}\left[\log_{2}\frac{1}{q_{i}(x _{i})}\right]}{\sum_{i=1}^{N_{T}}b_{i}}.\]
#### Iii-B3 Arithmetic Coding
The above two schemes are intuitive but their performance can be improved. A very effective way to combine the output of the LLM with a lossless compression scheme is by using arithmetic coding [4, 9]. Arithmetic coding is well suited to accept time-varying probabilities and we use \(q_{i}(x_{i})\) as the probability of token \(x_{i}\) at time in the arithmetic coding scheme. We refer to the compression ratio of this scheme as \(\rho_{\text{LLM+AC}}\). It is known that arithmetic coding is nearly optimal as a compression scheme [10, Page 115]. Hence, the compression ratio for this scheme is expected to be
\[\rho_{\text{LLM+AC}}\approx\frac{\sum_{i=1}^{N_{T}}\log_{2}\frac{1}{q_{i}(x_{ i})}}{\sum_{i=1}^{N_{T}}b_{i}}. \tag{7}\]
Clearly, \(\rho_{\text{LLaMA+zlib}}\), \(\rho_{\text{LLaMA+TbyT}}\), and \(\rho_{\text{LLM+AC}}\) also provide upper bounds on \(H(\mathbf{S})\). \(H_{ub},\rho_{\text{LLaMA+zlib}}\), \(\rho_{\text{LLaMA+TbyT}}\), and \(\rho_{\text{LLM+AC}}\) are estimated using a finite number of tokens and the statistical properties of such an estimate should be kept in mind when interpreting the results, especially since the tokens are from a very large alphabet and language model has large memory.
## IV Results
We used LLaMA-7B [6] as the large language model and SentencePiece tokenizer [11]. The tokenizer produces a dictionary of size 32000. Since the language model is trained on this tokenizer, it is imperative that this tokenizer be used in conjunction with the LLM. It should be noted that the tokenizer and the model are trained on a large corpus of text which includes uppercase letters, special characters etc. This is in contrast to many studies on estimating the entropy of English, where the input alphabet is restricted to lowercase letters such as in [1, 3, 5]. This makes it difficult to perform an entirely fair comparison between these models. By using a pretrained LLM on an input consisting only of lowercase letters, we may be unfair to the LLM.
Nevertheless, we used the text8 dataset available from [http://mattmahnoney.net/dc/text8.zip](http://mattmahnoney.net/dc/text8.zip) to benchmark the performance of LLaMA-7B with compression against other state of the art results for text compression. In [5], it is mentioned that the ZPAQ algorithm obtains the best compression ratio for the texts dataset with a compression ratio of 1.4 bits/character. In [12], the paq8h algorithm is shown to provide a compression ratio of 1.2 bits/character. To the best of our knowledge, this appears to be best performance reported. Therefore, we used these two algorithms as baselines. We did not independently run the ZPAQ or paq8h algorithms and we are quoting results from the existing literature.
The performance of LLaMA-7B is shown in Table I for 10 different batches each with 100,000 tokens. The average performance over these 1M tokens is also shown in the last row in the Table. It can be seen that using LLaMA-7B with Arithmetic Coding compression results in a compression ratio of 0.7101 bits/character. This is substantially better than the state-of-the-art results mentioned in [5] or [12] and is very close to our computed upper bound. The performance with the LLaMA+zlib algorithm and LLaMA+TbyT compression are also better than that of the known state-of-the-art results. Table I also shows the upper bound in (6). It should be noted that the upper bound on the entropy is lower than that computed by Shannon in [3], Cover and King in [1] and more recent estimates based on neural networks in [2].
The dependence of the compression performance on the memory of the LLM (\(M\)) is shown in Table II. As expected, the compression performance improves with increasing \(M\). We also observed that the inference time scaled approximately linearly with the input memory length, i.e., batches with a memory of 511 tokens ran about 16 times slower than batches with a memory of 31 tokens.
It is well known that the estimate of compression ratio can show substantial variance depending on the input text and hence, the results should be interpreted with caution. The empirical mean and standard deviation of the entropy bounds and compression ratios computed using 10 batches of 100,000 tokens are shown in Table III. We were also not able to run LLaMA-7B on the entire 100MB of the text8 dataset. So, the comparison of LLaMA-7B with that of the state-of-the-art corresponds to estimates obtained from different input sizes.
It appears that the LLaMA-7B model was trained on a corpus that included articles from Wikipedia. Since the text8 dataset is derived from Wikipedia, it is likely that our results for the text8 dataset are optimistic.
Therefore, we also tested the performance of LLaMA-7B on a recently released (May 25, 2023) book [7] under Project Gutenberg. We extracted text that corresponds to 100,000 tokens. We applied the same text pre-processing as used in the text8 dataset to clean the text from the book. The resulting text data contained only lowercase letters and space as in the text8 dataset. Table IV shows the compression performance of the LLM on the book. It can be seen that the compression ratios and the entropy upper bound are slightly higher compared to the performance on the text8 dataset; nevertheless, the asymptotic upper bound on the entropy is lower than that of currently known models given in [2, Table 4]). Similarly, the compression ratio of LLaMA-7B-based compressors are better than those of known state-of-the-art results for the text8 dataset. The compression ratio for LLaMA with arithmetic coding is only 0.8426 bits/character and is very close to the estimated upper bound on \(H(\mathbf{S})\).
To provide some insight into the comparative performance of LLaMA based compressors vis-a-vis standard text compressors, we also ran the zlib algorithm directly on the input text. The resulting compression ratio was 2.8 bits/character (shown in the last column). It is clear that the performance of LLaMA based compressors is substantially better than this. The zlib algorithm may not be optimized for compressing small text samples and hence, the compression ratio for the zlib algorithm and the LLaMA+zlib will likely improve on longer texts.
## V Acknowledgement
We would like to thank Andreas Kirsch for an email discussion about arithmetic coding that motivated us to add our results on arithmetic coding in a timely manner.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Batch & \(N_{c}\) & \(N_{T}\) & \(H_{\text{bb}}\) & \(\rho_{\text{LLaMA+zlib}}\) & \(\rho_{\text{LLaMA+TbyT}}\) & \(\rho_{\text{LLaMA+AC}}\) \\ No. & & & (bpc) & file size (bits) & (bpc) & (bpc) \\ \hline
1 & \(466,650\) & \(100,000\) & 0.6882 & 1.0513 & 0.8215 & 0.689 & \\ \hline
2 & \(461,477\) & \(100,000\) & 0.6893 & 1.0558 & 0.8242 & 0.6901 & \\ \hline
3 & \(454,599\) & \(100,000\) & 0.699 & 1.0681 & 0.8357 & 0.6999 & \\ \hline
4 & \(462,755\) & \(100,000\) & 0.6748 & 1.0346 & 0.8093 & 0.6757 & \\ \hline
5 & \(453,847\) & \(100,000\) & 0.7481 & 1.1265 & 0.8831 & 0.749 & \\ \hline
6 & \(458,252\) & \(100,000\) & 0.7218 & 1.0957 & 0.8567 & 0.7227 & \\ \hline
7 & \(451,036\) & \(100,000\) & 0.6959 & 1.0729 & 0.8353 & 0.6968 & \\ \hline
8 & \(447,953\) & \(100,000\) & 0.7092 & 1.0896 & 0.8489 & 0.7101 & \\ \hline
9 & \(462,665\) & \(100,000\) & 0.7394 & 1.1126 & 0.8713 & 0.7402 & \\ \hline
10 & \(449,621\) & \(100,000\) & 0.7269 & 1.1046 & 0.8643 & 0.7277 & \\ \hline Total & \(9,137,710\) & \(2,000,000\) & 0.7093 & 1.0812 & 0.845 & 0.7101 & 1.41 [FOOTNOTE:1]Footnote 1: This result is taken from [5] and it corresponds to the full 100MB dataset text8
\end{table} TABLE I: Results for 1MB of text from text8 dataset
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{\(M\)} & \multirow{2}{*}{\(N_{c}\)} & \multirow{2}{*}{\(N_{t}\)} & \(H_{\text{ub}}\) & \(\rho_{\text{LLaMa+lib}}\) & \(\rho_{\text{LLaMa+TyT}}\) & \(\rho_{\text{LLaMa+AC}}\) \\ & & & (bpc) & (bpc) & (bpc) \\ \hline
31 & \(508,463\) & \(115,000\) & \(1.0919\) & \(1.5316\) & \(1.2152\) & \(1.0924\) & \(2.80\) \\ \hline
127 & \(508,463\) & \(115,000\) & \(0.8973\) & \(1.3128\) & \(1.0235\) & \(0.8982\) & \(2.80\) \\ \hline
255 & \(508,463\) & \(115,000\) & \(0.8618\) & \(1.2684\) & \(0.9899\) & \(0.8627\) & \(2.80\) \\ \hline
511 & \(508,463\) & \(115,000\) & \(0.8417\) & \(1.2465\) & \(0.9711\) & \(0.8426\) & \(2.80\) \\ \hline \end{tabular} |
2310.19081 | Deep Audio Analyzer: a Framework to Industrialize the Research on Audio
Forensics | Deep Audio Analyzer is an open source speech framework that aims to simplify
the research and the development process of neural speech processing pipelines,
allowing users to conceive, compare and share results in a fast and
reproducible way. This paper describes the core architecture designed to
support several tasks of common interest in the audio forensics field, showing
possibility of creating new tasks thus customizing the framework. By means of
Deep Audio Analyzer, forensics examiners (i.e. from Law Enforcement Agencies)
and researchers will be able to visualize audio features, easily evaluate
performances on pretrained models, to create, export and share new audio
analysis workflows by combining deep neural network models with few clicks. One
of the advantages of this tool is to speed up research and practical
experimentation, in the field of audio forensics analysis thus also improving
experimental reproducibility by exporting and sharing pipelines. All features
are developed in modules accessible by the user through a Graphic User
Interface. Index Terms: Speech Processing, Deep Learning Audio, Deep Learning
Audio Pipeline creation, Audio Forensics. | Valerio Francesco Puglisi, Oliver Giudice, Sebastiano Battiato | 2023-10-29T17:04:24Z | http://arxiv.org/abs/2310.19081v1 | # Deep Audio Analyzer: a Framework to Industrialize the Research on Audio Forensics
###### Abstract
Deep Audio Analyzer is an open-source speech framework that aims to simplify the research and the development process of neural speech processing pipelines, allowing users to conceive, compare and share results in a fast and reproducible way. This paper describes the core architecture designed to support several tasks of common interest in the audio forensics field, showing possibility of creating new tasks thus customizing the framework. By means of Deep Audio Analyzer, forensics examiners (i.e. from Law Enforcement Agencies) and researchers will be able to visualize audio features, easily evaluate performances on pre-trained models, to create, export and share new audio analysis workflows by combining deep neural network models with few clicks. One of the advantages of this tool is to speed up research and practical experimentation, in the field of audio forensics analysis thus also improving experimental reproducibility by exporting and sharing pipelines. All features are developed in modules accessible by the user through a Graphic User Interface.
Speech Processing, Deep Learning Audio, Deep Learning Audio Pipeline creation, Audio Forensics.
## I Introduction
Applications of novel deep learning solutions to forensics investigations has experienced unprecedented growth in interest and obtained results [1, 2, 7, 8, 9, 10], with many researchers developing innovative algorithms and models to solve complex problems. Research is mostly data-driven and having a lot of data gives the opportunity to find new features or more forensically "fingerprints" [3, 5, 6] which are of utter importance when dealing with the fight of Artificial Intelligence generated evidences [4]. However, reproducing published experiments and results remain a significant challenge due to the needed programming skills required. This challenge is further compounded by the lack of (or an extremely limited) standardization in the way experiments are conducted. This issue results in a significant amount of time being spent by researchers trying to get other researchers' code to work, which leads to a significant waste of resources. The development of speech-processing technologies has been largely driven by open-source toolkits [11, 12, 13, 14, 15]. However, with the emergence of general-purpose deep learning libraries like TensorFlow [16] and PyTorch [17], more flexible speech recognition frameworks have emerged, such as DeepSpeech [18], RETURN [19], PyTorch-Kaldi [20], Espresso [21], Lingvo [22], Fairseq [23], ESPnet [24], NeMo [25], Asteroid [26], Speechbrain [27] and hub where scientists load trained models for others to download [28]. While it can be challenging for non-experts users to prototype new deep learning methods, as it requires knowledge of coding and environmental setup. In this paper, we aim to highlight the importance of reproducibility in data science and discuss a solution to address the problem of knowledge of programming in different languages. Deep Audio Analyzer is a tool that enables users to visualize audio features, evaluate the performance of pre-trained models, and create new audio analysis workflows by combining deep neural network models. Through the use of Deep Audio Analyzer, users can perform these features without the need to develop any code. The tool also provides dedicated modules to test state-of-the-art models on customized data and also combine models to create a new deep learning audio processing pipeline, combing for tasks such as Automatic Speech Recognition, Speech Enhancement, Speaker Separation, Speaker Verification and Voice Activity Detection. Code is available at [https://github.com/valeriopuglisi/deep-audio-analyzer](https://github.com/valeriopuglisi/deep-audio-analyzer).
The remainder of the paper is organized as follows. In Section 2, we delve into the technologies comprising the architecture of the Audio Analyzer. Section 3 reports the proposed features and modules developed in Deep Audio Analyzer. Section 4 presents the considered experiments and discusses the obtained results. Section 5 concludes the paper and proposes future works.
## II Architecture
The Architecture of Deep Learning Audio Analyzer is actually composed of a Backend service where all the artificial intelligence tasks are implemented and Front-End is written in Angular and is concerned with making Audio Analysis as easy as possible as shown in Fig.1.
Angular FrontEnd framework is concerned to make easier the development and maintenance of the platform while the Backend Flask RESTful API was chosen because it is fast to develop and is written in Python, which comprises the Artificial Intelligence libraries used to develop Deep Audio Analyzer platform.
## III Deep Audio Analyzer
### _Audio Features Visualization Module_
Through the preprocessing module of Deep Audio Analyzer, it is possible to graphically analyze all the features extracted through the application of the functions present in the librosa library [29] whose functions have been implemented in the Backend of the application (Fig.2).
The developed functions are shown in the next list :
### _Preprocessing Audio Features_
* **Linear-frequency power spectrogram:** The linear-frequency power spectrogram is an important tool in the field of audio forensics. It represents time on the X-axis, frequency in Hz on a linear scale on the Y-axis, and power in dB [30]. It is used to identify specific events in an audio recording by allowing experts to analyze the spectral characteristics. It also enables voice analysis by studying features such as pitch, formants, and harmonics, which can aid in speaker identification and voice comparison. Furthermore, the spectrogram can reveal hidden artefacts, noise, or disturbances in an audio recording, which can then be mitigated by applying appropriate filtering techniques, thereby enhancing the desired audio content. In audio authentication, the spectrogram can be instrumental in detecting signs of audio tampering or manipulation.
* **Log-frequency power spectrogram:** Such features can be obtained from a spectrogram by converting the linear frequency axis (measured in Hertz) into a logarithmic axis (measured in pitches). Its logarithmic representation of frequency content enables experts to extract unique voice characteristics, classify sounds, segment audio recordings, enhance transcription accuracy, and detect potential tampering or manipulation. The resulting representation is also called log-frequency spectrogram.
* **Chroma STFT:** Chroma STFT features are useful for analyzing the harmonic content of an audio signal and can be used in a variety of applications such as music information retrieval, audio classification, and speech recognition. They provide a way to represent the pitch content of an audio signal in a compact and efficient way and can be used to compare and classify different audio signals based on their harmonic content. Chroma STFT is a useful tool in audio signal processing for analyzing the chromatic content of an audio signal and can be used in a wide range of applications. This implementation is derived from chromagram E [32]
* **Chroma CQT :** The Constant-Q chromagram is a type of chroma feature representation commonly used in music analysis and processing. It is based on the Constant-Q transform, which is a frequency-domain transformation that uses a logarithmic frequency scale that approximates the way that humans perceive sound. [31]
* **Chroma CENS:** Computes the chroma variant "Chroma Energy Normalized" (CENS) [33]. CENS features are robust to dynamics, timbre and articulation, thus these are commonly used in audio matching and retrieval applications.
* **Melspectrogram:** Compute a mel-scaled spectrogram. If a spectrogram input S is provided, then it is mapped directly onto the mel basis by \(mel_{f}.dot(S)\). If a time-series input y, sr is provided, then its magnitude spectrogram S is first computed, and then mapped onto the mel scale by \(mel_{f}.dot(S*power)\). By default, power=2 operates on a power spectrum. [29]
* **Mel-frequency spectrogram**: Display of mel-frequency spectrogram coefficients, with custom arguments for mel filterbank construction (default is \(f_{max}=sr/2\)). Mel-frequency spectrograms are valuable in forensic audio analysis for visualizing and analyzing the characteristics of an audio recording relevant to a legal case. They help identify specific sounds, voices, recording quality, and potential tampering. By extracting features like pitch, spectral content, and temporal characteristics, it enables comparisons between different recordings to determine their common source. They are useful for speaker identification, voice matching, and background noise analysis. Mel-frequency spectrograms provide a perceptually relevant representation of audio and allow forensic analysts to determine important details about the origin and authenticity of recordings.
* **Mel-frequency cepstral coefficients (MFCCs)**: Mel-frequency cepstral coefficients (MFCCs) are a type of feature representation commonly used in audio signal processing and analysis, particularly in speech recognition and forensic audio analysis. MFCCs are derived from the Mel-frequency spectrogram, which is a spectrogram that
Fig. 1: **Angular Front End:** The User Interface of Deep Audio Analyzer is divided into pages and components in order to categorize all the functions separately. All the module of Deep Audio Analyzer is developed on different pages. **Flask Backend:** Deep Learning Audio Analyzer employs a simple software stack (i.e., Python \(\rightarrow\) PyTorch \(\rightarrow\) SpeechBrain \(\rightarrow\) HuggingFace \(\rightarrow\) Flask \(\rightarrow\) Angular) to avoid dealing with too many levels of abstraction. It is developed on top of SpeechBrain and HuggingFace directly, with external APIs that can retrieve the newest model uploaded from the SpeechBrain community and other Companies.
Fig. 2: Architecture of Audio Feature Visualization Module.
uses a frequency scale that is more aligned with human perception of sound. In forensic audio analysis, MFCCs can be used as a feature representation to compare and analyze different audio recordings. By computing the MFCCs for different segments of an audio recording, forensic audio analysts can identify characteristic patterns and features that may be relevant to a legal case.
* **Compare different DCT bases**: In audio signal processing, the discrete cosine transform (DCT) is a widely used method for transforming time-domain audio signals into a frequency-domain representation. There are different types of DCTs that use different basis functions, or sets of orthogonal functions, to represent the signal in the frequency domain. The choice of DCT basis functions depends on the specific application and the trade-offs between computational efficiency, frequency resolution, and energy compaction. In many cases, the standard DCT-II is a good choice for audio signal processing applications, but other DCT bases may be more appropriate for certain types of signals or processing tasks.
* **Root-Mean-Square (RMS)**: Compute root-mean-square (RMS) value for each frame, either from the audio samples y or from a spectrogram S.Computing the RMS value from audio samples is faster as it doesn't require an STFT calculation. However, using a spectrogram will give a more accurate representation of energy over time because its frames can be windowed, thus prefer using S if it's already available.
* **Spectral Centroid**: Compute the spectral centroid.Each frame of a magnitude spectrogram is normalized and treated as a distribution over frequency bins, from which the mean (centroid) is extracted per frame [34]. The spectral centroid is a measure used in audio forensics to characterize an audio signal, often indicating the perceived "brightness" of a sound. It aids in differentiating sounds and identifying unique voices, thus assisting in speaker identification. The spectral centroid can also reveal potential audio tampering, as inconsistencies might suggest alterations.
* **Spectral Bandwidth**: Compute \(p^{\prime}th-order\) spectral bandwidth [34]. In the realm of audio enhancement, knowledge of the spectral bandwidth can aid in developing strategies to filter out unwanted components from a recording. It also can be used as a fingerprint, the spectral bandwidth of an individual's voice can be unique. This characteristic can be analyzed to potentially match a voice to a specific person, which can prove extremely useful in forensic investigations.
* **Spectral Contrast**: Compute spectral contrast. Each frame of a spectrogram S is divided into sub-bands. For each sub-band, the energy contrast is estimated by comparing the mean energy in the top quantile (peak energy) to that of the bottom quantile (valley energy) [35]. High contrast values generally correspond to clear, narrow-band signals, while low contrast values correspond to broad-band noise.
### _Deep Learning Audio Inference Module_
Deep Audio Analyzer puts implements several audio analysis tasks using deep learning methods. The neural networks present in Deep Audio Analyzer are state of the art for the different tasks, and their implementation of them is currently supported by the SpeechBrain [27] framework that implements interfaces through which it is possible to download and execute neural network models through the HuggingFace [28] aggregator. Table I summarizes the various neural network models for the different tasks and related datasets on which they were trained and the obtained performance.
### _Pipeline Creation and Saving_
Through Deep Audio Analyzer, it is possible to perform analysis of an audio file dynamically by creating an audio analysis pipeline. Fig 3 shows the flowchart expressing the
working principle of audio analysis with Deep Audio Analyzer.
The following list represents the process of analysis and pipeline creation:
1. First, the input audio file is selected,
2. Once the file is selected, the task to be performed and consequently, the neural network model is chosen from those available for that task,
3. Once the step is defined, it is possible to execute it by means of a POST request sent to the server, which will execute the neural network in inference and return the result of the task performed to the client
4. Then it is possible to add a new step to the pipeline by choosing on which file to perform the analysis or save the pipeline from executing it later on different files
### _Pipeline Execution and Download Report_
Audio analysis pipelines that have been previously saved by the user are available in the "pipelines" section. In this section, it is possible to run a previously created pipeline, on one or more (previously recorded) audio files, or to perform a recording of an audio file using the application GUI. Once the type of input to be analyzed has been selected, it is possible to choose the type of pipeline and view its steps. Deep Audio Analyzer will then display via Frontend the results of the inferences performed on the Backend side as described in the previous paragraphs. After the analysis process, it is possible to download the reports containing the pipeline executed on each file and its results for each step that is part of it. Figure 4 describes the flowchart for pipeline execution and reporting.
## IV Experiments and Results
In this section, some examples of new pipelines and the tests performed on the performance of the neural networks available for the different tasks and the obtained results using the Deep Audio Analyzer, are described.
### _Example of new pipeline_
In this section, we present two examples of pipeline creation that can be used for investigative purposes in interception contexts. The first example concerns the transcription of speech from multiple people speaking different languages, while the second example concerns the transcription of speech in noisy environments using speech enhancement models.
#### Iv-A1 Multi-speaker Multi-Language ASR : Speech Separation + Language ID + ASR
This pipeline dedicated to transcribing speech in different languages from a maximum of three speakers is composed of the following steps:
1. Addition of the file of interest, selection of the audio separation model, and execution.
2. This step provides three output files; thus, it is necessary to create three new voice activity detection (VAD) steps one per output file of step one.
3. The three steps will each output a file where silence has been removed. It will then be necessary to introduce three additional steps with our implemented language identification and automatic speech recognition module, taking as input the audio processed with VAD in the previous steps.
#### Iv-A2 Automatic Speech Recognition in noisy environment
In forensic investigations, it is often necessary to transcribe highly noisy audio. This context is often overlooked in academic settings, as the focus is on evaluating transcriptions in clean or low-noise/echo environments. Therefore, creating a pipeline that involves the use of enhancement models and voice activity detection improves the results of automatic speech recognition. The creation of this pipeline consists of the following steps :
1. Loading the audio file and selecting the desired model for the speech enhancement task.
2. Adding a new step that takes as input the improved signal produced by step 1 and select the desired Voice Activity Detection model.
3. Adding the language identification and automatic speech recognition task, implemented by us.
### _Experiments_
Deep Audio Analyzer is an application designed as a support tool for audio analysis in forensic and also academic fields. For these reasons, several experiments have been implemented including validating models related to different tasks on different datasets through the implementation of appropriate evaluation metrics using the library [51].
#### Iv-B1 Validate performance of a pre-trained neural network on different task
The first test case consists of evaluating by means of the metrics set out in the introduction chapter, the behaviour of the various networks with datasets that are different from the training datasets, but which have been
Fig. 4: Pipeline Execution Audio Analysis Flowchart
Fig. 3: Pipeline Creation and Dynamic Audio Analysis Flowchart
realised for the same task to see how are robust the networks, varying the datasets for the same type of task. For example, to evaluate the performance of the Automatic Speech Recognition networks in Deep Audio Analyzer. The current Automatic Speech Recognition networks are trained mainly on Librispeech, Voxpopuli and Common Voice and it is possible to see the performance on different datasets in the table II by the implementation of Character Error Rate (CER) and Word Error Rate (WER) [52, 53]. We also implemented evaluation for speech separation models in table III by the implementation of five differents metrics: Source.
#### Iv-B2 User validation of the performance of the deep neural networks available for a given task
Suppose we wanted to test the quality of the neural networks available in the automatic speech recognition application for a specific language, using files not belonging to datasets. It is possible to do this by the pipeline creation section. The user creates a pipeline and add as many steps as necessary to compare the neural networks available for that language and save the pipeline. Then the users can run the various comparison pipelines (previously created) to test the behaviour of the various networks for tasks in examples that are not included in the training datasets. In this use case, it is not possible to perform a validation according to the metrics related to the task being analysed, because the relevant ground-thoughts are missing. For this reason, it was decided to predefine a perceptual quality index ranging from 1 to 10 for the tasks on the platform.
Subsequently, a pipeline was created for each task in order to compare all the available networks in a single process and then manually evaluate the performance of the individual network from 1 to 10. In this way, it is easy to sample perceptual opinions from experts in the field in order to assess robustness not only in the various existing datasets for the generic task but also with audio files recorded in real 'into the wild' situations.
### _Results_
#### Iv-C1 Model Evaluation on different Datasets
Tables II, III show the Evaluation module applied Automatic Speech recognition task and Speech Separation task with pre-trained models on some datasets. However, evaluations conducted on different datasets show that even though a network may show good performance on the training dataset, it may not perform well on other data from different contexts. With Deep Audio Analyzer is possible to upload customized trained models in order to achieve better performance on private datasets.
## V Conclusion
In this paper, we described Deep Audio Analyzer, an audio analysis platform that aims to cover the entire audio analysis process. Deep Audio Analyzer is a framework that allows the comparison of state-of-the-art models for speech analysis with no lines of code. It enables researchers to reduce the benchmarking time of different models.
|
2302.00919 | QCM-SGM+: Improved Quantized Compressed Sensing With Score-Based
Generative Models | In practical compressed sensing (CS), the obtained measurements typically
necessitate quantization to a limited number of bits prior to transmission or
storage. This nonlinear quantization process poses significant recovery
challenges, particularly with extreme coarse quantization such as 1-bit.
Recently, an efficient algorithm called QCS-SGM was proposed for quantized CS
(QCS) which utilizes score-based generative models (SGM) as an implicit prior.
Due to the adeptness of SGM in capturing the intricate structures of natural
signals, QCS-SGM substantially outperforms previous QCS methods. However,
QCS-SGM is constrained to (approximately) row-orthogonal sensing matrices as
the computation of the likelihood score becomes intractable otherwise. To
address this limitation, we introduce an advanced variant of QCS-SGM, termed
QCS-SGM+, capable of handling general matrices effectively. The key idea is a
Bayesian inference perspective on the likelihood score computation, wherein
expectation propagation is employed for its approximate computation. Extensive
experiments are conducted, demonstrating the substantial superiority of
QCS-SGM+ over QCS-SGM for general sensing matrices beyond mere
row-orthogonality. | Xiangming Meng, Yoshiyuki Kabashima | 2023-02-02T07:36:58Z | http://arxiv.org/abs/2302.00919v4 | # QCM-SGM+: Improved Quantized Compressed Sensing
###### Abstract
In realistic compressed sensing (CS) scenarios, the obtained measurements usually have to be quantized to a finite number of bits before transmission and/or storage, thus posing a challenge in recovery, especially for extremely coarse quantization such as 1-bit sign measurements. Recently Meng & Kabashima (2023) proposed an efficient quantized compressed sensing algorithm called QCS-SGM using the score-based generative models as an implicit prior. Thanks to the power of score-based generative models in capturing the rich structure of the prior, QCS-SGM achieves remarkably better performances than previous quantized CS methods. However, QCS-SGM is restricted to (approximately) row-orthogonal sensing matrices since otherwise the likelihood score becomes intractable. To address this challenging problem, in this paper we propose an improved version of QCS-SGM, which we call QCS-SGM+, which also works well for general matrices. The key idea is a Bayesian inference perspective of the likelihood score computation, whereby an expectation propagation algorithm is proposed to approximately compute the likelihood score. Experiments on a variety of baseline datasets demonstrate that the proposed QCS-SGM+ outperforms QCS-SGM by a large margin when sensing matrices are far from row-orthogonal.
Machine Learning, ICML
## 1 Introduction
In this paper, we consider the nonlinear inverse problem from noisy quantized measurements as follows (Zymnis et al., 2009)
\[\mathbf{y}=\text{Q}(\mathbf{Ax}+\mathbf{n}), \tag{1}\]
where the goal is to recover the unknown signal \(\mathbf{x}\in\mathbb{R}^{N\times 1}\) from quantized measurements \(\mathbf{y}\in\mathbb{R}^{M\times 1}\), where \(\mathbf{A}\in\mathbb{R}^{M\times N}\) is a known linear mixing matrix, \(\mathbf{n}\sim\mathcal{N}(\mathbf{n};0,\sigma^{2}\mathbf{I})\) is an i.i.d. additive Gaussian noise, and \(\text{Q}(\cdot):\mathbb{R}^{M\times 1}\rightarrow\mathcal{Q}^{M\times 1}\) is an _element-wise_ quantizer function which maps each element into a finite (or countable) set of codewords \(\mathcal{Q}\), i.e., \(y_{m}=\text{Q}(z_{m}+n_{m})\in\text{Q}\), or equivalently \((z_{m}+n_{m})\in\text{Q}^{-1}(y_{m}),m=1,2,...,M\), where \(z_{m}\) is the \(m\)-th element of \(\mathbf{z}=\mathbf{Ax}\). Same as Meng & Kabashima (2023), we consider the uniform quantizer with \(Q\) quantization bits (resolution) and a quantization interval \(\Delta>0\). The quantization codewords \(\mathcal{Q}=\{q_{r}\}_{r=1}^{2^{Q}}\) consist of \(2^{Q}\) elements, i.e.,
\[q_{r}=\left(2r-2^{Q}-1\right)\Delta/2,\ \ r=1,...,2^{Q}, \tag{2}\]
where the lower and upper thresholds associated with \(q_{r}\) are
\[l_{q_{r}} =\begin{cases}-\infty,r=1;\\ \left(r-2^{Q-1}-1\right)\Delta,\ r=2,...,2^{Q}.\end{cases} \tag{3}\] \[u_{q_{r}} =\begin{cases}\left(r-2^{Q-1}\right)\Delta,\ r=1,...,2^{Q}-1;\\ +\infty,\,r=2^{Q}.\end{cases} \tag{4}\]
In other words, \(\text{Q}^{-1}(q_{r})=[l_{q_{r}},u_{q_{r}})\) or equivalently, \(l_{q_{r}}\leq\text{Q}^{-1}(q_{r})<u_{q_{r}}\). In the extreme 1-bit case, i.e., \(Q=1\), only the sign values are observed, i.e.,
\[\mathbf{y}=\mathrm{sign}(\mathbf{Ax}+\mathbf{n}), \tag{5}\]
where the quantization codewords \(\mathcal{Q}=\{-1,+1\}\).
This problem, widely known as quantized compressed sensing (CS), is ubiquitous in various applications due to the fact that in realistic acquisition scenarios, the obtained measurements have to be quantized to a finite number of \(Q\) bits before transmission and/or storage (Zymnis et al., 2009; Dai & Milenkovic, 2011). Inevitably, the quantization process leads to leads to information loss which makes the recovery particularly challenging. As a result, a naive application of traditional CS methods by ignoring the quantization is suboptimal (Dai & Milenkovic, 2011), especially for the extreme 1-bit case. As a result, there has been an extensive study of algorithms specially designed for quantized CS by explicitly taking into account the quantization operation
(Zymnis et al., 2009; Dai and Milenkovic, 2011; Plan and Vershynin, 2012; 2013; Jacques et al., 2013; Xu and Kabashima, 2013; Xu et al., 2014; Awasthi et al., 2016; Meng et al., 2018; Jung et al., 2021; Liu et al., 2020; Liu and Liu, 2022; Zhu et al., 2022). Among various quantized CS algorithms, one key design factor is the prior distribution of unknown signal \(\mathbf{x}\) since it represents our prior knowledge of the target. Apparently, the more we know about the target, the less we need to recover it, which is also the case under the quantized scenario. As a result, while the sparsity, whether naive sparsity or structured sparsity, is the most popular prior assumption used in CS, it is too simple to capture the rich structure inherent in natural signals. In turn, one has to acquire a larger number of observations to accurately recover the desired signal.
Recently, with the advent of deep generative models (Goodfellow et al., 2014; Kingma and Welling, 2013; Rezende and Mohamed, 2015; Song and Ermon, 2019, 2020; Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol and Dhariwal, 2021) in density estimation, there has been a surge of interests in developing CS algorithms with data-driven priors (Bora et al., 2017; Hand and Joshi, 2019; Asim et al., 2020; Pan et al., 2021), i.e., the prior \(p(\mathbf{x})\) of \(\mathbf{x}\) is learned, either explicitly or implicitly, through a generative model, such as VAE (Kingma and Welling, 2013) GAN (Goodfellow et al., 2014), and the most recent score-based generative models (SGM) or diffusion models (DM) (Song and Ermon, 2019, 2020; Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol and Dhariwal, 2021). Due to the powerful representation capability of deep generative models, one is armed with much better knowledge of the target signal than the handcrafted prior such as sparsity, and therefore requires much fewer measurements in signal recovery. Notably, Meng and Kabashima (2023) proposed an efficient algorithm called QCS-SGM based on SGM (Song and Ermon, 2019, 2020) using score matching with Langevin dynamics (SMLD) (Song and Ermon, 2019, 2020) and achieves the state-of-the-art (SOTA) performances for quantized CS. Nevertheless, QCS-SGM builds on a strong assumption that the sensing matrix \(\mathbf{A}\) is (approximately) row-orthogonal. Otherwise, the likelihood score becomes intractable and QCS-SGM will degrade apparently due to such inaccurate score approximation, which severely limits the application of QCS-SGM (Meng and Kabashima, 2023) for more general sensing matrices.
In this paper, we address the limitation of QCS-SGM by extending it to general sensing matrices. The main contributions are summarized as follows.
### Contributions
* We propose an improved version of QCS-SGM, which we call QCS-SGM+, which works well for general matrices. The key idea is a Bayesian inference perspective of the likelihood score computation, i.e., the computation of the likelihood score can be viewed as a Bayesian inference problem and thus approximate inference method can be applied to yield approximate solutions. In particular, by resorting to the famous expectation propagation (Minka, 2001), we obtain an efficient approximation of the likelihood score which is more accurate than QCS-SGM when sensing matrices are no longer row-orthogonal.
* We verify the effectiveness of the proposed QCS-SGM+ in QCS on various real-world datasets including MNIST, Cifar-10, CelebA \(64\times 64\). Using the pre-trained SGM as a generative prior, the proposed QCS-SGM+ outperforms QCS-SGM and other quantized CS algorithms by a large margin when the sensing matrix is far from row-orthogonal.
### Related works
**Generative Models for CS**: This line of research learns a generative prior from data which is then used for inference (Jin et al., 2017; Bora et al., 2017; Hand and Joshi, 2019; Asim et al., 2020; Pan et al., 2021; Meng and Kabashima, 2022). Most studies follow the classic CSGM framework (Jin et al., 2017), and the difference lies in the generative models used, e.g., VAE (Kingma and Welling, 2013), GAN (Goodfellow et al., 2014), SGM or DM (Song and Ermon, 2019, 2020; Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol and Dhariwal, 2021). Note that both VAE and GAN might have large representation errors or biases due to inappropriate latent dimensionality and/or mode collapse (Asim et al., 2020). Moreover, GAN suffers from unstable training due to the adversarial training nature while VAE uses a surrogate loss. By contrast, SGM or DM (Song and Ermon, 2019, 2020; Ho et al., 2020; Nichol and Dhariwal, 2021) have proven extremely effective and even outperform the state-of-the-art (SOTA) GAN (Goodfellow et al., 2014) and VAE (Kingma and Welling, 2013) in density estimation and generation of various natural sources such as images and audios (Dhariwal and Nichol, 2021; Rombach et al., 2022).
**Generative Models for quantized CS**: Recent works Liu et al. (2020); Liu and Liu (2022) extended CSGM framework to non-linear observations including 1-bit CS. However, the main focuses of Liu et al. (2020); Liu and Liu (2022) are limited to VAE and GAN (in particular DCGAN (Radford et al., 2015)), and thus inherit the disadvantages of VAE and GAN. In the very recent work Meng and Kabashima (2023), the authors proposed a novel algorithm QCS-SGM by resorting to SGM or DM as an implicit prior, which achieves state-of-the-art (SOTA) performances for quantized CS. Unfortunately, QCS-SGM is strictly restricted to row-orthogonal sensing matrices.
## 2 Background
### Score-based Generative Models
For any continuously differentiable probability density function \(p(\mathbf{x})\), if we have access to its score function, i.e., \(\nabla_{\mathbf{x}}\log p(\mathbf{x})\), we can iteratively sample from it using Langevin dynamics (Turq et al., 1977; Bussi & Parrinello, 2007; Welling & Teh, 2011)
\[\mathbf{x}_{t}=\mathbf{x}_{t-1}+\alpha_{t}\nabla_{\mathbf{x}_{t-1}}\log p( \mathbf{x}_{t-1})+\sqrt{2\alpha_{t}}\mathbf{z}_{t},\ 1\leq t\leq T, \tag{6}\]
where \(\mathbf{z_{t}}\sim\mathcal{N}(\mathbf{z_{t}};\mathbf{0},\mathbf{I})\), \(\alpha_{t}>0\) is the step size, and \(T\) is the total number of iterations. It has been demonstrated that when \(\alpha_{t}\) is sufficiently small and \(T\) is sufficiently large, the distribution of \(\mathbf{x}_{T}\) will converge to \(p(\mathbf{x})\)(Roberts & Tweedie, 1996; Welling & Teh, 2011). In practice, the score function \(\nabla_{\mathbf{x}}\log p(\mathbf{x})\) is unknown and can be estimated using a _score network_\(\mathbf{s_{\theta}}(\mathbf{x})\) via score matching (Hyvarinen, 2006; Vincent, 2011). However, the vanilla Langevin dynamics faces a variety of challenges such as slow convergence. To address this challenge, inspired by simulated annealing (Kirkpatrick et al., 1983; Neal, 2001), Song & Ermon (2019) proposed an annealed version of Langevin dynamics, which perturbs the data with Gaussian noise of different scales and jointly estimates the score functions of noise-perturbed data distributions. Accordingly, during the inference, an annealed Langevin dynamics (ALD) is performed to leverage the information from all noise scales.
Specifically, assume that \(p_{\beta}(\tilde{\mathbf{x}}\mid\mathbf{x})=\mathcal{N}(\tilde{\mathbf{x}}; \mathbf{x},\beta^{2}\mathbf{I})\), and so we have \(p_{\beta}(\tilde{\mathbf{x}})=\int p_{\text{data}}(\mathbf{x})p_{\beta}( \tilde{\mathbf{x}}\mid\mathbf{x})d\mathbf{x}\), where \(p_{\text{data}}(\mathbf{x})\) is the data distribution. Consider we have a sequence of noise scales \(\{\beta_{t}\}_{t=1}^{T}\) satisfying \(\beta_{\text{max}}=\beta_{1}>\beta_{2}>\cdots>\beta_{T}=\beta_{\min}>0\). The \(\beta_{\min}\to 0\) is small enough so that \(p_{\beta_{\min}}(\tilde{\mathbf{x}})\approx p_{\text{data}}(\mathbf{x})\), and \(\beta_{\max}\) is large enough so that \(p_{\beta_{\max}}(\tilde{\mathbf{x}})\approx\mathcal{N}(\mathbf{x};0,\beta_{ \max}^{2}I)\). The noise conditional score network (NCSN) \(\mathbf{s_{\theta}}(\mathbf{x},\beta)\) proposed in Song & Ermon (2019) aims to estimate the score function of each \(p_{\beta_{t}}(\tilde{\mathbf{x}})\) by optimizing the following weighted sum of score matching objective
\[\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{ \theta}}\] \[\sum_{t=1}^{T}\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\mathbb{E} _{p_{\beta_{t}}(\tilde{\mathbf{x}}|\mathbf{x})}\left[\|\mathbf{s_{\theta}}( \tilde{\mathbf{x}},\beta_{t})-\nabla_{\tilde{\mathbf{x}}}\log p_{\beta_{t}}( \tilde{\mathbf{x}}\mid\mathbf{x})\|_{2}^{2}\right]. \tag{7}\]
After training the NCSN, for each noise scale, we can run \(K\) steps of Langevin MCMC to obtain a sample for each \(p_{\beta_{t}}(\tilde{\mathbf{x}})\) as
\[\mathbf{x}_{t}^{k}=\mathbf{x}_{t}^{k-1}+\alpha_{t}\mathbf{s_{\theta}}(\mathbf{ x}_{t}^{k-1},\beta_{t})+\sqrt{2\alpha_{t}}\mathbf{z}_{t}^{k},\ k=1,...,K. \tag{8}\]
The sampling process is repeated for \(t=1,2,...,T\) sequentially with \(\mathbf{x}_{0}^{1}\sim\mathcal{N}(\mathbf{x};\mathbf{0},\beta_{\max}^{2} \mathbf{I})\) and \(\mathbf{x}_{t+1}^{0}=\mathbf{x}_{t}^{K}\) when \(t<T\). As shown in Song & Ermon (2019), when \(K\rightarrow\infty\) and \(\alpha_{t}\to 0\) for all \(t\), the final sample \(\mathbf{x}_{T}^{K}\) will become an exact sample from \(p_{\beta_{\min}}(\tilde{\mathbf{x}})\approx p_{\text{data}}(\mathbf{x})\) under some regularity conditions. Later, by a theoretical analysis of the learning and sampling process of NCSN, an improved version of NCSN, termed NCSNv2, was proposed in Song & Ermon (2020) which is more stable and can scale to various datasets with high resolutions.
### QCS-SGM: Quantized CS with SGM
Recently, Meng & Kabashima (2023) proposed an efficient method called QCS-SGM for quantized CS using SGM as implicit prior. The basic idea is that, in the case of quantized measurements (1), to sample from the posterior distribution \(p(\mathbf{x}\mid\mathbf{y})\) rather than \(p(\mathbf{x})\), the Langevin dynamics in (6) becomes
\[\mathbf{x}_{t}=\mathbf{x}_{t-1}+\alpha_{t}\nabla_{\mathbf{x}_{t-1}}\log p( \mathbf{x}_{t-1}\mid\mathbf{y})+\sqrt{2\alpha_{t}}\mathbf{z}_{t},\ 1\leq t\leq T, \tag{9}\]
where the conditional (_posterior_) score \(\nabla_{\mathbf{x}}\log p(\mathbf{x}\mid\mathbf{y})\) is required. Using the Bayesian rule, the \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}\mid\mathbf{y})\) is decomposed into two terms
\[\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}\mid\mathbf{y})=\nabla_{\mathbf{x }_{t}}\log p(\mathbf{x}_{t})+\nabla_{\mathbf{x}_{t}}\log p(\mathbf{y}\mid \mathbf{x}_{t}), \tag{10}\]
which include the unconditional score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\) (termed as _prior score_ in Meng & Kabashima (2023)), and the conditional score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{y}\mid\mathbf{x}_{t})\) (termed as _likelihood score_ in Meng & Kabashima (2023)), respectively. While the prior score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\) can be easily obtained using a trained score network such as NCSN or NCSNv2, the likelihood score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{y}\mid\mathbf{x}_{t})\) is generally intractable. To tackle this difficulty, Meng & Kabashima (2023) proposed a simple yet effective approximation of \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{y}\mid\mathbf{x}_{t})\) by resorting to an uninformative prior assumption, whereby an equivalent representation of (1) can be obtained as
\[\mathbf{y}=\text{Q}\left(\mathbf{A}\mathbf{x}_{t}+\tilde{\mathbf{n}}_{t} \right), \tag{11}\]
where \(\tilde{\mathbf{n}}_{t}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}+\beta_{t }^{2}\mathbf{A}\mathbf{A}^{T})\). As a result, the intractable \(p(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t})\) can be asymptotically approximated as the following pseudo-likelihood
\[p(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t})\simeq \tilde{p}(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t})=\] \[\int\prod_{m=1}^{M}\mathbb{1}\left((z_{t,m}+\tilde{n}_{t,m})\in \mathbb{Q}^{-1}(y_{m})\right)\mathcal{N}(\tilde{\mathbf{n}}_{t};\mathbf{0}, \mathbf{C}_{t}^{-1})d\tilde{\mathbf{n}}_{t}, \tag{12}\]
where \(\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t},\ \mathbf{C}_{t}^{-1}=\sigma^{2}\mathbf{I}+\beta_{t }^{2}\mathbf{A}\mathbf{A}^{T}\) and \(z_{t,m},\tilde{n}_{t,m}\) as the \(m\)-th elements of \(\mathbf{z}_{t},\tilde{\mathbf{n}}_{t}\), respectively, and \(\mathbb{1}\left(\cdot\right)\) denotes the indicator function, i.e., it equals 1 if the event in the
argument is true and equals 0 otherwise. In particular, if \(\mathbf{A}\) is a row-orthogonal matrix such that \(\mathbf{A}\mathbf{A}^{T}\) becomes diagonal, and thus so is the covariance \(\mathbf{C}_{t}^{-1}\), Meng & Kabashima (2023) derives a closed-form solution of \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{y}\mid\mathbf{x}_{t})\) as Meng & Kabashima (2023). Unfortunately, there is no closed-form solution for general matrices \(\mathbf{A}\), which hinders the application of QCS-SGM to general matrices \(\mathbf{A}\).
## 3 QCS-SGM+: Improved QCS-SGM for general sensing matrices \(\mathbf{A}\)
### A New Perspective
To dress the intrinsic limitation of QCS-SGM (Meng & Kabashima, 2023), we introduce an improved version of it, which we call QCS-SGM+, that can deal with general sensing matrices \(\mathbf{A}\) beyond the (approximate) row-orthogonal ones. Our key observation is that obtaining the pseudo-likelihood distribution \(\tilde{p}(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t})\) (12) can be viewed as computing a partition function of the posterior distribution with respect to (w.r.t.) random variables \(\tilde{\mathbf{n}}_{t}\), where \(\mathcal{N}(\tilde{\mathbf{n}}_{t};\mathbf{0},\mathbf{C}_{t}^{-1})\) acts as a prior distribution and \(\prod_{m=1}^{M}\mathbbm{1}\left((z_{t,m}+\tilde{n}_{t,m})\in\mathsf{Q}^{-1}( y_{m})\right)\) as a factorized likelihood distribution. The computation of partition function one of the most fundamental problems in Bayesian inference and various approximate Bayesian methods have been proposed. As a result, while we cannot yield a exact closed-form solution for general (non-diagonal) covariance matrices \(\mathbf{C}_{t}^{-1}\), we can obtain an estimation of it using efficient approximate inference methods.
### Pseudo-likelihood score via EP
We resort to the well-known expectation propagation (EP) (Minka, 2001) or moment matching (Opper et al., 2005) to approximate the pseudo-likelihood distribution (or partition function) \(\tilde{p}(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t})\) (12), whereby an efficient approximation of the pseudo-likelihood score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{y}\mid\mathbf{x}_{t})\) accordingly.
Specifically, we approximate the integral in \(\tilde{p}(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t})\) (12) in three different ways as follows 1:
Footnote 1: The results might differ in a constant scaling factor which is safely ignored as it does not affect the final concerned score function.
\[\tilde{p}(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t})\approx\] \[\begin{cases}\int\prod_{m=1}^{M}\mathbbm{1}((z_{t,m}+\tilde{n}_{t,m})\in\mathsf{Q}^{-1}(y_{m}))\\ \times\mathcal{N}(\tilde{n}_{t,m};\frac{h_{m}^{F}}{\tau^{F}},\frac{1}{\tau^{F} })d\tilde{\mathbf{n}}_{t}&(a)\\ \int\prod_{m=1}^{M}\mathcal{N}(\tilde{n}_{t,m};\frac{h_{m}^{G}}{\tau^{G}}, \frac{1}{\tau^{G}})\mathcal{N}(\tilde{n}_{t};\mathbf{0},\mathbf{C}_{t}^{-1}) d\tilde{n}_{t}&(b)\\ \int\prod_{m=1}^{M}\mathcal{N}(\tilde{n}_{t,m};\frac{h_{m}^{G}}{\tau^{G}}, \frac{1}{\tau^{G}})\mathcal{N}(\tilde{n}_{t,m};\frac{h_{m}^{F}}{\tau^{F}}, \frac{1}{\tau^{F}})d\tilde{\mathbf{n}}_{t}&(c)\end{cases} \tag{13}\]
Intuitively, (13-a) approximates the correlated Gaussian \(\mathcal{N}(\tilde{\mathbf{n}}_{t};\mathbf{0},\mathbf{C}_{t}^{-1})\) with a product of independent Gaussians \(\prod_{m=1}^{M}\mathcal{N}(\tilde{n}_{t,m};\frac{h_{m}^{F}}{\tau^{F}},\frac{ 1}{\tau^{F}})\), (13-b) approximates the non-Gaussian likelihood \(\mathbbm{1}\left((z_{t,m}+\tilde{n}_{t,m})\in\mathsf{Q}^{-1}(y_{m})\right)\) with a Gaussian likelihood \(\mathcal{N}(\tilde{n}_{t,m};\frac{h_{m}^{G}}{\tau^{G}},\frac{1}{\tau^{G}})\), and (13-c) combines the two approximations together. Importantly, in contrast to the original intractable integral in (12) in the case of general matrices, all the three approximations in (13-a) lead to tractable integrals. In particular, the first approximation (13-a) results in a closed-form approximation as follows
\[\tilde{p}(\mathbf{y}|\mathbf{z}_{t}=\mathbf{A}\mathbf{x}_{t}) \approx\frac{e^{\frac{(h_{m}^{F})^{2}}{2\tau^{F}}}}{2}\Big{[}\text{erfc}(\frac{ -\tilde{u}_{y_{m}}}{\sqrt{2}})-\text{erfc}(\frac{-\tilde{l}_{y_{m}}}{\sqrt{2}}) \Big{]} \tag{14}\]
As a result, it can be calculated that the noise-perturbed pseudo-likelihood score \(\nabla_{\mathbf{x}_{t}}\log p_{\beta_{t}}(\mathbf{y}\mid\mathbf{x}_{t})\) for the quantized measurements \(\mathbf{y}\) in (1) can be computed as
\[\nabla_{\mathbf{x}_{t}}\log p_{\beta_{t}}(\mathbf{y}\mid\mathbf{x}_{t})= \mathbf{A}^{T}\mathbf{G}(\beta_{t},\mathbf{y},\mathbf{A},\mathbf{z}_{t},\mathbf{h}^ {F},\tau^{F}), \tag{15}\]
where \(\mathbf{G}(\beta_{t},\mathbf{y},\mathbf{A},\mathbf{z}_{t},\mathbf{h}^{F},\tau^{F} )=[g_{1},g_{2},...,g_{M}]^{T}\in\mathbb{R}^{M\times 1}\) with the \(m\)-th element being
\[g_{m}=-\frac{\sqrt{2\tau^{F}}\Big{[}\exp\left(-\frac{\tilde{u}_{y_{m}}^{2}}{2} \right)-\exp\left(-\frac{\tilde{l}_{y_{m}}^{2}}{2}\right)\Big{]}}{\sqrt{\pi} \Big{[}\text{erfc}(-\frac{\tilde{u}_{y_{m}}}{\sqrt{2}})-\text{erfc}(-\frac{l_{ y_{m}}}{\sqrt{2}})\Big{]}}, \tag{16}\]
where
\[\tilde{u}_{y_{m}} =-\sqrt{\tau^{F}}z_{t,m}-\frac{h_{m}^{F}}{\sqrt{\tau^{F}}}+u_{y_{m }}\sqrt{\tau^{F}}, \tag{17}\] \[\tilde{l}_{y_{m}} =-\sqrt{\tau^{F}}z_{t,m}-\frac{h_{m}^{F}}{\sqrt{\tau^{F}}}+l_{y_{m }}\sqrt{\tau^{F}}. \tag{18}\]
Consequently, the remaining task is to determine the associated parameters \((\mathbf{h}^{F},\tau^{F},\mathbf{h}^{G},\tau^{G})\) in (13). To this end, we resort to the moment matching principle of EP (Minka, 2001; Opper et al., 2005) by imposing a consistency of the associated posterior mean \(\mathbb{E}[\tilde{n}_{t,m}]\) and variance \(\mathbb{V}[\tilde{n}_{t,m}]\) of \(\tilde{n}_{t,m}\) from all the three approximations in (13), denoted as \((m_{m}^{a},\chi^{a}),(m_{m}^{b},\chi^{b})\) and \((m_{m}^{c},\chi^{c})\) for (13-a), (13-b), and
(13-c), respectively, which can be computed as follows
\[m_{m}^{a} =\frac{h_{m}^{F}}{\tau^{F}}-\frac{2\exp\left(-\frac{\tilde{u}_{m}^{2}} {2}\right)-2\exp\left(-\frac{\tilde{l}_{m}^{2}}{2}\right)}{\sqrt{2\pi\tau^{F}} \Big{[}\text{erfc}(-\frac{\tilde{u}_{m}}{\sqrt{2}})-\text{erfc}(-\frac{\tilde{l} _{m}}{\sqrt{2}})\Big{]}}, \tag{19}\] \[\chi^{a} =\frac{1}{\tau^{F}}-\frac{1}{M}\sum_{m=1}^{M}\Big{[}\frac{2 \tilde{u}_{y_{m}}\exp\left(-\frac{\tilde{u}_{y_{m}}^{2}}{2}\right)-2\tilde{l}_ {y_{m}}\exp\left(-\frac{\tilde{l}_{m}^{2}}{2}\right)}{\sqrt{2\pi}\tau^{F} \Big{[}\text{erfc}(-\frac{\tilde{u}_{y_{m}}}{\sqrt{2}})-\text{erfc}(-\frac{ \tilde{l}_{m}}{\sqrt{2}})\Big{]}}\] \[+\big{(}m_{m}^{a}-\frac{h_{m}^{F}}{\tau^{F}}\big{)}^{2}\Big{]},\] (20) \[m_{m}^{b} =[(\tau^{G}\mathbf{I}+\mathbf{C}_{t})^{-1}\mathbf{h}^{G}]_{m},\] (21) \[\chi^{b} =\text{Tr}[(\tau^{G}\mathbf{I}+\mathbf{C}_{t})^{-1}]/M,\] (22) \[m_{m}^{c} =\frac{h_{m}^{G}+h_{m}^{F}}{\tau^{G}+\tau^{F}},\] (23) \[\chi^{c} =\frac{1}{\tau^{G}+\tau^{F}}, \tag{24}\]
where \(\text{erfc}(z)=\frac{2}{\sqrt{\pi}}\int_{-\infty}^{z}e^{-t^{2}}dt\) is the complementary error function (erfc) of the standard normal distribution.
In the special case of 1-bit quantization, i.e., sign measurements as in (5), the results of \((m_{m}^{a},\chi^{a})\) in (19, 20) and \(g_{m}\) in (16) can be further simplified as follows
\[m_{m}^{a} =\frac{h_{m}^{F}}{\tau^{F}}+\frac{2y_{m}e^{-\frac{\tilde{l}^{2}}{ 2}}}{\sqrt{2\pi\tau^{F}}\text{erfc}(\frac{y_{m}\tilde{l}}{\sqrt{2}})}, \tag{25}\] \[\chi_{m}^{a} =\frac{1}{\tau^{F}}-\frac{1}{M}\sum_{m=1}^{M}\Big{[}(m_{m}^{a}- \frac{h_{m}^{F}}{\tau^{F}})^{2}-\frac{2y_{m}\tilde{l}e^{-\frac{\tilde{l}^{2}} {2}}}{\sqrt{2\pi\tau^{F}}\text{erfc}(\frac{y_{m}\tilde{l}}{\sqrt{2}})}\Big{]},\] (26) \[g_{m} =\frac{y_{m}\sqrt{2\tau^{F}}e^{-\frac{\tilde{l}^{2}}{2}}}{\sqrt{ \pi}\text{erfc}(\frac{y_{m}\tilde{l}}{\sqrt{2}})} \tag{27}\]
where \(\tilde{l}=-\sqrt{\tau^{F}}z_{t,m}-\frac{h_{m}^{F}}{\sqrt{\tau^{F}}}\). Subsequently, imposing the following moment-matching conditions, i.e.,
\[m_{m}^{a} =m_{m}^{b}=m_{m}^{c}, \tag{29}\] \[\chi^{a} =\chi^{b}=\chi^{c}, \tag{30}\]
one can obtain \((\mathbf{h}^{F},\tau^{F},\mathbf{h}^{G},\tau^{G})\) iteratively, and hence the targeted pseudo-likelihood score (15) afterwards.
### Qcs-Sgm+
By combining the pseudo-likelihood score (15) approximated via EP and the prior score from SGM, we readily obtain the improved version of QCS-SGM, dubbed as QCS-SGM+, using the annealed Langevin dynamics (ALD) (Song and Ermon, 2019), as shown in Algorithm 1.
```
Input:\(\{\beta_{t}\}_{t=1}^{T}\), \(\epsilon,K,\mathbf{y},\mathbf{A}\), \(\sigma^{2}\), quantization codewords \(\mathcal{Q}\) and thresholds \(\{[l_{q},u_{q})|q\in\mathcal{Q}\}\) Initialization:\(\mathbf{x}_{1}^{0}\sim\mathcal{U}\;(0,1)\) for\(t=1\)to\(T\)do \(\alpha_{t}\leftarrow\epsilon\beta_{t}^{2}/\beta_{T}^{2}\) for\(k=1\)to\(K\)do Draw \(\mathbf{k}_{t}^{k}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) for\(it=1\)to\(IterEP\)do Initialization:\(\mathbf{h}^{F},\tau^{F},\mathbf{h}^{G},\tau^{G}\) \(\mathbf{h}^{G}=\frac{\mathbf{m}^{a}}{\chi^{a}}-\mathbf{h}^{F}\) \(\tau^{G}=\frac{1}{\chi^{a}}-\tau^{F}\) \(\mathbf{h}^{F}=\frac{\mathbf{m}^{b}}{\chi^{a}}-\mathbf{h}^{G}\) \(\tau^{F}=\frac{1}{\chi^{b}}-\tau^{G}\) Compute \(\nabla_{\mathbf{x}_{t}}\log p_{\beta_{t}}(\mathbf{y}\mid\mathbf{x}_{t})\) as (15) \(\mathbf{x}_{t}^{k}=\mathbf{x}_{t}^{k-1}+\alpha_{t}\Big{[}\mathbf{s}_{0}( \mathbf{x}_{t}^{k-1},\beta_{t})+\nabla_{\mathbf{x}_{t}}\log p_{\beta_{t}}( \mathbf{y}\mid\mathbf{x}_{t})\Big{]}+\sqrt{2\alpha_{t}}\mathbf{z}_{t}^{k}\) \(\mathbf{x}_{t+1}^{0}\leftarrow\mathbf{x}_{t}^{K}\) Output:\(\mathbf{\hat{x}}=\mathbf{x}_{T}^{K}\)
```
**Algorithm 1** QCS-SGM+
It is important to note that while it seems from (31, 22) that a matrix inverse \((\tau^{G}\mathbf{I}+\mathbf{C}_{t})^{-1}\) is needed in each iteration every \(\mathbf{C}_{t}\), this is in fact not necessary since there exists one efficient implementation method using singular value decomposition (SVD) similar to Meng and Kabashima (2022). Specifically, denote \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\) as the SVD of \(\mathbf{A}\) and \(\Sigma^{2}\) as the element-wise square of singular values, i.e., diagonal elements of \(\mathbf{\Sigma}\), then after some algebra, it can be shown that the terms involving a matrix inverse can be efficiently computed as follows
\[\mathbf{m}^{b} =(\tau^{G}\mathbf{I}+\mathbf{C}_{t})^{-1}\mathbf{h}^{G},\] \[=\mathbf{U}\text{diag}\Big{(}\frac{\sigma^{2}+\beta_{t}^{2} \Sigma^{2}}{\tau_{G}(\sigma^{2}+\beta_{t}^{2}\Sigma^{2})+1}\Big{)}\mathbf{U}^{ T}\mathbf{h}^{G},\] \[\chi^{b} =\text{Tr}[(\tau^{G}\mathbf{I}+\mathbf{C}_{t})^{-1}]/M\] \[=\text{Tr}\Big{[}\mathbf{U}\text{diag}\Big{(}\frac{\sigma^{2}+ \beta_{t}^{2}\Sigma^{2}}{\tau_{G}(\sigma^{2}+\beta_{t}^{2}\Sigma^{2})+1}\Big{)} \mathbf{U}^{T}\Big{]}/M\] \[=<\frac{\sigma^{2}+\beta_{t}^{2}\Sigma^{2}}{\tau_{G}(\sigma^{2}+ \beta_{t}^{2}\Sigma^{2})+1}> \tag{31}\]
which only needs to replace the values of \(\beta_{t},\tau_{G}\) for different iterations. Hence, the main computational burden lies in the SVD of sensing matrix \(\mathbf{A}\), but only one time is required. On the other hand, it is empirically shown that the QCS-SGM+ converges fast with a small number \(IterEP\) of EP iterations yields very good results, as demonstrated in Section 4.
## 4 Experiments
In this section, we empirically demonstrate the efficacy of the proposed QCS-SGM+ in various scenarios for general sensing matrices. In particular, similar as Rangan et al. (2019), we consider a broader class of random matrices called right-orthogonally invariant matrices with different condition numbers. When the condition number is 1, it reduces to the i.i.d. Gaussian matrix \(\mathbf{A}\), i.e., \(A_{ij}\sim\mathcal{N}(0,1/M)\) considered in Meng and Kabashima (2023), which is approximately row-orthogonal. However, for larger condition numbers \(\gg 1\), it is far from row-orthogonal. Therefore, our main goal is to verify the effectiveness of the proposed QCS-SGM+ for right-orthogonally invariant matrices with large condition numbers.
**Datasets**: Same as Meng and Kabashima (2023), we consider three popular datasets: MNIST (LeCun and Cortes, 2010), Cifar-10 (Krizhevsky and Hinton, 2009), and CelebA (Liu et al., 2015), and the high-resolution Flickr Faces High Quality (FFHQ) (Karras et al., 2018). MNIST (LeCun and Cortes, 2010) are grayscale images of size \(28\times 28\) pixels so that the input dimension for MNIST is \(N=28\times 28=784\) per image. Cifar-10 (Krizhevsky and Hinton, 2009) consists of natural RGB images of size \(32\times 32\) pixels, resulting in \(N=32\times 32\times 3=3072\) inputs per image. For CelebA dataset (Liu et al., 2015), we cropped each face image to a \(64\times 64\) RGB image, resulting in \(N=64\times 64\times 3=12288\) inputs per image. The FFHQ are high-resolution RGB images of size \(256\times 256\), so that \(N=256\times 256\times 3=196608\) per image. Note that to evaluate the out-of-distribution (OOD) performance, we also cropped FFHQ to size \(64\times 64\), as OOD samples for CelebA. All images are normalized to range \([0,1]\).
**QCS-SGM+**: For fair of comparison, same as QCS-SGM (Meng and Kabashima, 2023), we adopt the NCSNv2 (Song and Ermon, 2020) in all cases. Specifically, for MNIST, we train a NCSNv2 (Song and Ermon, 2020) model on the MNIST training dataset with a similar training set up as Cifar10 in Song and Ermon (2020), while we directly use the for Cifar-10, and CelebA, and FFHQ, which are available in this Link. Therefore, the prior score can be estimated using these pre-trained NCSNv2 models. After observing the quantized measurements as (1), we can infer the original \(\mathbf{x}\) by posterior sampling via QCS-SGM+ in Algorithm 1. It is important to note that, _we select images \(\mathbf{x}\) that are unseen by the pre-traineirec SGM models_.
### 1-bit Quantization
In this subsection, we perform experiments on the extremely coarse quantization case, i.e., 1-bit measurements. Specifically, we consider images of the MNIST (LeCun and Cortes, 2010) and CelebA datasets (Liu et al., 2015) in the same setting as Meng and Kabashima (2023). For comparison, apart from QCS-SGM+, we also show results of QCS-SGM (Meng and Kabashima, 2023), BIPG (Liu et al., 2020), and OneShot (Liu and Liu, 2022). For BIPG, and OneShot, we follow the default setting as the open-sourced code of Liu and Liu (2022). For QCS-SGM, we follow exactly the same setting as their open-sourced code (Meng and Kabashima, 2023)
Figure 1: Quantitative comparisons of QCS-SGM+ (proposed) and the original QCS-SGM based on different metrics for 1-bit MNIST, CIFAR10, and CelebA. QCS-SGM+ outperforms QCS-SGM when sensing matrix \(\mathbf{A}\) is ill-defined, i.e., with a large condition number.
Figure 4: Random samples of MNIST and CelebA with different random initializations using QCS-SGM+ when condition number of \(\mathbf{A}\) is 1000.
Figure 3: Typical reconstructed images on Cifar10 for different number of bits when condition number of \(\mathbf{A}\) is 1000. \(M=2000,\sigma=0.001\)
Figure 2: Typical reconstructed images from 1-bit measurements on MNIST and CelebA when condition number of \(\mathbf{A}\) is 1000. QCS-SGM+ recovers original images from 1-bit measurements even when \(M\ll N\). Compared to other methods, QCS-SGM+ recovers more natural images with good perceptual quality and outperforms the original QCS-SGM.
First, we evaluate the convergence performances of the proposed QCS-SGM+ with different numbers of EP iterations and compare them with the original QCS-SGM. The results are shown in Figure 1. It can be seen that QCS-SGM+ converges well for small number of EP iterations on all datasets of MNIST, Cifar10, and CelebA. More importantly, when the condition number of \(\mathbf{A}\) is large, QCS-SGM+ apparently outperforms QCS-SGM under different metrics, which demonstrates its advantage and applicability for general matrices. A comparision of QCS-SGM+ and QCS-SGM for different \(M\) is shown in Figure 5, which also shows the superiority of QCS-SGM+.
The typical reconstructed images from 1-bit measurements with fixed \(M\ll N\) are shown in Figure 2 for both MNIST and CelebA. It can be seen that even when the condition number of 1000 and thus \(\mathbf{A}\) is far from row-orthogonal, the QCS-SGM+ can still faithfully recover the images from 1-bit measurements with \(M\ll N\) measurements while other methods, including QCS-SGM, might fail or only recover quite vague images.
**Multiple samples \(\&\) Uncertainty Estimates**: Same as QCS-SGM, as one kind of sampling method, QCS-SGM+ can yield multiple samples with different random initialization so that we can easily obtain confidence intervals or uncertainty estimates of the reconstructed results, as shown in Figure 4.
### Multi-bit Quantization
Then, we evaluate the efficacy of QCS-SGM+ in the case of multi-bit quantization, e.g., 2-bit, 3-bit. The results on Cifar-10 and CelebA are shown in Figure 3 with a comparison with QCS-SGM. As expected, with the increase of quantization resolution, the reconstruction performances get better. For the same number of quantization bit, QCS-SGM+ outperforms QCS-SGM.
## 5 Conclusion
In this paper, to address the limitation of QCS-SGM to row-orthogonal matrices, we propose an improved version of QCS-SGM called QCS-SGM+. By viewing the likelihood computation as a Bayesian inference problem, QCS-SGM+ efficiently approximates the intractable likelihood score using the well-known expectation propagation (EP) algorithm. To verify the effectiveness of QCS-SGM, we conducted experiments on a variety of baseline datasets, which demonstrate that the proposed QCS-SGM+ outperforms QCS-SGM by a large margin when sensing matrices are far from row-orthogonal. One limitation of QCS-SGM+ is that it requires iterative EP message passing, which is computationally slower than QCS-SGM, although we propose an efficient implementation via SVD. As future work, it is important to further reduce the complexity of QCS-SGM+ or search for other kinds of more efficient alternative methods.
|
2305.02171 | Continual Reasoning: Non-Monotonic Reasoning in Neurosymbolic AI using
Continual Learning | Despite the extensive investment and impressive recent progress at reasoning
by similarity, deep learning continues to struggle with more complex forms of
reasoning such as non-monotonic and commonsense reasoning. Non-monotonicity is
a property of non-classical reasoning typically seen in commonsense reasoning,
whereby a reasoning system is allowed (differently from classical logic) to
jump to conclusions which may be retracted later, when new information becomes
available. Neural-symbolic systems such as Logic Tensor Networks (LTN) have
been shown to be effective at enabling deep neural networks to achieve
reasoning capabilities. In this paper, we show that by combining a
neural-symbolic system with methods from continual learning, LTN can obtain a
higher level of accuracy when addressing non-monotonic reasoning tasks.
Continual learning is added to LTNs by adopting a curriculum of learning from
knowledge and data with recall. We call this process Continual Reasoning, a new
methodology for the application of neural-symbolic systems to reasoning tasks.
Continual Reasoning is applied to a prototypical non-monotonic reasoning
problem as well as other reasoning examples. Experimentation is conducted to
compare and analyze the effects that different curriculum choices may have on
overall learning and reasoning results. Results indicate significant
improvement on the prototypical non-monotonic reasoning problem and a promising
outlook for the proposed approach on statistical relational learning examples. | Sofoklis Kyriakopoulos, Artur S. d'Avila Garcez | 2023-05-03T15:11:34Z | http://arxiv.org/abs/2305.02171v1 | # Continual Reasoning: Non-monotonic Reasoning in Neurosymbolic AI using Continual Learning
###### Abstract
Despite the extensive investment and impressive recent progress at reasoning by similarity, deep learning continues to struggle with more complex forms of reasoning such as non-monotonic and commonsense reasoning. Non-monotonicity is a property of non-classical reasoning typically seen in commonsense reasoning, whereby a reasoning system is allowed (differently from classical logic) to _jump to conclusions_ which may be retracted later, when new information becomes available. Neural-symbolic systems such as Logic Tensor Networks (LTN) have been shown to be effective at enabling deep neural networks to achieve reasoning capabilities. In this paper, we show that by combining a neural-symbolic system with methods from continual learning, LTN can obtain a higher level of accuracy when addressing non-monotonic reasoning tasks. Continual learning is added to LTNs by adopting a curriculum of learning from knowledge and data with recall. We call this process _Continual Reasoning_, a new methodology for the application of neural-symbolic systems to reasoning tasks. Continual Reasoning is applied to a prototypical non-monotonic reasoning problem as well as other reasoning examples. Experimentation is conducted to compare and analyze the effects that different curriculum choices may have on overall learning and reasoning results. Results indicate significant improvement on the prototypical non-monotonic reasoning problem and a promising outlook for the proposed approach on statistical relational learning examples.
Neural-Symbolic Systems, Continual Learning, Non-monotonic Reasoning, Logic Tensor Networks 202317th International Workshop on Neural-Symbolic Learning and Reasoning, 3-5 July, 2023, Sienna, Italy
## 1 Introduction
The combination of machine learning and symbolic reasoning, now embodied by the area known as neurosymbolic AI, has been a developing field of research since the early days of AI. Recent advancements in deep learning allowed for a surge of interest in this particular type of models. Many variations of neural-symbolic (NeSy) models have surfaced in the past few years, showing the advantages of NeSy systems at reasoning and learning with increased explainability, data efficiency and generalization in comparison with other deep learning models [1, 2, 3, 4].
In this paper we propose **Continual Reasoning**, a new paradigm of learning for NeSy models to achieve _non-monotonic reasoning_ (NMR). The core principle of Continual Reasoning states that reasoning tasks, especially those of a non-monotonic nature, should be addressed by learning from data and knowledge in a multi-stage curriculum of training. We illustrate this learning paradigm using a combination of Logic Tensor Networks (LTN) [5], a NeSy framework
capable of simulating First-Order Logic (FOL), and methodologies borrowed from Continual Learning (CL) for deep learning [6]. LTN is chosen for its ability to constrain the loss calculations of a deep learning system based on symbolic knowledge defined in FOL and its effectiveness in dealing with both typical deep learning and reasoning tasks [5, 7, 8]. CL, that is, the sequential learning of knowledge, without forgetting, from data that may no longer be available, will be shown to implement non-monotonicity in LTNs efficiently, when adopting an appropriate curriculum learning. Continual Reasoning combining LTN and CL aims to address the difficulties that many NeSy models have when dealing with non-monotonic tasks.
We apply and evaluate Continual Reasoning on an exemplar NMR task (the birds and penguins example), on the Smokers and Friends statistical relational reasoning task [9], and on a Natural Language Understanding (NLU) task that contains NMR (from the bAbI dataset [10]). Results indicate that a considerable increase in accuracy can be achieved in comparison with a single-stage curriculum of learning.
The remainder of this paper is organised as follows. In Section \(2\), we discuss the challenges faced by previous approaches to NMR. In Section \(3\), we introduce the Continual Reasoning methodology and two general approaches to curriculum design. In Section \(4\), we analyze the experimental results. Section \(5\) concludes the paper and discusses directions for future work.
## 2 Background
A common scenario to explain NMR is the _Penguin Exception Task_ (PET) [11, 12], which can be defined in simple terms as: _In a group of animals, there exist birds and non-birds. It is known that normally all birds fly, and that all non-birds do not fly. However, it is also known that penguins are animals that are birds, but do not fly._ In First-Order Logic (FOL), the PET can be defined using axioms such as \(\forall X(is\_bird(X)\to can\_fly(X))\) and \(\exists X(is\_ penguin(X)\wedge is\_bird(X))\), etc. The idea is that in the absence of further information, it is reasonable to assume that all birds can fly. Although, when faced with information about penguins as an exception to the rule, one would like to retract the previous conclusion. In monotonic FOL, however, retracting a conclusion is not possible. Thus, in classical logic, the PET becomes unsolvable due to the contradiction that may arise from \(can\_fly(X)\) and \(\neg can\_fly(X)\). The PET is unsolvable also in traditional _logic programming languages_, such as PROLOG [13]. In order to address the problem, many non-monotonic approaches have been developed, including Moore's Autoepistemic logic, McCarthy's Circumscription, Reiter's Default Logic and in logic programming with negation by failure [12]. In autoepistemic logic, certain rules can be adjusted to include an exception: \(\forall_{X}\ (is\_bird(X)\land\neg is\_penguin(X)\to can\_fly(X))\). However, the need to be explicit in including all exceptions makes this approach computationally expensive (considering that there are other birds that do not fly, e.g. ostriches). Circumscription and logic programming with negation by failure, on the other hand, find a solution to the problem by introducing the predicate \(abnormal\), to indicate an exceptional case. The above rules would be re-written as \(\forall_{X}\ (is\_bird(X)\land\neg is\_abnormal(X))\to can\_fly(X)\) along with a rule to state that penguins are abnormal birds. Other exceptions would then be added as needed without changing the original rule. Unfortunately, this approach does not adapt well to exceptions to the exceptions such as an abnormal penguins (a hypothetical super-penguin that is capable of flying).
At present, there is a tension between the above attempts to formalizing non-monotonicity and large-scale data-driven approaches based on neural networks and natural language that are efficient but lack any formalization. In this paper we seek to investigate approaches to solving PET and other simple examples that can be formalized but that work using the same tools as the large-scale network models. Work has been conducted to formalize NMR in neural networks starting with the Connectionist Inductive Learning and Logic Programming System (CILP) [14], later developed into a system for statistical relational learning. More recently, the Differentiable Inductive Logic Programming (\(\partial\)ILP) approach [15] was proposed, addressing cycles through negation. Probabilistic approaches have also been developed which can implement a form of non-monotonicity or at least avoid the problems of classical logic by assigning probabilities to beliefs expressed as Horn clauses, e.g. DeepProbLog [16]. In this paper, rather than mapping symbolic representations into neural networks and vice-versa, we are interested in the interplay between learning and reasoning as part of a curriculum. We focus on the Logic Tensor Network (LTN) [5] because it is a highly modular NeSy framework applicable in principle to any underlying neural network model and based on the canonical, highly expressive FOL language. Additionally, the LTN has shown promise for learning in continual mode [17].
The LTN relies on two main ideas, the _grounding_ of predicates and logical axioms into vectors and _Real Logic_ which maps the satisfiability of the logical axioms to a real number in the interval {0,1} thus enabling viewing satisfiability as optimization. Given a knowledge base of FOL axioms \(\mathcal{K}\), the LTN grounds every variable \(X\) to a vector representation \(\mathcal{G}(X)=\langle x_{1}...x_{k}\rangle\in R^{k}\), and every predicate \(P\) to a neural network \(\mathcal{G}(p)\rightarrow[0,1]\).1 The application of _Real Logic_ uses differentiable fuzzy logic to calculate the truth value of any LTN rule in the usual way. The satisfiability (\(sat\)), i.e. the aggregated truth value of the knowledge base, is then used in the loss function, with \(Loss=1-sat\).
Footnote 1: A note on terminology: the LTN framework treats FOL axioms in a slightly different way than logic programming. A grounding creates a direct connection with data, mapping a variable to a specific partition of the data. For this reason, we use the term _rules_ instead of axioms when referring to the FOL knowledge base defined in LTN. The FOL axiom \(\forall_{X}\)\(is\_bird(X)\to can\_fly(X)\) is defined in LTN as the rule \(\forall_{Animals}\)\(is\_bird(Animals)\Rightarrow can\_fly(Animals)\), where _Animals_ is the set of vector groundings for all animals in the data. This makes LTN a typed FOL language. If we wish to declare rules that only apply to a subset of _Animals_, we can do this in LTN using e.g. \(\forall_{Norm\_Birds}\)\(is\_bird(Norm\_Birds)\), where _Norm_Birds_ consists only of the vector representations for birds, which is a subset of _Animals_. This excludes other subsets of animals, e.g. _Penguins_ or _Cows_. For the definition of the PET used in LTN, see Appendix A.
## 3 Method
**Continual Reasoning** is proposed as a novel methodology, addressing reasoning tasks with a combination of NeSy models and a curriculum of training. In CL, a multi-task dataset is split along the different tasks, so that the model can be trained on each subset of data at each stage of the curriculum, with the aim to learn new tasks without forgetting old ones. In the context of NeSy models where tasks and knowledge are mostly represented at the symbolic level, we treat the aforementioned splitting of data as a division of the symbolic knowledge along a series of stages, which constitutes our curriculum of learning. In doing so, we rely on the neural networks of the NeSy models to learn new knowledge without forgetting previously learned
knowledge, adjusting their beliefs about previously learned knowledge to allow for the new knowledge to be mapped to true without creating an inconsistency. Specifically, when using the LTN as our NeSy model, a knowledge base (KB) of FOL rules is separated into multiple stages for learning. For example, consider a KB consisting of facts \(a(X),b(X),c(X)\), and rules \(a(X)\Rightarrow d(X)\) and \(b(x)\wedge c(X)\Rightarrow d(X)\). A split into three stages might be: (1) train on the facts; (2) train on \(a(X)\Rightarrow d(X)\) and recall fact \(a(X)\); (3) train on \(b(x)\wedge c(X)\Rightarrow d(X)\). All facts and rules are assumed to be universally quantified. Our experiments will show, as one would expect, that the choice of curriculum, i.e the specific sequence in which the rules are learned and the facts are recalled, can affect the outcome. It becomes apparent that while in traditional machine learning all data is treated equally as being i.i.d. (although recent work around out-of-distribution (OOD) learning has started to question this assumption [18]), in reasoning tasks, especially NMR, the order in which knowledge is learned matters (in addition to the data split already identified as important in OOD learning).
Thus, we focus on two core requirements for the choice of curriculum. The first relies on the approach commonly applied in CL where data is split into separate tasks [6]. This can be applied in Continual Reasoning by treating each predicate as an individual task and training any rule aimed at learning about said predicate in a single stage of the curriculum. We call this _Task Separation_. In our previous example, we would split the KB into four stages: (1) learn \(a(X)\); (2) \(b(X)\); (3) \(c(X)\); and (4) learn about \(d(\_), training on both rules. The second requirement takes inspiration from work conducted with knowledge graphs and lifelong learning projects such as NELL [19], in which we aim to "build up" from atomic knowledge (i.e. facts) and augment knowledge by abiding to new rules. In Continual Reasoning, we can accomplish this by giving priority to learning propositional rules, and rules that are directly tied to labelled data. Following this, we aim to use rules that extend the learned domain beyond what is available to more abstract concepts. This is known as _Knowledge Completion_. Using again our previous example, to satisfy both requirements we would split the KB into two stages: (1) train \(a(X)\), \(b(X)\) and \(c(X)\); (2) learn \(a(X)\Rightarrow d(X)\) and \(b(x)\wedge c(X)\Rightarrow d(X)\).
To be able to do the above using neural networks, we must address the core issue found in CL, often referred to as _catastrophic forgetting_, i.e. when the process of gradient descent leads the neural network to forget previously learned data by conforming entirely to newly provided data. To address this problem, we apply a common CL technique of _rehearsal_[6]. Rehearsal is the process by which previously seen data is sampled and recalled in the current stage of learning. For Continual Reasoning, since our knowledge is represented in FOL, in each stage of learning, we recall a random set of previously learned knowledge, such as \(a(X)\) earlier, to be learned along with the current knowledge.
For our analysis, we compare the task separation and knowledge completion curricula to a _Baseline_, where all knowledge is learned in a single stage, and a _Random_ curriculum, where the KB split is randomly selected for each stage. To allow for effective comparison, all curricula, apart from the baseline, are composed of three stages. These comparisons are applied to the PET as a prototypical NMR task to show their benefits. In addition, to show the effectiveness of Continual Reasoning on other types of reasoning problems, we apply it to the Smokers and Friends task[5, 20, 9] and to Task 1 of the bAbI dataset [10] in what follows.
## 4 Results
Penguin Exception Task (PET): For the PET, we examine the behaviour of the LTN model throughout the curriculum of training, paying particular attention to three distinct types of reasoning that are necessary for success. First, we have knowledge that can be learned through induction with _one-hop reasoning_, such as determining that all normal birds fly: \(\forall_{Norm\_Birds}\ can\_fly(Norm\_Birds)\), and that all penguins are birds: \(\forall_{Penguins}\ is\_bird(Penguins)\). Second, we have _two-hop reasoning_ when determining that all penguins should be able to fly, \(\forall_{Penguins}\ can\_fly(Penguins)\), because they are birds. This is an instance of _jumping to a conclusion_ in the absence of further information. Lastly, we contradict this conclusion with our final learning stage for which we expect to conclude non-monotonically that penguins in fact do not fly, \(\forall_{Penguins}\ \neg\ can\_fly(Penguins)\). We use these four FOL statements as queries in the analysis of our curricula of learning by measuring their LTN satifiability over time (Table 1).
The results indicate that the task separation curriculum performs better than the other curricula, with the LTN able to correctly distinguish between all types of animals, as well as learn that normal birds can fly, while penguins, although still classified as birds, do not fly. The knowledge completion curriculum also performs to a high satisfiability for each of the queries.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Curriculum & Rules & Stage 1 & Stage 2 & Stage 3 \\ \hline \multirow{4}{*}{Baseline} & is\_bird(Normal\_Birds) & & & \(97.1\%\pm 0.11\%\) \\ & is\_bird(Penguins) & - & - & \(61.8\%\pm 0.00\%\) \\ & can\_fly(Birds) & & & \(\mathbf{96.2\%\pm 0.33\%}\) \\ & not(can\_fly(Penguins)) & & & \(62.8\%\pm 0.00\%\) \\ \hline \multirow{4}{*}{Random} & is\_bird(Normal\_Birds) & \(61.8\%\pm 47.8\%\) & \(88.2\%\pm 33.0\%\) & \(97.7\%\pm 0.02\%\) \\ & is\_bird(Penguins) & \(54.5\%\pm 45.9\%\) & \(58.2\%\pm 48.3\%\) & \(67.1\%\pm 21.7\%\) \\ & can\_fly(Birds) & \(28.5\%\pm 43.9\%\) & \(65.7\%\pm 49.1\%\) & \(90.5\%\pm 7.06\%\) \\ & not(can\_fly(Penguins)) & \(71.2\%\pm 43.7\%\) & \(44.0\%\pm 48.4\%\) & \(79.7\%\pm 16.8\%\) \\ \hline \multirow{4}{*}{KC} & is\_bird(Normal\_Birds) & \(99.9\%\pm 0.01\%\) & \(99.9\%\pm 0.01\%\) & \(99.9\%\pm 0.01\%\) \\ & is\_bird(Penguins) & \(22.5\%\pm 24.3\%\) & \(98.9\%\pm 2.22\%\) & \(99.1\%\pm 1.16\%\) \\ & can\_fly(Birds) & \(57.6\%\pm 6.21\%\) & \(99.1\%\pm 1.94\%\) & \(91.9\%\pm 7.29\%\) \\ & not(can\_fly(Penguins)) & \(41.4\%\pm 4.81\%\) & \(2.64\%\pm 5.74\%\) & \(78.7\%\pm 43.5\%\) \\ \hline \multirow{4}{*}{TS} & is\_bird(Normal\_Birds) & \(99.9\%\pm 0.00\%\) & \(99.9\%\pm 0.00\%\) & \(\mathbf{99.9\%\pm 0.01\%}\) \\ & is\_bird(Penguins) & \(99.8\%\pm 0.02\%\) & \(99.9\%\pm 0.01\%\) & \(\mathbf{99.5\%\pm 0.32\%}\) \\ & can\_fly(Birds) & \(53.9\%\pm 5.53\%\) & \(99.9\%\pm 0.00\%\) & \(84.7\%\pm 2.21\%\) \\ & not(can\_fly(Penguins)) & \(53.1\%\pm 5.25\%\) & \(0.01\%\pm 0.01\%\) & \(\mathbf{99.7\%\pm 0.25\%}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy for each Curriculum Choice for the Penguin Exception Task (PET). Baseline: all rules are learned at once. Random: random split of rules along 3 stages. Task Separation (TS): divide rules according to task. Knowledge Completion (KC): divide rules to train facts before general rules. Best performing overall performance can be found in the task separation curriculum.
However, in comparison with task separation, the knowledge completion curriculum is less robust, and in our experimentation led to one failure case, in which penguins were misclassified as normal birds, and therefore could fly.
When analyzing the queries throughout the training stages, we can identify changes that show that the LTN has the desired behaviour, including _jumping to conclusions_ and _belief revision._ Specifically, in the second stage of both curricula, the LTN is trained to infer that penguins are birds, as well as that all birds can fly. Until told otherwise, the LTN jumps to the conclusion that penguins should be able to fly. In the third stage, however, the LTN is trained on the rule that penguins cannot fly. Given this knowledge, \(can\_fly(Penguins)\) and \(can\_fly(Normal\_Birds)\) take an initial plunge (clearly shown in Figure 1). This, of course, makes sense, as the LTN does not yet have any reason to distinguish between penguins and normal birds, and thus once again jumps to the conclusion that since penguins cannot fly, then normal birds should not fly either. However, we see that the process of recall makes \(can\_fly(Normal\_Birds)\) regain satisfiability, while the satisfiability of \(can\_fly(Penguins)\) decreases towards zero. It is interesting to note that in stage 3 the apparent contradiction does not lead to a convergence around an uninformative satisfiability of 0.5.
With a random curriculum, we see more variance in the final results, which is to be expected given the random choice of rules, but overall, on average, this curriculum performs slightly
Figure 1: Satisfiability of four LTN queries: (1) Normal birds are birds (blue), (2) Penguins are birds (grey), (3) Normal birds can fly (green), (4) Penguins can fly (red), for the Knowledge Completion curriculum (top) and the Task Separation curriculum (bottom). Overall better performance is seen in the Task Separation curriculum.
better than the baseline. This shows that even _without_ the benefit of curriculum design, the _method of Continual Reasoning leads to better results than attempting to learn the full knowledge base in a single stage_. By further analysing the experiments in which the random curricula perform optimally, we see that task separation and knowledge completion curricula are not the only viable option for success (see Appendix A).
Smokers and Friends Task (S&F):The S&F problem consists of a statistical relational reasoning task. We define the knowledge base in accordance with [5] and compare a baseline curriculum to curricula belonging to knowledge completion and task separation paradigms. The satisfiability of each rule throughout the stages show that a knowledge completion curriculum outperforms the baseline and task separation on identifying that smoking causes cancer (97.8% to 71.5% and 80.6%, respectively). Overall, the knowledge completion curriculum leads to the LTN reaching higher satisfiability in five of the nine FOL rules, in comparison with the baseline which beats the other curriculum in only three of the nine rules (see Appendix B for a table detailing the satisfiability of rules per stage of each curriculum).
In addition to comparison between curricula, we compare the outcome of Continual Reasoning with two other NeSy models that have been applied to S&F, the Logical Neural Network (LNN) and the Markov Logic Network (MLN). The LNN allows for a lower and upper bound truth value, which signifies the lowest possible and highest possible truth value for a given FOL axiom, such that the whole knowledge base holds true. The MLN derives axiom log-probability weights which signify the probability of the axiom's mapping to true compared to the probability of it mapping to false. In Table 4, we see the results of these models per FOL rule used for training in our experiments. It is important to note that a precise comparison is not possible, as each model defines the set of FOL rules slightly differently in training. However, we see that the application of continual reasoning on LTNs for the S&F task performs comparably to other
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Rules & LNN-\(P_{2}^{5}\) & MLN & LTN-Baseline & LTN-TS Stage 3 & LTN-KC Stage 3 \\ \hline \(\neg F(x,x)\) & [0.83,0.98] & 0.26 & 0.998 & 0.996 & 0.995 \\ \(F(x,y)\Rightarrow F(y,x)\) & [0.97,1.00] & - & 0.796 & 0.826 & 0.954 \\ \(\exists_{y}F(x,y)\) & [1.00,1.00] & 6.88 & 0.730 & 0.748 & 0.718 \\ \(F(x,y)\wedge S(x)\Rightarrow S(y)\) & [0.65,1.00] & 3.53 & 0.716 & 0.615 & 0.497 \\ \(S(x)\Rightarrow C(x)\) & [0.58,1.00] & -1.35 & 0.715 & 0.806 & 0.978 \\ \(F(x,y)\) & - & - & 0.865 & 0.821 & 0.876 \\ \(S(x)\) & - & - & 0.825 & 0.731 & 0.653 \\ \(C(x)\) & - & - & 0.919 & 0.969 & 0.999 \\ \(\neg S(x)\Rightarrow\neg C(x)\) & - & - & 0.917 & 0.910 & 0.991 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy comparison for the Smokers and Friends statistical relational reasoning task. Compares the lower and upper bounds of LNN [20] trained on \(P_{2}^{5}\), and log-probability weights of MLN [9] (provided in [20]), for selected rules, with baseline LTN and LTN with continual learning on task separation (TS) and knowledge completion (KC) curricula.
NeSy approaches.
bAbI - Task 1: Task 1 of the bAbI dataset contains story lines of given facts and questions about those facts. For example, one instance will provide the sentences "Mary went to the office. Jack travelled to the garden." and ask "Where is Mary?". In order to address such a task with the proposed approach of Continual Reasoning using LTNs, we transform natural language sentences into FOL rules using GPT-3 [21]2. As the task already consists of stories told in stages, separated by questions, for curriculum design, we simply separate the FOL rules along the same stages in the dataset. The reasoning here can be said to be non-monotonic over time in that, later in the story, truth-values may change, e.g. Mary may no longer be in the office. Initial experimentation showed that by applying Continual Reasoning, a LTN model achieves 96.9% accuracy on the testing set of bAbI-Task 1, surpassing the 95% threshold for success. Further experimentation is ongoing.
Footnote 2: This approach is inspired by that used in [22], although FOL parsing of natural language is an evolving field of research which continues to face challenges [23]
## 5 Discussion, Conclusions and Future Work
We have introduced a novel methodology that integrates neurosymbolic AI and continual learning techniques in order to achieve non-monotonic reasoning. We call this Continual Reasoning, and we showed that by using Logic Tensor Networks [5] as our neural-symbolic framework, and training the knowledge base of First-Order Logic rules in a curriculum of multiple stages, we can improve on the traditional approach of learning all rules together. Additionally, we have analysed multiple types of curricula, proposing two general paradigms for curriculum design, and showed that while even random curriculum performs better on average than the baseline, a specific design choice can allow the model to appropriately jump to conclusions and revise its beliefs more effectively.
Experimentation conducted for this paper showed that Continual Reasoning also performs comparably on statistical relational reasoning tasks to a baseline curriculum, and other NeSy models. Continuation of this work could apply Continual Reasoning on larger datasets, such as the dataset used in RuleTaker [24], visual relational question-answering datasets, such as the CLEVR [25], and the remaining tasks in the bAbI dataset [10].
Furthermore, there still remain open questions concerning Continual Reasoning, such as how it might perform in extended non-monotonic reasoning tasks that occur when addressing lifelong learning. Rudimentary exploration of extending the PET to learn about a "superpenguin" which could fly, resulted in the LTN mostly failing to learn the exception to the exception. We believe, however, that utilising more advanced continual learning techniques, such as structural choices for neural network architecture, as well as more sophisticated recall methods like active learning, as suggested in [6, 17], would allow the Continual Reasoning methodology to succeed. This is to be investigated. Additionally, while LTNs proved to be a straightforward NeSy model to apply Continual Reasoning on, it should be possible to apply our methodology to other NeSy models, such as LNNs. Integration with a very recent software framework called PyReason [26] could provide an efficient way to do this. |
2310.09292 | Harnessing Unipolar Threshold Switches for Enhanced Rectification | Phase transition materials (PTM) have drawn significant attention in recent
years due to their abrupt threshold switching characteristics and hysteretic
behavior. Augmentation of the PTM with a transistor has been shown to provide
enhanced selectivity (as high as ~107 for Ag/HfO2/Pt) leading to unique
circuit-level advantages. Previously, a unipolar PTM, Ag-HfO2-Pt, was reported
as a replacement for diodes due to its polaritydependent high selectivity and
hysteretic properties. It was shown to achieve ~50% higher DC output compared
to a diode-based design in a Cockcroft-Walton multiplier circuit. In this
paper, we take a deeper dive into this design. We augment two different PTMs
(unipolar Ag-HfO2-Pt and bipolar VO2) with diodeconnected MOSFETs to retain the
benefits of hysteretic rectification. Our proposed hysteretic diodes
(Hyperdiodes) exhibit a low forward voltage drop owing to their volatile
hysteretic characteristics. However, augmenting a hysteretic PTM with a
transistor brings an additional stability concern due to their complex
interplay. Hence, we perform a comprehensive stability analysis for a range of
threshold voltages (-0.2 < Vth < 0.8) and transistor sizes to ensure
operational stability and to choose the most optimum design parameters. We then
test a standalone AgHfO2-Pt and an Ag-HfO2-Pt-based Hyperdiode in two different
types of voltage multipliers and report ~500 and ~20 times lower settling time,
respectively. | Md Mazharul Islam, Shamiul Alam, Garrett S. Rose, Aly Fathy, Sumeet Kumar Gupta, Ahmedullah Aziz | 2023-08-30T04:53:13Z | http://arxiv.org/abs/2310.09292v1 | # Harnessing Unipolar Threshold Switches for Enhanced Rectification
###### Abstract
Phase transition materials (PTM) have drawn significant attention in recent years due to their abrupt threshold switching characteristics and hysteretic behavior. Augmentation of the PTM with a transistor has been shown to provide enhanced selectivity (as high as \(\sim\)10\({}^{7}\) for Ag/HfO\({}_{2}\)/Pt) leading to unique circuit-level advantages. Previously, a unipolar PTM, Ag-HfO\({}_{2}\)-Pt, was reported as a replacement for diodes due to its polarity-dependent high selectivity and hysteretic properties. It was shown to achieve -50% higher DC output compared to a diode-based design in a _Cockcroft-Walton_ multiplier circuit. In this paper, we take a deeper dive into this design. We augment two different PTMs (unipolar Ag-HfO\({}_{2}\)-Pt and bipolar VO\({}_{2}\)) with diode-connected MOSFETs to retain the benefits of hysteretic rectification. Our proposed hysteretic diodes (Hyperdiodes) exhibit a low forward voltage drop owing to their volatile hysteretic characteristics. However, augmenting a hysteretic PTM with a transistor brings an additional stability concern due to their complex interplay. Hence, we perform a comprehensive stability analysis for a range of threshold voltages (-0.2 \(<V_{\text{m}}<\) 0.8) and transistor sizes to ensure operational stability and to choose the most optimum design parameters. We then test a standalone Ag-HfO\({}_{2}\)-Pt and an Ag-HfO\({}_{2}\)-Pt-based Hyperdiode in two different types of voltage multipliers and report -500 and -20 times lower settling time, respectively. As the PTM possesses additional sources of variation, it is crucial to examine the performance benefits of the structure through an extensive variation analysis. We perform 3\(\sigma\) Monte-Carlo variation analysis for a Cockcroft-Walton multiplier considering the nonidealities in the host transistor and the PTM. We observe that, Hyperdiode-based design achieves -20% higher output voltage compared with the conventional designs within a fixed timeframe (200 \(\mu\)s).
Phase transition material, diode, hysteresis, voltage multiplier, Monte-Carlo, Hyperdiode.
## I Introduction
Transistor technology has undergone significant advancements in area scaling and power efficiency over the last few decades [1]. While it has transformed a multitude of technological paradigms, new challenges associated with the advancement continue to emerge [2]. As electronic industries are continually tackling these challenges through novel design approaches, extensive research efforts have been given to improve the transistor performance and applicability [3]. To achieve the best possible performance of an electronic device, it is also crucial to focus on optimizing other active components.
A diode is one such indispensable active device component in electronic circuits playing crucial roles such as rectification, voltage regulation, signal modulation, and energy harvesting in mm-wave technology, etc [4, 5]. However, an on-chip diode faces several challenges such as high leakage current, forward voltage drop, reverse recovery transient, lower breakdown voltage, etc [6, 7]. Diode-connected FET (MOS-diode) incurs significant forward voltage drop, hurting their performance for low-voltage operation. While Schottky diodes exhibit low forward voltage drop and shorter reverse recovery time, they have drawbacks such as low reverse breakdown voltage and high leakage current. Consequently, for next-generation low-power electronics, a diode with low forward voltage drop and low leakage current is highly desired [8].
Recently, a unipolar PTM-based switch, Ag-HfO\({}_{2}\)-Pt, has been proposed as a diode-replacement in a voltage multiplier circuit. Here, the unipolar PTM structure exhibits two discrete resistive states (insulating and metallic) and unique hysteretic characteristics.to achieve a low forward voltage drop while simultaneously maintaining a low off-current, thanks to its high insulating-state resistance [9]. However, the structure does not provide considerable flexibility and lacks practical tuning knobs as the diode dynamics is mainly dictated by the material properties. Here, we explore the possibility to combine a PTM with a diode-connected CMOS transistor to fully utilize the unique benefits of hysteretic rectification, without sacrificing design tunability. A wide variety of phase transition materials (PTMs), including both unipolar and bipolar types, have been extensively reported with diverse transition voltages and resistance levels [10, 11, 12]. Considering the inherent rectification property of the MOS-diode, it is reasonable to consider augmenting PTMs with both unipolar and bipolar characteristics to explore their implications in the augmented structure [13, 14, 15]. Hence, in addition to Ag-HfO\({}_{2}\)-Pt as a
Figure 1: Physical structure of a **(a)** p-n diode **(b)** Schottky diode, and **(c)** diode connected FinFET. **(d)** I-V characteristics of the envisioned hysteretic diode (blue) with a conventional diode connected FinFET (red). **(e)** A simple circuit with a diode charging a capacitor. **(f)** Input voltage (\(V_{\text{m}}\)) and output voltage level for a hysteretic (blue) and a non-hysteretic diode (red).
unipolar PTM, we choose vanadium dioxide (VO\({}_{2}\)) as a bipolar PTM in our analysis. VO\({}_{2}\) has been well-studied for decades and can be integrated with the current CMOS process technology through monolithic integration [12].
The complex interplay between the MOS-diode and the PTM raises a concern for system stability due to the abrupt transition behaviour and hysteretic characteristics of the PTM [16]. For a PTM-augmented MOS-diode, it is of paramount interest to assess its stability within a relevant range of input voltages and inherent parameters. In this manuscript, we perform a comprehensive stability analysis for a PTM-augmented MOS-diode (Hyper-diode) for both unipolar and bipolar PTMs for varying threshold voltage and number of fins (\(n_{\mathit{fin}}\)). [17]. The insights provided by our analysis are highly useful for selecting suitable design parameters. In the realm of RF circuits and energy harvesting technology, the MOS-diode plays a critical role, with one of its important applications being the voltage multiplier.[18, 19]. We benchmark and evaluate the circuit-level performance of our proposed diode using two widely used voltage multiplier circuits. To assess the practicality of the novel rectifier platforms, it is necessary to consider the impact of variations. Process variation can significantly alter the threshold voltage and the channel current of the nanoscale transistors [20]. The PTM brings in additional sources of variation in our design in the resistance levels (\(R_{\mathit{INS}}\) and \(R_{\mathit{MET}}\)) and the transition voltages (\(V_{\mathit{C-IMT}}\) and \(V_{\mathit{C-MIT}}\)). Hence, it is crucial to perform a systematic variation analysis to ensure that considerable advantage is maintained even for the worst-case scenario We compare the variation tolerance of the standard MOS-diode and the proposed Hyper-diode through a 500-point Monte Carlo variation analysis [21].
The organization of this paper is as follows. Section II provides a brief overview of the PTM and introduces the concept of Hyperodiode. In section III, we perform a detailed stability analysis for VO\({}_{2}\) and Ag-HfO\({}_{2}\)-Pt-based Hyperdiodes for different transistor sizes. In section IV, we show the performance comparison of a standalone PTM, Hyperodiode, and conventional MOS-diode in DCP and CW multiplier and evaluate the validity of our proposition. Finally, we test the degree of variation tolerance of our proposed Hyperodiode using the Monte Carlo variation analysis in section V.
## II Phase Transition Material-based Diode
### _Challenges Faced by Existing Diodes: An Overview_
A diode is a unidirectional active electronic component widely used in wireless communication, energy harvesting technology, and optical devices. Diodes can be classified into three major classes. a) p-n diode, b) Schottky diode c) MOS-diode (Fig. 1). However, conventional diodes often suffer from a common problem of forward voltage drop, which imposes limitations on the DC output voltage in rectifiers and voltage multiplier circuits.[18]. The forward voltage drop (\(V_{D}\)) is defined as the minimum voltage required to keep itself turned on. Although Schottky diode serves as a low voltage drop diode, it suffers from high reverse leakage current and low reverse breakdown voltage which sets significant limitations to its performance and reliability [22]. For MOS-diode, \(V_{D}\) can be reduced by decreasing the threshold voltage (\(V_{\mathit{fl}}\)) of the device. But the reduction of \(V_{\mathit{fl}}\) raises the leakage current (\(I_{\mathit{off}}\)) posing significant challenges for low power applications. Intuitively, both low \(I_{\mathit{off}}\) and low \(V_{D}\) are desired for a MOS-diode. Unfortunately, conventional diodes with typical -LV characteristics, where a diode turns on and off at the same voltage, cannot meet both requirements as a reduction of \(V_{D}\) increases the \(I_{\mathit{off}}\)(Fig.1(d)). By independently reducing the turn-off voltage (\(V_{\mathit{OFF}}\)) of a diode, the voltage drop can be lowered without affecting the \(I_{\mathit{off}}\)(as shown in Fig. 1(d)). The hysteretic diode can achieve both advantages independently. From a simple circuit analogy (Fig. 1(e)), it can be envisioned that a hysteretic diode wins over a conventional diode due to the low \(V_{D}\) (\(=V_{\mathit{OFF}}\)) (Fig.1(f)).
### _Phase Transition Materials and Hyper-FET_
PTMs are special materials that exhibit an abrupt transition between insulating and metallic states triggered by several types of stimuli (_i.e._ electrical, optical, mechanical, and thermal) [23]. When the voltage across a PTM reaches a particular value (\(V_{\mathit{C-IMT}}\)), insulating to metallic transition occurs (Fig. 2(a,b)). The material further returns to its insulating state once the voltage across it goes below \(V_{\mathit{C-MIT}}\)(Fig. 2(a,b)). The family of PTMs is quite rich with diverse transition voltages and resistivity levels. This gives the benefit of selecting and optimizing PTMs based on the specification required for the application. For example, both unipolar and bipolar PTMs have been reported and extensively used [13, 24]. We have incorporated a SPICE-compatible compact model for both the unipolar (Ag-HfO\({}_{2}\)-Pt) and a bipolar (VO\({}_{2}\)) PTM. For the host FinFET, the predictive technology model has been used. In Ag-HfO\({}_{2}\)-Pt, the Ag filament is formed in the interstitial sites of HfO\({}_{2}\) (Fig.3(a)) [24]. Due to the inert nature of the Pt electrode, no filament formation occurs when a voltage is applied across the Pt terminal. This asymmetry makes Ag-HfO\({}_{2}\)-Pt unipolar.
Figure 2: I-V characteristics of **(a)** Ag-HfO\({}_{2}\)-Pt [36], and **(b)** VO\({}_{2}\)[37]. **(c)** Physical structure of a **(c)** Hyper-FET [13], and **(d)** its I-V characteristics.
VO\({}_{2}\) is a bipolar PTM where a strong electron correlation drives the phase transition mechanism.
Among the wide variety of materials reported so far, VO\({}_{2}\) has been used with a MOSFET device giving rise to the hysteretic I-V characteristics [12]. This hybrid system has been referred to as the hybrid phase transition FET (Hyper-FET) where the simple augmentation of PTM with conventional MOSFET has overcome the fundamental Boltzmann limit (60 mV/dec) (Fig. 2(c)). The Hyper-FET demonstrates hysteresis in its I-V characteristics (as shown in Fig. 2(d)) enhancing the performance of various digital and analogue devices and circuits.[25, 26, 27]. Since the Hyper-FET exhibits hysteretic behavior due to the augmented PTM, intuitively the Hyper-diode is supposed to exhibit rectification characteristics with unidirectional hysteretic behavior.
### _Hyperodiode_
Augmenting a unipolar PTM with a FET device provides additional high resistance during the off state of the host transistor reducing \(I_{\mathit{off}}\) substantially without introducing any additional area penalty (Fig.3(b,c)). In our model, Ag-HfO\({}_{2}\)-Pt has very high off resistance (\(R_{\mathit{OFF}}\) = 40 GO) for which it dominates the overall characteristics during off state. During on-state, it adds a resistance that suppresses the on-state current. However, by reducing the transistor threshold voltage, it can considerably compensate the overall \(I_{\mathit{on}}\) of the device. This way, it is possible to match the \(I_{\mathit{on}}\) of the host transistor lowering \(I_{\mathit{off}}\) substantially. Additionally, the hysteretic characteristics of the device result in the improvement of the forward voltage drop problem due to the very low value of \(V_{\mathit{C-IMT}}\)(Fig. 3(d)).
While unipolar Ag-HfO\({}_{2}\)-Pt inherently works as a rectifier, bidirectional VO\({}_{2}\) does not exhibit rectification property as it undergoes insulator to metal transition (IMT) from both directions. However, it can be augmented with a MOS-diode to achieve rectification. When, a voltage is applied from opposite direction, VO\({}_{2}\) does not offer high resistance owing to its bipolarity. But it still reduces \(I_{\mathit{off}}\) substantially due to its insulating state resistance. Moreover, it introduces hysteresis in the _I-V_ characteristics of the device lowering the forward voltage drop. Fig. 3(d) shows the _I-V_ characteristics of a
Figure 3: **(a)** Physical mechanism of unipolar switching of Ag-HfO\({}_{2}\)-Pt. Structure [40] and layout of **(b)** Structure and **(d)** layout of a Hyperodiode. **(d)** I-V characteristics of a MOS-diode (blue) and Ag-HfO\({}_{2}\)-Pt (red), designed to possess similar on-current and turn-on voltage (\(V_{\mathit{O}}\)\(\blacktriangleright\)\(V_{\mathit{C-IMT}}\)). I-V characteristics of a **(e)** Ag-HfO\({}_{2}\)-Pt-based Hyperodiode and **(f)** VO\({}_{2}\)-based Hyperodiode side by side with the matched diode. **(g)** Circuit for charging a capacitor by four different versions of diodes. Comparison of \(V_{\mathit{off}}\) for diode-connected FET and **(h)** standalone Ag-HfO\({}_{2}\)-Pt, **(i)** Ag-HfO\({}_{2}\)-Pt-based Hyperodiode and (i) VO\({}_{2}\)-based Hyperodiode.
standalone Ag-HfO\({}_{2}\)-Pt and the host transistor. Fig. 3(e) shows the _I-V_ characteristics of the Ag-HfO\({}_{2}\)-Pt-based Hyperdiode. Here, the host transistor is engineered so that the _I\({}_{on}\)_ of both devices match. The off-current (_I\({}_{off}\)_) decreases by several orders of magnitude, thanks to the high insulating-state resistance (_R\({}_{INS}\)_ = 40 G\(\Omega\)) of the augmented PTM. Fig. 3(f) shows the _I-V_ characteristics of a VO\({}_{2}\)-based Hyperdiode. Here, the host transistor is engineered to match the on-current of the MOSFET. For both of these cases, the turn-off voltage (_V\({}_{OFF}\)_) is determined by the metal-to-insulator transition voltage (_V\({}_{CHT}\)_) of the PTM which is much lower than the threshold voltage of the MOS-diode. The advantage of using both Hyperdiodes are shown in Fig.3(g-j). It is evident that, using a standalone Ag-HfO\({}_{2}\)-Pt substantially improves the speed of charging and substantially reduces the forward voltage drop (_V\({}_{D}\)_). Ag-HfO\({}_{2}\)-Pt-based Hyperdiode shows less improvement compared to the standalone Ag-HfO\({}_{2}\)-Pt but still manages to achieve faster charging with a lower _V\({}_{D}\)_ compared to the MOS-diode. Additionally, the host transistor makes the Hyperdiode CMOS-compatible and thus fabrication-friendly. The VO\({}_{2}\)-based Hyperdiode suffers from the same _V\({}_{D}\)_ due to the narrow hysteresis in its characteristics.
In contrast to the smooth and continuous current-voltage relationship of the MOS-diode, a PTM exhibits a sharp and hysteretic characteristic, allowing only certain current values to pass through it. As a result, when connected with a FET in series, concerns arise about the stability of the system due to the shared current. In the following section, we will conduct a thorough stability analysis for the whole structure.
## III. Stability Analysis
To determine the stability of a system consisting of two components in series, one can determine the stable current value that is shared between both components. This is executed by plotting the current across each component against the voltage at the same node, and then identifying the intersection point of both characteristic curves. This intersection point provides the solution of the system, which can be used to analyze its stability states. Here, we plot the _I-V_ characteristics of the MOS-diode and the PTM with respect to the common node voltage (_V\({}_{PTM}\)_) as shown in Fig.4(a). The point where the two characteristic curves intersect indicates the shared solution of the system, in which _I\({}_{TRM}\)_= _I\({}_{PTM}\)_. When the curves meet at a steady current value of PTM (either metallic or insulating), the system is stable for that specific voltage (_V\({}_{typ}\)_), and PTM becomes stable (either insulating or metallic). Conversely, when the curves intersect at the metastable points only, the system stability fails. Again, if the _I-V_ curve of the MOS-diode intersects with both the stable state's current values (insulating and metallic), then the PTM can stabilize to either of the states depending on the initial state of the PTM. These bi-stability points indicate the hysteretic characteristics of the system. For the hysteretic diode to be beneficial, it must maintain stability while also having a hysteretic zone within the range of
Figure 4: **(a)** Circuit for stability analysis of the system. **(b)** Inequality criteria for different region of stability. I-V characteristics of a standalone VO\({}_{2}\) and diode-connected FET for different source voltage (-1 V, -0.5 V, 0 V, 0.5 V, I) with **(c)** V\({}_{\rm{A}}\) = -0.2 V **(d)** V\({}_{\rm{A}}\) = 0.3 V and **(e)** V\({}_{\rm{A}}\)= 0.8 V. I-V characteristics of a standalone Ag-HfO\({}_{2}\)-Pt device and diode connected FET for different source voltage (-1 V, -0.5 V, 0 V, 0.5 V, I) with **(f)** V\({}_{\rm{A}}\) = -0.2 V **(g)** V\({}_{\rm{A}}\)= 0.3 V and **(h)** V\({}_{\rm{A}}\)= 0.8 V.
operating voltages. Fig. 4(b) summarizes the stability criteria for the Hyperodiode structure.
The host transistor's threshold voltage (\(V_{th}\)) is a tunable parameter that can be precisely adjusted to meet the diode's operational requirements. In case the system fails to stabilize at the nominal \(V_{th}\), we can vary the threshold voltage to approach a stable solution. Additionally, to make our analysis more comprehensive, we have also varied the number of fins(\(n_{fin}\)). The mechanism of our stability analysis is depicted in Fig.4. We have swept the \(V_{th}\) of the MOS-diode at every 0.1 V interval -0.2 V \(<V_{th}\)\(<0.8\)V and \(V_{top}\) is varied at a step size of 0.1 V for -2V \(<V_{top}\)\(<2\)V.
The bidirectional hysteretic behavior of a bipolar PTM (VO\({}_{2}\)) creates a stability concern, even when a negative voltage is applied across the Hyperodiode, as evident from Fig. 4(c). For the Ag-HfO\({}_{2}\)-Pt-based Hyperodiode, the system's instability is caused by large hysteresis at a specific voltage. We summarize the result of our analysis in Fig. 5 through a colormap. In the case of VO\({}_{2}\), the PTM stabilizes in either the insulating or metallic resistive states when \(V_{th}\)\(>\) -0.05V and \(n_{fin}\) is between 2 to 4. For \(V_{th}\)\(>0\) V, stability is maintained for \(n_{fin}\)\(>6\). Based on our analysis, \(V_{th}\) = 0 V with \(n_{fin}\) = 8 provides the most optimal characteristic for a VO\({}_{2}\)-based Hyperodiode. On the other hand, for the Ag-HfO\({}_{2}\)-Pt-based Hyperodiode, the unstable region can be eliminated by increasing \(n_{fin}\). However, to take advantage of the hysteretic behavior in lower threshold voltage, selecting the lowest \(n_{fin}\) while considering the area advantage is ideal. Therefore, the most optimal parameter selection for the Ag-HfO\({}_{2}\)-Pt-based hyperdiode would be \(V_{th}\) = -0.2 V and \(n_{fin}\) = 2.
## IV Hyperdiode-based Voltage Multiplier
Voltage multipliers have become increasingly important in a wide range of electronic circuit applications. They are particularly crucial for energy-autonomous and self-powering devices such as smart sensors and Body Area Networks (BANs) that utilize energy harvesting circuits to achieve higher voltage.[28]. Typically, energy harvesting circuits take a small transient AC signal as an input to create DC voltage output with higher values. The CW voltage multiplier is one such multiplier circuit widely used in energy harvesting technology. [29]. The DCP is a voltage multiplier topology commonly employed in integrated circuits to raise a low-voltage battery supply to the voltage level necessary for proper IC functioning. Additionally, it is widely used in energy harvesting applications such as photovoltaic cells and thermoelectric generators, where the voltage generated by these sources is often inadequate for other circuit components. In such cases, a DC-to-DC boost converter is needed, and the DCP can be utilized to perform this function.[30]. We have incorporated the standalone Ag-HfO\({}_{2}\)-Pt and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode in these multipliers.
### _Dickson Charge Pump (DCP)_
The basic circuit diagram of a DCP voltage multiplier is presented in Fig. 6(a). Here, an alternating capacitor stage network is powered by two non-overlapping clock signals with matching periods. To achieve maximum efficiency, a DCP is designed to transfer charge from one capacitor stage to the other during the subsequent clock pulses. An ideal n-stage DCP has
Figure 5: Colormap describing the stability for the circuit in fig. 4(a). Stability for different voltage across the VO\({}_{2}\)-based Hyperodiode (X-axis) and different threshold voltage for the transistor (Y-axis) with **(a)**\(n_{fin}\) = 2, **(b)**\(n_{fin}\) = 4, **(c)**\(n_{fin}\) = 6 and **(d)**\(n_{fin}\) = 8. Stability for different voltage across the Ag-HfO\({}_{2}\)-Pt-based Hyperodiode and different \(V_{th}\) for the transistor with **(e)**\(n_{fin}\) = 2, **(f)**\(n_{fin}\) = 4, **(g)**\(n_{fin}\) = 6 and **(h)**\(n_{fin}\) = 8.
an output of (n+1)\(\times V_{V_{BP}}\)-n\(\times V_{T}\), where \(V_{T}\) is the threshold voltage of the switching device. The voltage level transferred to the subsequent capacitors is restricted by the forward voltage drop of the diode. Despite the widespread use of Schottky diodes in DCP circuits due to their low voltage drop, their performance remains suboptimal due to the high leakage current and high reverse recovery time. It is believed that diodes with hysteresis in their current-voltage (_I-V_) characteristics could offer faster response times. In our study, we utilize two non-overlapping clock signals with a frequency of 1 MHz with a DC voltage input of 1V. Fig. 6(b) illustrates the final stage output of a four-stage DCP for a MOS-diode, standalone Ag-HfO\({}_{2}\)-Pt, and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode. To evaluate their performance, we selected a 200 \(\upmu\)s timeframe for our analysis. Our results indicate that the Ag-HfO\({}_{2}\)-Pt diode exhibits the fastest settling speed, while the proposed Hyperdiode outperforms the conventional MOS-diode by a significant margin of 0.4 V within the timeframe.
For future improvement of a DCP voltage multiplier, we have varied the \(V_{C\text{-}MT}\), and \(V_{C\text{-}MT}\) to observe the transient response. Fig. 6(c-d) illustrates the output voltage comparison for three different \(V_{C\text{-}MT}\) (0.3,0.35,0.4). At lower \(V_{C\text{-}MT}\), the diode turns on earlier which results in faster settling time and higher output voltage. On the other hand, varying \(V_{C\text{-}MT}\) hardly affects the output response. These findings suggest that incorporating a PTM with lower VC-IMT could significantly enhance the response of a DCP voltage multiplier, offering scope for future improvement.
### _Cockcroft-Walton (CW) voltage multiplier_
The basic Cockcroft Walton (CW) voltage multiplier is depicted in Fig. 7(a). The diode-capacitor network accumulates different levels of DC voltage from a small AC input signal over subsequent clock cycles at the output. In the context of energy-harvesting operation, a faster response is desired. To compare the performance, we have chosen a timeframe of 200 \(\upmu\)s. During no-load condition, the N\({}^{\text{th}}\) stage CW multiplier produces an output, \(V_{OUT:~{}N}=\) 2\(\times N\times V_{P}\)- _N\(\times V_{DRO}\)_. Fig. 7(b) shows the value of \(V_{OUT}\) for a 3-stage CW multiplier. The Standalone Ag-HfO\({}_{2}\)-Pt has a 23% larger \(V_{OUT}\) than the MOS-diode, while Ag-HfO\({}_{2}\)-Pt-based Hyperdiode shows a 20% higher \(V_{OUT}\). Fig. 7(c-e), shows the side-by-side comparison between different stages of a CW multiplier. The settling times for the Ag-HfO\({}_{2}\)-Pt device, Hyperdiode, and MOS diode are estimated as 5\(\upmu\)s, 120\(\upmu\)s, and 2.5\(\upmu\)s, respectively. Compared to the hysteretic diodes being considered (Ag-HfO\({}_{2}\)-Pt and Hyperdiode), a MOS-diode has a much longer settling time, taking several milliseconds. This is due to the sharp transition behavior and lower turn-off voltage of the hysteretic diodes, which allows them to remain turned on for a higher duration. Therefore, we observe a faster charging in these cases. Once the voltage levels at different stages reach certain values, the drop across the individual hysteretic diodes reaches below \(V_{C\text{-}MT}\)and
Figure 6: **(a)** Basic Blockson Charge Pump (DCP) voltage multiplier. **(b)** output of a 4-stage DCP with diode connected FET (red), standalone PTM (green) and Hyperdiode (blue) in response to a DC input, \(V_{up}\) (black). Comparison of \(V_{OUT}\) of a conventional diode based DCP and Hyperdiode with **(c)** three different \(V_{C\text{-}MT}\) and (e) three different \(V_{C\text{-}MT}\). Comparison of the \(V_{OUT}\) of a conventional diode-based DCP and a standalone Ag-HfO\({}_{2}\)-Pt with **(d)** three different \(V_{C\text{-}MT}\) and (f) three different \(V_{C\text{-}MT}\).
Figure 7: **(a)** A generalized circuit diagram of a Cockcroft Walton (CW) multiplier [11]. **(b)** Output voltage for a 3-stage CW multiplier circuit for a diode-connected FET, standalone Ag-HfO\({}_{2}\)-Pt and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode. **(c-e)** The output voltages at different stages for a 3-stage CW multiplier for diode connected FET, standalone Ag-HfO\({}_{2}\)-Pt and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode.
the voltage level settles at its maximum value. Although the voltage levels in the MOS-diode can eventually exceed those of the hysteretic diodes over time, its slower settling time makes it unsuitable for applications that require fast response.
We have extended our analysis by investigating the impact of the number of stages of the CW multiplier on the output voltage level. With an increase in the number of stages, the MOS-diode takes longer to settle down compared to the hysteretic diodes, resulting in a greater voltage difference between them. (Fig. 8(a)). We have examined how the output voltage level of the CW multiplier is affected by varying the \(V_{C\_{\textit{MT}}}\). To ensure a fair comparison, we have varied \(V_{th}\) of the diode-connected FET accordingly. With lower \(V_{C\_{\textit{MT}}}\)(\(\approx\)\(V_{D}\)), we get higher output voltage levels than the standalone MOS-diode (Fig. 8(b)). However, when \(V_{C\_{\textit{MT}}}\) is extremely low, the Ag-HfO\({}_{2}\)-Pt loses its effectiveness in the Hyperdiode, causing it to approach the same value as the MOS-diode. Hence, we see the Hyperdiode approaching the MOS-diode at a very low \(V_{C\_{\textit{MT}}}\). Overall, the augmentation of a PTM with the MOS- diode decreases the output voltage level slightly, but in return, we achieve manufacturing compatibility giving the proposed Hyperdiode a practical advantage.
## V Variation Analysis
The slight dimensional variation of a PTM can alter the transition voltages (\(V_{C\_{\textit{MT}}}\)and \(V_{C\_{\textit{MT}}}\)) and the level of resistivity (\(R_{\textit{\textit{NN}}}\)and \(R_{\textit{\textit{MT}}}\)). Additionally, it is important to consider the variation of the threshold voltage of a MOSFET to ensure reliable performance. To determine if our hysteretic diodes offer an advantage over conventional diode-connected FETs in all scenarios, we have examined the impact of variations in three key parameters: insulator to metal transition voltage (\(V_{C\_{\textit{MT}}}\)), insulating state resistance (\(R_{\textit{\textit{NN}}}\)), and transistor threshold voltage (\(V_{th}\)). These parameters have significant implications on their performance in device and circuit level. We have disregarded the possible variation of \(V_{C\_{\textit{MT}}}\) as it has minimal impact on the performance of the CW multiplier.
We perform Monte Carlo variation analysis for both the Hyperdiode and the standalone Ag-HfO\({}_{2}\)-Pt device. Here, we impose a Gaussian distribution of 3\(\sigma\) -variation in the parameters under consideration. The specification of the variation is given in Fig. 9(b). The nominal value of \(V_{th}\) is chosen to be -0.2V and kept at the same value for the host transistor of the Hyperdiode. We have estimated the output voltage of a 3-stage CW multiplier at the time instance of 200 \(\mu\)s. Our proposed Hyperdiode has an additional degree of variation, leading to a higher voltage spread in the observed distribution compared to the standalone Ag-HfO\({}_{2}\)-Pt device. Despite a higher spread in both cases, the standalone Ag-HfO\({}_{2}\)-Pt and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode offer higher \(V_{\textit{\textit{OUT}}}\)even in the worst-case variation.
## VI Discussion
While the PTM-based Hyperdiode shows promise, there are practical considerations that need addressing. The first concern is the repetitive switching capability of the materials. Ag-HfO\({}_{2}\)-Pt has been reported to switch reliably up to 10\({}^{8}\) cycles, making it suitable for voltage multiplier circuits compared to VO\({}_{2}\), which is limited to 10\({}^{3}\) cycles [24]. Besides, the switching time of the PTM is a critical factor that can significantly impact its overall performance. The filamentation process in Ag-HfO\({}_{2}\)-Pt takes around 1ns [31], which is negligible compared to the operation timescale of voltage multipliers (\(\mu\)s).Temperature stability is another important aspect. Ag-HfO\({}_{2}\)-Pt can withstand temperatures up to 90\({}^{\circ}\)C, whereas VO\({}_{2}\) has a lower insulator to metal switching temperature (\(T_{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{ \textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textittextittextittextit }}}}}}}}}}}}}}}}} }\)) 1000. This model is applied to the current of the FM. The current of the FM is \(V_{C\_{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{ \textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textit{\textittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextittextit \textit{\textit
voltage drop (\(<\) 0.1 V) and significantly lower off-current (\(\sim\)10 pA) compared to Schottky diodes, indicating superior speed and energy efficiency. Besides, the PTMs used in the proposed Hyperdiode do not exhibit reverse breakdown voltage or reverse recovery current, unlike conventional diodes, which is a favourable characteristic as the reverse breakdown voltage is a crucial factor that limits the operating voltage range of conventional diodes [33].
## VII Conclusion
Our study utilizes both unipolar and bipolar PTM for our proposed hysteretic Hyperdiode. Through stability analysis, we determine the most optimal performance parameters for both VO\({}_{2}\)-based Hyperdiode and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode. Although, both VO\({}_{2}\)-based Hyperdiode and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode provide expected hysteretic behavior, VO\({}_{2}\)-based Hyperdiode has a much higher off current than conventional MOS-diode owing to the adjusted low \(V_{th}\) for stability. So, we proceed with the rest of our study on standalone Ag-HfO\({}_{2}\)-Pt and Ag-HfO\({}_{2}\)-Pt-based Hyperdiode. We incorporate this hysteretic diode in two voltage multiplier circuits and observe \(\sim\)21% and \(\sim\)15% higher voltage output compared to MOS-diode in a 4-stage DCP and 23% and 20% higher output voltage in a 3-stage CW multiplier (both at 200 \(\upmu\)s). Finally, we perform a Monte-Carlo variation analysis for a 3 stage CW-multiplier by imposing a gaussian variation to three different chosen parameters. In addition, Monte-Carlo variation analysis is performed for a 3-stage CW multiplier by applying gaussian variation to three different parameters. Our results demonstrate that our proposed Hyperdiodes continue to outperform MOS-diode even in the worst-case variation by a significant margin.
|
2306.14243 | Asymptotic behaviour of the $\text{v}$-number of homogeneous ideals | Let $I$ be a graded ideal of a standard graded polynomial ring $S$ with
coefficients in a field $K$. The asymptotic behaviour of the $\text{v}$-number
of the powers of $I$ is investigated. Natural lower and upper bounds which are
linear functions in $k$ are determined for $\text{v}(I^k)$. We call
$\text{v}(I^k)$ the $\text{v}$-function of $I$. We prove that $\text{v}(I^k)$
is a linear function in $k$ for $k$ large enough, of the form
$\text{v}(I^k)=\alpha(I)k+b$, where $\alpha(I)$ is the initial degree of $I$,
and $b\in\mathbb{Z}$ is a suitable integer. For this aim, we construct new
blowup algebras associated to graded ideals. Finally, for a monomial ideal in
two variables, we compute explicitly its $\text{v}$-function. | Antonino Ficarra, Emanuele Sgroi | 2023-06-25T13:29:49Z | http://arxiv.org/abs/2306.14243v4 | # Asymptotic behaviour of the v-number of homogeneous ideals
###### Abstract.
Let \(I\) be a graded ideal of a standard graded polynomial ring \(S\) with coefficients in a field \(K\). The asymptotic behaviour of the v-number of the powers of \(I\) is investigated. Natural lower and upper bounds which are linear functions in \(k\) are determined for \(\operatorname{v}(I^{k})\). We call \(\operatorname{v}(I^{k})\) the v-function of \(I\). Under reasonable assumptions, it is proved that \(\operatorname{v}(I^{k})\) is a linear function in \(k\) for \(k\) large enough. For monomial ideals in two variables and several classes of graded ideals with linear powers, we show that the v-function is linear and compute it explicitly. The experimental evidence strongly invites us to conjecture that for an ideal \(I\) with linear powers, we have \(\operatorname{v}(I^{k})=\alpha(I)k-1\) for all \(k\geq 1\), where \(\alpha(I)\) is the initial degree of \(I\). This conjecture is settled for edge ideals with linear resolution, polymatroidal ideals and Hibi ideals.
Key words and phrases:graded ideals, v-number, asymptotic behaviour, primary decomposition 2020 Mathematics Subject Classification: Primary 13F20; Secondary 13F55, 05C70, 05E40
## Introduction
In 1921, Emmy Noether revolutionized Commutative Algebra by establishing the primary decomposition Theorem for Noetherian rings [36]. It says that any ideal \(I\) of a Noetherian ring \(R\) can be decomposed as the intersection of finitely many primary ideals \(I=Q_{1}\cap\cdots\cap Q_{t}\) and \(\operatorname{Ass}(I)=\{\sqrt{Q_{1}},\ldots,\sqrt{Q_{t}}\}\), the _set of associated primes of \(I\)_, is uniquely determined. This fundamental result is a landmark in Commutative Algebra, and always inspires new exciting research trends. A basic question in the seventies was the following. What is the asymptotic behaviour of the set \(\operatorname{Ass}(I^{k})\) for \(k\gg 0\) large enough? In 1976, it was predicted by Ratliff [37], and later proved by Brodmann in 1979 [2], that \(\operatorname{Ass}(I^{k})\) stabilizes. That is, there exists \(k_{0}>0\) such that \(\operatorname{Ass}(I^{k+1})=\operatorname{Ass}(I^{k})\) for all \(k\geq k_{0}\). Another remarkable result of Brodmann says that \(\operatorname{depth}(R/I^{k})\) is constant for \(k\gg 0\)[3]. Suppose furthermore that \(I\) is a graded ideal of a standard graded polynomial ring \(S=K[x_{1},\ldots,x_{n}]\) with coefficients in a field \(K\). In 1999, Kodiyalam [31], and, independently, Cutkosky, Herzog and Trung [13], showed that the Castelnuovo-Mumford regularity of \(S/I^{k}\) is a linear function in \(k\) for \(k\gg 0\). The legacy of Brodmann theorem opened up the most flourished research topic in Commutative Algebra: the asymptotic behaviour of the homological invariants of (ordinary) powers of graded ideals, see [6].
Now, let \(S=K[x_{1},\ldots,x_{n}]\) be the standard graded polynomial ring with coefficients in a field \(K\), \(I\subset S\) be a graded ideal and \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) be the maximal ideal. Note that \(S\) is Noetherian. The graded version of the primary decomposition theorem says that for any prime \(\mathfrak{p}\in\operatorname{Ass}(I)\), there exists a homogeneous element
\(f\in S\) such that \((I:f)=\mathfrak{p}\). It is natural to define the following invariants. Denote by \(S_{d}\) the \(d\)th graded component of \(S\). The _\(\mathrm{v}\)-number of \(I\) at \(\mathfrak{p}\)_ is defined as
\[\mathrm{v}_{\mathfrak{p}}(I)\ =\ \min\{d\ :\ \text{there exists}\ f\in S_{d} \ \text{such that}\ (I:f)=\mathfrak{p}\}.\]
Whereas, the _\(\mathrm{v}\)-number of \(I\)_ is defined as
\[\mathrm{v}(I)\ =\ \min\{d\ :\ \text{there exists}\ f\in S_{d}\ \text{such that}\ (I:f)\in\mathrm{Ass}(I)\}.\]
The concept of \(\mathrm{v}\)-number was introduced by Cooper _et all_ in [10], and further studied in [1, 8, 26, 32, 33, 38, 39].
This invariant plays an important role in Algebraic Geometry and in the theory of (_projective_) _Reed-Muller-type codes_[10, 15, 21, 22, 23, 24, 25]. Let \(\mathbb{X}\) be a finite set of points of the projective plane \(\mathbb{P}^{s-1}\), and let \(\delta_{\mathbb{X}}(d)\) be the minimum distance function of the projective Reed-Muller-type code \(C_{\mathbb{X}}(d)\). Then \(\delta_{\mathbb{X}}(d)=1\) if and only if \(\mathrm{v}(I(\mathbb{X}))\leq d\)[10, Corollary 5.6]. In such article, for a radical complete intersection ideal \(I\), the famous Eisenbud-Green-Harris conjecture [17] is shown to be equivalent to [10, Conjecture 6.2] (see [10, Proposition 6.8]). This latter conjecture is related to the \(\mathrm{v}\)-number. Indeed, for such an ideal \(I\) we have \(\mathrm{v}(I)=\mathrm{reg}(S/I)\). For a nice summary, see [39, Section 12].
The \(\mathrm{v}\)-number of edge ideals was studied in [33]. A graph \(G\) belongs to the class \(W_{2}\) if and only if \(G\) is well-covered without isolated vertices, and \(G\setminus v\) is well-covered for all vertices \(v\in V(G)\). Let \(I(G)\subset S=K[x_{v}:v\in V(G)]\) be the edge ideal of \(G\). Then \(G\) is in \(W_{2}\) if and only if \(\mathrm{v}(I(G))=\dim(S/I(G))\)[33, Theorem 4.5]. The \(\mathrm{v}\)-number of binomial edge ideals was recently considered in [1] and [32].
In this article, we investigate the eventual behaviour of the function \(\mathrm{v}(I^{k})\) for \(k\gg 0\), where \(I\subset S\) is a graded ideal. Such a function, for large \(k\) measures the "asymptotic homogeneous growth" of the primary decomposition of \(I^{k}\).
The article is structured as follows. In Section 1, we recall how to compute the \(\mathrm{v}\)-number of a graded ideal \(I\subset S\) (Theorem 1.1) as shown by Grisalde, Reyes and Villarreal [26]. Hereafter, for a finitely generated graded \(S\)-module \(M=\bigoplus_{d}M_{d}\neq 0\), we set \(\alpha(M)=\min\{d:M_{d}\neq 0\}\) and \(\omega(M)=\max\{d:(M/\mathfrak{m}M)_{d}\neq 0\}\). In the next theorem, the bar \(\overline{\phantom{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{ \mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{ \mathrm{ \mathrm{ }}}}}}}}}}}}}}} \)\) denotes the residue class modulo \(I\).
**Theorem 1.1**: _Let \(I\subset S\) be a graded ideal and let \(\mathfrak{p}\in\mathrm{Ass}(I)\). The following hold._
1. _If_ \(\mathcal{G}=\{\overline{g_{1}},\ldots,\overline{g_{r}}\}\) _is a homogeneous minimal generating set of_ \((I:\mathfrak{p})/I\)_, then_ \[\mathrm{v}_{\mathfrak{p}}(I)=\min\{\deg(g_{i})\ :\ 1\leq i\leq r\ \text{and}\ (I:g_{i})=\mathfrak{p}\}.\]
2. \(\mathrm{v}(I)=\min\{\mathrm{v}_{\mathfrak{p}}(I):\mathfrak{p}\in\mathrm{Ass}( I)\}\)_._
3. \(\mathrm{v}_{\mathfrak{p}}(I)\geq\alpha((I:\mathfrak{p})/I)\)_, with equality if_ \(\mathfrak{p}\in\mathrm{Max}(I)\)_._
4. _If_ \(I\) _has no embedded primes, then_ \(\mathrm{v}(I)=\min\{\alpha((I:\mathfrak{p})/I):\mathfrak{p}\in\mathrm{Ass}(I)\}\)_._
Firstly, we determine the "local" numbers \(\mathrm{v}_{\mathfrak{p}}(I)\), for all \(\mathfrak{p}\in\mathrm{Ass}(I)\). After computing a basis of the \(S\)-module \((I:\mathfrak{p})/I\), we select a generator \(\overline{f}\in(I:\mathfrak{p})/I\) of least degree \(d\) such that \((I:f)=\mathfrak{p}\). This latter condition is automatically satisfied if \(\mathfrak{p}\in\mathrm{Max}(I)\). Then \(\mathrm{v}_{\mathfrak{p}}(I)=d\) and \(\mathrm{v}(I)=\min\{\mathrm{v}_{\mathfrak{p}}(I):\mathfrak{p}\in\mathrm{Ass} (I)\}\).
Let \(I\subset S\) be a graded ideal, we call \(\mathrm{v}(I^{k})\) the _\(\mathrm{v}\)-function_ of \(I\). In Section 2, we investigate the asymptotic behaviour of \(\mathrm{v}(I^{k})\) for \(k\gg 0\). By Theorem 1.1(b), we have \(\mathrm{v}(I^{k})=\min\{\mathrm{v}_{\mathfrak{p}}(I^{k}):\mathfrak{p}\in \mathrm{Ass}(I^{k})\}\). By Brodmann [2], \(\mathrm{Ass}(I^{k})=\mathrm{Ass}(I^{k+1})\) for all \(k\gg 0\). We denote this common set by \(\mathrm{Ass}^{\infty}(I)\), and call each prime \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\) a _stable prime ideal_ of \(I\). Let \(\mathrm{Max}^{\infty}(I)\) be the set of stable prime ideals of \(I\) maximal with respect to the inclusion. Thus, \(\mathrm{v}(I^{k})=\min\{\mathrm{v}_{\mathfrak{p}}(I^{k}):\mathfrak{p}\in \mathrm{Ass}^{\infty}(I)\}\). To understand the asymptotic behaviour of the \(\mathrm{v}\)-function, we consider the _\(\mathrm{v}_{\mathfrak{p}}\)-functions_\(\mathrm{v}_{\mathfrak{p}}(I^{k})\) for each \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\). In the classical case, to prove the asymptotic linearity of the Castelnuovo-Mumford regularity of the powers of \(I\), \(\mathrm{reg}(I^{k})\), one introduces the Rees ring of \(I\), \(\mathcal{R}(I)=\bigoplus_{k\geq 0}I^{k}\), and shows that this is a bigraded finitely generated module over a suitable polynomial ring [13].
Let \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\). For the \(\mathrm{v}_{\mathfrak{p}}\)-function \(\mathrm{v}_{\mathfrak{p}}(I^{k})\), we consider a similar approach as above. It should be noted, however, that the \(S\)-module \((I^{k}:\mathfrak{p})/I^{k}\) has a more subtle module structure than the ordinary power \(I^{k}\). We introduce the module
\[\mathrm{Soc}_{\mathfrak{p}}(I)\ =\ \bigoplus_{k\geq 0}(I^{k}:\mathfrak{p})/I^{k}.\]
over the ring \(\mathcal{F}_{\mathfrak{p}}(I)\ =\ \bigoplus_{k\geq 0}(I^{k}/\mathfrak{p}I^{k})\).
A priori, it is not clear that \(\mathrm{Soc}_{\mathfrak{p}}(I)\) is a finitely generated bigraded \(\mathcal{F}_{\mathfrak{p}}(I)\)-module. This is shown in Theorem 2.2 by carefully analyzing the module structure of \(\mathrm{Soc}_{\mathfrak{p}}(I)\). We prove that \(\mathrm{Soc}_{\mathfrak{p}}(I)\) is equal to a truncation of the ideal \((0:_{\mathrm{gr}_{I}(S)}\mathfrak{p})\), and this ideal of \(\mathrm{gr}_{I}(S)\) is finitely generated as a \(\mathcal{F}_{\mathfrak{p}}(I)\)-module. Here \(\mathrm{gr}_{I}(S)=\bigoplus_{k\geq 0}(I^{k}/I^{k+1})\) denotes the associated graded ring of \(I\). The proof relies essentially on a property showed by Ratliff [37, Corollary 4.2], namely that \((I^{k+1}:I)=I^{k}\) for all \(k\gg 0\), and on the fact that \(\mathrm{gr}_{I}(S)\) is Noetherian ring [35, Proposition (10.D)].
The first main result in Section 2 is
**Theorem 2.1**.: _Let \(I\subset S=K[x_{1},\ldots,x_{n}]\) be a graded ideal, and let \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\). Then, the following holds._
* _For all_ \(k\geq 1\)_, we have_ \[\alpha((I^{k}:\mathfrak{p})/I^{k})\ \leq\ \mathrm{v}_{\mathfrak{p}}(I^{k})\ \leq\ \omega((I^{k}:\mathfrak{p})/I^{k}).\]
* _The functions_ \(\alpha((I^{k}:\mathfrak{p})/I^{k})\)_,_ \(\omega((I^{k}:\mathfrak{p})/I^{k})\) _are linear in_ \(k\) _for_ \(k\gg 0\)_._
* _There exist eventually linear functions_ \(f(k)\) _and_ \(g(k)\) _such that_ \[f(k)\leq\mathrm{v}(I^{k})\leq g(k),\ \text{ for all }\ k\gg 0.\]
Theorem 2.1(b) follows by a careful analysis of the bigraded structure of \(\mathrm{Soc}_{\mathfrak{p}}(I)\) and in the end boils down to a linear programming argument (Proposition 2.5).
The second main result in the section is the following one.
**Theorem 2.6**.: _Let \(I\subset S=K[x_{1},\ldots,x_{n}]\) be a graded ideal. Suppose that_
* _either_ \(\mathrm{Ass}^{\infty}(I)=\mathrm{Max}^{\infty}(I)\) _or_
* _for all_ \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\) _and all_ \(k\gg 0\)_,_ \((I^{k}:\mathfrak{p})/I^{k}\) _is generated in a single degree._
_Then, \(\mathrm{v}_{\mathfrak{p}}(I^{k})\), for all \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\), and \(\mathrm{v}(I^{k})\) are linear functions in \(k\) for \(k\gg 0\)._
This result is valid for several classes of graded ideals. To name a few, ideals of maximal minors, binomial edge ideals of closed graphs, and normally torsionfree squarefree monomial ideals (Example 2.7). We believe that the functions \(\mathrm{v}_{\mathfrak{p}}(I^{k})\), \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\), and \(\mathrm{v}(I^{k})\) are always linear in \(k\) for \(k\gg 0\). We conclude Section 2 with an estimate on the growth of \(\mathrm{v}_{\mathfrak{p}}(I^{k})\), \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\). We prove that \(\mathrm{v}_{\mathfrak{p}}(I^{k+1})\leq\mathrm{v}_{\mathfrak{p}}(I^{k})+ \omega(I)\), for all \(k\gg 0\). Thus, we have \(\mathrm{v}(I^{k+1})\leq\mathrm{v}(I^{k})+\omega(I)\) for \(k\gg 0\).
Section 3 concerns monomial ideals \(I\subset S=K[x,y]\) in two variables. Denote by \(G(I)=\{u_{1},\ldots,u_{m}\}\) the unique minimal monomial generating set of \(I\), with \(u_{i}=x^{a_{i}}y^{b_{i}}\) for all \(i\). Then \(I\) determines the sequences \(\mathbf{a}:a_{1}>a_{2}>\cdots>a_{m}\) and \(\mathbf{b}:b_{1}<b_{2}<\cdots<b_{m}\), and we set \(I=I_{\mathbf{a},\mathbf{b}}\). Conversely, given any two such sequences \(\mathbf{a}\) and \(\mathbf{b}\), \(\{x^{a_{1}}y^{b_{1}},\ldots,x^{a_{m}}y^{b_{m}}\}\) is the minimal generating set of a monomial ideal of \(S\). In terms of \(\mathbf{a}\) and \(\mathbf{b}\) we determine \(\mathrm{Ass}^{\infty}(I_{\mathbf{a},\mathbf{b}})\) (Corollary 3.4) and the v-number \(\mathrm{v}(I_{\mathbf{a},\mathbf{b}})\) (Theorem 3.7). We prove that \(\mathrm{v}(I_{\mathbf{a},\mathbf{b}}^{k})\) is a linear function \(f(k)=ak+b\) for \(k\gg 0\) (Theorem 3.1). Our experiments in _Macaulay2_[20] suggest that \(b\geq-1\). On the other hand, for any integers \(a\geq 1\) and \(b\geq-1\), we construct a monomial ideal of \(S\) such that \(\mathrm{v}(I^{k})=ak+b\) for all \(k\geq 1\) (Theorem 3.8).
In the last section, we study the v-function of ideal with linear powers. We expect that the following is true.
**Conjecture 4.1**.: _Let \(I\subset S\) be a graded ideal with linear powers. Then_
\[\mathrm{v}(I^{k})=\alpha(I)k-1,\ \text{ for all }k\geq 1.\]
We settle this conjecture for edge ideals with linear resolution, polymatroidal ideals and Hibi ideals. To prove these results, we use an inductive argument based on a bound proved by Saha and Sengupta (Proposition 4.2). On the other hand, if \(I\) does not have linear powers, then the conclusion of Conjecture 4.1 is no longer valid, as we show with an example due to Terai [9, Remark 3]. For such an ideal \(I\), we have \(\mathrm{v}(I)=\alpha(I)=3\) and \(\mathrm{v}(I^{k})=\alpha(I)k-1\) for all \(k\geq 2\). Nonetheless, the v-function of \(I\) is linear.
## 1. How to compute the v-number of a graded ideal?
Let \(I\) be an ideal of a Noetherian domain \(R\). We denote the set of associated primes of \(I\) by \(\mathrm{Ass}(I)\), and by \(\mathrm{Max}(I)\) the set of associated primes of \(I\) that are maximal with respect to the inclusion. It is clear that \(I\) has no embedded primes if and only if \(\mathrm{Ass}(I)=\mathrm{Max}(I)\).
Let \(S=K[x_{1},\ldots,x_{n}]=\bigoplus_{d}S_{d}\) be the standard graded polynomial ring with \(n\) variables and coefficients in a field \(K\), and let \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) be the graded maximal ideal. The concept of v-number was introduced by Cooper _et all_ in [10]. Let \(I\subset S\) be a graded ideal and let \(\mathfrak{p}\in\mathrm{Ass}(I)\). Then, the v_-number of \(I\) at \(\mathfrak{p}\)_ is defined as
\[\mathrm{v}_{\mathfrak{p}}(I)\ =\ \min\{d\ :\text{ there exists }f\in S_{d}\text{ such that }(I:f)=\mathfrak{p}\}.\]
Whereas, the v_-number of \(I\)_ is defined as
\[\mathrm{v}(I)\ =\ \min\{d\ :\text{ there exists }f\in S_{d}\text{ such that }(I:f)\in\mathrm{Ass}(I)\}.\]
Note that if \(I=\mathfrak{m}=(x_{1},\ldots,x_{n})\), then \(\mathrm{v}_{\mathfrak{m}}(I)=\mathrm{v}(I)=0\).
The following result due to Grisalde, Reyes and Villarreal [26, Theorem 3.2] shows how to compute the v-number of a graded ideal. For a finitely generated graded \(S\)-module \(M=\bigoplus_{d}M_{d}\neq 0\), we call \(\alpha(M)=\min\{d:M_{d}\neq 0\}\) the _initial degree_ of \(M\). In the next theorem, the bar \(\overline{\ \ }\) denotes the residue class modulo \(I\).
**Theorem 1.1**.: _Let \(I\subset S\) be a graded ideal and let \(\mathfrak{p}\in\operatorname{Ass}(I)\). The following hold._
1. _If_ \(\mathcal{G}=\{\overline{g_{1}},\ldots,\overline{g_{\overline{r}}}\}\) _is a homogeneous minimal generating set of_ \((I:\mathfrak{p})/I\)_, then_ \[\mathrm{v}_{\mathfrak{p}}(I)=\min\{\deg(g_{i})\ :\ 1\leq i\leq r\text{ and }(I:g_{i})=\mathfrak{p}\}.\]
2. \(\mathrm{v}(I)=\min\{\mathrm{v}_{\mathfrak{p}}(I):\mathfrak{p}\in \operatorname{Ass}(I)\}\)_._
3. \(\mathrm{v}_{\mathfrak{p}}(I)\geq\alpha((I:\mathfrak{p})/I)\)_, with equality if_ \(\mathfrak{p}\in\operatorname{Max}(I)\)_._
4. _If_ \(I\) _has no embedded primes, then_ \(\mathrm{v}(I)=\min\{\alpha((I:\mathfrak{p})/I):\mathfrak{p}\in\operatorname{ Ass}(I)\}\)_._
## 2. Asymptotic behaviour of the v-number
Let \(R\) be a commutative Noetherian domain and \(I\subset R\) an ideal. It is known by Brodmann [2] that \(\operatorname{Ass}(I^{k})\) stabilizes for large \(k\). That is, \(\operatorname{Ass}(I^{k+1})=\operatorname{Ass}(I^{k})\) for all \(k\gg 0\). A prime ideal \(\mathfrak{p}\subset R\) such that \(\mathfrak{p}\in\operatorname{Ass}(I^{k})\) for all \(k\gg 0\), is called a _stable prime of \(I\)_.
The set of the stable primes of \(I\) is denoted by \(\operatorname{Ass}^{\infty}(I)\). Likewise, \(\operatorname{Max}^{\infty}(I)\) denotes the set of stable primes of \(I\), maximal with respect to the inclusion. The least integer \(k_{0}\) such that \(\operatorname{Ass}(I^{k})=\operatorname{Ass}(I^{k_{0}})\) for all \(k\geq k_{0}\) is denoted by \(\operatorname{astab}(I)\).
Now, let \(S=K[x_{1},\ldots,x_{n}]\) be the standard graded polynomial ring, with \(K\) a field, and unique graded maximal ideal \(\mathfrak{m}=(x_{1},\ldots,x_{n})\). Let \(I\subset S\) be a graded ideal and let \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\). In light of Theorem 1.1 and Brodmann result, to understand the asymptotic behaviour of the function \(\mathrm{v}_{\mathfrak{p}}(I^{k})\), one has to understand the asymptotic behaviour of the modules \((I^{k}:\mathfrak{p})/I^{k}\) for \(k\gg 0\).
Let \(M\neq 0\) be a finitely generated graded \(S\)-module. Let \(\omega(M)\) be the highest degree of a homogeneous element of the \(K\)-vector space \(M/\mathfrak{m}M\). Equivalently, the highest degree \(j\) such that the graded Betti number \(\beta_{0,j}(M)\) is non-zero. Thus
\[\omega(M)=\max\{d:\beta_{0,d}(M)\neq 0\}=\max\{d:\operatorname{Tor}_{0}^{S}(S/ \mathfrak{m},M)_{d}\neq 0\}.\]
Similarly, one has that \(\alpha(M)=\min\{d:\operatorname{Tor}_{0}^{S}(S/\mathfrak{m},M)_{d}\neq 0\}\).
The following theorem provides natural asymptotic upper and lower bounds for the v-function \(\mathrm{v}(I^{k})\) which are linear functions in \(k\) for \(k\gg 0\).
**Theorem 2.1**.: _Let \(I\subset S=K[x_{1},\ldots,x_{n}]\) be a graded ideal, and let \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\). Then, the following holds._
1. _For all_ \(k\geq 1\)_, we have_ \[\alpha((I^{k}:\mathfrak{p})/I^{k})\ \leq\ \mathrm{v}_{\mathfrak{p}}(I^{k})\ \leq\ \omega((I^{k}:\mathfrak{p})/I^{k}).\]
2. _The functions_ \(\alpha((I^{k}:\mathfrak{p})/I^{k})\)_,_ \(\omega((I^{k}:\mathfrak{p})/I^{k})\) _are linear in_ \(k\) _for_ \(k\gg 0\)_._
3. _There exist eventually linear functions_ \(f(k)\) _and_ \(g(k)\) _such that_ \[f(k)\leq\mathrm{v}(I^{k})\leq g(k),\ \text{ for all }\ k\gg 0.\]
Statement (a) follows immediately from Theorem 1.1(a). Assume for a moment that statement (b) holds, then (c) can be proved as follows. By Brodmann, we have \(\mathrm{v}(I^{k})=\min\{\mathrm{v}_{\mathfrak{p}}(I^{k}):\mathfrak{p}\in\mathrm{ Ass}^{\infty}(I)\}\) for all \(k\gg 0\). Thus, by (a), for all \(k\gg 0\)
\[\min_{\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)}\alpha((I^{k}:\mathfrak{p})/I^{k })\leq\mathrm{v}(I^{k})\leq\min_{\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)} \omega((I^{k}:\mathfrak{p})/I^{k}).\]
Setting \(f(k)=\min_{\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)}\alpha((I^{k}:\mathfrak{p })/I^{k})\) and \(g(k)=\min_{\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)}\omega((I^{k}:\mathfrak{p} )/I^{k})\), by statement (b) it follows that \(f(k)\) and \(g(k)\) are the required eventually linear functions in \(k\). Statement (c) follows.
To prove statement (b), we construct a suitable module that encodes the growth of the modules \((I^{k}:\mathfrak{p})/I^{k}\). Indeed, we define it in the following more general context. Let \(I\) be an ideal of a commutative Noetherian domain \(R\) and let \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\). Then we set
\[\mathrm{Soc}_{\mathfrak{p}}(I)\ =\ \bigoplus_{k\geq 0}(I^{k}:\mathfrak{p})/I^{k},\]
and \(\mathrm{Soc}_{\mathfrak{p}}(I)_{k}=(I^{k}:\mathfrak{p})/I^{k}\) for all \(k\geq 0\).
The symbol "\(\mathrm{Soc}\)" is used, because when \(R=S\) or \(R\) is local and \(\mathfrak{p}=\mathfrak{m}\) is the (graded) maximal ideal, then \((I^{k}:\mathfrak{m})/I^{k}\) is the _socle module_ of \(S/I^{k}\), see [7].
The first step consists in showing that \(\mathrm{Soc}_{\mathfrak{p}}(I)\) is a finitely generated graded module over a suitable ring. For this aim, we introduce the following ring,
\[\mathcal{F}_{\mathfrak{p}}(I)\ =\ \bigoplus_{k\geq 0}(I^{k}/\mathfrak{p}I^{k}),\]
and we set \(\mathcal{F}_{\mathfrak{p}}(I)_{k}=I^{k}/\mathfrak{p}I^{k}\). We define addition in the obvious way and multiplication as follows. If \(a\in I^{k}/\mathfrak{p}I^{k}\) and \(b\in I^{\ell}/\mathfrak{p}I^{\ell}\), then \(ab\in I^{k+\ell}/\mathfrak{p}I^{k+\ell}\). It is routine to check that this multiplication is well-defined.
As before, we note that if \(R=S\) or \(R\) is local and \(\mathfrak{p}=\mathfrak{m}\) is the maximal ideal, then \(\mathcal{F}_{\mathfrak{m}}(I)=\bigoplus_{k\geq 0}(I^{k}/\mathfrak{m}I^{k})\) is the well-known _fiber cone_ of \(I\).
With the notation introduced, we have
**Theorem 2.2**.: _Let \(I\) be an ideal of a Noetherian commutative domain \(R\) and let \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\). Then, \(\mathrm{Soc}_{\mathfrak{p}}(I)\) is a finitely generated graded \(\mathcal{F}_{\mathfrak{p}}(I)\)-module._
Proof.: Firstly, we show that \(\mathrm{Soc}_{\mathfrak{p}}(I)\) has the structure of a graded \(\mathcal{F}_{\mathfrak{p}}(I)\)-module. For this purpose, let \(f\in I^{\ell}/\mathfrak{p}I^{\ell}\). It is clear that multiplication by \(f\) induces a map \((I^{k}:\mathfrak{p})/I^{k}\to(I^{k+\ell}:\mathfrak{p})/I^{k+\ell}\) for any \(k\geq 0\). Hence \(\mathcal{F}_{\mathfrak{p}}(I)_{\ell}\,\mathrm{Soc}_{\mathfrak{p}}(I)_{k}\subseteq \mathrm{Soc}_{\mathfrak{p}}(I)_{k+\ell}\).
To prove that \(\mathrm{Soc}_{\mathfrak{p}}(I)\) is a finitely generated \(\mathcal{F}_{\mathfrak{p}}(I)\)-module, we consider
\[J=(0:_{\mathrm{gr}_{I}(R)}\mathfrak{p})=\{f\in\mathrm{gr}_{I}(R):f\mathfrak{p} =0\},\]
\(i.e.\), the annihilator of \(\mathfrak{p}\) in the associated graded ring of \(I\), \(\mathrm{gr}_{I}(R)=\bigoplus_{k\geq 0}(I^{k}/I^{k+1})\). Recall that \(\mathrm{gr}_{I}(R)\) is a Noetherian ring [35, Proposition (10.D)]. Thus, as an ideal of \(\mathrm{gr}_{I}(R)\), \(J\) is a finitely generated graded \(\mathrm{gr}_{I}(R)\)-module. Since \(\mathfrak{p}\) annihilates \(J\), then
\(J\) has also the structure of a finitely generated graded \(\operatorname{gr}_{I}(R)/\mathfrak{p}\operatorname{gr}_{I}(R)\)-module. But
\[\operatorname{gr}_{I}(R)/\mathfrak{p}\operatorname{gr}_{I}(R) = \frac{\bigoplus_{k\geq 0}(I^{k}/I^{k+1})}{\mathfrak{p}\bigoplus_{k \geq 0}(I^{k}/I^{k+1})}=\frac{\bigoplus_{k\geq 0}(I^{k}/I^{k+1})}{\bigoplus_{k\geq 0 }(\mathfrak{p}I^{k}/I^{k+1})}\] \[= \bigoplus_{k\geq 0}\frac{I^{k}/I^{k+1}}{\mathfrak{p}I^{k}/I^{k+1} }=\bigoplus_{k\geq 0}(I^{k}/\mathfrak{p}I^{k})\] \[= \mathcal{F}_{\mathfrak{p}}(I).\]
Consequently, \(J\) is a finitely generated graded \(\mathcal{F}_{\mathfrak{p}}(I)\)-module.
Let us show that \(\operatorname{Soc}_{\mathfrak{p}}(I)_{k+1}=J_{k}\) for \(k\gg 0\). For this purpose, we compute the \(k\)th graded component of \(J\). We have
\[J_{k} = \{f\in\operatorname{gr}_{I}(R)_{k}:f\mathfrak{p}=0\}=\{f\in I^{ k}/I^{k+1}\ :f\mathfrak{p}=0\}\] \[= \{f\in I^{k}:f\mathfrak{p}\in I^{k+1}\}/I^{k+1}=(\{f\in R:f \mathfrak{p}\in I^{k+1}\}\cap I^{k})/I^{k+1}\] \[= ((I^{k+1}:\mathfrak{p})\cap I^{k})/I^{k+1}.\]
By Ratliff [37, Corollary 4.2], there exists \(r\) such that \((I^{k+1}:I)=I^{k}\) for all \(k\geq r\). Whereas, by Brodmann [2], there exists \(b\) such that \(\operatorname{Ass}(I^{k})=\operatorname{Ass}^{\infty}(I)\) for all \(k\geq b\). Let \(k^{*}=\max\{r,b\}\). Next, we show that \(\operatorname{Soc}_{\mathfrak{p}}(I)_{k+1}=J_{k}\) for \(k\geq k^{*}\).
Let \(k\geq k^{*}\). We claim that \(\mathfrak{p}\) contains \(I\). Indeed, \(\mathfrak{p}\in\operatorname{Ass}(I^{k})\), hence \(I^{k}\subseteq\mathfrak{p}\). Let \(a\in I\), then \(a^{k}\in I^{k}\subseteq\mathfrak{p}\). Since \(\mathfrak{p}\) is prime, actually \(a\in\mathfrak{p}\) and so \(I\subseteq\mathfrak{p}\). Therefore, \((I^{k+1}:\mathfrak{p})\subseteq(I^{k+1}:I)=I^{k}\) by the Ratliff property. Hence,
\[J_{k}=((I^{k+1}:\mathfrak{p})\cap I^{k})/I^{k+1}=(I^{k+1}:\mathfrak{p})/I^{k+1 }=\operatorname{Soc}_{\mathfrak{p}}(I)_{k+1}.\]
Consequently, we obtain that \(\operatorname{Soc}_{\mathfrak{p}}(I)_{k\geq k^{*}+1}=J_{\geq k^{*}}\), where \(M_{\geq\ell}\) denotes \(\bigoplus_{k\geq\ell}M_{\ell}\) if \(M=\bigoplus_{k\geq 0}M_{k}\) is graded. Since \(J\) is finitely generated as a \(\mathcal{F}_{\mathfrak{p}}(I)\)-module, it follows that \(\operatorname{Soc}_{\mathfrak{p}}(I)\) is a finitely generated \(\mathcal{F}_{\mathfrak{p}}(I)\)-module as well.
Now, we assume furthermore that \(R=S=K[x_{1},\ldots,x_{n}]\) is the standard graded polynomial ring with \(K\) a field, that \(I\) is a graded ideal of \(S\) and \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\) is a stable prime of \(I\). Then, \(I^{k}/\mathfrak{p}I^{k}\) is a graded \(S\)-module, for all \(k\geq 0\). Therefore, \(\mathcal{F}_{\mathfrak{p}}(I)\) is in a natural way a bigraded ring:
\[\mathcal{F}_{\mathfrak{p}}(I)\ =\ \bigoplus_{d,k\geq 0}(I^{k}/\mathfrak{p}I^{k}) _{d}.\]
In particular, we set \(\mathcal{F}_{\mathfrak{p}}(I)_{(d,k)}=(I^{k}/\mathfrak{p}I^{k})_{d}\) and \(\operatorname{bideg}(f)=(d,k)\) for \(f\in\mathcal{F}_{\mathfrak{p}}(I)_{(d,k)}\).
Note that each module \((I^{k}:\mathfrak{p})/I^{k}\) is a graded \(S\)-module. Thus, we can write
\[\operatorname{Soc}_{\mathfrak{p}}(I)\ =\ \bigoplus_{d,k\geq 0}\operatorname{Soc}_{ \mathfrak{p}}(I)_{(d,k)}\]
where \(\operatorname{Soc}_{\mathfrak{p}}(I)_{(d,k)}=((I^{k}:\mathfrak{p})/I^{k})_{d}\). Hence, \(\operatorname{Soc}_{\mathfrak{p}}(I)\) is a bigraded \(\mathcal{F}_{\mathfrak{p}}(I)\)-module, because \(\mathcal{F}_{\mathfrak{p}}(I)_{(d_{1},\ell)}\operatorname{Soc}_{\mathfrak{p}}(I )_{(d_{2},k)}\subseteq\operatorname{Soc}_{\mathfrak{p}}(I)_{(d_{1}+d_{2},k+\ell)}\).
Therefore, we have proved that
**Corollary 2.3**.: _Let \(I\) be a graded ideal of \(S=K[x_{1},\ldots,x_{n}]\) with \(K\) a field and let \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\). Then, \(\operatorname{Soc}_{\mathfrak{p}}(I)\) is a finitely generated bigraded \(\mathcal{F}_{\mathfrak{p}}(I)\)-module._
Let \(u_{1},\ldots,u_{m}\) be a minimal system of homogeneous generators of \(I\). It is well-known that the associated graded ring \(\operatorname{gr}_{I}(S)\) has a presentation
\[\varphi:T=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]\to\operatorname{gr}_{I}(S)\]
defined by setting
\[\varphi(x_{i})=x_{i}+I\in\operatorname{gr}_{I}(S)_{0}=S/I,\ \ \text{for}\ \ 1\leq i\leq n,\]
\[\varphi(y_{i})=u_{i}+I^{2}\in\operatorname{gr}_{I}(S)_{1}=I/I^{2},\ \ \text{for}\ \ 1\leq i\leq m.\]
Since \(I\) is graded, \(\operatorname{gr}_{I}(S)\) is naturally bigraded, with \(\operatorname{gr}_{I}(S)_{(d,k)}=(I^{k}/I^{k+1})_{d}\). Moreover, \(T\) can be made into a bigraded ring by setting \(\operatorname{bideg}(x_{i})=(1,0)\) for \(1\leq i\leq n\), and \(\operatorname{bideg}(y_{i})=(\deg(u_{i}),1)\) for \(1\leq i\leq m\), where \(\deg(u_{i})\) is the degree of \(u_{i}\) in \(S\). With these bigradings, \(\varphi\) is a bigraded surjective ring homomorphism.
In the proof of Theorem 2.2 we have seen that \(\mathcal{F}_{\mathfrak{p}}(I)=\operatorname{gr}_{I}(S)/\mathfrak{p} \operatorname{gr}_{I}(S)\). Let \(\pi:\operatorname{gr}_{I}(S)\to\mathcal{F}_{\mathfrak{p}}(I)\) be the canonical epimorphism. Then, the composition map \(\psi=\pi\circ\varphi:T\to\mathcal{F}_{\mathfrak{p}}(I)\) is a surjective ring homomorphism. It is clear that \(\psi\) preserves the bigraded structure. Thus, \(\operatorname{Soc}_{\mathfrak{p}}(I)\) has also the structure of a bigraded \(T\)-module, if we set
\[af=\psi(a)f\quad\text{for all}\ \ a\in T\ \ \text{and all}\ \ f\in\operatorname{Soc}_{\mathfrak{p}}(I).\]
Since \(\psi\) is surjective and \(\operatorname{Soc}_{\mathfrak{p}}(I)\) is a finitely generated \(\mathcal{F}_{\mathfrak{p}}(I)\)-module, it follows that \(\operatorname{Soc}_{\mathfrak{p}}(I)\) is a finitely generated \(T\)-module, as well.
The following lemma is required. For a bigraded \(T\)-module \(M=\bigoplus_{d,k}M_{d,k}\), we set \(M_{(*,k)}=\bigoplus_{d}M_{(d,k)}\). Note that \(M_{(*,k)}\) becomes a graded \(S\)-module.
**Lemma 2.4**.: _Let \(T=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]\) be a bigraded polynomial ring, with \(K\) a field, \(\operatorname{bideg}(x_{i})=(1,0)\) for \(1\leq i\leq n\) and \(\operatorname{bideg}(y_{i})=(d_{i},1)\) for \(1\leq i\leq m\). Let \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) and \(S=K[x_{1},\ldots,x_{n}]\subset T\). Let \(M\) be a finitely generated bigraded \(T\)-module. Then,_
\[\operatorname{Tor}_{i}^{S}(S/\mathfrak{m},M_{(*,k)})\cong\operatorname{Tor}_{ i}^{T}(T/\mathfrak{m},M)_{(*,k)}\]
_for all \(i\) and \(k\)._
Proof.: Let \(\mathbb{F}:0\to\cdots\to F_{j}\to\cdots\to F_{1}\to F_{0}\to M\to 0\) be a minimal bigraded \(T\)-resolution of \(M\). Then,
\[\mathbb{F}_{k}:0\to\cdots\to(F_{j})_{(*,k)}\to\cdots\to(F_{1})_{(*,k)}\to(F_{ 0})_{(*,k)}\to M_{(*,k)}\to 0\]
is a graded (possibly non-minimal) free \(S\)-resolution of \(M_{(*,k)}=\bigoplus_{d}M_{(d,k)}\). Since \(\operatorname{Tor}_{i}^{T}(T/\mathfrak{m},M)=H_{i}(\mathbb{F}/\mathfrak{m} \mathbb{F})\) we have that \(\operatorname{Tor}_{i}^{T}(T/\mathfrak{m},M)_{(*,k)}=H_{i}(\mathbb{F}_{k}/ \mathfrak{m}\mathbb{F}_{k})\) which in turn is isomorphic to \(\operatorname{Tor}_{i}^{S}(S/\mathfrak{m},M_{(*,k)})\). The desired conclusion follows.
Note that \(T/\mathfrak{m}=K[y_{1},\ldots,y_{m}]\) and that \(\operatorname{Tor}_{0}^{T}(T/\mathfrak{m},\operatorname{Soc}_{\mathfrak{p}}(I))\) is a finitely generated bigraded \(T/\mathfrak{m}\)-module. Therefore, by the above lemma, we have
\[\alpha((I^{k}:\mathfrak{p})/I^{k}) =\alpha(\operatorname{Tor}_{0}^{S}(S/\mathfrak{m},(I^{k}:\mathfrak{ p})/I^{k}))=\alpha(\operatorname{Tor}_{0}^{S}(S/\mathfrak{m},\operatorname{Soc}_{ \mathfrak{p}}(I)_{(*,k)}))\] \[=\alpha(\operatorname{Tor}_{0}^{T}(T/\mathfrak{m},\operatorname{Soc }_{\mathfrak{p}}(I))_{(*,k)}).\]
Similarly, \(\omega((I^{k}:\mathfrak{p})/I^{k})=\omega(\operatorname{Tor}_{0}^{T}(T/ \mathfrak{m},\operatorname{Soc}_{\mathfrak{p}}(I))_{(*,k)})\).
From this discussion, Theorem 2.1(b) follows from the next more general statement, which is a variation of [13, Theorem 3.4].
**Proposition 2.5**.: _Let \(T=K[y_{1},\ldots,y_{s}]\) a polynomial ring, with \(\operatorname{bideg}(y_{i})=(d_{i},1)\) for \(1\leq i\leq s\) and \(K\) a field, and let \(M\) be a finitely generated bigraded \(T\)-module. Then, \(\alpha_{M}(k)=\min\{d:M_{(d,k)}\neq 0\}\) and \(\omega_{M}(k)=\max\{d:M_{(d,k)}\neq 0\}\) are linear functions in \(k\) for \(k\gg 0\)._
Proof.: The claim about the linearity of \(\omega_{M}(k)\) follows from [13, Theorem 3.4]. The proof of the claim of the linearity of \(\alpha_{M}(k)\) is similar, but we include here all the details for the convenience of the reader.
For any exact sequence \(0\to M\to N\to P\to 0\) of finitely generated bigraded \(T\)-modules we have \(\alpha_{N}(k)=\min\{\alpha_{M}(k),\alpha_{P}(k)\}\), for all \(k\).
Since \(M\) is a finitely generated \(T\)-module and \(T\) is Noetherian, by the bigraded version of [16, Proposition 3.7] there exists a sequence of bigraded \(T\)-submodules
\[0=M_{0}\subset M_{1}\subset\cdots\subset M_{i-1}\subset M_{i}=M\]
of \(M\) such that \(M_{j}/M_{j-1}\cong T/\mathfrak{p}_{j}\), with \(\mathfrak{p}_{j}\) a bigraded prime ideal of \(T\), for all \(1\leq j\leq i\). Hence, we may suppose that \(M=T/J\) with \(J\) a bigraded ideal of \(T\). We show that \(J\) can be replaced by a monomial ideal. For this aim, let \(>\) be a monomial order on \(T\), and let \(\operatorname{in}(J)\) be the initial ideal of \(J\) with respect to \(>\). The natural \(K\)-basis of \(T/J\) consists of all residue classes (modulo \(J\)) of all monomials not belonging to \(\operatorname{in}(J)\), see [29, Proposition 2.2.5.(a)]. The same residue classes modulo \(\operatorname{in}(J)\) form a \(K\)-basis for \(T/\operatorname{in}(J)\). Thus \(\alpha_{M}(k)=\alpha_{T/J}(k)=\alpha_{T/\operatorname{in}(J)}(k)\), and we can assume that \(M=T/J\) with \(J\) a monomial ideal of \(T\).
Recall that \(\operatorname{bideg}(y_{i})=(d_{i},1)\) for \(1\leq i\leq s\). For later convenience, after a harmless relabeling of the variables, we may suppose that
\[d_{1}\leq d_{2}\leq\cdots\leq d_{s}. \tag{1}\]
Assume that \(J\) is minimally generated by the monomials \(\mathbf{y}^{\mathbf{c}_{i}}=y_{1}^{c_{i,1}}y_{2}^{c_{i,2}}\cdots y_{s}^{c_{i,s}}\), for \(1\leq i\leq r\).
Let \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{s})\in\mathbb{N}^{s}\), we denote by \(\overline{\mathbf{y}^{\mathbf{a}}}\) the residue class of \(\mathbf{y}^{\mathbf{a}}=y_{1}^{a_{1}}y_{2}^{a_{2}}\cdots y_{s}^{a_{s}}\) in \(T/J\). Let \(k\geq 0\), by \(B_{k}\) we denote the minimal basis of \((T/J)_{k}\). Then, we can write \(\alpha_{M}(k)=\min\{v(\mathbf{a}):\overline{\mathbf{y}^{\mathbf{a}}}\in B_{k}\}\), where \(v(\mathbf{a})=\sum_{i=1}^{s}a_{i}\deg(y_{i})=\sum_{i=1}^{s}a_{i}d_{i}\).
Clearly, \(\overline{\mathbf{y}^{\mathbf{a}}}\in B_{k}\) if and only if \(\sum_{j=1}^{s}a_{j}=k\), and for all \(i=1,\ldots,s\), there exists \(j\) such that \(a_{j}<c_{i,j}\). Denote by \(L\) the set of all maps \(\{1,\ldots,r\}\to\{1,\ldots,s\}\). We can decompose the set \(B_{k}\) as the union \(\bigcup_{f\in L}B_{k,f}\), where
\[B_{k,f}=\Big{\{}\overline{\mathbf{y}^{\mathbf{a}}}\;:\;\sum_{j=1}^{s}a_{j}=k \text{ and }a_{f(i)}<c_{i,f(i)},\;i=1,\ldots,r\Big{\}}.\]
With this in mind, we can write \(\alpha_{M}(k)=\min_{f\in L}\alpha_{f}(k)\), where \(\alpha_{f}(k)\) is defined as \(\alpha_{f}(k)=\min\{v(\mathbf{a}):\overline{\mathbf{y}^{\mathbf{a}}}\in B_{k,f}\}\). Hence, it is enough to prove that \(\alpha_{f}(k)\) is a linear function with integer coefficients for all \(f\in L\) and all \(k\gg 0\).
Fix \(f\in L\). Let \(\{j_{1}<j_{2}<\cdots<j_{t}\}\) be the image of \(f\). For \(h=1,\ldots,t\), we set \(c_{j_{h}}=\min\{c_{i,j_{h}}:i=1,\ldots,r\}-1\). Then, we have that
\[B_{k,f}=\Big{\{}\overline{\mathbf{y}^{\mathbf{a}}}\ :\ \sum_{j=1}^{s}a_{j}=k\ \text{and}\ a_{j_{h}}\leq c_{j_{h}},\ h=1,\ldots,t\Big{\}}.\]
Thus, \(\alpha_{f}(k)\) is given by the maximum of the _functional_\(v(\mathbf{a})\) on the following convex bounded set
\[C_{k,f}=\Big{\{}\mathbf{a}\ :\ \sum_{j=1}^{s}a_{j}=k\ \text{and}\ a_{j_{h}}\leq c _{j_{h}},\ h=1,\ldots,t\Big{\}}.\]
Let \(\ell\) be the smallest integer such that \(j_{1}=1\),..., \(j_{\ell}=\ell\) and \(j_{\ell+1}>\ell+1\). Thus, for \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{s})\in C_{k,f}\) we have \(a_{1}<c_{j_{1}},\ a_{2}<c_{j_{2}},\ \ldots,\ a_{\ell}<c_{j_{\ell}}\) and no bound on \(a_{j_{\ell+1}}\), except that \(\sum_{j=1}^{s}a_{j}=k\). We distinguish the two possible cases.
Case 1. Suppose \(\ell=s\). Then \(\sum_{j=1}^{s}a_{j}\) can be at most \(c_{j_{1}}+c_{j_{2}}+\cdots+c_{j_{\ell}}\). Thus, for all \(k\gg 0\), \(B_{k,f}=\emptyset\) and so \(\alpha_{f}(k)=0\).
Case 2. Suppose \(\ell<s\). We let \(k\) such that \(k\geq c_{j_{1}}+c_{j_{2}}+\cdots+c_{j_{\ell}}\). We claim that the functional \(v\) has its minimal value for \(\mathbf{a}_{*}=(c_{j_{1}},c_{j_{2}},\ldots,c_{j_{\ell}},k-\sum_{p=1}^{\ell}c_{ j_{p}},0,0,\ldots,0)\). Then, for all large \(k\gg 0\), we have that
\[\alpha_{f}(k)=v(\mathbf{a}_{*})=\sum_{p=1}^{\ell}c_{j_{p}}d_{j_{p}}+d_{j_{\ell +1}}(k-\sum_{p=1}^{\ell}c_{j_{p}}),\]
which is a linear function in \(k\) with integer coefficients, as desired.
Let \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{s})\in C_{k,f}\). Assume that for some \(1\leq i<j\leq s\) we have \(a_{i}<c_{i}\) and \(a_{j}>0\). Then, \(\mathbf{a}^{\prime}=(a_{1},a_{2}\ldots,a_{i}+1,\ldots,a_{j}-1,\ldots,a_{s})\) also belongs to \(C_{k,f}\) and \(v(\mathbf{a}^{\prime})\leq v(\mathbf{a})\) because by equation (1) we have \(d_{i}\leq d_{j}\). Thus, we see that the minimal value of \(v\) on \(C_{k,f}\) is achieved when we fill up the first "boxes" of \(\mathbf{a}\in C_{k,f}\) as much as possible. Finally, the functional \(v\) reaches its minimal value when \(\mathbf{a}=\mathbf{a}_{*}\), completing the proof.
Now, we come to our second fundamental result.
**Theorem 2.6**.: _Let \(I\subset S=K[x_{1},\ldots,x_{n}]\) be a graded ideal. Suppose that_
1. _either_ \(\operatorname{Ass}^{\infty}(I)=\operatorname{Max}^{\infty}(I)\) _or_
2. _for all_ \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\) _and all_ \(k\gg 0\)_,_ \((I^{k}:\mathfrak{p})/I^{k}\) _is generated in a single degree. Then,_ \(\operatorname{v}_{\mathfrak{p}}(I^{k})\)_, for all_ \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\)_, and_ \(\operatorname{v}(I^{k})\) _are linear functions in_ \(k\) _for_ \(k\gg 0\)_._
Proof.: Under hypothesis (a), for all \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\) and all \(k\gg 0\), by Theorem 1.1(c) we have \(\operatorname{v}_{\mathfrak{p}}(I^{k})=\alpha((I^{k}:\mathfrak{p})/I^{k})\). Thus, by Theorem 2.1(b), \(\operatorname{v}_{\mathfrak{p}}(I^{k})\) is a linear function in \(k\) for \(k\gg 0\). By Theorem 1.1(d), \(\operatorname{v}(I^{k})=\min\{\alpha((I^{k}:\mathfrak{p})/I^{k}):\mathfrak{p} \in\operatorname{Ass}^{\infty}(I)\}\) for \(k\gg 0\). Thus \(\operatorname{v}(I^{k})\) is a linear function in \(k\) for \(k\gg 0\).
Under hypothesis (b), for all \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\) and all \(k\gg 0\), we have \(\alpha((I^{k}:\mathfrak{p})/I^{k})=\omega((I^{k}:\mathfrak{p})/I^{k})\). By Theorem 2.1(a)-(b), it follows that \(\operatorname{v}_{\mathfrak{p}}(I^{k})=\alpha((I^{k}:\mathfrak{p})/I^{k})\) is a linear function in \(k\) for \(k\gg 0\). The assertion about \(\operatorname{v}(I^{k})\) follows once again.
**Example 2.7**.: Let \(I\subset S\) be a graded ideal without embedded primes. Thus \(\operatorname{Ass}(I)=\operatorname{Max}(I)\). Recall that the _\(k\)th symbolic power_ of \(I\subset S\) is the ideal defined as \(I^{(k)}=\bigcap_{\mathfrak{p}\in\operatorname{Ass}(I)}(I^{k}S_{\mathfrak{p}}\cap S)\). Suppose that \(I^{k}=I^{(k)}\) for all \(k\geq 1\). Since \(I\) does not have embedded primes, \(I^{k}=I^{(k)}=\bigcap_{\mathfrak{p}\in\operatorname{Ass}(I)}(I^{k}S_{ \mathfrak{p}}\cap S)\) is a primary decomposition of \(I^{k}\), for all \(k\geq 1\). Thus \(\operatorname{Ass}^{\infty}(I)=\operatorname{Ass}(I)=\operatorname{Max}(I)= \operatorname{Max}^{\infty}(I)\). Hence, for such an ideal the conclusion of Theorem 2.6 holds. Next, we give some examples.
1. Ideals of maximal minors [4, Corollary 3.5.3].
2. Binomial edge ideals of closed graphs [18, Corollary 3.4].
3. Normally torsionfree squarefree monomial ideals [29, Definition 1.4.5 and Theorem 1.4.6].
Theorem 1.1(c) combined with Theorem 2.1(b) yields
**Corollary 2.8**.: _Let \(I\subset S=K[x_{1},\ldots,x_{n}]\) be a graded ideal and let \(\mathfrak{p}\in\operatorname{Max}^{\infty}(I)\). Then \(\operatorname{v}_{\mathfrak{p}}(I^{k})\) is a linear function in \(k\) for \(k\gg 0\)._
We conclude this section with the following estimate on the growth of the functions \(\operatorname{v}_{\mathfrak{p}}(I^{k})\) for \(k\gg 0\). For the proof, we recall the following basic rules. Let \(I,I_{1},I_{2},\{J_{i}\}_{i}\) be ideals of a commutative Noetherian ring \(R\) and let \(\mathfrak{p}\) be a prime ideal of \(R\). Then,
1. \((I:\sum_{i}J_{i})=\bigcap_{i}(I:J_{i})\),
2. \(((I:I_{1}):I_{2})=(I:I_{1}I_{2})\),
3. if \(\mathfrak{p}=\bigcap_{i}J_{i}\), then \(\mathfrak{p}=J_{i}\) for some \(i\).
**Proposition 2.9**.: _Let \(I\subset S=K[x_{1},\ldots,x_{n}]\) be a graded ideal and let \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\). Then, we have_
\[\operatorname{v}_{\mathfrak{p}}(I^{k+1})\leq\operatorname{v}_{\mathfrak{p}}(I ^{k})+\omega(I),\ \text{ for all }k\gg 0.\]
_In particular, \(\operatorname{v}(I^{k+1})\leq\operatorname{v}(I^{k})+\omega(I)\) for all \(k\gg 0\)._
Proof.: By Brodmann and Ratliff, there exists \(k^{*}>0\) such that \(\operatorname{Ass}^{\infty}(I)=\operatorname{Ass}(I^{k})\) and \((I^{k+1}:I)=I^{k}\) for all \(k\geq k^{*}\). Fix \(k>k^{*}\) and let \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\).
Let \(f\in S\) be a homogeneous element such that \((I^{k}:f)=\mathfrak{p}\) and \(\deg(f)=\operatorname{v}_{\mathfrak{p}}(I^{k})\). Let \(f_{1},\ldots,f_{m}\) be a minimal homogeneous generating set of \(I\). By rules (ii) and (i),
\[\mathfrak{p}=(I^{k}:f) = (I^{k+1}:I):f=(I^{k+1}:fI)\] \[= (I^{k+1}:\sum_{i=1}^{m}(ff_{i}))=\bigcap_{i=1}^{m}(I^{k+1}:ff_{i}).\]
Hence, by rule (iii), we have \(\mathfrak{p}=(I^{k+1}:ff_{i})\) for some \(i\). By the definition of \(\operatorname{v}_{\mathfrak{p}}(I^{k+1})\), this means that \(\operatorname{v}_{\mathfrak{p}}(I^{k+1})\leq\deg(ff_{i})=\deg(f)+\deg(f_{i})\). The assertion follows, because \(\deg(f)=\operatorname{v}_{\mathfrak{p}}(I^{k})\) and \(\deg(f_{i})\leq\omega(I)\).
Theorems 1.1(a) and 2.1(a) combined with the previous result give immediately
**Corollary 2.10**.: _Let \(I\subset S=K[x_{1},\ldots,x_{n}]\) be a graded ideal and let \(\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)\). Then, we have_
\[\alpha((I^{k+1}:\mathfrak{p})/I^{k+1})\leq\omega((I^{k}:\mathfrak{p})/I^{k})+ \omega(I),\ \text{ for all }k\gg 0.\]
## 3. The v-number of monomial ideals in two variables
In this section, we consider monomial ideals of the polynomial ring in two variables \(S=K[x,y]\). Let \(I\subset S\) be a monomial ideal. As customary, we denote by \(G(I)\) the unique minimal monomial generating set of \(I\). Then
\[G(I)\ =\ \{x^{a_{1}}y^{b_{1}},x^{a_{2}}y^{b_{2}},\ldots,x^{a_{m}}y^{b_{m}}\},\]
where \(\mathbf{a}:a_{1}>a_{2}>\cdots>a_{m}\geq 0\) and \(\mathbf{b}:0\leq b_{1}<b_{2}<\cdots<b_{m}\). Conversely, given any two such sequences \(\mathbf{a}\) and \(\mathbf{b}\), the set \(\{x^{a_{1}}y^{b_{1}},x^{a_{2}}y^{b_{2}},\ldots,x^{a_{m}}y^{b_{m}}\}\) is the minimal monomial generating set of a monomial ideal of \(S\).
Therefore, the monomial ideals of \(S=K[x,y]\) are in bijection with all pairs \((\mathbf{a},\mathbf{b})\) of sequences \(\mathbf{a}:a_{1}>a_{2}>\cdots>a_{m}\geq 0\) and \(\mathbf{b}:0\leq b_{1}<b_{2}<\cdots<b_{m}\) as above. Hereafter, we write \(I=I_{\mathbf{a},\mathbf{b}}\) for \(I=(x^{a_{1}}y^{b_{1}},x^{a_{2}}y^{b_{2}},\ldots,x^{a_{m}}y^{b_{m}})\).
The natural \(K\)-basis of \(S/I_{\mathbf{a},\mathbf{b}}\) consists of the residue classes (modulo \(I_{\mathbf{a},\mathbf{b}}\)) of the monomials not belonging to \(I_{\mathbf{a},\mathbf{b}}\). These basis elements can be represented by the lattice points \((c,d)\in\mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}\) such that \(x^{c}y^{d}\notin I_{\mathbf{a},\mathbf{b}}\), as in the next picture
Our main goal in this section is to prove the following theorem.
**Theorem 3.1**.: _Let \(I\subset S=K[x,y]\) be a monomial ideal. Then, \(\mathrm{v}_{\mathfrak{p}}(I^{k})\), for all \(\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)\), and \(\mathrm{v}(I^{k})\) are linear functions in \(k\) for \(k\gg 0\)._
The associated prime ideals of a monomial ideal \(I\) are monomial prime ideals, that is, ideals generated by a subset of the variables [29, Corollary 1.3.9]. Thus, in our case \(\mathrm{Ass}(I)\subseteq\{(x),(y),(x,y)\}\). We set \(\mathfrak{p}_{x}=(x)\), \(\mathfrak{p}_{y}=(y)\) and \(\mathfrak{m}=(x,y)\).
We can compute \(\mathrm{Ass}(I_{\mathbf{a},\mathbf{b}})\) in terms of the sequences \(\mathbf{a}\) and \(\mathbf{b}\).
**Proposition 3.2**.: _Let \(I=I_{\mathbf{a},\mathbf{b}}\subset S\) be a monomial ideal. Then,_
* \(\mathfrak{p}_{x}\in\mathrm{Ass}(I)\) _if and only if_ \(a_{m}>0\)_._
* \(\mathfrak{p}_{y}\in\mathrm{Ass}(I)\) _if and only if_ \(b_{1}>0\)_._
* \(\mathfrak{m}\in\mathrm{Ass}(I)\) _if and only if_ \(m>1\)_, i.e.,_ \(I\) _is not a principal ideal._
Proof.: If \(\mathfrak{p}_{x}\in\mathrm{Ass}(I)\) then \(I\subseteq\mathfrak{p}_{x}=(x)\). Hence \(x\) divides all minimal monomial generators of \(I\). In particular, \(x\) divides \(x^{a_{m}}y^{b_{m}}\) and \(a_{m}>0\).
Conversely, let \(a_{m}>0\). Then \(x\) divides all minimal monomial generators of \(I\). Hence \(I\subseteq\mathfrak{p}_{x}=(x)\). Since \(\mathfrak{p}_{x}\) is of height one, it follows that \(\mathfrak{p}_{x}\in\mathrm{Ass}(I)\).
This proves (a), statement (b) can be proved similarly.
Finally, for the proof of (c), suppose \(\mathfrak{m}\in\operatorname{Ass}(I)\). If \(I\) is a principal ideal, then for some \(c\) and \(d\), \(I=(x^{c}y^{d})=(x^{c})\cap(y^{d})=\mathfrak{p}_{x}^{c}\cap\mathfrak{p}_{y}^{d}\) is the primary decomposition of \(I\), which contradicts our assumption. Hence \(I\) is not principal.
Conversely, suppose \(I\) is not principal, but \(\mathfrak{m}\notin\operatorname{Ass}(I)\). Then \(\operatorname{Ass}(I)\subseteq\{\mathfrak{p}_{x},\mathfrak{p}_{y}\}\). Since \(\mathfrak{p}_{x}\) and \(\mathfrak{p}_{y}\) are height one prime ideals, \(I=\mathfrak{p}_{x}^{c}\cap\mathfrak{p}_{y}^{d}=(x^{c})\cap(y^{d})=(x^{c}y^{d})\) for some \(c\) and \(d\), against our assumption. The assertion follows.
The following lemma is required.
**Lemma 3.3**.: _Let \(I=I_{\mathbf{a},\mathbf{b}}\subset S\) be a monomial ideal. Then,_
\[x^{ka_{1}}y^{kb_{1}},x^{ka_{m}}y^{kb_{m}}\in G(I^{k}),\ \ \text{for all}\ \ k\geq 1.\]
Proof.: Let \(k\geq 1\). We know that
\[I^{k}\ =\ \big{(}\prod_{i=1}^{m}(x^{a_{i}}y^{b_{i}})^{k_{i}}\ :\ \sum_{i=1}^{m}k_{i}=k \big{)}.\]
Let \(x^{r}y^{s}\) be an arbitrary generator of \(I^{k}\) different from \(x^{ka_{1}}y^{kb_{1}}\). Then, we have \(r=k_{1}a_{1}+\ldots k_{m}a_{m}\), \(s=k_{1}b_{1}+\ldots k_{m}b_{m}\), \(\sum_{i=1}^{m}k_{i}=k\) and \(k_{i}>0\) for some \(i\neq 1\). Thus,
\[ka_{1}=k_{1}a_{1}+k_{2}a_{1}+\ldots k_{m}a_{1}>k_{1}a_{1}+k_{2}a_{2}+\ldots k_{ m}a_{m}=r\]
and
\[kb_{1}=k_{1}b_{1}+k_{2}b_{1}+\ldots k_{m}b_{1}<k_{1}b_{1}+k_{2}b_{2}+\ldots k_{ m}b_{m}=s.\]
Therefore, \(x^{r}y^{s}\) does not divide \(x^{ka_{1}}y^{kb_{1}}\). This shows that \(x^{ka_{1}}y^{kb_{1}}\) is a minimal generator of \(I^{k}\). By a similar argument we obtain that \(x^{ka_{m}}y^{kb_{m}}\in G(I^{k})\).
**Corollary 3.4**.: _Let \(I=I_{\mathbf{a},\mathbf{b}}\subset S\) be a monomial ideal. Then \(\operatorname{Ass}(I^{k})=\operatorname{Ass}^{\infty}(I)\), for all \(k\geq 1\). In particular, \(\operatorname{astab}(I)=1\)._
Proof.: Let us prove that \(\operatorname{Ass}(I)=\operatorname{Ass}(I^{k})\) for all \(k\geq 2\). By the previous proposition, \(\mathfrak{p}_{x}\in\operatorname{Ass}(I)\) if and only if \(x\) divides all minimal monomial generators of \(I\). Hence, if \(\mathfrak{p}_{x}\in\operatorname{Ass}(I)\), then \(\mathfrak{p}_{x}\in\operatorname{Ass}(I^{k})\) for all \(k\geq 2\), as well.
Now, suppose that \(\mathfrak{p}_{x}\in\operatorname{Ass}(I^{k})\) for some \(k\geq 2\), but \(\mathfrak{p}_{x}\notin\operatorname{Ass}(I)\). Then, by Proposition 3.2(a) we have \(a_{m}=0\). Hence \(y^{b_{m}}\in G(I)\). By the previous corollary, \(y^{kb_{m}}\in G(I^{k})\), as well. But this is impossible, because \(y^{kb_{m}}\notin\mathfrak{p}_{x}\), but by assumption \(\mathfrak{p}_{x}\) contains \(I^{k}\). Hence \(a_{m}>0\) and \(\mathfrak{p}_{x}\in\operatorname{Ass}(I)\), as wanted.
The same reasoning can be applied to show that \(\mathfrak{p}_{y}\in\operatorname{Ass}(I)\) if and only if \(\mathfrak{p}_{y}\in\operatorname{Ass}(I^{k})\), for any \(k\geq 2\).
Finally, by the previous proposition, \(\mathfrak{m}\in\operatorname{Ass}(I)\) if and only \(I\) is not principal. Lemma 3.3 implies that \(I\) is not principal if and only if \(I^{k}\) is not principal for any \(k\geq 2\). Thus \(\mathfrak{m}\in\operatorname{Ass}(I)\) if and only if \(\mathfrak{m}\in\operatorname{Ass}(I^{k})\) for any \(k\geq 2\).
Next, we compute the functions \(\operatorname{v}_{\mathfrak{p}_{x}}(I^{k}_{\mathbf{a},\mathbf{b}})\), \(\operatorname{v}_{\mathfrak{p}_{y}}(I^{k}_{\mathbf{a},\mathbf{b}})\).
**Corollary 3.5**.: _Let \(I=I_{\mathbf{a},\mathbf{b}}\subset S\) be a monomial ideal. The following holds._
1. _If_ \(a_{m}>0\)_, then_ \(\operatorname{v}_{\mathfrak{p}_{x}}(I^{k})=k(a_{m}+b_{m})-1\)_, for all_ \(k\geq 1\)_._
2. _If_ \(b_{1}>0\)_, then_ \(\operatorname{v}_{\mathfrak{p}_{y}}(I^{k})=k(a_{1}+b_{1})-1\)_, for all_ \(k\geq 1\)
Proof.: The only generator \(\overline{u}\in(I:\mathfrak{p}_{x})/I\) such that \((I:u)=\mathfrak{p}_{x}\) is \(x^{a_{m}-1}y^{b_{m}}\), for it has the largest \(y\)-degree. For each \(k\geq 1\), from Lemma 3.3, \(x^{ka_{m}}y^{kb_{m}}\in G(I^{k})\) and such generator has the highest \(y\)-degree. Thus, \(\overline{u}=\overline{x^{ka_{m}-1}y^{kb_{m}}}\) is the only generator of \((I^{k}:\mathfrak{p}_{x})/I^{k}\) such that \((I^{k}:u)=\mathfrak{p}_{x}\). Similarly, one can prove (b).
We are in the position to prove our main result in the section.
Proof of Theorem 3.1.: By Corollary 3.4, \(\operatorname{Ass}(I^{k})=\operatorname{Ass}^{\infty}(I)\) for all \(k\geq 1\). If \(\mathfrak{p}_{x}\) or \(\mathfrak{p}_{y}\) belongs to \(\operatorname{Ass}^{\infty}(I)\), then \(\operatorname{v}_{\mathfrak{p}_{x}}(I^{k})\) or \(\operatorname{v}_{\mathfrak{p}_{y}}(I^{k})\) is a linear function in \(k\) for \(k\gg 0\), by Corollary 3.5. If \(\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)\), then \(\operatorname{v}_{\mathfrak{m}}(I^{k})\) is a linear function in \(k\) for \(k\gg 0\) because \(\mathfrak{m}\in\operatorname{Max}^{\infty}(I)\) (Corollary 2.8). Finally, it follows by definition that \(\operatorname{v}(I^{k})\) is a linear function in \(k\) for \(k\gg 0\), as well.
In the next proposition, we show how to compute \(\operatorname{v}_{\mathfrak{m}}(I)\) for a non principal monomial ideal \(I=I_{\mathbf{a},\mathbf{b}}\subset S\). For our convenience, if \(c\geq 1\), in the proof of the next result we regard \(x^{-c}\) and \(y^{-c}\) as \(1\).
**Proposition 3.6**.: _Let \(I=I_{\mathbf{a},\mathbf{b}}\subset S\) be a non principal monomial ideal. Then,_
\[(I:\mathfrak{m})/I=\big{(}x^{a_{j}-1}y^{b_{j+1}-1}\ :\ 1\leq j\leq m-1\big{)}/I. \tag{2}\]
_In particular,_
\[\operatorname{v}_{\mathfrak{m}}(I)=\min\{a_{j}+b_{j+1}-2\ :\ 1\leq j\leq m-1\}.\]
Proof.: Firstly, we compute \(I:\mathfrak{m}\). We have
\[\begin{array}{rcl}I:\mathfrak{m}&=&(I:\mathfrak{p}_{x})\cap(I:\mathfrak{p}_ {y})\\ &=&(x^{a_{1}-1}y^{b_{1}},\ldots,x^{a_{m}-1}y^{b_{m}})\cap(x^{a_{1}}y^{b_{1}-1},\ldots,x^{a_{m}}y^{b_{m}-1})\\ &=&\big{(}\mathrm{lcm}(x^{a_{i}-1}y^{b_{i}},x^{a_{j}}y^{b_{j}-1})\ :\ 1\leq i\leq m,\ 1 \leq j\leq m\big{)}\\ &=&\big{(}\mathrm{lcm}(x^{a_{i}-1}y^{b_{i}},x^{a_{j+1}}y^{b_{j+1}-1})\ :\ 1\leq i\leq m,\ 0\leq j\leq m-1\big{)}\\ &=&\big{(}x^{\max\{a_{i}-1,a_{j+1}\}}y^{\max\{b_{i},b_{j+1}-1\}}\ :\ 1\leq i\leq m,\ 0\leq j\leq m-1\big{)}.\end{array} \tag{3}\]
Fix \(j\in\{0,\ldots,m-1\}\) and let \(i\in\{1,\ldots,m\}\).
If \(i\leq j\), we have \(a_{i}\geq a_{j}>a_{j+1}\) and \(b_{i}\leq b_{j}<b_{j+1}\). Therefore \(x^{a_{j+1}}|x^{a_{i}-1}\) and \(y^{b_{i}}|y^{b_{j+1}-1}\). Hence,
\[x^{a_{j}-1}y^{b_{j+1}-1}\in(I:\mathfrak{m})\ \ \text{divides}\ \ x^{\max\{a_{i}-1,a_{j+1}\}}y^{\max\{b_{i},b_{j+1}-1\}}\ \ \text{for}\ \ i\leq j. \tag{4}\]
If \(i>j\), we have \(a_{i}-1\leq a_{j+1}\) and \(b_{i}\geq b_{j+1}-1\), so \(x^{a_{i}-1}|x^{a_{j+1}}\) and \(y^{b_{j+1}-1}|y^{b_{i}}\). Hence,
\[x^{a_{j+1}}y^{b_{i}}\in(I:\mathfrak{m})\ \ \text{divides}\ \ x^{\max\{a_{i}-1,a_{j+1}\}}y^{\max\{b_{i},b_{j+1}-1\}}\ \ \text{for}\ \ i>j. \tag{5}\]
Thus, by equations (3), (4) and (5) we have
\[I:\mathfrak{m}\ =\ \big{(}x^{a_{j}-1}y^{b_{j+1}-1},x^{a_{j+1}}y^{b_{i}}\ :\ 1 \leq j\leq m-1,\ j+1\leq i\leq m\big{)}.\]
Note that for each \(i\geq j+1\) we have \(x^{a_{j+1}}y^{b_{i}}\in I\). It is clear that \(x^{a_{j}-1}y^{b_{j+1}-1}\notin I\), for all \(j=1,\ldots,m-1\). Hence, equation (2) follows.
The claim about \(\operatorname{v}_{\mathfrak{m}}(I)\) follows from (2) and Theorem 1.1(c).
As a consequence of our discussion, we obtain the next formula that shows us how to compute the v-number of \(I_{\mathbf{a,b}}\) solely in terms of the sequences \(\mathbf{a}\) and \(\mathbf{b}\).
**Theorem 3.7**.: _Let \(I=I_{\mathbf{a,b}}\subset S\) be a monomial ideal. Then_
\[\mathrm{v}(I)=\begin{cases}\min\{a_{i}+b_{i+1}-2\ :\ 1\leq i\leq m-1\},\text{ if }b_{1}=0 \text{ and }a_{m}=0,\\ \min\{a_{1}+b_{1}-1,a_{i}+b_{i+1}-2\ :\ 1\leq i\leq m-1\},\text{ if }b_{1}\neq 0 \text{ and }a_{m}=0,\\ \min\{a_{m}+b_{m}-1,a_{i}+b_{i+1}-2\ :\ 1\leq i\leq m-1\},\text{ if }b_{1}=0 \text{ and }a_{m}\neq 0,\\ \min\{a_{1}+b_{1}-1,a_{m}+b_{m}-1,a_{i}+b_{i+1}-2\ :\ 1\leq i\leq m-1\},\text{ otherwise.}\end{cases}\]
Proof.: Suppose \(I\) is non principal. Then, the statement follows by combining Proposition 3.2, Corollary 3.5 and Proposition 3.7. Now, if \(I\) is principal, the above formulas also hold. Indeed, in this case \(m=1\) and in the above last three minimums one does not have to consider the terms \(a_{i}+b_{i+1}-2\) because \(m-1=0\).
In all the examples we could check with _Macaulay2_[20], for a monomial ideal \(I\subset S=K[x,y]\), if the v-function of \(I\) is \(\mathrm{v}(I^{k})=ak+b\), \(k\gg 0\), we always have that \(b\geq-1\). At present, we do not know how to prove this lower bound. On the other hand, for any such linear function \(f(k)=ak+b\), (\(a\geq 1\), \(b\geq-1\)), there exists a monomial ideal \(I\subset S=K[x,y]\) such that \(\mathrm{v}(I^{k})\) agrees with \(f(k)\) for all \(k\geq 1\), as we show next.
**Theorem 3.8**.: _Let \(a\geq 1\) and \(b\geq-1\) be integers. Then, there exists a monomial ideal \(I\subset S=K[x,y]\) such that_
\[\mathrm{v}(I^{k})=ak+b,\ \text{ for all }\ k\geq 1.\]
Proof.: We claim that \(I=(x^{a},x^{a-1}y^{b+2})=x^{a-1}(x,y^{b+2})\) satisfies our assertion. For this aim, let us show that \(I^{k}=(x^{ka-i}y^{i(b+2)}:0\leq i\leq k)\) for all \(k\geq 1\). Indeed,
\[I^{k} = x^{k(a-1)}(x,y^{b+2})^{k}=x^{k(a-1)}\sum_{i=0}^{k}(x^{k-i}y^{i(b +2)})\] \[= (x^{ka-i}y^{i(b+2)}:0\leq i\leq k).\]
Since \(ka>ka-1>\cdots>ka-k\) and \(b+2<2(b+2)<\cdots<k(b+2)\), it follows that \(G(I^{k})=\{x^{ka-i}y^{i(b+2)}:0\leq i\leq k\}\).
Note that \(\mathrm{Ass}^{\infty}(I)=\{\mathfrak{p}_{x},\mathfrak{m}\}\). By Corollary 3.5(a) we have
\[\mathrm{v}_{\mathfrak{p}_{x}}(I^{k})=k(a+b+1)-1.\]
Whereas, by Proposition 3.6,
\[\mathrm{v}_{\mathfrak{m}}(I^{k}) =\ \min\{(ka-i)+(i+1)(b+2)-2:0\leq i\leq k-1\}\] \[=\ \min\{ka+(i+1)b+i:0\leq i\leq k-1\}\] \[=\ ak+b.\]
Finally, \(\mathrm{v}(I^{k})=\min\{\mathrm{v}_{\mathfrak{p}_{x}}(I^{k}),\mathrm{v}_{ \mathfrak{m}}(I^{k})\}=\min\{k(a+b+1)-1,ak+b\}=ak+b.\)
## 4. The v-number of ideals with linear powers
In this section, we consider several classes of graded ideals \(I\) arising from combinatorial contexts, with a particular focus on ideals having linear powers, and in some cases we compute explicitly the v-function \(\mathrm{v}(I^{k})\).
Hereafter, \(S\) denotes the standard graded polynomial ring \(K[x_{1},\ldots,x_{n}]\) with coefficients in a field \(K\), and \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) denotes the unique graded maximal ideal of \(S\). Recall that a graded ideal \(I\subset S\) has a _\(d\)-linear resolution_ if it is generated in a single degree \(d\), and for all \(i\geq 0\), \(\beta_{i,j}(I)=0\) if \(j\neq i+d\).
We say that \(I\) has _linear powers_ if \(I^{k}\) has a linear resolution, for all \(k\geq 1\). Famous examples of ideals with linear powers are given in the following list.
1. Edge ideals with linear resolution [29, Theorem 10.2.6].
2. Polymatroidal ideals [29, Corollary 12.6.4].
3. Hibi ideals [29, Corollary 10.2.9 and Theorem 9.1.13], or [11, Corollary 4.11].
In the following Theorems 4.4, 4.8 and 4.10, we show that for any ideal \(I\) in the above list, we have that \(\mathrm{v}(I^{k})=\alpha(I)k-1\), for all \(k\geq 1\),
Due to these results, and experimental evidence, we expect that
**Conjecture 4.1**.: _Let \(I\subset S\) be a graded ideal with linear powers. Then_
\[\mathrm{v}(I^{k})=\alpha(I)k-1,\ \ \text{for all }k\geq 1.\]
If \(I\) does not have linear powers, the conclusion of Conjecture 4.1 is no longer valid. Next example is due to Terai [9, Remark 3]. Let \(\mathrm{char}(K)\neq 2\), then the Stanley Reisner ideal \(I=(abd,abf,ace,adc,aef,bde,bcf,bce,cdf,def)\) of the minimal triangulation of the projective plane has a linear resolution, while \(I^{2}\) has not. By using _Macaulay2_[20], we have \(\mathrm{v}(I)=\alpha(I)=3\) and \(\mathrm{v}(I^{k})=3k-1\) for all \(k\geq 2\).
Before verifying Conjecture 4.1 for the ideals listed in (i), (ii) and (iii), we state some useful results that will be needed later.
If \(I\subset S\) is a monomial ideal, then all associated primes of \(I\) are monomial prime ideals, that is, ideals generated by a subset of the variables [29, Corollary 1.3.9]. Hereafter, for a positive integer \(n\), we set \([n]=\{1,2,\ldots,n\}\). Let \(A\) be a non empty subset of \([n]\). We denote by \(\mathfrak{p}_{A}\) the monomial prime ideal \((x_{i}:i\in A)\).
The following result [38, Proposition 3.11] of Saha and Sengupta provides an useful general method to bound \(\mathrm{v}(I)\) from above, when \(I\) is a monomial ideal.
**Proposition 4.2**.: _Let \(I\subset S\) be a monomial ideal and \(f\in S\setminus I\) a monomial. Then,_
\[\mathrm{v}(I)\leq\mathrm{v}(I:f)+\deg(f).\]
On the other hand, one always has
**Proposition 4.3**.: _Let \(I\subset S\) be a monomial ideal. Then,_
\[\mathrm{v}_{\mathfrak{p}}(I)\geq\alpha(I)-1,\ \ \text{for all }\ \mathfrak{p}\in \mathrm{Ass}(I).\]
Proof.: Let \(\mathfrak{p}\in\mathrm{Ass}(I)\) and let \(u\in S\) be a monomial such that \((I:u)=\mathfrak{p}\) and \(\deg(u)=\mathrm{v}_{\mathfrak{p}}(I)\). Then \(\mathfrak{p}=\mathfrak{p}_{A}=(x_{i}:i\in A)\) for some \(A\subseteq[n]\). Thus \(x_{i}u\in I\) for all \(i\in A\). In particular, \(\deg(x_{i}u)\geq\alpha(I)\). Hence, \(\deg(u)\geq\alpha(I)-1\), as desired.
### Edge ideals with linear resolution
Let \(G\) be a finite simple graph with vertex set \(V(G)=[n]\) and edge set \(E(G)\). The _edge ideal_ of \(G\) is the monomial ideal \(I(G)\) of \(S\) generated by the monomials \(x_{i}x_{j}\) such that \(\{i,j\}\in E(G)\). A graph \(G\) is _complete_ if every \(\{i,j\}\) with \(i,j\in[n]\), \(i\neq j\), is an edge of \(G\). The _open neighbourhood_ of \(i\in V(G)\) is the set
\[N_{G}(i)=\big{\{}j\in V(G):\{i,j\}\in E(G)\big{\}}.\]
Whereas, the _closed neighbourhood_ of \(i\in V(G)\) is defined as \(N_{G}[i]=N_{G}(i)\cup\{i\}\).
A graph \(G\) is called _chordal_ if it has no induced cycles of length bigger than three. Recall that a _perfect elimination order_ of \(G\) is an ordering \(v_{1},\ldots,v_{n}\) of its vertex set \(V(G)\) such that \(N_{G_{i}}(v_{i})\) induces a complete subgraph on \(G_{i}\), where \(G_{i}\) is the induced subgraph of \(G\) on the vertex set \(\{i,i+1,\ldots,n\}\). Hereafter, if \(1,2,\ldots,n\) is a perfect elimination order of \(G\), we denote it by \(x_{1}>x_{2}>\cdots>x_{n}\).
A famous theorem of Dirac guarantees that a finite simple graph \(G\) is chordal if and only if \(G\) admits a perfect elimination order [14].
The _complementary graph_\(G^{c}\) of \(G\) is the graph with vertex set \(V(G^{c})=V(G)\) and where \(\{i,j\}\) is an edge of \(G^{c}\) if and only if \(\{i,j\}\notin E(G)\). A graph \(G\) is called _cochordal_ if and only if \(G^{c}\) is chordal. In [19], Froberg proved that \(I(G)\) has a linear resolution if and only if \(G\) is cochordal.
As a consequence of Dirac and Froberg theorems we have
**Theorem 4.4**.: _Let \(I(G)\) be the edge ideal of a graph \(G\). Suppose that \(I(G)\) has a linear resolution. Then,_
\[\operatorname{v}(I(G)^{k})\ =\ 2k-1,\ \text{ for all }\ k\geq 1.\]
The proof is based upon the next lemma. If \(A\) is a subset of \(V(G)\), the induced subgraph of \(G\) on \(A\), is the graph on vertex set \(A\) and edges \(\{i,j\}\in E(G)\) such that \(i,j\in A\).
**Lemma 4.5**.: _Let \(I(G)\) be an edge ideal with linear resolution, and let \(x_{1}>x_{2}>\cdots>x_{n}\) be a perfect elimination order of \(G^{c}\). Then,_
\[(I(G):x_{1})\ =\ (x_{j}:j\in N_{G}(1)). \tag{6}\]
Proof.: It is clear that
\[(x_{j}:j\in N_{G}(1))\ \subseteq\ (I(G):x_{1}).\]
To end the proof, we show the opposite inclusion. Let \(x_{k}x_{\ell}\in I(G)\) and suppose that both \(k\) and \(\ell\) are not in \(N_{G}(1)\). Then \(\{1,k\},\{1,\ell\}\in E(G^{c})\), that is \(k,\ell\in N_{G^{c}}(1)\). Since, \(x_{1}>x_{2}>\cdots>x_{n}\) is a perfect elimination order of \(G^{c}\), it follows that that \(N_{G^{c}}(1)\) induces a complete subgraph of \(G^{c}_{2}\), where \(G^{c}_{2}\) is the induced subgraph of \(G^{c}\) on the vertex set \(\{2,\ldots,n\}\). Since \(k,\ell>1\), it follows that \(\{k,\ell\}\in E(G^{c})\), in contradiction with \(\{k,\ell\}\in E(G)\). Thus either \(k\in N_{G}(1)\) or \(\ell\in N_{G}(1)\) and formula (6) follows.
As a consequence, we obtain by a different method the next result already proved in [33, Proposition 3.19].
**Corollary 4.6**.: _Let \(I(G)\) be the edge ideal of a graph \(G\). Suppose that \(I(G)\) has a linear resolution. Then, \(\operatorname{v}(I(G))=1\)._
Proof.: We proceed by induction on \(|V(G)|\geq 2\) with the base case being trivial. Let \(|V(G)|>2\). Let \(x_{1}>x_{2}>\cdots>x_{n}\) be a perfect elimination order of \(G^{c}\). Then, by Lemma 4.5, equation (6) holds. Thus, by Proposition 4.2, \(\operatorname{v}(I(G))\leq\operatorname{v}(I(G):x_{1})+\deg(x_{1})=0+1=1\). On the other hand, by Proposition 4.3, \(\operatorname{v}(I(G))\geq\alpha(I(G))-1=1\). The assertion follows.
**Remark 4.7**.: Let \(I\subset S\) be a graded ideal. Suppose that \((I^{k+1}:I)=I^{k}\) and \(\mathfrak{p}\in\operatorname{Ass}(I^{k})\) for all \(k\geq 1\). Then, the proof of Proposition 2.9 shows that
\[\operatorname{v}_{\mathfrak{p}}(I^{k+1})\leq\operatorname{v}_{\mathfrak{p}}(I ^{k})+\omega(I)\]
for all \(k\geq 1\).
Now, we are in the position to prove Theorem 4.4.
Proof of Theorem 4.4.: By the previous result, \(\operatorname{v}(I(G))=1\). Therefore, for some \(\mathfrak{p}\in\operatorname{Ass}(I(G))\), we have \(\operatorname{v}_{\mathfrak{p}}(I(G))=1\). By [34, Theorem 2.15], we have
\[\operatorname{Ass}(I(G))\subseteq\operatorname{Ass}(I(G)^{2})\subseteq \cdots\subseteq\operatorname{Ass}(I(G)^{k})\subseteq\cdots.\]
Hence, \(\mathfrak{p}\in\operatorname{Ass}(I(G)^{k})\) for all \(k\geq 1\). By [34, Lemma 2.12], \((I(G)^{k+1}:I(G))=I(G)^{k}\) for all \(k\geq 1\). Since \(\alpha(I(G)^{k+1})=2(k+1)\) and \(\omega(I(G))=2\), by Remark 4.7 and Proposition 4.3, we have
\[2(k+1)-1\leq\operatorname{v}_{\mathfrak{p}}(I(G)^{k+1})\leq\operatorname{v}_{ \mathfrak{p}}(I(G)^{k})+2\]
for all \(k\geq 1\). By induction on \(k\geq 1\), we may assume that \(\operatorname{v}_{\mathfrak{p}}(I(G)^{k})=2k-1\). The above chain of inequalities gives \(\operatorname{v}_{\mathfrak{p}}(I(G)^{k+1})=2(k+1)-1=\alpha(I(G)^{k+1})-1\). By Proposition 4.3 it follows that \(\operatorname{v}(I(G)^{k})=2k-1\) for all \(k\geq 1\), as well.
### Polymatroidal ideals
For a monomial \(u\in S\), the _\(x_{i}\)-degree_ of \(u\) is the integer \(\deg_{x_{i}}(u)=\max\{j\geq 0:x_{i}^{j}\text{ {divides} }u\}\). Let \(I\subset S\) be a monomial ideal generated in a single degree. Then \(I\) is called _polymatroidal_ if the following exchange property holds: for all \(u,v\in G(I)\) and all \(i\) such that \(\deg_{x_{i}}(u)>\deg_{x_{i}}(v)\) there exists \(j\) such that \(\deg_{x_{j}}(u)<\deg_{x_{j}}(v)\) and \(x_{j}(u/x_{i})\in G(I)\).
Such a name is justified by the fact that the set of the multidegrees of the minimal generators of \(I\) is the set of the bases of a _discrete polymatroid_[29, Chapter 12].
This class of monomial ideals is very rich. Indeed, it includes
1. _Graphic matroids._ They are the ideals generated by the monomials \(\prod_{i\in F}x_{i}\), for all spanning forests \(F\) of a finite simple graph \(G\) on \(n\) vertices.
2. _Transversal polymatroidal ideals._ They are of the form \(I=\mathfrak{p}_{A_{1}}\mathfrak{p}_{A_{2}}\cdots\mathfrak{p}_{A_{r}}\), for some finite collection \(A_{1},\ldots,A_{r}\) of arbitrary nonempty subsets of \([n]\).
3. _Ideals of Veronese type._ Let \(\mathbf{c}=(c_{1},\ldots,c_{n})\in\mathbb{Z}^{n}\) be a vector with non negative entries. Then, the ideal of _Veronese type_\((d,\mathbf{c})\) is defined as \[I_{n,d,\mathbf{c}}\ =\ \bigl{(}x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in S\ :\ \sum_{i=1}^{n}a_{i}=d,\ \ a_{i}\leq c_{i},\text{ for }i\in[n]\bigr{)}.\]
Polymatroidal ideals also satisfies a dual exchange property [28, Lemma 2.1], namely: for all \(u,v\in G(I)\) and all \(i\) such that \(\deg_{x_{i}}(u)<\deg_{x_{i}}(v)\) there exists \(j\) such that \(\deg_{x_{j}}(u)>\deg_{x_{j}}(v)\) and \(x_{i}(u/x_{j})\in G(I)\).
**Theorem 4.8**.: _Let \(I\subset S\) be a polymatroidal ideal. Then_
\[\operatorname{v}(I^{k})=\alpha(I)k-1,\ \text{ for all }\ k\geq 1.\]
The proof is based upon the next lemma.
**Lemma 4.9**.: _Let \(I\subset S\) be a polymatroidal ideal generated in degree \(\alpha(I)\geq 2\). Then \((I:x_{i})\) is again a polymatroidal ideal generated in degree \(\alpha(I)-1\), for all \(i\in[n]\)._
Proof.: We may assume that \(i=1\). We can write \(I=x_{1}I_{1}+I_{2}\), where \(I_{1}\) and \(I_{2}\) are the unique monomial ideals of \(S\) such that
\[G(x_{1}I_{1}) = \{u\in G(I):x_{1}\text{ divides }u\},\] \[G(I_{2}) = \{u\in G(I):x_{1}\text{ does not divide }u\}.\]
We claim that \(I_{2}\subset I_{1}\). It is enough to show that \(G(I_{2})\subseteq I_{1}\). Let \(u\in G(I_{2})\) and let \(v\in G(x_{1}I_{1})\). Then \(\deg_{x_{1}}(u)=0<\deg_{x_{1}}(v)\). Thus, by the dual exchange property, we can find \(j\) such that \(\deg_{x_{j}}(u)>\deg_{x_{j}}(v)\) and \(x_{1}(u/x_{j})\in G(I)\). Hence \(x_{1}(u/x_{j})\in x_{1}I_{1}\), and so \(u/x_{j}\in I_{1}\). Consequently, \(u\in I_{1}\) too, and thus \(I_{2}\subset I_{1}\).
By the previous paragraph, we have \((I:x_{1})=I_{1}+I_{2}=I_{1}\). It is clear that \(I_{1}\) is equigenerated in degree \(\alpha(I)-1\). It remains to prove that \(I_{1}\) is polymatroidal. Let \(u_{1},v_{1}\in G(I_{1})\) and \(i\) such that \(\deg_{x_{i}}(u_{1})>\deg_{x_{i}}(v_{1})\). Our job is to find \(j\) such that \(\deg_{x_{j}}(u_{1})<\deg_{x_{j}}(v_{1})\) and \(x_{j}(u_{1}/x_{i})\in G(I_{1})\). Set \(u=x_{1}u_{1}\) and \(v=x_{1}v_{1}\). Then \(u,v\in G(x_{1}I_{1})\subset G(I)\) and \(\deg_{x_{i}}(u)>\deg_{x_{i}}(v)\). Since \(I\) is polymatroidal, there exists \(j\) such that \(\deg_{x_{j}}(u)<\deg_{x_{j}}(v)\) and \(x_{j}(u/x_{i})\in G(I)\). We claim that \(x_{1}\) divides \(x_{j}(u/x_{i})\). Indeed \(x_{1}\) divides \(u\). If \(i\neq 1\), then \(x_{1}\) divides \(x_{j}(u/x_{i})\) as well. If \(i=1\), since \(x_{1}\) divides \(v\) and \(\deg_{x_{i}}(u)>\deg_{x_{i}}(v)>0\), it follows that \(x_{1}^{2}\) divides \(u\) and so \(x_{1}\) divides \(x_{j}(u/x_{i})=x_{j}(u/x_{1})\). Therefore, in any case \(x_{1}\) divides \(x_{j}(u/x_{i})\). Hence, \((x_{j}(u/x_{i}))/x_{1}=x_{j}(u_{1}/x_{i})\in G(I_{1})\) and the proof is complete.
We are ready for the proof of the theorem.
Proof of Theorem 4.8.: Firstly, we show that \(\operatorname{v}(I)=\alpha(I)-1\). We proceed by strong induction on \(\alpha(I)\geq 1\). If \(\alpha(I)=1\), then \(I=\mathfrak{p}_{A}\) for some \(A\subseteq[n]\), \(\alpha(I)=1\) and \(\operatorname{v}(I)=\operatorname{v}_{\mathfrak{p}_{A}}(I)=0\). Suppose \(\alpha(I)>1\). By the previous proposition, \((I:x_{1})\) is a polymatroidal ideal and \(\alpha(I:x_{1})=\alpha(I)-1\). By induction hypothesis, \(\operatorname{v}(I:x_{1})=\alpha(I:x_{1})-1=\alpha(I)-2\). Hence, by Proposition 4.2,
\[\operatorname{v}(I)\leq\operatorname{v}(I:x_{1})+\deg(x_{1})=\alpha(I)-1.\]
By Proposition 4.3, \(\operatorname{v}(I)\geq\alpha(I)-1\). Equality follows.
Let \(k>1\). It is well-known that the product of polymatroidal ideals is polymatroidal [29, Theorem 12.6.3]. Hence, \(I^{k}\) is a polymatroidal ideal generated in degree \(\alpha(I)k\). By what shown above, \(\operatorname{v}(I^{k})=\alpha(I^{k})-1=\alpha(I)k-1\)
### Hibi ideals
Let \((P,\succeq)\) be a finite partially ordered set (a _poset_, for short) with \(P=\{p_{1},\ldots,p_{n}\}\). A _poset ideal_ of \(P\) is a subset \(\mathcal{I}\) of \(P\) such that if \(p_{i}\in P\), \(p_{j}\in\mathcal{I}\) and \(p_{i}\preceq p_{j}\), then \(p_{i}\in\mathcal{I}\). To any poset ideal \(\mathcal{I}\) of \(P\), we associate the monomial \(u_{\mathcal{I}}=(\prod_{p_{i}\in\mathcal{I}}x_{i})(\prod_{p_{i}\in P\setminus \mathcal{I}}y_{i})\in S=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}]\). The set of all poset ideals of \(P\) is denoted by \(\mathcal{J}(P)\). Then the _Hibi ideal_ of \(P\) is the monomial ideal of \(S\) defined as
\[H_{P}\ =\ (u_{\mathcal{I}}\ :\ \mathcal{I}\in\mathcal{J}(P)).\]
The Hibi ideal \(H_{P}\) is equigenerated in degree \(|P|\). It is well known that Hibi ideals have linear powers. Next, we calculate the v-function of \(H_{P}\).
**Theorem 4.10**.: _Let \(H_{P}\) be a Hibi ideal. Then,_
\[\mathrm{v}(H_{P}^{k})=k|P|-1,\ \ \text{for all}\ \ k\geq 1.\]
Proof.: By [29, Lemma 9.1.9], we have the minimal primary decomposition
\[H_{P}\ =\ \bigcap_{p_{i}\preceq p_{j}}(x_{i},y_{j}).\]
Therefore \(\mathrm{Ass}(H_{P})=\{(x_{i},y_{j}):p_{i}\preceq p_{j}\}\). Let \(p_{i}\in P\) be a minimal element of \(P\) with respect to \(\succeq\). After a relabeling, we may assume that \(i=1\). Then \(\{p_{1}\}\in\mathcal{J}(P)\) and \(x_{1}y_{2}\cdots y_{n}\in H_{P}\). Note that \(y_{1}y_{2}\cdots y_{n}\in H_{P}\) because \(\emptyset\in\mathcal{J}(P)\). Thus,
\[(H_{P}:y_{2}\cdots y_{n})\supseteq(x_{1},y_{1}).\]
Note that any generator of \(H_{P}\) is divided by either \(x_{1}\) or \(y_{1}\). Thus, any monomial \(u\in(H_{P}:y_{2}\cdots y_{n})\) must be divided by either \(x_{1}\) or \(y_{1}\). Hence, we see that \((H_{P}:y_{2}\cdots y_{n})=(x_{1},y_{1})\). Therefore, \(\mathrm{v}_{(x_{1},y_{1})}(H_{P})\leq\deg(y_{2}\cdots y_{n})=|P|-1\). By Proposition 4.3, it follows that \(\mathrm{v}(H_{P})=\mathrm{v}_{(x_{1},y_{1})}(H_{P})=\alpha(H_{P})-1=|P|-1\).
By [27, Corollary 1.2] the Rees algebra \(\mathcal{R}(H_{P})\) is a normal domain. Hence, \(H_{P}\) is a normal ideal (see, also, [12, Corollary 3.5]). Thus, by [37, Proposition (4.7)], \((H_{P}^{k+1}:H_{P})=H_{P}^{k}\), for all \(k\geq 1\). By [29, Theorem 9.1.13]\(H_{P}\) is the cover ideal \(J(G)\) of a Cohen-Macaulay bipartite graph \(G\). Next, by [39, Theorem 6.10] we have that \(J(G)^{k}=J(G)^{(k)}\) for all \(k\geq 1\), that is, ordinary and symbolic powers of \(J(G)=H_{P}\) coincide. Since \(H_{P}\) is a squarefree monomial ideal, by [29, Proposition 1.4.4 and Corollary 1.3.6] we have
\[H_{P}^{k}\ =\ H_{P}^{(k)}\ =\ \bigcap_{p_{i}\preceq p_{j}}(x_{i},y_{j})^{k}.\]
Hence, \(\mathrm{Ass}(H_{P}^{k})=\mathrm{Ass}(H_{P})\) for all \(k\geq 1\). Thus, by Remark 4.7, for all \(k\geq 1\),
\[\mathrm{v}_{(x_{1},y_{1})}(H_{P}^{k+1})\leq\mathrm{v}_{(x_{1},y_{1})}(H_{P}^{k })+|P|. \tag{7}\]
Now, we prove that \(\mathrm{v}_{(x_{1},y_{1})}(H_{P}^{k})=k|P|-1\) for all \(k\geq 1\). This is true for \(k=1\) as shown above. Assume \(k>1\) and that \(\mathrm{v}_{(x_{1},y_{1})}(H_{P}^{k})=k|P|-1\). Then, Proposition 4.3, equation (7) and the inductive hypothesis give
\[(k+1)|P|-1=\alpha(H_{P}^{k+1})-1\leq\mathrm{v}_{(x_{1},y_{1})}(H_{P}^{k+1}) \leq\mathrm{v}_{(x_{1},y_{1})}(H_{P}^{k})+|P|=k|P|-1+|P|.\]
Hence, \(\mathrm{v}_{(x_{1},y_{1})}(H_{P}^{k+1})=(k+1)|P|-1\), as wanted. Finally \(\mathrm{v}(H_{P}^{k})=k|P|-1\), for all \(k\geq 1\), as well. |
2307.03416 | Learning Adversarial Semantic Embeddings for Zero-Shot Recognition in
Open Worlds | Zero-Shot Learning (ZSL) focuses on classifying samples of unseen classes
with only their side semantic information presented during training. It cannot
handle real-life, open-world scenarios where there are test samples of unknown
classes for which neither samples (e.g., images) nor their side semantic
information is known during training. Open-Set Recognition (OSR) is dedicated
to addressing the unknown class issue, but existing OSR methods are not
designed to model the semantic information of the unseen classes. To tackle
this combined ZSL and OSR problem, we consider the case of "Zero-Shot Open-Set
Recognition" (ZS-OSR), where a model is trained under the ZSL setting but it is
required to accurately classify samples from the unseen classes while being
able to reject samples from the unknown classes during inference. We perform
large experiments on combining existing state-of-the-art ZSL and OSR models for
the ZS-OSR task on four widely used datasets adapted from the ZSL task, and
reveal that ZS-OSR is a non-trivial task as the simply combined solutions
perform badly in distinguishing the unseen-class and unknown-class samples. We
further introduce a novel approach specifically designed for ZS-OSR, in which
our model learns to generate adversarial semantic embeddings of the unknown
classes to train an unknowns-informed ZS-OSR classifier. Extensive empirical
results show that our method 1) substantially outperforms the combined
solutions in detecting the unknown classes while retaining the classification
accuracy on the unseen classes and 2) achieves similar superiority under
generalized ZS-OSR settings. | Tianqi Li, Guansong Pang, Xiao Bai, Jin Zheng, Lei Zhou, Xin Ning | 2023-07-07T06:54:21Z | http://arxiv.org/abs/2307.03416v1 | # Learning Adversarial Semantic Embeddings for Zero-Shot Recognition in Open Worlds
###### Abstract
Zero-Shot Learning (ZSL) focuses on classifying samples of **unseen classes** with only their side semantic information presented during training. It cannot handle real-life, open-world scenarios where there are test samples of **unknown classes** for which neither samples (_e.g._, images) nor their side semantic information is known during training. Open-Set Recognition (OSR) is dedicated to addressing the unknown class issue, but existing OSR methods are not designed to model the semantic information of the unseen classes. To tackle this combined ZSL and OSR problem, we consider the case of "Zero-Shot Open-Set Recognition" (ZS-OSR), where a model is trained under the ZSL setting but it is required to accurately classify samples from the unseen classes while being able to reject samples from the unknown classes during inference. We perform large experiments on combining existing state-of-the-art ZSL and OSR models for the ZS-OSR task on four widely used datasets adapted from the ZSL task, and reveal that ZS-OSR is a non-trivial task as the simply combined solutions perform badly in distinguishing the unseen-class and unknown-class samples. We further introduce a novel approach specifically designed for ZS-OSR, in which our model learns to generate adversarial semantic embeddings of the unknown classes to train an unknowns-informed ZS-OSR classifier. Extensive empirical results show that our method 1) substantially outperforms the combined solutions in detecting the unknown classes while retaining the classification accuracy on the unseen classes and 2) achieves similar superiority under generalized ZS-OSR settings. Our code is available at [https://github.com/lhrst/ASE](https://github.com/lhrst/ASE).
## 1 Introduction
_'A zebra is a horse with black and white striped coats.'_ With this description, a child who has never seen a zebra can recognize it at first sight. Humans can recognize images of such unseen classes using the shared semantic knowledge learned from images of the classes previously seen. Inspired by this phenomenon, Zero-Shot Learning (ZSL) was proposed to learn a multi-modal cognition
ability in image classification [1]. Given only some side semantic information like attribute vectors or description text of a set of targeted classes (**unseen classes**), together with samples (_e.g._, images) of another set of classes (**seen classes**) and their semantic information, ZSL aims to learn a model to recognize images of the unseen classes. Many ZSL methods have been introduced over the years, including embedding methods [2; 3] that learn mappings between the given semantic knowledge and images in a new feature space, and generative methods [4; 5; 6] that train a generator to synthesize training samples for the unseen classes, transforming the ZSL to a standard image recognition task.
However, current ZSL approaches cannot handle real-life, open-world scenarios, where there are test samples of **unknown classes** for which neither samples (_e.g._, images) nor their side semantic information is known during training, such as unknown objects encountered in self-driving contexts and chest X-ray images of unknown viral pneumonia. This is because they assume a closed-set learning setting where all classes in the test data are presented in the training data. As a result, ZSL methods would misclassify the unknown-class samples into one of the unseen classes, as demonstrated in Figure 1 (top). Open-Set Recognition (OSR) [7; 8] is dedicated to addressing the unknown class issue, but existing OSR methods are not designed to model the semantic information of the unseen classes. To tackle this problem, we consider the case of "_Zero-Shot Open-Set Recognition_" (ZS-OSR), where a model is trained under the ZSL setting but it is required to accurately classify samples from the unseen classes while being able to reject samples from the unknown classes during inference, as illustrated in Figure 1 (bottom). As shown in Table 1, ZS-OSR is a joint ZSL and OSR problem. A straightforward solution is thus to simply combine existing state-of-the-art (SOTA) ZSL and OSR models to build ZS-OSR models. As a contribution to Pattern Recognition research, we establish a set of such baselines and construct four ZS-OSR benchmark datasets adapted from widely-used ZSL datasets, _i.e._, CUB [9], AWA2 [10], FLO [11] and SUN [12], to evaluate their performance. Our empirical results reveal that such combined solutions perform badly in differentiating the unseen-class and the unknown-class samples. These findings highlight the need for more effective solutions for ZS-OSR, which can contribute to the development of more robust and effective pattern recognition systems.
We further introduce a novel approach specifically designed for ZS-OSR, namely ASE, which learns to generate Adversarial Semantic Embeddings of the unknown classes to train an unknowns-informed open-set classifier. The key insight is to generate meaningful samples of both unseen and unknown classes to train the unknowns-informed open-set classifier. Existing generative ZSL models have demonstrated superior performance in generating the samples/features of the unseen classes based on their learned relations between the samples and the semantic embeddings of the seen classes. The key challenge lies in the generation of unknown-class samples. To address this challenge, we introduce the adversarial semantic embedding learning module that learns a set of
Figure 1: The ZS-OSR task vs. the ZSL task. ZS-OSR considers open-set scenarios where test data can also contain samples of unknown classes, unlike the closed-set setting in ZSL that presumes the presence of unseen-class samples only. As a result, ZSL methods would misclassify the unknown-class samples into one unseen class, whereas ZS-OSR methods can reject these unknown samples while correctly classifying the unseen-class samples.
adversarial semantic embeddings for the unknown classes so that they are tightly distributed around but separable from the unseen-class embeddings (see Figure 5 for a visualization of these generated samples). This is achieved by jointly minimizing a distance loss in the _semantic embedding space_ that brings the unknown-class embeddings closer to the given unseen-class embeddings, and an adversarial loss in the _feature space_ that pulls the corresponding unknown-class feature prototypes away from the unseen-class features generated by an off-the-shelf trained generative ZSL model. Given the learned semantic embeddings, samples of the unknown classes are generated using the trained generative ZSL model.
There have been OSR methods that generate adversarial samples to represent the unknown-class samples, _e.g._, [13; 14]. However, unlike ASE that can work on both the semantic and feature spaces, they cannot model the rich semantics in the semantic embedding space as they were primarily designed to work in the single feature space, largely limiting their performance in the ZS-OSR setting.
In summary, this work makes the following contributions: 1) we explore the ZS-OSR problem and establish extensive performance benchmarks by building a set of baselines based on the combination of existing SOTA ZSL and OSR models on four adapted ZS-OSR widely-used datasets, 2) we propose a novel generative approach for ZS-OSR, which learns a set of adversarial semantic embeddings to represent the unknown classes to train an unknowns-informed open-set classifier, and 3) large-scale empirical results show that our method ASE substantially outperforms the baselines in detecting the unknown classes without degrading the classification accuracy on the unseen classes, and it also achieves similar superiority on generalized ZS-OSR and ZS-OOD (out-of-distribution) settings.
## 2 Related Work
**Zero-shot learning**[1] utilizes an additional class semantic embedding set to connect the seen and unseen classes. Current ZSL approaches are either to align the images and semantic embeddings [2; 3; 15; 16; 17; 18; 19; 20; 21; 22; 23], or to generate image features of unseen classes and train a closed-set classifier [24; 25; 4; 5; 6; 26]. Recently, big vision-language models like CLIP [27] have demonstrated significant potential in the realm of ZSL. Nevertheless, they are still focused on the standard ZSL setting. New designs are required to utilize such big models to handle unknown samples as neither samples nor side semantic information of the unknown classes is available during ZS-OSR training.
**Open-set recognition**[7; 8] targets the problem of learning a classifier to reject samples of classes that are unseen during training. A large number of OSR methods have been introduced. Some early studies focus on designing new network layers, such as the OpenMax layer [28], while most studies are dedicated to generating pseudo unknown-class samples to train open-set classifiers [14; 29; 30; 31]. The other studies explore new ways of representing the unknown classes, _e.g._, through prototype mining [32].
**Out-of-distribution (OOD) detection** addresses a similar problem as OSR, but focuses on detecting data from a different distribution. For example, [33; 34; 35] tackle the problem by exploiting the prediction logits to define OOD scores and reject samples from different datasets, while [36; 37] focused on the class-agnostic information in feature space that is not recoverable from logits.
**Zero-shot open-set recognition (ZS-OSR)** has not been explored in previous studies, as far as we know. Some related but different explorations are done in [38; 39; 40; 41; 42; 43; 44; 45]. Particularly, [38; 39; 40] treat
\begin{table}
\begin{tabular}{c|c|c} \hline \hline TaskSet & Training & Testing \\ \hline _ZSL_ & \begin{tabular}{c} seen classes \\ \& semantic information1 \\ \end{tabular} & Unseen classes \\ \hline _OSR_ & \begin{tabular}{c} seen classes \\ \& semantic information1 \\ \end{tabular} & \begin{tabular}{c} seen classes \\ \& unknown classes \\ \end{tabular} \\ \hline _ZS-OSR_ & \begin{tabular}{c} seen classes \\ \& semantic information1 \\ \end{tabular} &
\begin{tabular}{c} unseen classes \\ \& unknown classes \\ \end{tabular} \\ \hline \hline \end{tabular}
\end{table}
Table 1: ZS-OSR vs. ZSL and OSR. Here _semantic information_1 is the semantic information of both seen and unseen classes.
ZSL and OSR as two independent problems, meaning that ZSL-oriented methods handle seen and unseen classes only, while OSR-oriented methods handle seen and unknown classes only. Typically, these methods handle both unknown and unseen classes with the same branch in the network, so they cannot be used to solve ZS-OSR tasks. Consequently they do not have the ability to distinguish between unseen and unknown classes. [41; 42] attempt to utilize OOD detection models to address the problem of generalized ZSL, solving the GZSL problem while neglecting the potential presence of OOD scenarios. [43] explores a new task, compositional ZSL, different from the conventional ZSL. While conventional ZSL aims to recognize unseen classes, compositional ZSL is designed to explore and recognize unknown combinations of known patterns. Due to the absence of unknown classes in its assumption, it cannot handle ZS-OSR tasks. [44; 45] explore the use of a large CLIP model [27] pretrained with extensive auxiliary data for OOD detection without using any training samples, which differs from zero-shot learning as there are no seen classes involved. Additionally, the task of open-set recognition under a few-shot setting is explored in some recent studies [46; 47], which addresses a different task from ours as we focus on zero-shot settings.
## 3 Zero-Shot OSR and Its Challenges
### Problem Statement
In ZS-OSR, there are three different types of classes, including seen classes \(\mathcal{Y}^{seen}\), unseen classes \(\mathcal{Y}^{unseen}\), and unknown classes \(\mathcal{Y}^{unknown}\), where \(\mathcal{Y}^{seen}\cap\mathcal{Y}^{unseen}\cap\mathcal{Y}^{unknown}=\emptyset\). Compared to the \(\mathcal{Y}^{unknown}\) classes that the training data does not provide any prior information, the classes in \(\mathcal{Y}^{seen}\) and \(\mathcal{Y}^{unseen}\) are considered as known classes since the training data contains the image samples of the \(\mathcal{Y}^{seen}\) classes and some side semantic information of both \(\mathcal{Y}^{seen}\) and \(\mathcal{Y}^{unseen}\) classes. ZS-OSR aims to accurately recognize the images of unseen classes while rejecting the images of unknown classes based on these training image samples and semantic information. Formally, given a training set consisting of \(\mathcal{D}^{seen}=\{(\mathbf{x},y_{\mathbf{x}},\mathbf{a}_{y})\mid\mathbf{x }\in\mathcal{X}^{seen},y_{\mathbf{x}}\in\mathcal{Y}^{seen},\mathbf{a}_{y}\in \mathcal{A}^{seen}\}\) and \(\mathcal{D}^{unseen}=\{(\tilde{y}_{i},\mathbf{a}_{\tilde{y}_{i}})\mid\tilde{y }_{i}\in\mathcal{Y}^{unseen},\mathbf{a}_{\tilde{y}_{i}}\in\mathcal{A}^{unseen}\}\), where \(\mathbf{x}\) is an image from the seen class sample set \(\mathcal{X}^{seen}\), \(y_{\mathbf{x}}\) or \(\tilde{y}_{i}\) is a class label, \(\mathbf{a}_{y},\mathbf{a}_{\tilde{y}_{i}}\in\mathbb{R}^{M}\) are \(M\)-dimensional class semantic embeddings (_i.e._, class-level side information), \(\mathcal{A}^{seen}\) and \(\mathcal{A}^{unseen}\) contain the semantic embeddings of seen and unseen classes respectively. For test images consisting of images of both unseen and unknown classes, _i.e._, \(\mathcal{X}^{unseen}\cup\mathcal{X}^{unknown}\), ZS-OSR aims to learn a model \(\phi^{open}\) that maps the images to the class label space \(\mathcal{O}=\mathcal{Y}^{unseen}\cup\{{}^{\prime}unknown^{\prime}\}\), where \(\mathcal{O}\) includes an _'unknown'_ class label in addition to the labels of the unseen classes.
### The Challenges
One of the key challenges in ZS-OSR is the difficulty in distinguishing between the unseen and unknown classes due to the lack of training data for both of them. In typical OSR problems, the most commonly used approach involves conducting OSR first, followed by classification. Nevertheless, in ZS-OSR, none of the currently existing OSR methods have been found to be effective due to the lack of images from the unseen class for training purposes. To address this challenge, a straightforward solution would be to use simple combinations of existing generative ZSL and OSR methods. Specifically, generative ZSL methods can be first applied to generate latent visual features of unseen classes. The ZS-OSR task is then converted to a general OSR task in the visual feature space that contains the features of both seen and unseen classes. OSR methods can then be directly used to learn an open-set classifier using the training set composed of these features to recognize known (unseen) classes while being capable of rejecting unknown classes. Figure 2 shows the results of such solutions that uses the widely-used TF-VAEGAN [6] and MSP [33], OpenMax [28], Placeholder [29], Energy [35], ODIN [34], LogitNorm [48], and MaxLogit [49], as the generative ZSL method and the OSR methods, respectively, where the open scores are the likelihood scores of being unknown class samples yielded by the open-set classifier, with larger open scores indicating higher likelihood (see Table 4 for detailed quantitative results of these methods).
The results show that the method performs poorly in distinguishing between unseen and unknown classes since the open scores of many unseen class samples are large, highly overlapping with that of the unknown class samples. The ineffective performance may be impacted by the fact that the generative ZSL model generates features for the known unseen classes based on the closed-set assumption, without being informed of the possible presence of unknown classes. As a result, these
## 4 Conclusion
In this paper, we have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of the proposed method on the performance of the proposed method on the performance of the proposed method. We have proposed a novel approach to detect the effects of
generated unseen features can heavily overlap with that of the unknown classes in the feature space, rendering the subsequent OSR models ineffective. In the next section, we introduce the ASE approach that learns an open-set classifier for the zero-shot setting to address this issue.
## 4 The Proposed Approach
### Overview of Our Approach
Our proposed approach, namely Adversarial Semantic Embeddings (ASE), is a generative framework specifically designed for the ZS-OSR problem. Since ZS-OSR does not provide training samples of unseen and unknown classes, the framework aims to directly generate these samples to train a classifier for this task. Unlike the simple solution in Sec. 3.2 that directly generates the features of unseen classes for the subsequent OSR task, ASE takes a step back and focuses on learning faithful semantic embeddings of unknown classes via an adversarial learning approach before training an unknowns-informed OSR model.
An overview of ASE is provided in Figure 3, which consists of three successive components, including _Using off-the-shelf generative ZSL models to generate unseen classes_, _learning adversarial semantic embeddings of unknown classes_, and _unifying these learned features and semantic embeddings to train a \(K+1\) open-set classifier_ where \(K=|\mathcal{Y}^{unseen}|\). The first component is to directly take an existing generative ZSL model to generate the visual features of the unseen classes based on
Figure 2: Distribution of the open scores yielded by combinations of TF-VAEGAN [6] and MSP [33], OpenMax [28], Placeholder [29], Energy [35], ODIN [34], LogitNorm [48], and MaxLogit [49], combination of APN [15] and MSP for the test images of unseen and unknown classes on CUB, AWA2, FLO and SUN datasets.
their semantic embeddings. The second component is the key novelty of ASE, which is designed to take the trained generator and the closed-set classifier in the ZSL model as input to learn a set of adversarial semantic embeddings of unknown classes, such that the learned semantic embeddings are distributed around the boundary between the known and unknown classes in the semantic space. In the third component, these semantic embeddings are subsequently used to generate a set of adversarial feature vectors to represent the samples of the unknown classes, along with the previously generated unseen class samples, to train the \(K+1\) classifier for OSR. Below we introduce each component in detail.
### Using Off-the-Shelf Generative ZSL Models to Generate Unseen-Class Features
Training a zero-shot open-set classifier in the visual feature space requires the features of unseen classes. Generative ZSL models have demonstrated superior performance in utilizing the relationship between semantic embeddings and image features of the seen classes to generate the unseen-class features. Therefore, existing off-the-shelf generative ZSL models are directly taken to generate these features. Briefly, generative ZSL methods learn a generator network \(G\) to generate sample \(\tilde{\mathbf{x}}=G(\mathbf{a},\epsilon)\) conditioned on a Gaussian noise \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and a class semantic embedding \(\mathbf{a}\). Meanwhile, a discriminator network \(D(\mathbf{x},\mathbf{a})\) is learned by taking an input feature \(\mathbf{x}\) and outputs a real value representing the probability that \(\mathbf{x}\) is from the real data rather than from the generator network. \(G\) and \(D\) can be learned by optimizing the following adversarial objective:
\[\mathcal{L}=\mathbb{E}[D(\mathbf{x},\mathbf{a})]-\mathbb{E}[D(\tilde{\mathbf{ x}},\mathbf{a})], \tag{1}\]
where \(\mathbf{x}\in\mathcal{X}^{unseen}\) is an image from the seen classes and \(\mathbf{a}\in\mathcal{A}^{\text{seen}}\) is its class semantic embedding. The trained generator generates synthetic features \(\tilde{\mathcal{X}}^{unseen}=\{\tilde{\mathbf{x}}_{l}^{\tilde{y}_{i}}|\tilde{ \mathbf{x}}_{l}^{\tilde{y}_{i}}=G(\mathbf{a}_{\tilde{y}_{i}},\epsilon_{l})\}\) for semantic embedding \(\mathbf{a}_{\tilde{y}_{i}}\) of each class, where \(\mathbf{a}_{\tilde{y}_{i}}\in\mathcal{A}^{unseen}\). Then we obtain \(\tilde{\mathcal{D}}^{unseen}=\{(\tilde{\mathbf{x}},\tilde{y}_{\tilde{\mathbf{ x}}})\mid\tilde{\mathbf{x}}\in\tilde{\mathcal{X}}^{unseen},\tilde{y}_{\tilde{ \mathbf{x}}}\in\mathcal{Y}^{unseen}\}\), and train a general ZSL (closed-set) classifier \(\phi^{closed}\) to classify the test image samples from unseen classes.
Existing state-of-the-art (SOTA) generative ZSL models [24; 5; 6] can be directly used to implement this component. The generator \(G\) and the \(\phi^{closed}\) classifier (or the generated unseen-class features) are taken as input to adversarially learn the semantic embeddings of unknown classes.
Figure 3: The proposed approach ASE. Given a pre-trained generative ZSL model, ASE first uses its generator \(G\) to generate the features of unseen classes. It then learns a set of adversarial semantic embeddings of unknown classes so that they are tightly distributed around but separable from the unseen-class embeddings. Lastly ASE uses the unknown-class embeddings to train an unknowns-informed open-set classifier \(\phi^{open}\).
### Learning Adversarial Semantic Embeddings of Unknown Classes
ASE is dedicated to learning an unknowns-informed open-set classifier. However, we are given neither semantic embeddings nor image samples of the unknown classes. ASE aims to learn adversarial representations of the unknown classes to train such a classifier. Given the generated features and the training data, the representations of the unknown classes can be adversarially learned in either the semantic embedding space or the visual feature space. However, as discussed in Sec. 3.2, the generated unseen-class features are not faithful enough as they are generated under the closed-set setting. Consequently, directly using these synthetic unseen-class features to generate the unknown-class features may accumulate and/or amplify the closed-set biases in the generator, leading to non-discriminative features for the unknown classes. Such unseen-class and unknown-class features are ineffective in training the open-set classifier (see Table 5).
Thus, ASE instead learns the unknown-class representations in the semantic space, while enforcing separable unseen-and-unknown representations in the visual feature space. This approach is more plausible since 1) it directly learns the unknown-class semantic embeddings based on the pre-defined embeddings of both seen and unseen classes, while the other approach above is an indirect way that heavily relies on the unstable quality of the generated unseen-class visual features; and 2) it seamlessly leverages the learned relation between the semantic space and the visual feature space in the ZSL models to learn the unknown-class semantic embeddings.
To this end, ASE introduces a class-wise adversarial semantic embedding learning approach to generate a set of semantic embeddings of the unknowns, \(\mathcal{A}^{unknown}\). As highlighted in Figure 3, for each unseen class, ASE generates multiple adversarial semantic embeddings for the unknown classes that are tightly distributed around but separable from the unseen-class embeddings. To achieve this goal, it jointly minimizes a distance loss \(\mathcal{L}_{\text{dis}}\) in the embedding space that brings the unknown-class embeddings closer to the unseen-class embeddings, and an adversarial loss \(\mathcal{L}_{\text{adv}}\) in the feature space that pulls the corresponding prototypical unknown-class features away from the generated unseen-class features:
\[\mathcal{L}_{\text{ase}}=\mathcal{L}_{\text{adv}}+\beta\mathcal{L}_{\text{ dis}}, \tag{2}\]
where \(\beta\) is a hyper-parameter, \(\mathcal{L}_{\text{dis}}\) is defined as the Euclidean distance between the generated unknown-class embedding \(\hat{\mathbf{a}}\in\mathcal{A}^{unknown}\) and the given unseen-class embedding \(\hat{\mathbf{a}}\in\mathcal{A}^{unknown}\):
\[\mathcal{L}_{\text{dis}}=\left\|\hat{\mathbf{a}}-\tilde{\mathbf{a}}\right\|_ {2}, \tag{3}\]
and \(\mathcal{L}_{\text{adv}}\) is defined as a _Helmholtz free energy_-based loss:
\[\mathcal{L}_{\text{adv}}=T\cdot\log\sum_{i}^{K}e^{\phi^{closed}_{i}(\hat{ \mathbf{p}})/T}, \tag{4}\]
where \(\phi^{closed}_{i}\) is the ZSL (closed-set) classifier obtained from the off-the-shelf ZSL model, and \(\hat{\mathbf{p}}=G(\hat{\mathbf{a}},\epsilon)\) is a generated prototypical unknown-class feature vector corresponding to the adversarial semantic embeddings \(\hat{\mathbf{a}}\). Note that since \(\phi^{closed}\) was trained using the unseen-class features and its weight parameters are fixed, the energy score that the unseen-class features receive are consistently low. Thus, Eq. (4) is designed to encourage high energy scores for unknown-class feature prototypes only. By minimizing \(\mathcal{L}_{\text{ase}}\), the unknown-class and unseen-class class embeddings are close to each other, but they are discriminative from each other; this adversarial relation also applies to the corresponding unknown-class feature prototypes w.r.t. the unseen-class features.
### Unknowns-Informed ZS-OSR
We then train an unknowns-informed ZS-OSR classifier with \(K+1\) classes, in which the extra (+1) class is for the '_unknown_' class and it is trained based on the learned unknown-class semantic embeddings.
Specifically, we first utilize the adversarial semantic embeddings \(\hat{\mathcal{A}}^{unknown}\) and the trained generator \(G\) to obtain a set of unknown-class features \(\hat{\mathcal{X}}^{unknown}=\{\hat{\mathbf{x}}^{\hat{y}_{i}}_{l}|\hat{ \mathbf{x}}^{\hat{y}_{i}}_{l}=G(\mathbf{a}_{\hat{y}_{i}},\epsilon_{l}),\mathbf{ a}_{\hat{y}_{i}}\in\hat{\mathcal{A}}^{unknown},\hat{y}_{i}\in\mathcal{Y}^{unknown}\}\), where \(\mathcal{Y}^{unknown}\) is a set of unknown classes collectively labeled as '_unknown_', resulting in \(\hat{\mathcal{D}}^{unknown}=\{(\hat{\mathbf{x}},^{\prime}unknown^{\prime}) \mid\tilde{\mathbf{x}}\in\hat{\mathcal{X}}^{unknown}\}\). The unknown-class features \(\hat{\mathbf{x}}\) are expected to be centered around the unknown-class feature prototypes \(\hat{\mathbf{p}}\). The unseen-class and unknown-class features are then combined to form the open-set training data, _i.e._,
\(\mathcal{D}^{open}=[\tilde{\mathcal{D}}^{unseen},\tilde{\mathcal{D}}^{unknown}]\), which is used to train the open-set classifier \(\phi^{open}\) by minimizing a standard cross-entropy loss:
\[\min_{\theta}\mathbb{E}_{(\mathbf{x},y_{\mathbf{x}})\sim\mathcal{D}^{open}} \left[-\log\phi^{open}_{y_{\mathbf{x}}}(\mathbf{x})\right]. \tag{5}\]
During inference, given a test image \(\mathbf{x}\), \(\phi^{open}\) yields a softmax score in its \(K+1\)-th class prediction, which can be directly used as an open score. If the score exceeds a pre-defined threshold, \(\mathbf{x}\) is predicted as '_unknown_', and otherwise \(\mathbf{x}\) is predicted as the class with the highest logit among the \(K\) unseen classes. Alternatively, post-hoc OSR methods like MSP [33] and ODIN [34] can also be applied for obtaining the open score, but the \(\phi^{open}\)-based open score is generally more effective (shown in Table 5), and is used by default.
## 5 Experiments
### Experimental Setup
**Datasets**. To our knowledge, there are no publicly-available datasets designed for evaluating the performance of ZS-OSR, so we introduce four ZS-OSR datasets adapted from four existing widely-used ZSL datasets: Caltech-UCSD-Birds 200-2011 (CUB) [9], Animals with Attributes 2 (AWA2) [10], FLO [11] and SUN [12]. In particular, we first used the commonly-used seen/unseen class split as in [10] and [50]. However, this data split does not provide the unknown classes. In order to facilitate the ZS-OSR setup on different datasets without loss of generality, for the test data of each dataset, we further randomly take half of the unseen classes as the unknown classes. In other words, the test data is the same as ZSL datasets but it now contains both unseen and unknown classes; while part of semantic information in the original ZSL training data is removed to create the unknown classes in the test data. Detailed information of the datasets is presented in Tables 2 and 3.
**Implementation Details of ASE**. Our ASE approach, outlined in Sec. 4, relies on an off-the-shelf generative ZSL method, TF-VAEGAN [6], to produce feature vectors for unseen classes based on their semantic embeddings. We use ResNet101 [51] to extract features for \(\mathcal{X}\) and conduct a grid search on the validation set [10] to determine the optimal hyperparameter \(\beta\) for our model. We generate 50 unknown-class semantic embeddings around each unseen class and produce 1,000 adversarial samples for each unknown-class semantic embedding in the feature space, resulting in \(|\mathcal{D}^{unknown}|=1,000\times|\mathcal{Y}^{unseen}|\) unknown-class samples per dataset. We train the unknowns-informed open-set classifier with a linear classifier featuring one fully connected layer and optimize it with the Adam optimizer using Eq. (5). We maintain the same hyperparameters as in TF-VAEGAN, which we hold fixed throughout the training process of the open-set classifier.
**Comparison Baselines**. Although there are no methods reported to deal with the ZS-OSR problem, SOTA ZSL and OSR methods can be combined to establish some good solutions. Similar to ASE, TF-VAEGAN is used as a SOTA ZSL method here and combined with eight diverse SOTA OSR methods, including **MSP**[33], **OpenMax**[28], **ODIN**[34], **Placeholder**[29], **Energy**[35], **LogitNorm**[48], and **MaxLogit**[49], Since all of these methods are for OSR or OOD detection and do not support ZS-OSR tasks, we adapt them to ZS-OSR using the following method: 1) We first generate the features of the unseen classes using the ZSL generative method TF-VAEGAN, and then 2) we treat the unseen classes as closed-set classes and apply one of these eight OSR methods to recognize the unseen classes while rejecting unknown-class samples. Additionally, we evaluate the performance of a popular non-generative ZSL method **APN**[15] in conjunction with **MSP**. All the hyperparameters of the baselines are tuned in the same way as ASE on each dataset to have a fair empirical comparison.
### ZS-OSR Performance
The ZS-OSR results of ASE and the combined baselines on the four proposed datasets are shown in Table 4. ASE outperforms all eight baseline methods by a significant margin in detecting unknown-class samples and performs comparably well in terms of classification on the unseen-class samples on all four datasets. Details are discussed below.
**Superior unknown-class detection**. The baseline methods are inconsistent in detecting unknown-class samples on the four datasets, whereas ASE performs consistently well on all of them. ASE significantly improves the AUROC score compared to the best competing baseline on CUB, AWA2, and FLO by margins of 5.22%, 14.12%, and 4.64%, respectively. ASE is also the best performer on SUN, with a relatively marginal improvement. In terms of the FPR95 metric, ASE also demonstrates the best performance across all datasets. Notably, it reduces the FPR95 by 12.42% compared to the best baseline on AWA2.
**Maintaining classification accuracy on unseen classes**. ASE's superior ability to detect unknown-class samples does not affect its unseen-class classification ability. As seen in Table 4, ASE achieves the best overall accuracy performance across the four datasets, as competing methods such as OpenMax and Placeholder have large accuracy drops on certain datasets.
**The reasons behind**. As discussed in Sec. 3.2, the simply combined ZSL-and-OSR solutions fail to produce discriminative open scores for the unseen and unknown class samples, as illustrated in Figure 2. In contrast, ASE yields significantly more discriminative open scores for the samples of
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**CUB**} & \multicolumn{3}{c|}{**AWA2**} & \multicolumn{3}{c|}{**FLO**} & \multicolumn{3}{c}{**SUN**} \\ & Acc \(\uparrow\) & FPR95\(\downarrow\) & AUC \(\uparrow\) & Acc \(\uparrow\) & FPR95\(\downarrow\) & AUC \(\uparrow\) & Acc \(\uparrow\) & FPR95\(\downarrow\) & AUC \(\uparrow\) & FPR95\(\downarrow\) & AUC \(\uparrow\) \\ \hline
**MSP**[33] & 76.33 & 80.96 & 70.08 & 71.56 & 99.61 & 47.51 & 81.41 & 91.33 & 63.74 & 23.33 & 75.00 & 71.63 \\
**OpenMax**[28] & 76.41 & 79.54 & 74.98 & 70.33 & 99.43 & 49.86 & 81.34 & 97.16 & 52.80 & 72.94 & 74.03 & 69.75 \\
**Placeholder**[29] & 76.11 & 83.25 & 72.43 & 70.63 & 91.43 & 47.25 & **82.92** & 95.17 & 47.90 & 72.78 & 94.03 & 53.55 \\
**Energy**[35] & 76.19 & 82.04 & 71.52 & 71.01 & 99.96 & 62.82 & 81.37 & 92.93 & 68.14 & 72.33 & 70.45 & 71.06 \\
**ODIN**[34] & 76.25 & 84.94 & 69.39 & 70.94 & 98.51 & 62.86 & 81.15 & 92.00 & 63.53 & 72.97 & 73.75 & 71.15 \\
**LogitNorm**[48] & 75.00 & 72.92 & 73.41 & 69.04 & 99.29 & 45.79 & 78.76 & 92.50 & 67.65 & 66.11 & 76.25 & 65.21 \\
**MaxLogit**[49] & 76.24 & 89.22 & 71.87 & 71.64 & 99.58 & 46.70 & 80.16 & 91.00 & 62.11 & 72.92 & 74.86 & 69.70 \\
**APN**[15] & **77.41** & 69.82 & 71.82 & 71.41 & 97.66 & 42.71 & 76.53 & 87.62 & 63.31 & 67.50 & 85.00 & 62.58 \\ \hline
**ASE (Ours)** & 76.26 & **68.67** & **80.20** & **72.30** & **77.09** & **81.99** & 82.44 & **87.50** & **72.78** & **73.61** & **70.41** & **72.69** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Main results. Accuracy (%), FPR95 and AUROC of ASE and baselines on four ZS-OSR datasets. The best (second-best) results are boldfaced (underlined).
the unseen and unknown classes, as demonstrated in Figure 4. Furthermore, as shown in Figure 5, the unknown samples generated by ASE either lie between unseen and true unknowns or overlap with the real unknown samples, suggesting that ASE can effectively leverage the ZSL training data to generate unknown-class representations and distinguish them from the unseen-class samples through the adversarial unknown-class embedding learning in both the semantic and feature spaces.
### Effectiveness on Data with Varying Openness
Following the OSR literature [52; 29; 53; 54], we conduct experiments on the CUB dataset to examine ZS-OSR performance under varying degrees of openness, defined as \(1-\sqrt{\frac{|\mathcal{Y}^{\text{mean}}|}{|\mathcal{Y}^{\text{mean}}|+| \mathcal{Y}^{\text{mean}}|}}\). A larger openness indicates the presence of relatively more unknown classes in the test data. The experiment is focused on the CUB dataset. In particular, there are 50 unseen classes in CUB under the ZSL setting [10]. We create four ZS-OSR datasets based on CUB by retaining 10 classes as the unseen classes and the respectively remaining 10, 20, 30, and 40 classes as the unknown classes. This results in four datasets with respective openness of 29.3%, 42.3%, 50%, and 55.3%. The results of ASE and baselines on these four datasets are shown in Figure 6. It is clear that ASE maintains consistently superiority in unknown-class detection and comparably good unseen-class classification across different openness rates, demonstrating strong robustness and stability to the data openness.
### Ablation Study
Table 5 shows the ablation study results of two main modules in ASE, including \(\mathcal{L}_{ase}\) and \(\phi^{open}\). We discuss the results in detail below.
Figure 4: Distribution of the open scores yielded by ASE.
Figure 5: T-SNE visualization of ASE features on CUB.
**Adversarial semantic embedding learning via \(\mathcal{L}_{ase}\)**. We compared the \(\mathcal{L}_{ase}\)-based adversarial embedding learning module with four alternative methods, including _Mixup_[55], _Uniform Noise_, _Semantic + Noise_, and _Adversarial Features_[13, 14], as described in Section 4.3. Table 5 shows that although some of the simpler methods, such as _Semantic + Noise_, can achieve fairly good performance compared to the baselines in Table 4, our full model (\(\mathcal{L}_{ase}\) + \(\phi^{open}\)) consistently outperforms them in AUROC on all four datasets. While _Adversarial Features_ is more effective than the other baselines, it performs rather unstably. In summary, ASE is consistently the best performer.
**Unknowns-informed open-set classifier via \(\phi^{open}\)**. As discussed in Section 4.4, Post-hoc OSR methods, such as ODIN and MSP, can also be applied to the final classifier trained by ASE. Table 5 shows that ASE-enabled ODIN and MSP methods can largely improve their unknown detection performance compared to the original ODIN and MSP methods (see Table 4), indicating that the final unknowns-informed classifier trained by ASE is more effective in discriminating the unknown samples than the classifier in the original generative ZSL model. However, ASE substantially outperforms both ASE-enabled ODIN and MSP methods in AUROC on CUB, AWA2, and FLO, with a maximal increase of about 9% on CUB and 7% on AWA2. Although ASE-enabled MSP obtains the best AUROC on SUN, it performs poorly on other datasets. Overall, since \(\phi^{open}\) is end-to-end trained to detect unknown-class samples, it is much more effective than the heuristic methods, ASE-enabled ODIN and MSP.
### Extending to Generalized ZS-OSR and ZS-OOD Settings
Our approach focuses on distinguishing unseen and unknown samples in ZS-OSR, but ASE can be extended to function under generalized ZS-OSR settings with a minor modification to Eq. (3). ASE generates tightly distributed adversarial semantic embeddings around each of the seen-class and unseen-class semantic embeddings, and can generate unknown-class features and train the \(\phi^{open}\) classifier with seen-class, unseen-class, and unknown-class features. SOTA generalized ZS model **GCM-CF**[38] with MSP is included to extend our baselines under this setting. The results are
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Unknowns**} & \multirow{2}{*}{**Score**} & \multicolumn{4}{c}{**AUROC**} \\ & & CUB & AWA2 & FLO & SUN \\ \hline _Mixup_ & \(\phi^{open}\) & 60.33 & 25.31 & 46.96 & 57.98 \\ _Uniform Noise_ & \(\phi^{open}\) & 63.58 & 46.11 & 50.42 & 57.29 \\ _Semantic + Noise_ & \(\phi^{open}\) & 63.74 & 69.02 & 63.22 & 54.77 \\ _Adversarial Features_ & \(\phi^{open}\) & 74.92 & 54.70 & **73.19** & 70.75 \\ \hline \(\mathcal{L}_{ase}\) & \(\phi^{open}\) & **80.20** & **81.99** & 72.78 & 72.69 \\ \hline \(\mathcal{L}_{ase}\) & MSP & 71.69 & 75.07 & 69.63 & **76.11** \\ \(\mathcal{L}_{ase}\) & ODIN & 72.07 & 74.67 & 66.81 & 71.96 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of ASE (shaded) and its six variants
Figure 6: Performance w.r.t. openness rates on CUB.
presented in Table 6, where ASE achieves the most effective detection of unknown samples across all four datasets, outperforming the best baseline per dataset by 0.1-4.9% in AUROC. In terms of FPR95, ASE also maintains a similarly leading position. Although APN surpasses ASE on two datasets in FPR95, ASE substantially outperforms it in both AUROC and FPR on the other datasets.
While ZS-OSR focuses on the scenario where known classes and unknown classes belong to the same distribution, we believe that there exists scenarios of ZS-OOD (Out-of-Distribution), where we only know the semantic information of the known classes but need to detect images of unknown classes from a different distribution (_e.g._, samples from a largely different dataset). To evaluate ASE's performance in the ZS-OOD setting, we randomly select 25 classes from AWA2, FLO, and SUN as unknown classes and combine them with 25 unseen classes of CUB to obtain three ZS-OOD test datasets, CUB-AWA2, CUB-FLO, and CUB-SUN. Table 7 shows that considering both AUROC and FPR95 metrics, ASE's performance surpasses all baselines by a significant margin, demonstrating the empirical effectiveness of ASE under the ZS-OOD setting.
## 7 Conclusions
This work introduces ZS-OSR, a problem setting which extends ZSL to open-set scenarios, and analyzes the challenge of distinguishing samples of unseen and unknown classes. To promote the development and evaluation of ZS-OSR methods, we build eight baselines that combine SOTA ZSL and OSR models, and establish performance benchmarks by applying them to four ZS-OSR datasets adapted from ZSL datasets. We further propose the ASE approach that learns adversarial semantic embeddings to accurately detect the unknown samples while maintaining preferable classification accuracy of the unseen-class samples. Empirical results show that ASE 1) outperforms the baselines on the four datasets in AUROC, 2) performs stably on datasets with varying openness, and 3) can be easily extended to detect the unknown samples under generalized ZS-OSR settings and ZS-OOD settings.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**CUB**} & \multicolumn{2}{c|}{**AWA2**} & \multicolumn{2}{c|}{**FLO**} & \multicolumn{2}{c}{**SUN**} \\ & FPR95 \(\downarrow\) & AUC \(\uparrow\) & FPR95 \(\downarrow\) & AUC \(\uparrow\) & FPR95 \(\downarrow\) & AUC \(\uparrow\) & FPR95 \(\downarrow\) & AUC \(\uparrow\) \\ \hline
**MSP** & 81.1 & 66.2 & 84.7 & 63.3 & 70.6 & 72.5 & 88.6 & 60.5 \\
**OpenMax** & 92.4 & 55.1 & 95.5 & 59.0 & 81.8 & 66.6 & 94.3 & 48.7 \\
**Placeholder** & 79.3 & 71.0 & 91.9 & 45.9 & 94.8 & 50.4 & 88.0 & 57.4 \\
**Energy** & 89.3 & 64.1 & 89.3 & 50.8 & 81.2 & 52.7 & 91.8 & 59.6 \\
**OIN** & 85.1 & 67.8 & 78.5 & 68.2 & 80.5 & 75.1 & 89.9 & 60.7 \\
**LogitNorm** & 78.6 & 69.6 & 75.9 & 62.1 & 75.3 & 66.5 & 87.6 & 61.7 \\
**MaxLogit** & 84.2 & 64.8 & 83.7 & 62.6 & 70.2 & 65.4 & 89.1 & 59.5 \\
**APN** & 88.3 & 56.7 & **69.1** & 73.9 & **54.3** & 71.7 & 90.9 & 56.1 \\
**GCM-CF** & 84.3 & 64.5 & 74.0 & 67.0 & 67.3 & 75.7 & 90.2 & 60.6 \\ \hline
**ASE (Ours)** & **77.8** & **75.9** & 73.8 & **78.7** & 69.4 & **79.1** & **87.5** & **61.8** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results under generalized ZS-OSR setting.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**CUB-AWA2**} & \multicolumn{2}{c|}{**CUB-FLO**} & \multicolumn{2}{c}{**CUB-SUN**} \\ & FPR95 \(\downarrow\) & AUC \(\uparrow\) & FPR95 \(\downarrow\) & AUC \(\uparrow\) & FPR95 \(\downarrow\) & AUC \(\uparrow\) \\ \hline
**MSP** & 83.0 & 72.6 & 84.2 & 74.2 & 86.4 & 75.0 \\
**OpenMax** & 95.1 & 48.2 & 91.3 & 41.6 & 91.6 & 43.3 \\
**Placeholder** & 79.5 & 70.9 & 78.9 & 72.7 & 81.5 & 72.0 \\
**Energy** & 69.4 & 91.8 & 74.9 & 98.6 & 56.4 & **99.9** \\
**ODIN** & 91.7 & 69.2 & 78.9 & 62.4 & 83.9 & 70.5 \\
**LogitNorm** & 66.8 & 69.2 & 61.4 & 62.4 & 55.4 & 70.5 \\
**MaxLogit** & 86.7 & 79.4 & 83.1 & 83.7 & 81.4 & 89.0 \\
**APN** & 45.8 & 88.4 & 25.7 & 94.0 & 31.3 & 93.1 \\ \hline
**ASE (Ours)** & **4.0** & **97.2** & **5.3** & **99.0** & **0.3** & **99.9** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Results under generalized ZS-OOD setting. |
2305.16104 | Information encoded in gene-frequency trajectories | In this work we present a systematic mathematical approximation scheme that
exposes the way that information, about the evolutionary forces of selection
and random genetic drift, is encoded in gene-frequency trajectories.
We determine approximate, time-dependent, gene-frequency trajectory
statistics, assuming additive selection. We use the probability of fixation to
test and illustrate the approximation scheme introduced. For the case where the
strength of selection and the effective population size have constant values,
we show how a standard result for the probability of fixation, under the
diffusion approximation, systematically emerges, when increasing numbers of
approximate trajectory statistics are taken into account. We then provide
examples of how time-dependent parameters influence gene-frequency statistics. | Konstantinos Mavreas, David Waxman | 2023-05-25T14:34:45Z | http://arxiv.org/abs/2305.16104v1 | ###### Abstract
###### Abstract
In this work we present a systematic mathematical approximation scheme that exposes the way that information, about the evolutionary forces of selection and random genetic drift, is encoded in gene-frequency trajectories.
We determine approximate, time-dependent, gene-frequency trajectory statistics, assuming additive selection. We use the probability of fixation to test and illustrate the approximation scheme introduced. For the case where the strength of selection and the effective population size have constant values, we show how a standard result for the probability of fixation, under the diffusion approximation, systematically emerges, when increasing numbers of approximate trajectory statistics are taken into account. We then provide examples of how time-dependent parameters influence gene-frequency statistics.
**Information encoded in gene-frequency trajectories**
K. Mavreas and D. Waxman
Centre for Computational Systems Biology, ISTBI,
Fudan University, 220 Handan Road, Shanghai 200433, PRC
## 1 Introduction
A _gene-frequency trajectory_, namely the set of values taken by an allele's relative frequency over a period of time, encodes information about the underlying processes that give rise to the trajectory. These processes are a combination of deterministic and stochastic evolutionary forces. In the present work, we present a systematic mathematical approximation scheme that exposes this information.
This work focusses on basic statistics associated with a set of gene-frequency trajectories. Such an analysis gives us the means to understand and quantify how different evolutionary forces influence statistics of trajectories and how they feed into quantities of direct interest.
### Scope of the study
The primary focus of this work is on the approximation of time-dependent _trajectory statistics_, which may have various applications. However, we shall repeatedly apply and test the results we obtain on the _probability of fixation_, i.e., the probability that an allele _ultimately_ achieves a relative frequency of unity.
The probability of fixation, which is a quantity of considerable interest in its own right (see [1] and [2]), is, for a given initial frequency, a single number, and constitutes a convenient testing ground/target for our approach,
compared with trajectory statistics such as the mean frequency, which is defined over a range of times, and hence is a function of time, and not a single number.
We consider a single locus in a randomly mating diploid population. While the evolutionary forces at play in such a population can include mutation, natural selection and random genetic drift, we shall neglect mutation, assuming that for the timescales/population sizes considered, mutation occurs with negligible probability.
Natural selection, while often treated as a deterministic force, can have deterministic and stochastic aspects (see, for example, [3] and [4]). Here, we consider purely deterministic (i.e., predictable) selection, first with a constant strength, later with a time dependent strength. Incorporating selection with a stochastic component is possible, within the results we present, but would require carrying out an average.
Random genetic drift, which we shall sometimes refer to as just 'drift' or 'genetic drift', occurs in a population of finite size, and is stochastic in character [2]. We will first consider a constant effective population size, corresponding to genetic drift with a fixed'strength', and later will allow the effective population size to be time dependent, corresponding to drift with a varying strength.
The calculations we present lie in a regime of near neutrality. Thus with \(s\) a typical selection coefficient associated with a mutant at the locus of interest, and \(N_{e}\) the effective population size, the regime we consider is
\[N_{e}|s|\lesssim 1. \tag{1.1}\]
In such a regime we show how it is possible to develop a theoretical methodology that exposes information about evolutionary forces, that is encoded in gene-frequency trajectories.
We note that while the method we present corresponds to a restriction on the _magnitude_ of the selection coefficient of a mutant (Eq. (1.1)), it can flexibly deal with selection coefficient of _both signs_. This flexibility allows the establishment of results for both beneficial mutations, which are directly relevant to evolutionary adaptation, and deleterious mutations, which, for example, play an important role in the survival of asexual populations ([5], [6]).
As already stated, we primarily test and apply the methods we present on the probability of fixation. In particular, we show how Kimura's result, for the probability of fixation when the strength of selection and the effective population size take constant values, emerges as the number of approximate trajectory statistics is increased. We proceed to show how to extend the analysis to time dependent parameters.
Because we work in a nearly neutral regime (Eq. (1.1)), the examples we give on the fixation probability are complementary to previous results
for this quantity, i.e., for quite strongly beneficial mutations (\(4N_{e}s\gg 1\)) when the population size is constant (see [7] and, for example, [8]) or when it changes with time, either monotonically [9], or more generally [10]), or where selection and population size change over a finite time [11]), or when selection is very weak (\(N_{e}|s|\ll 1\), see e.g., [12]). However, this work is more general than just an analysis of the fixation probability, since it focuses on _trajectory statistics_, and in the Discussion some space is devoted to the ways the methods presented in this work can be extended to a wider class of problems.
## 2 Background
Consider a randomly mating diploid dioecious sexual population with equal sex ratio. Generations are discrete, and labelled 0, 1, 2,.... The fitness of each individual is determined by a single locus that has two alleles, one of which is a mutant or focal allele, \(A\), and the other a non-focal allele, \(B\).
Throughout this work we assume that the phenomena we consider occur under conditions where new mutations are sufficiently improbable that they can be neglected.
We take fitness to be _additive_ in nature. The implementation and parameterisation of additive selection is achieved by writing the relative fitnesses of the \(AA\), \(AB\) and \(BB\) genotypes as \(1+2s\), \(1+s\), and \(1\), respectively, where \(s\) is a selection coefficient associated with the number of \(A\) alleles in a genotype. While the value of \(s\) is restricted in magnitude, according to Eq. (1.1), the sign of \(s\) is unrestricted. Thus \(s\) can be positive, negative, or zero, corresponding to mutations that are beneficial, deleterious, or neutral, respectively.
Apart from the selection coefficient, another parameter, that plays a key role in the dynamics, is the effective population size, which we write as \(N_{e}\). This characterises the strength of random frequency fluctuations associated with random genetic drift. In the simplest case of an ideal population, namely one described by a Wright-Fisher model ([13], [14]), the effective population size, \(N_{e}\), coincides with the actual (or census) population size, \(N\). However, when there is greater variability in offspring numbers than that of a Poisson distribution, the effective population size will be smaller than the census size [2], i.e., \(N_{e}<N\). Other deviations from the pure Wright-Fisher model also lead to \(N_{e}\) deviating from \(N\)[2]. The effective population size explicitly appears in the diffusion equation associated with the diffusion approximation [15] and can be incorporated into simulations of the Wright-Fisher model [16].
### Fixation probability
In the biallelic population described above, which is characterised by the parameters \(s\) and \(N_{e}\), a consequence of the neglect of mutation is that fixation and loss of the different alleles are the only possibilities at long times. The probability that the \(A\) allele ultimately achieves fixation, termed the _fixation probability_, was obtained by Kimura under the diffusion approximation [1]. In terms of a composite parameter \(R\) defined by
\[R=4N_{e}s \tag{2.1}\]
which is a scaled measure of the strength of selection, Kimura found that when the initial frequency of the \(A\) allele is \(y\), the probability of fixation is approximately
\[P_{fix}(y)=\frac{1-e^{-Ry}}{1-e^{-R}} \tag{2.2}\]
[1]. Note that this result has the properties that it vanishes at an initial frequency of zero (\(\lim_{y\to 0}P_{fix}(y)=0\)) and it takes the value of unity at an initial frequency of unity (\(\lim_{y\to 1}P_{fix}(y)=1\)).
Kimura's result, in Eq. (2.2), is relatively simple to state, but it takes some mathematical machinery to derive, requiring, for example, solution of a backward or forward diffusion equation (see, for example, [1] and [17], respectively). In the present work we also use a diffusion approximation, but show how results, such as Eq. (2.2), can simply and systematically _emerge_ from the inclusion of some basic statistics of gene-frequency trajectories.
Indeed, the resulting understanding, that basic properties of fixation and loss can be derived from essentially elementary statistics of gene frequency trajectories, allows principled generalisations of results, such as Eq. (2.2), to more realistic and complex scenarios involving selection and population sizes that are time-dependent, and we will give examples of this. Furthermore, the analysis that we present explicitly illustrates the way that the important information is encoded in statistics of gene frequency trajectories.
### What determines gene frequency trajectories?
We now give some background, that we shall shortly use, on what theoretically determines gene-frequency trajectories, and hence which underlies, for example, Kimura's result for the probability of fixation (Eq. (2.2)).
To proceed, let \(X(t)\) denote the relative frequency (_frequency_ for short) of the \(A\) allele at time \(t\), with \(1-X(t)\) the corresponding frequency of the \(B\) allele. A general feature of the frequency, since it is a proportion, is that for all \(t\) it lies in the range \(0\) to \(1\), which includes the end points of the range, namely \(0\) and \(1\).
We take time to run from an initial value of \(0\), and the frequency at this time, termed the _initial frequency_, is denoted by \(y\), i.e.,
\[X(0)=y. \tag{2.3}\]
A particular gene (or allele) frequency trajectory is specified by the form of \(X(t)\) for a range of times that (in the present work) starts at \(t=0\).
We can think of the dynamics of the frequency as being driven by evolutionary forces, which generally cause changes in the frequency. In the present case, we have assumed a one locus problem where there are only two evolutionary forces acting, namely selection and random genetic drift. We shall assume that these two forces are weak, in the absolute sense that, from one generation to the next, they cause only small changes in the frequency. We can then, reasonably, work under the _diffusion approximation_, where time and frequency are treated as continuous variables. A frequency trajectory is then approximated as a _continuous_ function of _continuous_ time.
Over the very small time interval, from \(t\) to \(t+dt\), the change in the frequency, \(dX(t)=X(t+dt)-X(t)\), derives a contribution from the systematic forces that are acting (i.e., forces that exclude random genetic drift). When the frequency is \(x\) this change in frequency is written as \(F(x)dt\), with \(F(x)\) the systematic force. In the problem at hand, this force is derived purely from selection. Under additivity, as defined above, we have, to leading order in \(s\),
\[F(x)=sx(1-x). \tag{2.4}\]
When the frequency at time \(t\) is \(x\), the corresponding contribution to \(dX(t)\) from genetic drift is
\[\sqrt{V(x)}dW(t). \tag{2.5}\]
The first factor in this expression involves the function \(V(x)\), which is a measure of the variance of allele frequency caused by drift, and sometimes called the _infinitesimal variance_. The form of \(V(x)\) originates in the Wright-Fisher model [2], and is given by
\[V(x)=\frac{x(1-x)}{2N_{e}}. \tag{2.6}\]
The other factor in Eq. (2.5) is the quantity \(dW(t)=W(t+dt)-W(t)\), which represents the random 'noise' associated with genetic drift1. Combining the contributions from selection and drift, we obtain the following differential equation for the change in \(X(t)\) from time \(t\) to time \(t+dt\):
Footnote 1: The quantity \(W(t)\) is a _Wiener process_ or _Brownian motion_. It is a random function of the time with mean zero. For the full set of properties of \(W(t)\) see e.g., [18]).
\[dX(t)=F(X(t))dt+\sqrt{V(X(t))}dW(t). \tag{2.7}\]
Equation (2.7) contains randomness2 and is one way of representing the diffusion approximation of random genetic drift. Another way of representing this approximation is in terms of the equation obeyed by the distribution (probability density) of \(X(t)\). It can be shown that Eq. (2.7) directly leads to the distribution of \(X(t)\) obeying a diffusion equation (see, e.g., [18]).
Footnote 2: Formally, Eq. (2.7) is an _Ito stochastic differential equation_[19] and has the key property that \(X(t)\) and \(dW(t)\) are statistically independent, as we shall use later.
Equation (2.7) determines allele frequencies over a range of times, and so determines _gene frequency trajectories_. Since \(F(x)\) and \(V(x)\) have been specified, we can obtain an approximate realisation of a trajectory by numerically solving Eq. (2.7) over a given time interval, when starting from an initial frequency of \(y\) at time \(0\), and using a particular realisation of the noise from random genetic drift3. If we solve Eq. (2.7) again, with the same initial frequency, but with a different realisation of the noise, then we obtain a different realisation of a frequency trajectory.
Footnote 3: A realisation of the noise corresponds to the specification of the random function, \(W(t)\), over a _range_ of times, starting from time \(0\).
### Statistics of trajectories
Statistics of trajectories, such as the expected (or mean) value of the frequency at a given time, \(t\), are obtained by averaging \(X(t)\) over many trajectories, and we leave it implicit that every trajectory starts at frequency \(y\) (see Eq. (2.3)). Generally, we will indicate such mean values by an overbar, for example, the mean values of \(X(t)\) and \([X(t)]^{2}\) are written as \(\bar{X}(t)\equiv\overline{X^{1}}(t)\) and \(\overline{X^{2}}(t)\), respectively. At \(t=0\) these mean values reduce to \(y\) and \(y^{2}\), respectively, because the initial frequency has the definite value \(y\).
To illustrate just some of the information that is contained within statistics of frequency trajectories, let us consider the fixation probability, which, for an initial frequency of \(y\), we write as \(P_{fix}(y)\). For any positive constant, \(c\), we can write the fixation probability as a long time limit
\[P_{fix}(y)=\lim_{t\to\infty}\overline{X^{c}}(t) \tag{2.8}\]
which follows because at long times, the only outcomes are loss or fixation4. Thus knowledge of the mean value of any positive power of \(X(t)\), for all \(t\), is fully sufficient to determine the fixation probability.
Footnote 4: Equation (2.8) follows since as \(t\to\infty\), the frequency \(X(t)\) only achieves one of two values, namely \(0\) (loss) and \(1\) (fixation), and it does so with the probabilities \(1-P_{fix}(y)\) and \(P_{fix}(y)\), respectively. Consequently \(\lim_{t\to\infty}\overline{X^{c}}(t)=0^{c}\times[1-P_{fix}(y)]+1^{c}\times P_{ fix}(y)\) and any \(c>0\) leads to Eq. (2.8).
Approximate trajectory statistics - with time-independent parameters
In principle, the initial frequency, \(y\), and the differential equation that governs the behaviour of the frequency, \(X(t)\) (Eqs. (2.3) and (2.7), respectively), contain all available information on frequency trajectories. We cannot, however, directly get access to this information because Eq. (2.7) cannot, generally, be analytically solved for \(X(t)\)[20]. We shall proceed by carrying out an approximate analytical analysis, where we determine approximate time-dependent trajectory statistics, and in this way gain access to information encoded within trajectories.
As we shall shortly show, a given calculation simultaneously determines a _set_ of approximate time-dependent trajectory statistics. In particular, if a calculation yields an approximation to \(\overline{X^{1}}(t)\), \(\overline{X^{2}}(t)\), \(\overline{X^{3}}(t)\), \(\ldots\), \(\overline{X^{n}}(t)\), then we say 'we have an \(n\)'th order approximation to the problem.' Thus if we approximately determine just \(\overline{X^{1}}(t)\) then we have a _first order approximation_, while if we approximately determine both \(\overline{X^{1}}(t)\) and \(\overline{X^{2}}(t)\) then we have a _second order approximation_.
By virtue of Eq. (2.8), an \(n\)'th order approximation, for any non-zero \(n\), rather directly contains information about the fixation probability. We shall illustrate the quality and content of the \(n\)'th order approximation by comparing results for the fixation probability, for some different values of \(n\). We begin with the first order approximation, i.e., an approximation of just \(\overline{X^{1}}(t)\equiv\bar{X}(t)\).
### First order approximation
Indicating expectations (or mean values) by an overbar, the expected value of Eq. (2.7) is \(d\bar{X}(t)=\overline{F(X(t))}dt+\overline{\sqrt{V(X(t))}dW}(t)\), and statistical independence of \(X(t)\) and \(dW(t)\) leads to the second term, on the right hand side of this averaged equation, vanishing5. We thus obtain \(d\bar{X}(t)=\overline{F(X(t))}dt\) or
Footnote 5: Omitting time arguments, we have that \(\overline{\sqrt{V(X)}dW}\) equals \(\overline{\sqrt{V(X)}}\times d\overline{W}\) (by statistical independence), which then vanishes because \(d\overline{W}=0\).
\[\frac{d\bar{X}(t)}{dt}=s\left[\bar{X}(t)-\overline{X^{2}}(t)\right]. \tag{3.1}\]
Equation (3.1) follows from Eq. (2.7) with no approximations, and while we can say the right hand side of Eq. (3.1) has its _origin_ in selection, this identification is not completely straightforward6.
Footnote 6: We can say the right hand side of Eq. (3.1) has its _origin_ in selection, in the sense that it is the expected value of the selective force \(sX(t)\left[1-X(t)\right]\). However, the right hand side of Eq. (3.1) contains the expected values \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\), which are the outcome of _both_ of the evolutionary forces acting within Eq. (2.7). As a consequence, \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) depend on both \(s\) and \(N_{e}\) (see later results, e.g., Eq. (3.7)), and hence contain effects of \(N_{e}\).
We note that purely from the viewpoint of Eq. (3.1), we cannot determine \(\bar{X}(t)\) because the function \(\overline{X^{2}}(t)\) is present but unknown, and Eq. (3.1) gives no information about \(\overline{X^{2}}(t)\). We shall thus pursue approximations.
Two simple first order approximations of Eq. (3.1), which both allow explicit determination of \(\bar{X}(t)\), suggest themselves. These are: (i) omit \(\overline{X^{2}}(t)\), or (ii) replace \(\overline{X^{2}}(t)\) by \([\bar{X}(t)]^{2}\). However, both approximations lead to forms of \(\bar{X}(t)\) with large \(t\) behaviours that are unsatisfactory. The large \(t\) limit of \(\bar{X}(t)\) of approximation (i) either diverges or vanishes, depending on the sign of \(s\), while that of approximation (ii) either vanishes or is unity, again depending on the sign of \(s\). Neither approximation, when used in Eq. (2.8) with \(c=1\), leads to a meaningful result for the fixation probability, which cannot diverge, and generally has a value that lies between 0 and 1.
A preferable first order approximation of Eq. (3.1) is based on noting that when the time, \(t\), gets large, \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) both approach the _same_ limit, which is an exact property that follows from Eq. (2.8). It suggests that \(\bar{X}(t)-\overline{X^{2}}(t)\) is not large for an appreciable amount of time, and motivates the approximation of simply _omitting_\(s[\bar{X}(t)-\overline{X^{2}}(t)]\) in Eq. (3.1), i.e., omitting the entire right hand side of this equation. This leads to
\[\frac{d\bar{X}(t)}{dt}\simeq 0\quad\mbox{first order approximation}. \tag{3.2}\]
The solution to Eq. (3.2), subject to \(X(0)=y\), is simply
\[\bar{X}(t)\simeq y\quad\mbox{first order approximation}. \tag{3.3}\]
#### 3.1.1 Fixation probability
The first order approximation of \(X(t)\) in Eq. (3.3) can be used to approximate the fixation probability. Setting \(c=1\) in Eq. (2.8), and using Eq. (3.3) leads to the neutral fixation probability, namely \(P_{fix}(y)=\lim_{t\to\infty}\bar{X}(t)\simeq y\), that follows from the \(s\to 0\) limit of Eq. (2.2). For reasons that will shortly become clear, we write this result for the fixation probability in the form
\[P_{fix}(y)\simeq\frac{\frac{(Ry)}{1!}}{\frac{R}{1!}}\quad\mbox{first order approximation}. \tag{3.4}\]
For small \(R\) (\(|R|\ll 1\)), Eq. (3.3) (or Eq. (3.4)) is a valid approximation of Eq. (2.2). It also exhibits appropriate \(y\) behaviour: the approximation for \(P_{fix}(y)\) takes the exact value 0 at \(y=0\), increases with \(y\), and achieves the exact value 1 when \(y=1\).
Let us proceed to more sophisticated expressions, by considering a second order approximation.
### Second order approximation
From Eq. (2.7) we can determine the following equation for \(\overline{X^{2}}(t)\):
\[\frac{d\overline{X^{2}}(t)}{dt}=2s\left[\overline{X^{2}}(t)-\overline{X^{3}}(t) \right]+\frac{1}{2N_{e}}\left[\bar{X}(t)-\overline{X^{2}}(t)\right] \tag{3.5}\]
(see Appendix 1 for details), where the first term on the right hand side originates in selection, while the second term is an average of the infinitesimal variance, \(V(X(t))\), and hence originates in random genetic drift.
From the viewpoint of Eqs. (3.1) and (3.5), we cannot simultaneously solve these equations for \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) because the function \(\overline{X^{3}}(t)\) is present but unknown, and Eqs. (3.1) and (3.5) give no information about this function. We can, again, make an approximation that provides a feasible way forward, and allows approximate determination of \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\).
We proceed by keeping Eq. (3.1) fully intact, but approximate Eq. (3.5), using similar reasoning to that used when we made the first order approximation for \(\bar{X}(t)\). In particular, in Eq. (3.5), we omit the selection-originating term \(2s\left[\overline{X^{2}}(t)-\overline{X^{3}}(t)\right]\), assuming it to be small. As will become evident, this approximation applies when \(R\) (Eq. (2.1)) is suitably small. From Eq. (3.1) and the approximated Eq. (3.5), we thus arrive at a pair of coupled differential equations for \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) given by
\[\left.\begin{array}{l}\frac{d\bar{X}(t)}{dt}=s\left[\bar{X}(t)- \overline{X^{2}}(t)\right]\\ \\ \frac{d\overline{X^{2}}(t)}{dt}\simeq\frac{1}{2N_{e}}\left[\bar{X}(t)- \overline{X^{2}}(t)\right]\end{array}\right\}\text{second order approximation}. \tag{3.6}\]
These equations, combined with \(\bar{X}(0)=y\) and \(\overline{X^{2}}(0)=y^{2}\), are sufficient to fully determine \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) for all \(t>0\). In particular, when \(s\) and \(N_{e}\) are independent of time, we show in Appendix 2 that the solution of the set of equations in Eq. (3.6) can be written as
\[\left.\begin{array}{l}\bar{X}(t)\simeq\frac{Ry-\frac{1}{2}\left(Ry\right)^{ 2}}{R-\frac{1}{2}R^{2}}-\frac{\frac{1}{2}R^{2}y\left(1-y\right)}{R-\frac{1}{2} R^{2}}e^{-\lambda t}\\ \\ \overline{X^{2}}(t)\simeq\frac{Ry-\frac{1}{2}\left(Ry\right)^{2}}{R-\frac{1}{ 2}R^{2}}-\frac{Ry\left(1-y\right)}{R-\frac{1}{2}R^{2}}e^{-\lambda t}\end{array} \right\}\text{second order approximation} \tag{3.7}\]
where
\[\lambda=\left(1-\frac{R}{2}\right)\frac{1}{2N_{e}}. \tag{3.8}\]
The approximations for \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) in Eq. (3.7) have time dependence which is governed by \(\lambda\). The approximations have the obvious limitation that \(\lambda\) must be _positive_ (to avoid a spurious divergence of the solutions at large \(t\), which occurs if \(\lambda\) is negative). This is a clear indication that the approximation applies under restrictions on the range of values of the \(R\) parameter. We consider accuracy of the approximation and the range of \(R\) in a section, below, on numerical accuracy of the fixation probability.
In Figure (1) we compare the form of \(\bar{X}(t)\) obtained from the second order approximation, given in Eq. (3.7), with the corresponding result for the mean trajectory derived from simulations of the Wright-Fisher model, using an effective size, \(N_{e}\), that differs considerably from the census size, \(N\)[16].
Figure 1: **Comparing approximate and simulated mean trajectories.** In this figure we plot approximate and simulation results for the mean allele frequency, \(\bar{X}(t)\), against the time, \(t\). The approximate results are obtained from the second order approximation given in Eq. (3.7). The simulation results are based on a Wright-Fisher model where the effective population size, \(N_{e}\) (that is used in Eq. (3.7)) differs from the census size of the population, using the method of [16]. The parameter-values adopted are: census size, \(N=100\); effective population size, \(N_{e}=25\); initial frequency, \(y=10/200\); and we give plots for the two selection coefficients \(s=\pm 10^{-3}\). The mean of \(2\times 10^{6}\) simulated trajectories were used for each selection coefficient.
We see from Figure (1) that for the parameter values adopted, there is reasonable agreement between the different approaches to \(\bar{X}(t)\). In particular, just the second order approximation can capture meaningful time-dependent features of trajectories.
#### 3.2.1 Fixation probability
When \(R<2\), the quantity \(\lambda\) of Eq. (3.8) is positive, and the approximate forms for \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\), given in Eq. (3.7), both converge to the same long time limiting value, consistent with the long-time limit in Eq. (2.8) being independent of the exponent, \(c\). This long time limiting value is the second order approximation of the fixation probability, which is thus given by
\[P_{fix}(y)\simeq\frac{Ry}{\frac{1!}{1!}-\frac{(Ry)^{2}}{2!}}\qquad\mbox{second order approximation}. \tag{3.9}\]
We note that, irrespective of the value of \(R\), the approximation in Eq. (3.9) has the intrinsic feature of taking the exact values \(0\) and \(1\) when \(y\) approaches \(0\) and \(1\), respectively.
### Higher order approximations - applied to the fixation probability
We note that the first and second order approximations of the fixation probability, given in Eqs. (3.4) and (3.9), respectively, can be seen to follow from the fixation probability of Kimura (Eq. (2.2)), when both numerator and denominator in Kimura's result are _separately_ expanded, to first or second order in \(R\), respectively.
We shall use the notation \([\ldots]_{n}\) to denote expansion of the bracketed quantity to \(n\)'th order in \(R\). For example \([1-e^{-kR}]_{3}\) contains all terms in \(1-e^{-kR}\) up to and including \(O(R^{3})\) and is given by \([1-e^{-kR}]_{3}=\frac{kR}{1!}-\frac{(kR)^{2}}{2!}+\frac{(kR)^{3}}{3!}\). We can then write Eq. (3.4) as \(P_{fix}(y)\simeq\frac{\left[1-e^{-Ry}\right]_{1}}{\left[1-e^{-R}\right]_{1}}\), while Eq. (3.9) can be written as \(P_{fix}(y)\simeq\frac{\left[1-e^{-Ry}\right]_{2}}{\left[1-e^{-R}\right]_{2}}\). Under a third order approximation, we consider the set of coupled differential equations for \(\bar{X}(t)\), \(\overline{X^{2}}(t)\) and \(\overline{X^{3}}(t)\), but now, in the differential equation for \(\overline{X^{3}}(t)\), we omit the term that originates from selection, namely \(3s\left[\bar{X^{3}}(t)-\overline{X^{4}}(t)\right]\), with the resulting equations given in Appendix 3, in Eq. (C.2). This leads to the approximate
result
\[\begin{array}{rcl}P_{fix}(y)&\simeq&\frac{\frac{Ry}{1!}-\frac{(Ry)^{2}}{2!}+ \frac{(Ry)^{3}}{3!}}{\frac{R}{1!}-\frac{R^{2}}{2!}+\frac{R^{3}}{3!}}\\ \\ &\equiv&\frac{\left[1-e^{-Ry}\right]_{3}}{\left[1-e^{-R}\right]_{3}} \hskip 56.905512pt\mbox{third order approximation}\end{array} \tag{3.10}\]
as is shown Appendix 3.
It then becomes highly plausible that an \(n\)'th order approximation, where \(\overline{X^{1}}(t)\), \(\overline{X^{2}}(t)\),..., \(\overline{X^{n}}(t)\), are determined by omitting the selection-originating term in the differential equation for \(\overline{X^{n}}(t)\), leads to
\[P_{fix}(y)\simeq\frac{\left[1-e^{-Ry}\right]_{n}}{\left[1-e^{-R}\right]_{n}} \hskip 21.681ptn\mbox{'th order approximation} \tag{3.11}\]
and this is proved in Appendix 3.
### Numerical accuracy of the fixation probability
It is of interest to have an indication of the largest value of \(|R|\) for which the \(n\)'th order approximation works to a given accuracy. This is most simply found, when applied to the fixation probability, which is a single number (for a given initial frequency). To this end, we introduce a quantity we call \(R_{n}(\varepsilon)\), such that for \(|R|<R_{n}(\varepsilon)\) the error on the \(n\)'th order approximation of the fixation probability, compared with Kimura's result7, never exceeds \(\varepsilon\). In Table 1 we give numerically determined values of \(R_{n}(\varepsilon)\) for the orders \(n=1\), \(2\),..., \(5\) of the approximation, and for the error values \(\varepsilon=2\%\), \(5\%\), and \(10\%\).
Footnote 7: The error is calculated when all parameters are independent of time, and applies irrespective of the value of the initial frequency, \(y\).
One example of the usage of Table 1 is when \(|R|\) has a value less than 1. Then a third order approximation leads to an error in the fixation probability that deviates from Kimura's result by less than 5%. However, the real use of Table 1 is in more complex situations, as we consider next.
## 4 Approximate trajectory statistics - with time-dependent parameters
We shall now use the methodology, developed above, to determine results for the case where parameters depend on time. This seems to be a principled way to proceed, since, as we have seen, there is a systematic development of results such as the fixation probability, with the order of approximation.
Let us reconsider the model considered above, but now with the parameters \(s\) and \(N_{e}\) varying with time, i.e., with \(s=s(t)\) and \(N_{e}=N_{e}(t)\). This leads to the composite parameter \(R\) becoming a function of time, i.e., \(R(t)=4N_{e}(t)s(t)\).
We shall proceed under the assumption that from \(t=0\) onwards, the value of \(|R(t)|\) remains small. For example if \(|R(t)|\) is always below 0.5 then the results in Table 1 make it _plausible_ that if we use the second order
\begin{table}
\begin{tabular}{|c|c|c|} \hline error, \(\varepsilon\) & order, \(n\) & \(R_{n}(\varepsilon)\) \\ \hline \multirow{3}{*}{2\%} & 1 & 0.04 \\ \cline{2-3} & 2 & 0.33 \\ \cline{2-3} & 3 & 0.74 \\ \cline{2-3} & 4 & 1.14 \\ \cline{2-3} & 5 & 1.56 \\ \hline \multirow{3}{*}{5\%} & 1 & 0.10 \\ \cline{2-3} & 2 & 0.50 \\ \cline{2-3} & 3 & 0.99 \\ \cline{2-3} & 4 & 1.40 \\ \cline{2-3} & 5 & 1.85 \\ \hline \multirow{3}{*}{10\%} & 1 & 0.19 \\ \cline{2-3} & 2 & 0.68 \\ \cline{1-1} \cline{2-3} & 3 & 1.24 \\ \cline{1-1} \cline{2-3} & 4 & 1.62 \\ \cline{1-1} \cline{2-3} & 5 & 2.13 \\ \hline \end{tabular}
\end{table}
Table 1: **Error on the approximations.** For time-independent values of the parameters \(s\) and \(N_{e}\), we define the parameter \(R_{n}(\varepsilon)\) such that for \(|R|<R_{n}(\varepsilon)\) the \(n\)’th order approximation of the fixation probability (Eq. (3.11)) has an error, compared with Kimura’s result (Eq. (2.2)), that never exceeds \(\varepsilon\). This table contains numerically determined values of \(R_{n}(\varepsilon)\) for different values of \(\varepsilon\) and \(n\).
(\(n=2\)) approximation, there will be an error that is smaller than \(5\%\) in the result obtained for the fixation probability.
Since a first order approximation, is, by Eq. (3.3), independent of any parameters, we shall consider non-trivial cases of second and third order approximations.
### Second order approximation - with time-dependent parameters
For the second order approximation, the functions \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) continue to approximately satisfy Eq. (3.6) but now the quantities \(s\) and \(N_{e}\), that are present in the equations, are time-dependent. There are various ways to write the solutions for \(\bar{X}(t)\), \(\overline{X^{2}}(t)\) and \(P_{fix}(y)\). One such way is in terms of a function \(\Phi(t)\) defined by
\[\Phi(t)=1-\exp\left(-\int_{0}^{t}\left(1-\frac{R(z)}{2}\right)\frac{dz}{2N_{e} (z)}\right). \tag{4.1}\]
Then with \(\Phi^{\prime}(t)=d\Phi(t)/dt\) we find we can write
\[\bar{X}(t)\simeq\int_{0}^{t}\frac{\left[1-e^{-R(z)y}\right]_{2}}{\left[1-e^{- R(z)}\right]_{2}}\Phi^{\prime}(z)dz+y[1-\Phi(t)]. \tag{4.2}\]
(see Appendix 4 for details). The second order approximation for \(\overline{X^{2}}(t)\) follows from Eq. (4.2) by replacing \(y\) by \(y^{2}\) in the factor multiplying \([1-\Phi(t)]\).
We note that the form of \(\bar{X}(t)\) in Eq. (4.2) has an apparent probabilistic interpretation8.
Footnote 8: To see the probabilistic interpretation of Eq. (4.2), we introduce a random variable \(\tau\) with cumulative probability distribution \(\mbox{Prob}(\tau\leq t)=\Phi(t)\) and probability density \(\Phi^{\prime}(t)=d\Phi(t)/dt\). Then the form of \(\bar{X}(t)\) in Eq. (4.2) coincides with the average of a function that, for \(\tau\leq t\), takes the value \(\frac{\left[1-e^{-R(\tau)y}\right]_{2}}{\left[1-e^{-R(\tau)}\right]_{2}}\), and for \(\tau>t\), takes the value \(y\).
It may be verified that when \(s\) and \(N_{e}\) are independent of time, Eq. (4.2) reduces to Eq. (3.7).
Since we work under the assumption of relatively small \(|R(t)|\) (i.e., \(|R(t)|\lesssim 1\)) it follows that as \(t\to\infty\) we have \(\Phi(t)\to 1\), hence from \(P_{fix}(y)=\lim_{t\to\infty}\bar{X}(t)\) and from Eq. (4.2) we obtain the approximate result
\[P_{fix}(y)\simeq\int_{0}^{\infty}\frac{\left[1-e^{-R(t)y}\right]_{2}}{\left[1 -e^{-R(t)}\right]_{2}}\Phi^{\prime}(t)dt\]
\[=y^{2}+y(1-y)\int_{0}^{\infty}e^{-\int_{0}^{t}\left(1-\frac{R(z)}{2}\right) \frac{dz}{2N_{e}(z)}}\frac{dt}{2N_{e}(t)}. \tag{4.3}\]
The first form of the fixation probability in Eq. (4.3) is equivalent to an average of \(\frac{\left[1-e^{-R(t)y}\right]_{2}}{\left[1-e^{-R(t)}\right]_{2}}\), with \(\Phi^{\prime}(t)\) playing the role of a probability density. This tells us, without any additional calculation, that the approximation of \(P_{fix}(y)\) in Eq. (4.3) lies between the smallest and largest values that \(\frac{\left[1-e^{-R(t)y}\right]_{2}}{\left[1-e^{-R(t)}\right]_{2}}\) takes, from time \(t=0\) onwards.
The second form given for the fixation probability in Eq. (4.3) may be more useful for practical computations.
#### 4.1.1 Piecewise constant variation
As a simple illustration of the use of Eq. (4.3), suppose the effective population size, \(N_{e}\), stays constant,
\[N_{e}=N_{0} \tag{4.4}\]
while the selection coefficient changes over time according to
\[s(t)=\left\{\begin{array}{ll}s_{0}&\mbox{for $0\leq t<T$}\\ \\ s_{1}&\mbox{for $t\geq T$}.\end{array}\right. \tag{4.5}\]
In terms of the composite parameters
\[R_{0} = 4N_{0}s_{0},\quad R_{1}=4N_{0}s_{1},\quad w=\exp\left[-\left(1- \frac{R_{0}}{2}\right)\frac{T}{2N_{0}}\right]\]
we obtain
\[P_{fix}(y)\simeq(1-w)\frac{\left[1-e^{-R_{0}y}\right]_{2}}{\left[1-e^{-R_{0}} \right]_{2}}+w\frac{\left[1-e^{-R_{1}y}\right]_{2}}{\left[1-e^{-R_{1}}\right] _{2}}. \tag{4.7}\]
The result in Eq. (4.7) is a weighted average of approximate fixation probabilities associated with the selection coefficients of \(s_{0}\) and \(s_{1}\). The weighting factor, \(w\), is determined by the time that the change in selection coefficient occurs, along with the parameter-values that apply prior to this change, namely \(s_{0}\) and \(N_{0}\). A very similar 'weighted average' result also occurs when the effective population size discontinuously changes, while the selection coefficient stays constant.
### Third order approximation - with time-dependent parameters
A third order approximation corresponds to the solution of the three equations given in Eq. (C.2) or Eq. (C.3). However, to extract, e.g., \(\bar{X}(t)\) there
is a simpler way of proceeding. We first define two functions \(A(t)\) and \(B(t)\) via
\[A\equiv A(t)=\bar{X}(t)-\overline{X^{2}}(t),\quad B\equiv B(t)=\overline{X^{2}}(t )-\overline{X^{3}}(t). \tag{4.8}\]
These are shown in Appendix 5 to obey
\[\left.\begin{array}{rcl}\frac{dA}{dt}&\simeq&-\frac{1}{2N_{e}} \left[\left(1-\frac{R}{2}\right)A+RB\right]\\ \frac{dB}{dt}&\simeq&-\frac{1}{2N_{e}}\left[-A+\left(3-R\right)B\right]\end{array}\right\} \tag{4.9}\]
and are subject to \(A(0)=y-y^{2}\) and \(B(0)=y^{2}-y^{3}\). The third order problem only requires determination of \(A(t)\) and \(B(t)\), with statistics of frequencies following by integration, e.g.,
\[\bar{X}(t)\simeq y+\int_{0}^{t}\frac{R(z)}{4N_{e}(z)}A(z)dz \tag{4.10}\]
(see Appendix 5 for details).
For \(N_{e}\) and \(s\) independent of \(t\) we can explicitly solve Eq. (4.9). Here, we shall illustrate the working of the above in the time-dependent case by determining \(\bar{X}(t)\) for the specific forms of \(s(t)\) and \(N_{e}(t)\) in two different examples.
#### 4.2.1 Example 1: \(s\) constant, \(N_{e}\) changing
We take a constant selection coefficient
\[s=s_{0} \tag{4.11}\]
and the time-dependent effective population size
\[N_{e}(t)=N_{0}\times\left\{\begin{array}{ll}1&\mbox{for $0\leq t<T$}\\ t/T&\mbox{for $T\leq t<2T$}\\ 2&\mbox{for $t\geq 2T$}.\end{array}\right. \tag{4.12}\]
In Figure 2 we give the results of numerically solving Eq. (4.9) for this example, and illustrate the form of \(\bar{X}(t)\), as derived from Eq. (4.10).
Figure 2: **Third order approximation, Example 1.** This figure illustrates results of the third order approximation that follow by first solving the coupled equations given in Eq. (4.9) for Example 1, where the selection coefficient is constant (Eq. (4.11)), and the effective population size changes over time according to Eq. (4.12). The parameter values adopted are \(s_{0}=0.001\), \(N_{0}=1/100\), \(T=100\), and \(y=10/(2N_{0})=1/200\). We plot \(A(t)\) (solid line) and \(B(t)\) (broken line) against the time, \(t\) (top panel). In the bottom panel we plot the mean frequency, \(\bar{X(t)}\), based on the third order approximation given in Eq.(4.10) (solid line), and the mean of \(3\times 10^{4}\) simulated trajectories (broken line).
#### 4.2.2 Example 2: \(N_{e}\) constant, \(s\) changing
We take a constant effective population size
\[N_{e}=N_{0} \tag{4.13}\]
and the time-dependent selection coefficient
\[s(t)=s_{0}\times\left\{\begin{array}{ll}1&\mbox{for $0\leq t<T$}\\ \\ t/T&\mbox{for $\leq t<2T$}\\ \\ 2&\mbox{for $t\geq 2T$}.\end{array}\right. \tag{4.14}\]
In Figure 3 we give the results of solving Eq. (4.9) for this example, and illustrate the form of \(\bar{X}(t)\), as derived from Eq. (4.10).
Figure 3: **Third order approximation, Example 2.** This figure illustrates results of the third order approximation that follow by first solving the coupled equations given in Eq. (4.9) for Example 2, where the effective population size is constant (Eq. (4.13)), and the selection coefficient changes over time according to Eq. (4.14). The parameter values adopted are \(s_{0}=0.001\), \(N_{0}=1/100\), \(T=100\), and \(y=1/(2N_{0})=10/200\). We plot \(A(t)\) (solid line) and \(B(t)\) (broken line) against the time, \(t\) (top panel). In the bottom panel we plot the mean frequency, \(\bar{X(t)}\), based on the third order approximation given in Eq.(4.10) (solid line), and the mean of \(3\times 10^{4}\) simulated trajectories (broken line).
In Example 2 the values of \(R(t)\) are identical to those in Example 1. However, the fixation probability generally yields different results when \(s\) varies at fixed \(N_{e}\), compared with when \(N_{e}\) varies at fixed \(s\). As a consequence, despite \(R(t)\) taking the same form in both examples, we find a small but significant difference between the long time values of \(\bar{X}(t)\) in the two examples, signalling different fixation probabilities in the two cases. The effects of genetic drift is different in the two examples (cf. [11]).
## 5 Discussion
In this work we have presented a systematic mathematical approximation scheme that can show approximate parameter-dependencies of statistics of gene-frequency trajectories, thus exposing information contained in the trajectories.
The analysis is restricted to a nearly neutral (or weak selection) regime (see Eq. (1.1)). It can, however, capture properties of both negative as well as positive selection coefficients.
We have presented examples for the fixation probability, which is a long time limit of a trajectory statistic (see Eq. (2.8)). The reasonable accuracy of the fixation probability, which is related to the mean trajectory, see Figure 1 for results under the second order approximation, suggests that the approximations presented have the capability of connecting features of trajectories at early and late times.
We note that despite testing and applying the methods presented on the probability of fixation, it is the case that time dependent frequency trajectory statistics contain information on more than just this probability. For example, with \(P_{fix}(t,y)\) the probability of fixation _by_ time \(t\), given an initial frequency of \(y\) at time \(0\), it can be shown that for any positive \(k\),
\[\overline{X^{k}}(t)\geq P_{fix}(t,y) \tag{5.1}\]
with equality only holding when \(k\to\infty\). Furthermore, from Eq. (2.8), we have \(\overline{X^{k}}(\infty)=P_{fix}(y)\) and, with \(T\) denoting the random time it takes for fixation to be achieved (given that fixation ultimately occurs), we can write \(\mathrm{Prob}(T\leq t)=P_{fix}(t,y)/P_{fix}(y)\) and hence
\[\mathrm{Prob}(T\leq t)\leq\frac{\overline{X^{k}}(t)}{\overline{X^{k}}(\infty)}. \tag{5.2}\]
We can express the mean time to fixation as \(\overline{T}=\int_{0}^{\infty}\left[1-\mathrm{Prob}(T\leq t)\right]dt\) and using Eq. (5.2) we obtain
\[\overline{T}\geq\int_{0}^{\infty}\left(1-\frac{\overline{X^{k}}(t)}{\overline {X^{k}}(\infty)}\right)dt. \tag{5.3}\]
This result holds for \(k=1,2,\ldots\). It thus follows that the particular _way_ that time dependent statistics, such as \(\overline{X^{k}}(t)\), approach their long time asymptotic values contains information about the random time to fixation.
We cannot guarantee the inequality in Eq. (5.3), when we use an approximate form for \(\overline{X^{k}}(t)\), however, we can use it get an _indication_ of the mean time to fixation by determining the right hand side of Eq. (5.3) e.g., for \(k=2\), using the second order results in Eq. (3.7). We find
\(\int_{0}^{\infty}\left[1-\overline{X^{2}}(t)/\overline{X^{2}}(\infty)\right]dt\) has the approximate value \(2N_{e}(1-y)/[(1-R/2)(1-Ry/2)]\). The neutral diffusion result, for small \(y\) is approximately \(4N_{e}\) generations, hence even for \(k=2\) we are close to \(50\%\) of the full \(k=\infty\) result.
The results we have presented in this work are an attempt to extract information from the stochastic differential equation \(dX(t)=F(X(t))dt+\sqrt{V(X(t))}dW(t)\) that the gene frequency, \(X(t)\), obeys. This equation is, generally, not analytically soluble, despite representing a very simple class of problems, namely those with one locus, two alleles, additive selection, no mutation, and a finite population size. More complex problems, such as those involving non-additive and/or frequency-dependent selection, mutation/migration, two or more loci, and multiple alleles, are further removed from having analytical solution. However, the methodology we have introduced may give some systematic access to such problems.
**APPENDICES**
## Appendix A The equation obeyed by \(\overline{X^{n}}(t)\) for \(n>1\)
In this appendix we derive the differential equation that \(\overline{X^{n}}(t)\) obeys, where an overbar denotes an expected (or average) value.
Note that we assume that \(X(0)\) takes the definite value \(y\), thus for all \(n\) we have
\[\overline{X^{n}}(0)=y^{n}.\] (A.1)
We begin with the _stochastic differential equation_ (SDE) for the frequency, which is given by
\[dX(t)=F(X(t))dt+\sqrt{V(X(t))}dW(t).\] (A.2)
This is an Ito SDE [18], which means that \(X(t)\) and \(dW(t)\) are statistically independent. Thus the expected value of \(\sqrt{V(X(t))}dW(t)\) equals \(\overline{\sqrt{V(X(t))}}\times\overline{dW}(t)\). This vanishes because \(\overline{dW}(t)=0\). Thus Eq. (A.2) yields
\[\overline{\sqrt{V(X(t))}dW(t)}=0.\] (A.3)
The expected value of Eq. (A.2) then yields \(d\bar{X}(t)=\overline{F(X(t))}dt\) or
\[\frac{d\bar{X}(t)}{dt}=\overline{F(X(t))}.\] (A.4)
In this work we look at expected value of various powers of \(X(t)\). There are different ways to proceed, but a direct approach derives, from Eq. (A.2), equations involving the expected values of \(X^{2}(t)\), \(X^{3}(t)\),...that are analogous to Eq. (A.4). The key point is that the noise increment, \(dW(t)\), has an expected value of zero, but behaves as a random variable with mean \(0\) and standard deviation \(\sqrt{dt}\). Thus changes of quantities over a time interval of \(dt\), arise from terms that are of first order in \(dt\), and also from a term that is second order in \(dW(t)\). This is codified in the rules of Ito calculus [18]. With \(n=2,3,\ldots\), Ito's rules lead to a change in \(X^{n}\), from \(t\) to \(t+dt\) (omitting time arguments), of
\[dX^{n}=nX^{n-1}dX+\frac{1}{2}n(n-1)X^{n-2}\left[dX\right]^{2}\\ \\
When \(F(x)=sx(1-x)\) and \(V(x)=\frac{1}{2N_{e}}x(1-x)\), Eq. (A.6) becomes
\[\frac{d\overline{X^{n}}}{dt}=ns\left(\overline{X^{n}}-\overline{X^{n+1}}\right) +\frac{n(n-1)}{4N_{e}}\left(\overline{X^{n-1}}-\overline{X^{n}}\right).\] (A.7)
For the special case of \(n=2\), Eq. (A.7) reduces to
\[\frac{d\overline{X^{2}}}{dt}=2s\left(\overline{X^{2}}-\overline{X^{3}}\right) +\frac{1}{2N_{e}}\left(\bar{X}-\overline{X^{2}}\right)\] (A.8)
which is Eq. (3.5) of the main text.
## Appendix B Solution of \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\)
In this appendix we determine the form \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\), under a second order approximation when the parameters \(s\) and \(N_{e}\) have constant values (i.e., are independent of the time).
The equations that \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) approximately obey, under a second order approximation, are given in Eq. (3.7) of the main text. For convenience we reproduce them here:
\[\left.\begin{array}{l}\frac{d\bar{X}(t)}{dt}=s\left[\bar{X}(t)- \overline{X^{2}}(t)\right]\\ \\ \frac{d\overline{X^{2}}(t)}{dt}\simeq\frac{1}{2N_{e}}\left[\bar{X}(t)- \overline{X^{2}}(t)\right]\end{array}\right\}\] (B.1)
Since \(X(0)\) takes the definite value \(y\), the above equations are subject to \(\bar{X}(0)=y\) and \(\overline{X^{2}}(0)=y^{2}\).
There are many ways to solve Eq. (B.1), and we adopt the following approach.
Define
\[D(t)=\bar{X}(t)-\overline{X^{2}}(t)\] (B.2)
then on subtracting the second equation from the first, in Eq. (B.1), we obtain
\[\frac{dD(t)}{dt} =\left(s-\frac{1}{2N_{e}}\right)D(t)\] \[=-\lambda D(t)\] (B.3)
where \(\lambda=\frac{1}{2N_{e}}-s\) and using \(R=4N_{e}s\) (Eq. (2.1)), we have
\[\lambda=\left(1-\frac{R}{2}\right)\frac{1}{2N_{e}}.\] (B.4)
The solution to Eq. (B.3) is \(D(t)=D(0)e^{-\lambda t}\) i.e.,
\[D(t)=y(1-y)e^{-\lambda t}.\] (B.5)
The two equations in Eq. (B.1) can be written as \(d\bar{X}(t)/dt=sD(t)\) and \(d\overline{X^{2}}(t)/dt\simeq D(t)/(2N_{e})\), respectively. Given the explicit form of \(D(t)\) in Eq. (B.5) we can determine \(\bar{X}(t)\) and \(\overline{X^{2}}\) by direct integration. The results can be written as
\[\bar{X}(t)=\frac{Ry-\frac{1}{2}\left(Ry\right)^{2}}{R-\frac{1}{2}R^{2}}-\frac {\frac{1}{2}R^{2}y\left(1-y\right)}{R-\frac{1}{2}R^{2}}e^{-\lambda t}\] (B.6)
and
\[\overline{X^{2}}(t)=\frac{Ry-\frac{1}{2}\left(Ry\right)^{2}}{R-\frac{1}{2}R^ {2}}-\frac{Ry\left(1-y\right)}{R-\frac{1}{2}R^{2}}e^{-\lambda t}.\] (B.7)
Note that these (approximate) forms for \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) have a number of exact properties:
1. \(\bar{X}(0)=y\),
2. \(\overline{X^{2}}(0)=y^{2}\),
3. \(\lim_{y\to 0}\bar{X}(t)=0\),
4. \(\lim_{y\to 0}\overline{X^{2}}(t)=0\),
5. \(\lim_{y\to 1}\bar{X}(t)=1\),
6. \(\lim_{y\to 1}\overline{X^{2}}(t)=1\),
7. they have the same long time limiting values (providing \(\lambda>0\)): \(\lim_{t\rightarrow\infty}\bar{X}(t)=\lim_{t\rightarrow\infty}\overline{X^{2}} (t)\).
## Appendix C Higher order approximations for the fixation probability
In this appendix we show how higher order approximations for the fixation probability can be obtained in the case where the parameters \(s\) and \(N_{e}\) have constant values.
We begin with a third order approximation.
From Eq. (A.6) for \(n=3\) we have
\[\frac{d\overline{X^{3}}(t)}{dt}=3s\left[\overline{X^{3}}(t)-\overline{X^{4}}( t)\right]+\frac{3}{2N_{e}}\left[\overline{X^{2}}(t)-\overline{X^{3}}(t)\right].\] (C.1)
Proceeding as before, we now omit the assumed small term that originates in selection in Eq. (C.1), namely \(3s\)\(\left[\overline{X^{3}}(t)-\overline{X^{4}}(t)\right]\). The resulting approximate equation, along with Eqs. (3.1) and (3.5), are
\[\left.\begin{array}{rcl}\frac{d\bar{X}(t)}{dt}&=&s\left[\bar{X}(t)-\overline{ X^{2}}(t)\right]\\ \\ \frac{d\overline{X^{2}}(t)}{dt}&=&\frac{1}{2N_{e}}\left[\bar{X}(t)-\overline{ X^{2}}(t)\right]\\ \\ &&+2s\left[\overline{X^{2}}(t))-\overline{X^{3}}(t)\right]\\ \\ \frac{d\overline{X^{3}}(t)}{dt}&\simeq&\frac{3}{2N_{e}}\left[\overline{X^{2}}( t)-\overline{X^{3}}(t)\right]\end{array}\right\}\] (C.2)
and can be written as
\[\left.\begin{array}{rcl}\frac{d\bar{X}(t)}{dt}&=&s\left[\bar{X}(t)-\overline{ X^{2}}(t)\right]\\ \\ \frac{d\overline{X^{2}}(t)}{dt}&=&\frac{2s}{R}\left[\bar{X}(t)-\overline{X^{2} }(t)\right]\\ \\ &&+2s\left[\overline{X^{2}}(t))-\overline{X^{3}}(t)\right]\\ \\ \frac{d\overline{X^{3}}(t)}{dt}&\simeq&\frac{6s}{R}\left[\overline{X^{2}}(t)- \overline{X^{3}}(t)\right]\end{array}\right\}\] (C.3)
These three equations constitute a closed system that allows determination of \(\bar{X}(t)\), \(\overline{X^{2}}(t)\) and \(\overline{X^{3}}(t)\).
However, with the parameters \(s\) and \(N_{e}\) independent of time, we can determine the fixation probability without explicitly solving for \(\bar{X}(t)\), \(\overline{X^{2}}(t)\) and \(\overline{X^{3}}(t)\). Rather, we eliminate \(\bar{X}(t)-\overline{X^{2}}(t)\) and \(\overline{X^{2}}(t)-\overline{X^{3}}(t)\) from Eq. (C.3) to obtain the single equation
\[R\frac{d\bar{X}(t)}{dt}-\frac{R^{2}}{2!}\frac{d\overline{X^{2}}(t)}{dt}+\frac {R^{3}}{3!}\frac{d\overline{X^{3}}(t)}{dt}\simeq 0.\] (C.4)
We then integrate Eq. (C.4) over \(t\), from \(0\) to \(\infty\), use Eq. (A.1), and identify \(\overline{X^{n}}(\infty)\), for \(n>0\), with the fixation probability, \(P_{fix}(y)\). We obtain \(\left(R-\frac{R^{2!}}{2}+\frac{R^{3}}{3!}\right)P_{fix}(y)-\left(Ry-\frac{R^{2 }y^{2}}{2!}+\frac{R^{3}y^{3}}{3!}\right)\simeq 0\) which immediately leads to
\[P_{fix}(y)\simeq\frac{Ry-\frac{R^{2}y^{2}}{2!}+\frac{R^{3}y^{3}}{3!}}{R-\frac{ R^{2}}{2!}+\frac{R^{3}}{3!}}.\] (C.5)
This expression can be written as \(P_{fix}(y)\simeq\frac{[1-e^{-Ry}]_{3}}{[1-e^{-R}]_{3}}\), in which: (i) the numerator consists of the leading three terms of the Taylor series expansion,
in \(R\), of the numerator of Kimura's result \(P_{fix}(y)=\frac{1-e^{-Ry}}{1-e^{-R}}\), and (ii) the denominator consists of the leading three terms of the Taylor series expansion, in \(R\), of the denominator of Kimura's result.
We can now show that the \(n\)'th order approximation of Kimura's fixation probability is \(\frac{[1-e^{-Ry}]_{n}}{[1-e^{-R}]_{n}}\). To obtain this we begin with Eq. (A.7), and omit the term originating in selection. We can write this approximate equation, along with the exact forms of Eq. (A.7) when applied to \(\overline{X^{n-1}}\), \(\overline{X^{n-2}}\),..., \(\overline{X^{1}}\), in the form
\[\left.\begin{array}{rcl}\frac{R^{n}}{n!}\frac{d\overline{X^{n}}(t)}{dt}& \simeq&\frac{R^{n}}{(n-2)!}\frac{\overline{X^{n-1}}(t)-\overline{X^{n}}(t)}{4 N_{e}}\\ \\ \frac{R^{n-1}}{(n-1)!}\frac{d\overline{X^{n-1}}(t)}{dt}&=&\frac{R^{n}}{(n-2)!} \frac{\overline{X^{n-1}}(t)-\overline{X^{n}}(t)}{4N_{e}}\\ \\ &&+\frac{R^{n-1}}{(n-3)!}\frac{\overline{X^{n-2}}(t)-\overline{X^{n-1}}(t)}{4 N_{e}}\\ \\ \frac{R^{n-2}}{(n-2)!}\frac{d\overline{X^{n-2}}(t)}{dt}&=&\frac{R^{n-1}}{(n-3)!} \frac{\overline{X^{n-2}}(t)-\overline{X^{n-1}}(t)}{4N_{e}}\\ \\ &&+\frac{R^{n-2}}{(n-4)!}\frac{\overline{X^{n-3}}(t)-\overline{X^{n-2}}(t)}{4N_ {e}}\\ &\vdots\\ \frac{R^{1}}{1!}\frac{d\overline{X^{1}}(t)}{dt}&=&R^{2}\frac{\bar{X}(t)- \overline{X^{2}}(t)}{4N_{e}}.\end{array}\right\}\] (C.6)
It may then be seen that
\[\left.\begin{array}{rcl}\frac{(-R)^{n}}{n!}\frac{d\overline{X^{n}}(t)}{dt}+ \frac{(-R)^{n-1}}{(n-1)!}\frac{d\overline{X^{n-1}}(t)}{dt}...+\frac{(-R)^{1}}{ 1!}\frac{d\overline{X^{1}}(t)}{dt}&\simeq&0\end{array}\right.\] (C.7)
and the integral of this equation over \(t\), from \(0\) to \(\infty\) yields \(P_{fix}(y)\simeq\frac{[1-e^{-Ry}]_{n}}{[1-e^{-R}]_{n}}\).
## Appendix D Solution of the second order approximation with time-dependent parameters
In this appendix we present a method for solving the equations for \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) when the quantities \(s\) and \(N_{e}\) depend on the time.
We begin with the equations for \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) which now take the form
\[\frac{d\bar{X}(t)}{dt}=s(t)\left[\bar{X}(t)-\overline{X^{2}}(t)\right]\] (D.1)
\[\frac{d\overline{X^{2}}(t)}{dt}\simeq\frac{1}{2N_{e}(t)}\left[\bar{X}(t)- \overline{X^{2}}(t)\right].\] (D.2)
and are subject to \(\bar{X}(0)=y\) and \(\overline{X^{2}}(0)=y^{2}\).
We define
\[R(t)=4N_{e}(t)s(t)\] (D.3)
\[D(t)=\bar{X}(t)-\overline{X^{2}}(t)\] (D.4)
\[\Phi(t)=1-\exp\left(-\int_{0}^{t}\left(1-\frac{R(z)}{2}\right)\frac{dz}{2N_{e} (z)}\right).\] (D.5)
It follows that \(D(t)\) obeys
\[\frac{dD(t)}{dt} = -\left(\frac{1}{2N_{e}(t)}-s(t)\right)D(t)\] (D.6) \[= -\frac{1}{2N_{e}(t)}\left(1-\frac{R(t)}{2}\right)D(t)\]
and has the solution
\[D(t)=D(0)\exp\left(-\int_{0}^{t}\left(1-\frac{R(w)}{2}\right)\frac{dw}{2N_{e} (w)}\right)\]
\[=y(1-y)\exp\left(-\int_{0}^{t}\left(1-\frac{R(w)}{2}\right)\frac{dw}{2N_{e}(w )}\right)\]
\[=y(1-y)\left[1-\Phi(t)\right].\] (D.7)
Proceeding, we rewrite Eq. (D.1) as
\[\frac{d\bar{X}(t)}{dt}=\frac{R(t)}{2}\frac{1}{2N_{e}(t)}D(t).\] (D.8)
We then use Eqs. (D.6) and (D.7) to write
\[\frac{1}{2N_{e}(t)}D(t)=-\frac{1}{1-\frac{R(t)}{2}}\frac{dD(t)}{dt}=\frac{y(1 -y)}{1-\frac{R(t)}{2}}\Phi^{\prime}(t)\] (D.9)
where \(\Phi^{\prime}(t)=d\Phi(t)/dt\). Equation (D.9) allows Eq. (D.8) to be written as
\[\frac{d\bar{X}(t)}{dt}=\frac{y(1-y)\frac{R(t)}{2}}{1-\frac{R(t)}{2}}\Phi^{ \prime}(t)=\left(\frac{\frac{R(t)y}{1!}-\frac{\left[R(t)y\right]^{2}}{2!}}{ \frac{R(t)}{1!}-\frac{\left[R(t)\right]^{2}}{2!}}-y\right)\Phi^{\prime}(t)\]
\[=\frac{\left[1-e^{-R(t)y}\right]_{2}}{\left[1-e^{-R(t)}\right]_{2}}\Phi^{ \prime}(t)-y\Phi^{\prime}(t).\] (D.10)
On integrating this equation from \(0\) to \(t\), and using \(\bar{X}(0)=y\), we obtain
\[\bar{X}(t)=\int_{0}^{t}\frac{\left[1-e^{-R(z)y}\right]_{2}}{\left[1-e^{-R(z)} \right]_{2}}\Phi^{\prime}(z)dz+y\left[1-\Phi(t)\right].\] (D.11)
Using a similar approach, we obtain
\[\overline{X^{2}}(t)=\int_{0}^{t}\frac{\left[1-e^{-R(z)y}\right]_{2}}{\left[1-e ^{-R(z)}\right]_{2}}\Phi^{\prime}(z)dz+y^{2}\left[1-\Phi(t)\right].\] (D.12)
On the assumption that \(1-R(t)/2>0\) for all \(t\), we have that \(\lim_{t\to\infty}\Phi(t)=1\) and then both \(\bar{X}(t)\) and \(\overline{X^{2}}(t)\) in Eqs. (D.11) and (D.12) have the same long time limit of \(\int_{0}^{\infty}\frac{\left[1-e^{-R(z)y}\right]_{2}}{\left[1-e^{-R(z)} \right]_{2}}\Phi^{\prime}(z)dz\), which is the second order approximation of \(P_{fix}(y)\).
## Appendix E Solution of the third order approximation with time-dependent parameters
In this appendix we present a method for solving the equations for \(\bar{X}(t)\), \(\overline{X^{2}}(t)\) and \(\overline{X^{3}}(t)\), associated with the third order approximation, when the quantities \(s\) and \(N_{e}\) depend on the time.
The third order approximation corresponds to solving the equations
\[\left.\begin{array}{rcl}\frac{d\bar{X}(t)}{dt}&=&s\left[\bar{X}(t)-\overline{ X^{2}}(t)\right]\\ \\ \frac{d\overline{X^{2}}(t)}{dt}&=&\frac{2s}{R}\left[\bar{X}(t)-\overline{X^{2} }(t)\right]\\ \\ &+2s\left[\overline{X^{2}}(t))-\overline{X^{3}}(t)\right]\end{array}\right\}\] (E.1)
\[\left.\begin{array}{rcl}\frac{d\overline{X^{3}}(t)}{dt}&\simeq&\frac{6s}{R} \left[\overline{X^{2}}(t)-\overline{X^{3}}(t)\right]\end{array}\right\}\] (E.2)
(see Appendix 3). However, underlying these three equations are a simpler pair of coupled equations. In terms of the functions \(A(t)\) and \(B(t)\) defined by
\[A\equiv A(t)=\bar{X}(t)-\overline{X^{2}}(t),\qquad B\equiv B(t)=\overline{X^{ 2}}(t)-\overline{X^{3}}(t)\] (E.3)
we can write
\[\left.\begin{array}{rcl}\frac{d\bar{X}}{dt}&=&\frac{1}{2N_{e}}\frac{R}{2}A\\ \\ \frac{d\overline{X^{2}}}{dt}&=&\frac{1}{2N_{e}}\left(A+RB\right) \\ \\ \frac{d\overline{X^{3}}(t)}{dt}&\simeq&\frac{1}{2N_{e}}3B. \end{array}\right\}\] (E.4)
These equations lead to the pair of coupled equations
\[\left.\begin{array}{rcl}\frac{dA}{dt}&=&-\frac{1}{2N_{e}}\left[ \left(1-\frac{R}{2}\right)A+RB\right]\\ \frac{dB}{dt}&=&-\frac{1}{2N_{e}}\left[-A+\left(3-R\right)B \right]\end{array}\right\}\] (E.4)
and are subject to \(A(0)=y-y^{2}\) and \(B(0)=y^{2}-y^{3}\).
We thus need to solve Eq. (E.4), for \(A(t)\) and \(B(t)\), and statistics of frequencies, can be obtained from knowledge of \(A(t)\) and \(B(t)\) by integration. For example from Eq. (E.3) we obtain
\[\bar{X}(t)\simeq y+\int_{0}^{t}\frac{R(z)}{4N_{e}(z)}A(z)dz.\] (E.5) |
2304.10884 | Pion screening mass at finite chemical potential | We present a method to compute the responses of meson screening masses to the
chemical potential by Taylor expanding the correlator using lattice QCD
simulation. We start by comparing the free theory lattice results with the
analytical expression. Then, using symmetry arguments, we obtain an expression
for the correlator in a series of the chemical potential at finite temperature.
Using this, we obtain the lowest order correction to the screening mass at a
finite chemical potential for temperatures around 2.5 GeV. Our lattice analysis
is limited to isoscalar chemical potential for the pseudoscalar channel. The
calculations were performed using (2+1)-flavors of the Highly Improved
Staggered Quark (HISQ/tree) action, with the ratio of the strange quark mass to
the light quark mass $m_s/m_\ell=20$ corresponding to pion masses of 160 MeV. | Rishabh Thakkar, Prasad Hegde | 2023-04-21T10:52:27Z | http://arxiv.org/abs/2304.10884v2 | # Pion screening mass at finite chemical potential
###### Abstract
We present a method to compute the responses of meson screening masses to the chemical potential by Taylor expanding the correlator using lattice QCD simulation. We start by comparing the free theory lattice results with the analytical expression. Then, using symmetry arguments, we obtain an expression for the correlator in a series of the chemical potential at finite temperature. Using this, we obtain the lowest order correction to the screening mass at a finite chemical potential for temperatures around 2.5 GeV. Our lattice analysis is limited to isoscalar chemical potential for the pseudoscalar channel. The calculations were performed using (2+1)-flavors of the Highly Improved Staggered Quark (HISQ/tree) action, with the ratio of the strange quark mass to the light quark mass \(m_{s}/m_{\ell}=20\) corresponding to pion masses of 160 MeV.
###### Contents
* 1 Introduction
* 2 Screening Correlators at Finite Density
* 3 Free Theory Screening Correlator at \(\mu_{\ell}\neq 0\)
* 4 Screening Correlators at Finite \(T\) and \(\mu_{\ell}\)
* 4.1 Finite Temperature Analysis
* 5 Conclusions
* A Third and Fourth Derivatives of \(\langle\langle\operatorname{tr}\bigl{[}P(\boldsymbol{x},0,\mu_{\ell})P(0, \boldsymbol{x},\mu_{\ell})\bigr{]}\rangle\rangle\)
* B Derivatives of \(G\) and \(\Delta\)
* B.1 Correlator-like operators
* B.2 Trace-like operators
## 1 Introduction
It is well-known that strongly-interacting nuclear matter undergoes a phase transition at high temperatures to a new state of matter called the quark-gluon plasma in which quarks and gluons are not confined within hadrons but are free to move throughout the volume of the system. This deconfinement is accompanied by the restoration of the chiral symmetry that is spontaneously broken at zero temperature. The nature of the phase transition depends upon the number of light quarks and their masses. For 2+1-flavor QCD with physical quark masses, the transition is known to be a crossover [1] with a pseudocritical temperature \(T_{pc}=156.5\pm 1.5\) MeV [2].
The properties of the quark-gluon plasma (QGP) have been studied extensively using a variety of approaches. Besides being of theoretical interest, an additional impetus for its study is provided by the various experiments in which the QGP is created in collisions of heavy nuclei at ultra-relativistic energies. The experimental results indicate that the QGP created in these experiments is strongly coupled [3; 4; 5], which makes a theoretical description of the system challenging. Furthermore, the usual approach to calculating observables in field theories, namely perturbation theory, breaks down for Yang-Mills theories at finite temperatures beyond \(\mathcal{O}(g^{6})\), where \(g\) is the Yang-Mills coupling constant, due to the severity of the infrared divergences [6; 7]. Even at lower orders, the series is slow to converge except at very high temperatures and successive corrections can even differ in sign. This latter issue however can be addressed through the resummation of the QCD perturbation series. Two widely used resummation schemes are Hard Thermal Loop (HTL) QCD [8; 9; 10; 11], and dimensionally reduced QCD or EQCD [12; 13; 14; 15; 16; 17]. These approaches have resulted in the determination of the QCD Equation of State (QEOS) to \(\mathcal{O}(g^{6}\ln g)\)[18]. Alternatively, one can calculate these observables directly from the underlying theory of QCD using first-principles numerical simulations. This approach, known as lattice QCD, is a non-perturbative approach as it does not require the QCD coupling to be small. It has yielded precise estimates of many properties of the QGP [19; 20; 21; 22; 23; 24; 25; 26].
Apart from bulk observables such as the pressure or the energy density, which are defined via the QCD partition function and its derivatives, there are also the spectral properties of the QGP
defined in terms of various real or imaginary time thermal correlation functions. The most familiar of these observables are the various hadron correlators, which are the imaginary time two-point functions of the familiar hadron creation/annihilation operator \(J_{H}\). By projecting these functions to zero transverse momentum (\(p_{x}=p_{y}=0\)) and zero frequency (\(\omega=0\)) in Fourier space by integrating over \(x\), \(y\) and the imaginary time \(\tau\), one obtains the well-known screening correlators \(C_{H}(z,T)\) of the hadron \(H\) at temperature \(T\), defined as
\[C_{H}(z,T)=\int_{0}^{1/T}dr\int dxdy\,\big{<}J_{H}^{\dagger}(x,y,z,\tau)J_{H}(0,0,0,0)\big{>}, \tag{1}\]
where \(J_{H}(x,y,z,\tau)\) is the hadron operator and the angular brackets represent the thermal average. As the separation \(z\to\infty\), \(C_{H}(z)\to e^{-zM_{H}(T)}\), where \(M_{H}(T)\) is the screening mass at temperature \(T\). As \(T\to 0\), \(M_{H}(T)\) approaches the mass of the corresponding hadron. However, \(M_{H}(T)\) is non-zero even in the QGP phase, that is, even when the quarks and gluons are deconfined. The screening mass thus provides information about the degrees of freedom present in the QGP at high temperatures. Additionally, since the hadron operators form multiplets according to the symmetries of the QCD Lagrangian, the corresponding correlators also become degenerate when the corresponding symmetry is restored, e.g., chiral symmetry restoration at \(T\sim T_{pc}\) or effective \(U(1)_{A}\) restoration [22], or also in the case of the appearance of possible new emergent symmetries at high temperatures [27; 28; 29].
Among the various hadrons, the screening masses corresponding to the flavor non-singlet mesons have been the most studied since their calculation does not require the evaluation of the computationally expensive disconnected diagrams. Continuum-extrapolated results for the masses of all the flavor-singlet spin-0 and spin-1 mesons formed out of the light and strange quarks, over a temperature range 130 MeV \(\lesssim T\lesssim\) 1000 MeV have been recently published [22]. Similar results are also available for the charm quark mesons, although these results have not yet been continuum-extrapolated [30].
The above discussion assumed that the QGP is at zero quark chemical potential, \(\mu_{u}=\mu_{d}=\mu_{s}=0\). Collisions at lower beam energies produce a QGP that is at non-zero baryochemical potential \(\mu_{B}\) at freeze-out [31; 32]. This makes it possible to also study the properties of the QGP in the \(T\)-\(\mu_{B}\) plane. The phase diagram of QCD in the \(T\)-\(\mu_{B}\) plane is a topic of great interest and various phases of nuclear matter have been conjectured [33; 34; 35; 36]. One such prediction is that the QCD chiral crossover transition turns into a first-order transition line at a second-order \(Z(2)\) critical point. This is the famous conjectured QCD critical point. The change from a crossover to a genuine phase transition should have consequences for physical observables including screening masses and hence a knowledge of the screening masses at finite chemical potential should be able to provide some information regarding the existence and location of the QCD critical point. Unfortunately, lattice QCD calculations are not possible at finite chemical potential due to the infamous sign problem of lattice QCD. Although a complete solution to the sign problem is not known, several partial approaches have been proposed among which the method of Taylor expansions [37; 38] has also been applied to calculate second-order corrections to screening masses [39; 40] as well as temporal correlators [41].
In this paper, we will present a new way of calculating the second Taylor coefficient \(M^{\prime\prime}(0)/2\) of the screening mass with respect to the isoscalar chemical potential \(\mu_{\ell}\), defined as \(\mu_{u}=\mu_{d}=\mu_{\ell}\), \(\mu_{s}=0\). Our approach derives from the exact result for the free theory screening correlator at finite \(\mu_{\ell}\) presented in Ref. [42]. We thus expect our approach to be reliable at high temperatures. We first calculate the free theory isoscalar screening correlator to \(\mathcal{O}(\hat{\mu}_{\ell}^{4})\) using the Highly Improved Staggered Quark (HISQ) formulation on an \(80^{3}\times 8\) lattice and compare our results with the exact expressions. While we obtain good agreement with the theoretical expressions, we also find that we need to go to large \(zT\) in order to achieve this agreement. Next, we repeat the calculation at
finite temperature using \(64^{3}\times 8\) lattices at two temperatures viz. \(T=2.24\) and \(2.90\) GeV. We find \(M^{\prime\prime}(0)\) to be small but non-zero within error at the smaller temperature of \(T=2.24\) GeV, while it is consistent with zero within error at the higher temperature of \(T=2.90\) GeV. We expect these results to improve as the fit window is moved towards larger \(zT\). It should therefore be possible to improve upon these estimates in the future by working with lattices having a larger aspect ratio.
Our paper is organized as follows: In section2, we outline the calculation of the pseudoscalar screening correlator and its Taylor coefficients in lattice QCD starting from the QCD partition function at non-zero \(T\) and \(\mu_{\ell}\). The exact form of the correlator is known in the continuum for the free theory with massless quarks. Using our formalism, we calculate the free theory screening correlator and its Taylor coefficients up to the fourth order on the lattice and compare our results with the corresponding continuum expressions in section3. We repeat the same calculation in section4, but this time for two finite temperatures in the range \(T\sim 2\) - \(3\) GeV. We present an _ansatz_ motivated by the free theory expression and compare it with the obtained results. We also describe a procedure for extracting the \(\mathcal{O}(\mu_{\ell}^{2})\) correction to the \(\mu_{\ell}=0\) pseudoscalar screening mass using that ansatz above and present our results for the two above temperatures. We state our conclusions in section5. We present the formulas for the screening correlator and its first four Taylor coefficients in terms of the derivatives of the fermion propagator and fermion determinant in AppendixA. In AppendixB, we present the different operators that are required for calculating the derivatives of the screening correlator.
## 2 Screening Correlators at Finite Density
We consider lattice QCD with 2+1 flavors of staggered quarks on an \(N_{\sigma}^{3}\times N_{\tau}\) lattice in Euclidean spacetime. The partition function at finite temperature \(T\) and isoscalar chemical potential \(\mu_{\ell}\) is given by
\[\mathcal{Z}(T,\mu_{\ell})=\int\mathcal{D}U\,\Delta(T,\mu_{\ell})\,e^{-S_{G}(T)}, \tag{1}\]
where the integral is over all gauge links \(U\), \(S_{G}\) is the gauge action, and \(\Delta(T,\mu_{\ell})\) is the fermion determinant given by
\[\Delta(T,\mu_{\ell})=\prod_{f=u,d,s}\big{[}\det M_{f}(m_{f},T,\mu_{f})\big{]} ^{1/4}, \tag{2}\]
where \(M_{f}(m_{f},T,\mu_{f})\) is the staggered fermion matrix for flavor \(f\). In the present case, we have considered \(m_{u}=m_{d}=m_{\ell}=m_{s}/20\), and \(\mu_{u}=\mu_{d}=\mu_{\ell}\), \(\mu_{s}=0\).
A staggered meson operator is given by \(\mathcal{M}(\mathbf{x})\equiv\sum_{\mathbf{x}^{\prime}}\bar{\chi}_{i}(\mathbf{x})\,\phi( \mathbf{x},\mathbf{x}^{\prime})\,\chi_{j}(\mathbf{x}^{\prime})\), where \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) are sites belonging to the same unit hypercube, \(\bar{\chi}_{i}\) and \(\chi_{j}\) are staggered quark fields with flavor indices \(i\) and \(j\) respectively, and \(\phi(\mathbf{x},\mathbf{x}^{\prime})\) is a phase factor that depends upon the spin and taste quantum numbers of the meson [43]. For a _local_ meson operator, the phase factor is given by \(\phi(\mathbf{x},\mathbf{x}^{\prime})=\phi(\mathbf{x})\delta_{\mathbf{x},\mathbf{x}^{\prime}}\) and the meson operator then simply becomes \(\mathcal{M}(\mathbf{x})=\phi(\mathbf{x})\bar{\chi}_{i}(\mathbf{x})\chi_{j}(\mathbf{x})\).
The finite-\(\mu_{\ell}\) meson correlator \(\mathcal{G}(\mathbf{x},T,\mu_{\ell})\) is the two-point function of the corresponding meson operator: \(\mathcal{G}(\mathbf{x},T,\mu_{\ell})\equiv\langle\!\langle\mathcal{M}(\mathbf{x}) \overline{\mathcal{M}}(0)\rangle\!\rangle\), where \(\mathbf{x}=(x,y,z,\tau)\) and the double angular brackets \(\langle\!\langle\cdot\rangle\!\rangle\) denote a thermal expectation value at \(\mu_{\ell}\neq 0\) viz.
\[\big{\langle}\!\langle\mathcal{O}(\mu_{\ell})\rangle\!\big{\rangle}=\frac{1}{ \mathcal{Z}(T,\mu_{\ell})}\int\mathcal{D}U\,e^{-S_{G}(T)}\,\mathcal{O}(\mu_{ \ell})\,\Delta(T,\mu_{\ell}). \tag{3}\]
For the rest of this paper, we shall only consider the two-point function of the staggered light pseudoscalar meson, for which \((i,j)=(u,d)\) and \(\phi(\mathbf{x})=1\) for all \(\mathbf{x}\). The expectation value \(\langle\!\langle\mathcal{M}(\mathbf{x})\overline{\mathcal{M}}(0)\rangle\!\rangle\) can then be shown to be [44]
\[\big{\langle}\!\langle\mathcal{M}(\mathbf{x})\overline{\mathcal{M}}(0)\rangle\! \big{\rangle}=\big{\langle}\!\langle\mathrm{tr}\big{[}P_{u}(\mathbf{x},0,\mu_{u})P _{d}^{\dagger}(\mathbf{x},0,-\mu_{d})\big{]}\rangle\!\big{\rangle}, \tag{4}\]
where the trace is over the color indices and is the staggered quark propagator for the -flavor flavor. We can drop the flavor indices and since the up and down quarks are identical in the 2+1 flavor case. Setting in the above equation, and denoting
(5)
we can write
(6)
Owing to the sign problem of lattice QCD, it is not possible to calculate directly. Instead, we expand in a Taylor series in :
(7)
where the Taylor coefficients are evaluated at. By differentiating Equation 6 w.r.t., we find that the first three Taylor coefficients are given by
(8)
where the primes denote differentiation w.r.t. and the single angular brackets denote thermal expectation values, _i.e._,
(9)
We have dropped terms containing the expectation value of odd derivatives of the determinant since it can be shown that they vanish at [45]. For the present work, we also require the Taylor coefficients for third and fourth orders, and hence we give the corresponding expressions in Appendix A. In Appendix B, we also give the various operator equations required for the calculation of the terms in Equation 8 and Appendix A.
The screening correlator is obtained from by summing over, and i.e.
(10)
Its Taylor expansion follows straightforwardly from Equation 7, namely
(11)
## 3 Free Theory Screening Correlator at
Equation 10 can be calculated exactly for free massless quarks in continuum QCD. For, the screening correlator is given by [42]
(12)
By differentiating w.r.t. \(\hat{\mu}_{\ell}\), we obtain the first few Taylor coefficients as (with \(\hat{z}\equiv zT\))
\[\frac{C_{\text{free}}^{(0)}(z,T)}{T^{3}} =\frac{3e^{-2\pi\hat{z}}}{2\hat{z}}\left(1+\frac{1}{2\pi\hat{z}} \right), \frac{C_{\text{free}}^{(2)}(z,T)}{T^{3}} =6\hat{z}e^{-2\pi\hat{z}}\left(\frac{1}{2\pi\hat{z}}-1\right),\] \[\frac{C_{\text{free}}^{(4)}(z,T)}{T^{3}} =24\hat{z}^{3}e^{-2\pi\hat{z}}\left(1-\frac{3}{2\pi\hat{z}} \right), C_{\text{free}}^{(1)}(z,T) =C_{\text{free}}^{(3)}(z,T)=0. \tag{10}\]
The odd-numbered Taylor coefficients are identically zero while the even-numbered Taylor coefficients are non-zero and share the same exponential decay factor. Removing the exponential decay of the correlator, we define the amplitude
\[A_{\text{free}}\equiv\left(\frac{C_{\text{free}}}{T^{3}}\right)\hat{z}\,e^{2 \pi\hat{z}}=\frac{3}{2}\left[\left(1+\frac{1}{2\pi zT}\right)\cos(2z\mu_{\ell} )+\frac{\mu_{\ell}}{\pi T}\sin(2z\mu_{\ell})\right] \tag{11}\]
Similar to the correlator, we can expand the amplitude in a Taylor series
\[A(z,T,\mu_{\ell})=\sum_{k=0}^{\infty}\frac{A^{(k)}(z,T)}{k!}\left(\frac{\mu_{ \ell}}{T}\right)^{k}, \tag{12}\]
where \(A^{(k)}\) are the Taylor coefficients for the amplitude obtained by taking the derivatives of Equation 11. We also define the ratios
\[\Gamma(\hat{z})\equiv\frac{C^{(2)}(z,T)}{C^{(0)}(z,T)}\qquad\quad\text{and} \qquad\quad\Sigma(\hat{z})\equiv\frac{C^{(4)}(z,T)}{C^{(0)}(z,T)}\,, \tag{13}\]
which gets rid of the exponential factor and in the large-\(\hat{z}\) limit, we obtain:
\[\Gamma_{\text{free}}(\hat{z}) =-4\hat{z}^{2}\left(1-\frac{1}{2\pi\hat{z}}\right)\Big{/}\left(1+ \frac{1}{2\pi\hat{z}}\right), \Sigma_{\text{free}}(\hat{z}) =16\hat{z}^{4}\left(1-\frac{3}{2\pi\hat{z}}\right)\Big{/}\left(1+ \frac{1}{2\pi\hat{z}}\right),\] \[=-4\hat{z}^{2}+\frac{4\hat{z}}{\pi}-\frac{2}{\pi^{2}}+\mathcal{ O}\left(\hat{z}^{-1}\right), =16\hat{z}^{4}-\frac{32\hat{z}^{3}}{\pi}+\frac{16\hat{z}^{2}}{\pi^ {2}}+\mathcal{O}\left(\hat{z}\right),\] \[\equiv\alpha_{2}\hat{z}^{2}+\alpha_{1}\hat{z}+\alpha_{0}, \equiv\beta_{4}\hat{z}^{3}+\beta_{2}\hat{z}^{2}. \tag{14}\]
The above equations provide the Taylor expansions for both \(\Gamma_{\text{free}}(\hat{z})\) and \(\Sigma_{\text{free}}(\hat{z})\) truncated at the fourth term. The Taylor expansion has coefficients with alternating signs. The truncated terms contribute less than \(2\%\) and \(4\%\) respectively for \(\hat{z}>1\) which become less significant with increasing \(\hat{z}\). The expansion starts at \(\mathcal{O}(\hat{z}^{2})\) for \(\Gamma_{\text{free}}(\hat{z})\) and at \(\mathcal{O}(\hat{z}^{4})\) for \(\Sigma_{\text{free}}(\hat{z})\). In the large-\(\hat{z}\) limit therefore, \(\Gamma_{\text{free}}(\hat{z})\) and \(\Sigma_{\text{free}}(\hat{z})\) are approximately given by quadratic and quartic polynomials respectively. We will see that this remains true when we generalize the free theory expressions to the finite temperature case in section 4.
To verify Equation 10 and Equation 14, we calculated \(C_{\text{free}}^{(0)}\), \(C_{\text{free}}^{(2)}\) and \(C_{\text{free}}^{(4)}\) using the HISQ/tree staggered quark action on an \(80^{3}\times 8\) lattice using a gauge configuration with gauge links equal to the unit \(3\times 3\) matrix. To ensure the convergence of the fermion matrix inverter, it was necessary to work with a non-zero light quark mass. However, we repeated our simulation with different quark masses \(am_{\ell}=0.01\), \(0.00014\), and \(0.00001\), with the stopping residual always equal to \(10^{-9}\), and found that the results we obtained were independent of the quark mass up to very small differences at large \(\hat{z}\). We, therefore, felt confident that the results we had obtained were quite close to the results for free massless quarks.
We plot our results for \(C_{\text{free}}^{(0)}\), \(C_{\text{free}}^{(2)}\) and \(C_{\text{free}}^{(4)}\), as obtained for \(am_{\ell}=0.00014\), in Figure 1 (left). In the same figure, we also compare our results with the corresponding theoretical expressions as given in Equation 10. We plot our results as functions of \(zT\equiv\hat{z}\) i.e. as functions of the separation \(z\) in units of the inverse temperature \(T^{-1}\). Due to the use of periodic boundary conditions in the
simulation, the largest \(z\) value possible was \(z_{\rm max}=N_{\sigma}a/2=40a\), where \(a\) was the lattice spacing and \(N_{\sigma}\) was the number of sites in the \(z\) direction. This maximum separation was equivalent to \(\hat{z}_{\rm max}=5\) since \(T=1/aN_{\tau}\) and the number of sites \(N_{\tau}\) in the temporal direction was equal to \(8\).
Although Equation 2 are valid for \(\hat{z}\gg 1\), we find that the theoretical curves agree well with the lattice data down to \(\hat{z}\simeq 0.30\). Close to \(\hat{z}=5\) on the other hand, we find that the lattice data oscillate about the corresponding theoretical curves. These oscillations, which vanish in the continuum limit, exist for all \(\hat{z}\) and increase as \(\hat{z}\) is increased. They are a well-known feature of both temporal as well as spatial correlators calculated for free staggered fermions and arise because the functional form of the staggered quark propagator is different for even and odd sites [46, 47].
Using the obtained Taylor coefficients, the summed correlators were calculated using Eq. (11). To compare the summed correlator with the exact expression, the summed amplitude Equation 4 for various orders of \(\hat{\mu}_{\ell}\) are plotted in Figure 1 (right) for \(\hat{\mu}_{\ell}=1.5\pi\). The summed amplitudes agree with the exact expression for a finite distance after which it diverges. This distance of agreement increases with increasing the number of summed terms. Like the exact expression, the summed amplitudes also display oscillatory behavior. The summed amplitude using lattice data up to \(\mathcal{O}(\hat{\mu}_{\ell}^{16})\) is also plotted. The lattice data and analytic expression have a good agreement. Repeating the analysis with various values of \(\hat{\mu}_{\ell}\), similar behavior was observed. The summed amplitude with smaller values of \(\hat{\mu}_{\ell}\) has an agreement with the exact expression for larger values of \(\hat{z}\) before it starts diverging.
In Figure 2 (left) and Figure 2 (right), we plot our results for \(\Gamma_{\rm free}(\hat{z})\) and \(\Sigma_{\rm free}(\hat{z})\) respectively and compare them with the corresponding theoretical expressions derived from Equation 2. Once again, we find very good agreement between our lattice data and the theoretical expressions, in fact seemingly right up to \(\hat{z}=0\). The reason for this extended agreement is due to the fact that the ratios \(\Gamma_{\rm free}(\hat{z})\) and \(\Sigma_{\rm free}(\hat{z})\), unlike the Taylor coefficients themselves, remain finite in the \(\hat{z}\to 0\) limit.
Despite the good agreement between our results for \(\Gamma_{\rm free}(\hat{z})\) and \(\Sigma_{\rm free}(\hat{z})\) and the exact expressions, we also fit the data to polynomials \(\alpha_{2}\hat{z}^{2}+\alpha_{1}\hat{z}+\alpha_{0}\) and \(\beta_{4}\hat{z}^{4}+\beta_{3}\hat{z}^{3}+\beta_{2}\hat{z}^{2}\) and compared the fit results with the exact values given in Equation 6. In section 4, we will present a procedure in which the \(\mathcal{O}(\hat{\mu}_{\ell}^{2})\) corrections to the \(\mu_{\ell}=0\) screening mass and screening amplitude in the finite temperature case can be obtained from the coefficients of polynomial fits to the lattice data for \(\Gamma(\hat{z})\)
Figure 1: (left) Lattice calculations of \(C_{\rm free}^{(0)}\), \(C_{\rm free}^{(2)}\) and \(C_{\rm free}^{(4)}\) compared to the corresponding free theory expressions obtained from Equation 2. Points are lattice results while solid lines are the corresponding continuum expressions. The main plot shows the results for the range \(1\lesssim\hat{z}\leq 5\), while the results for \(0\leq\hat{z}\lesssim 1\) are plotted in the inset. (right) Summed amplitude for various orders of \((\hat{\mu}_{\ell})\) as well as the exact expression from Equation 3 are plotted for \(\hat{\mu}_{\ell}=1.5\pi\). The lattice data summed up to \(\mathcal{O}(\hat{\mu}_{\ell}^{4})\) is also plotted.
nd \(\Sigma(\hat{z})\). The polynomial _ansatze_ are valid only in the large-\(\hat{z}\) limit. By fitting our free theory results to polynomial _ansatze_ for different fit ranges, we can obtain an idea for the minimum \(\hat{z}\) range above which such fits may be carried out.
We present our results for the fits in Table 1. The fits were carried out for two ranges \([\hat{z}_{\min},\hat{z}_{\max}]=[1.0,\,4.0]\) and \([2.0,\,4.0]\). The upper fit window of \(\hat{z}=4\) was chosen to avoid the non-physical oscillation affecting the fit parameters. We note that the polynomials Equation 6 are derived from Equation 2 as approximations that are valid in the large-\(\hat{z}\) limit. We, therefore, expect the fit results to improve as \(\hat{z}_{\min}\) is increased. Conversely, by keeping more terms in Equation 6, one should be able to fit the data over a wider \([\hat{z}_{\min},\hat{z}_{\max}]\) range.
We present our fit results in Table 1. The fits were carried out using Equation 6, both with and without the lowest order coefficients \(\alpha_{0}\) and \(\beta_{2}\). In each case, the fit was performed for two ranges in \(\hat{z}\), namely \([1.0,\,4.0]\) and \([2.0,\,4.0]\). The obtained fit coefficients are fairly close to the expected free theory values. One expects the contribution of the lower order terms to become more important, and hence the coefficients \(\alpha_{0}\) and \(\beta_{2}\) to be better determined, as \(\hat{z}_{\min}\) is decreased. This is indeed what we observe from a comparison of the fit results in the two fit ranges. On the other hand, retaining these coefficients in the fit range \([2.0,\,4.0]\) yields poorer fits when compared with fits with these terms dropped.
We also note from Table 1 that the systematic error, namely the variation of the fit coefficients with the change in the fit range, can exceed the statistical error, which is the error on the fit coefficients themselves. For example, in the fit range \([1.0,\,4.0]\) and without the coefficient \(\alpha_{0}\), one obtains \(-\alpha_{2}=3.985(3)\) i.e. a result that is five standard deviations away from the true value of 4. When the fit range is changed to \([2.0,\,4.0]\), one obtains \(-\alpha_{2}=3.995(4)\), which is in much better agreement with the true value. Thus, in order to determine the highest order coefficients, one must either retain sufficiently many lower order terms or go to large enough \(\hat{z}_{\min}\) that the contribution of
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Fit range & \(-\alpha_{2}\) & \(\alpha_{1}\) & \(-\alpha_{0}\) & \(\beta_{4}\) & \(-\beta_{3}\) & \(\beta_{2}\) \\ \hline \multirow{2}{*}{\(1.0\leq\hat{z}\leq 4.0\)} & 3.985(3) & 1.20(1) & & 15.97(5) & 10.21(19) & \\ \cline{2-6} & 4.018(6) & 1.37(3) & 0.20(4) & 16.39(18) & 12.9(1.1) & 4.0(1.6) \\ \hline \multirow{2}{*}{\(2.0\leq\hat{z}\leq 4.0\)} & 3.995(4) & 1.24(1) & & 15.99(7) & 10.29(24) & \\ \cline{2-6} & 4.04(2) & 1.53(11) & 0.44(17) & 16.63(33) & 14.4(2.1) & 6.6(3.4) \\ \hline Exact & 4 & \(4/\pi\approx 1.273\) & \(2/\pi^{2}\approx 0.203\) & 16 & \(32/\pi\approx 10.186\) & \(16/\pi^{2}\approx 1.621\) \\ \hline \end{tabular}
\end{table}
Table 1: Results of polynomial fits \(\alpha_{2}\hat{z}^{2}+\alpha_{1}\hat{z}+\alpha_{0}\) and \(\beta_{4}\hat{z}^{4}+\beta_{3}\hat{z}^{3}+\beta_{2}\hat{z}^{2}\) to \(\Gamma_{\rm free}(\hat{z})\) and \(\Sigma_{\rm free}(\hat{z})\) respectively, with and without the lowest order coefficients \(\alpha_{0}\) and \(\beta_{2}\), for the fit ranges \(1.0\leq\hat{z}\leq 3.0\) and \(2.0\leq\hat{z}\leq 4.0\).
Figure 2: Lattice calculations of \(\Gamma_{\rm free}\) (left) and \(\Sigma_{\rm free}\) (right) compared to the corresponding free theory expressions obtained from Equation 6. Points are lattice results while solid lines are the continuum results.
he lower order terms can be neglected. We will keep this in mind when we fit the finite-temperature data in section 4.
According to Equation 1, \(\Gamma_{\rm free}\) and \(\Sigma_{\rm free}\) approach quadratic and quartic polynomials respectively in the large-\(\hat{z}\) limit. To understand how these limits are approached we look at \(\Gamma_{\rm free}/\hat{z}^{2}\) (Figure 3 (left)) and \(\Sigma_{\rm free}/\hat{z}^{4}\) (Figure 3 (right)). From Equation 1, these are equal to
\[\frac{\Gamma_{\rm free}}{\hat{z}^{2}}=-4+\frac{4}{\pi\hat{z}}-\frac{2}{\pi^{2} \hat{z}^{2}}\,,\qquad\qquad\frac{\Sigma_{\rm free}}{\hat{z}^{4}}=16-\frac{32} {\pi\hat{z}}+\frac{16}{\pi^{2}\hat{z}^{2}}\,. \tag{10}\]
The curves approach the value of the constant term, given in Equation 1, asymptotically as the contribution from other terms decreases at large \(\hat{z}\). The asymptotic values are depicted by a horizontal red line in Figure 3. \(\Gamma/\hat{z}^{2}\) approach the negative constant value of \(-4\) from above while \(\Sigma/\hat{z}^{4}\) approach a positive constant value of \(16\) from below. This asymptotic behavior is due to the different signs of the highest and second-highest coefficients.
We also note that in both figures Figure 2 and Figure 3, the \(\hat{z}=N_{\sigma}/2N_{\tau}\) data point does not have exaggerated oscillations, unlike the nearby points. While the correlator and its derivatives deviate from the exact expression for this point in Figure 1(left), for the ratios \(\Gamma\) and \(\Sigma\), the last point seems to be unaffected by the boundary effect with the values matching the exact expression. We will see that the same features also appear in the finite-temperature data.
## 4 Screening Correlators at Finite \(T\) and \(\mu_{\ell}\)
In this Section, we will present a new method of calculating the second-order correction to the \(\mu_{\ell}=0\) pseudoscalar screening mass at finite temperature. Ignoring terms of \(\mathcal{O}(e^{-4\pi\hat{z}})\) and higher in Equation 1, we see that for \(\mu_{\ell}=0\), the free theory correlator can be written as
\[\frac{C(z,\mu_{\ell}=0)}{T^{3}}=Ae^{-Mz}\quad\text{where}\] \[A=\frac{3}{2\hat{z}}\left(1+\frac{1}{2\pi\hat{z}}\right)\quad \text{and}\quad M=2\pi T, \tag{11}\]
Figure 3: Lattice calculations of \(\Gamma_{\rm free}/(zT)^{2}\) (left) and \(\Sigma_{\rm free}/(zT)^{4}\) (right), compared with the corresponding free theory expressions obtained from Equation 10. Points are the lattice data while the dashed lines are the theoretical expressions. The solid lines joining the points are only to guide the eye. The red horizontal line is the constant value corresponding to the asymptotic limit of \(z\to\infty\).
are the free theory screening amplitude and screening mass. For \(\mu_{\ell}\neq 0\), the free theory correlator can still be written as \(Ae^{-Mz}\) provided we allow \(A\) and \(M\) to take complex values i.e.
\[\frac{C_{\text{free}}(z,T,\mu_{\ell})}{T^{3}} =\text{Re}\Big{[}A(\mu_{\ell})e^{-zM(\mu_{\ell})}\Big{]},\] \[=e^{-zM_{R}(\mu_{\ell})}\Big{[}A_{R}(\mu_{\ell})\cos(zM_{I}(\mu_ {\ell}))+A_{I}(\mu_{\ell})\sin(zM_{I}(\mu_{\ell}))\Big{]},\] \[M(\mu_{\ell})=2\pi T+2i\mu_{\ell}\equiv M_{R}(\mu_{\ell})+iM_{I} (\mu_{\ell}),\] \[A(\mu_{\ell})=\frac{3}{2zT}\left(1+\frac{1}{2\pi zT}\right) \left(1-i\frac{\mu_{\ell}}{\pi T}\right)\equiv A_{R}(\mu_{\ell})-iA_{I}(\mu_{ \ell}). \tag{10}\]
Similar to Equation 10, we postulate that the finite-temperature screening correlator can be written with complex screening mass and screening amplitude as
\[\frac{C(z,T,\mu_{\ell})}{T^{3}}=e^{-zM_{R}(\mu_{\ell})}\Big{[}A_{R}(\mu_{\ell} )\cos\left(zM_{I}(\mu_{\ell})\right)+A_{I}(\mu_{\ell})\sin\left(zM_{I}(\mu_{ \ell})\right)\Big{]}. \tag{11}\]
\(M_{R}\) and \(A_{R}\) are even functions of \(\mu_{\ell}\), while \(M_{I}\) and \(A_{I}\) are odd functions of \(\mu_{\ell}\). This can be seen by looking at the hermitian conjugate of \(G\) (Equation 11) and \(\gamma_{5}-\)hermiticity of \(\Delta\) (Equation 10)
\[G(x,\mu_{\ell})^{*}=G(x,-\mu_{\ell})\,\ \ \ \ \Delta(T,\mu_{\ell})^{*}= \Delta(T,-\mu_{\ell}). \tag{12}\]
Taking hermitian conjugate of Equation 12 and using the reality of the screening correlator Equation 11, we see that the screening correlator must be an even function of \(\mu_{\ell}\)
\[C(x,T,\mu_{\ell})=C(x,T,-\mu_{\ell}) \tag{13}\]
This requires that the odd (even) derivatives of \(M_{R}\) and \(A_{R}\) (of \(M_{I}\) and \(A_{I}\)) vanish at \(\mu_{\ell}=0\). By successively differentiating Equation 11 w.r.t. \(\hat{\mu}_{\ell}\), we obtain (primes denote differentiation w.r.t. \(\hat{\mu}_{\ell}\) at \(\hat{\mu}_{\ell}=0\)):
\[\Gamma(z) =\frac{A_{R}^{\prime\prime}}{A_{R}}+z\left[2\frac{A_{I}^{\prime} }{A_{R}}M_{I}^{\prime}-M_{R}^{\prime\prime}\right]-z^{2}\left(M_{I}^{\prime} \right)^{2},\] \[\equiv\alpha_{2}\hat{z}^{2}+\alpha_{1}\hat{z}+\alpha_{0}, \tag{14}\] \[\Sigma(z) =\frac{A_{R}^{\prime\prime\prime}}{A_{R}}+z\left[4\frac{A_{I}^{ \prime}}{A_{R}}M_{I}^{\prime\prime\prime}+4\frac{A_{I}^{\prime\prime\prime}}{A _{R}}M_{I}^{\prime}-M_{R}^{\prime\prime\prime}-6M_{R}^{\prime\prime}\frac{A_{R }^{\prime\prime}}{A_{R}}\right]\] \[+z^{2}\left[3M_{R}^{\prime\prime 2}-12\frac{A_{I}^{\prime}}{A_{R}} M_{I}^{\prime}M_{R}^{\prime\prime}-4M_{I}^{\prime}M_{I}^{\prime\prime\prime}-6M_{I}^{ \prime 2}\frac{A_{R}^{\prime\prime}}{A_{R}}\right]\] \[+z^{3}\left[6M_{R}^{\prime\prime}M_{I}^{\prime 2}-4\frac{A_{I}^{ \prime}}{A_{R}}M_{I}^{\prime 3}\right]+z^{4}\left(M_{I}^{\prime}\right)^{4},\] \[\equiv\beta_{4}\hat{z}^{4}+\beta_{3}\hat{z}^{3}+\beta_{2}\hat{z} ^{2}+\beta_{1}\hat{z}+\beta_{0}. \tag{15}\]
Equation 14 and Equation 15 are then quadratic and quartic polynomials in \(\hat{z}\) for \(\Gamma(z)\) and \(\Sigma(z)\) respectively, just as for the free theory (Equation 11). The lowest order corrections \(M_{I}^{\prime}\) and \(M_{R}^{\prime\prime}\) to the screening mass can be obtained from the coefficients of these polynomials as
\[\hat{M}_{I}^{\prime}=(-\alpha_{2})^{1/2}=\beta_{4}^{1/4}\ \ \ \ \ \text{and}\ \ \ \ \ \hat{M}_{R}^{\prime\prime}=\frac{1}{4}\left(2\alpha_{1}-\frac{\beta_{3}}{ \alpha_{2}}\right). \tag{16}\]
where \(\hat{M}=M/T\). Substituting the free theory values for these coefficients from Equation 11 in the above equations, we obtain the correct values \(\hat{M}_{I}^{\prime}(\text{free theory})=2\) and \(\hat{M}_{R}^{\prime\prime}\) (free theory) \(=0\).
In deriving Equation 14 and Equation 15, we only considered contributions from a single state. Previously, we had derived equations similar to the above equations, but additionally including the
contributions from the first excited state as well [48]. The excited states contribute only at shorter distances \(\hat{z}\lesssim 2.25\). Considering the contribution of the excited states was necessary due to the large number of fit parameters, which necessitated going to smaller \(\hat{z}\). By contrast, in this paper, we present a new method that allows us to fit the data at larger \(\hat{z}\) by reducing the number of fit parameters and thus decreasing the uncertainty in the fit parameters. Hence in this paper, we will work only with Equation 4.6 and Equation 4.7 and not consider the contributions coming from the excited states.
### Finite Temperature Analysis
Our finite temperature analysis was carried out keeping the temporal extent on the lattice fixed at \(N_{\tau}=8\). The analysis was done for two temperatures viz. \(T=2.24\) GeV and \(T=2.90\) GeV, and for two volumes \(N_{\sigma}=32\) and \(N_{\sigma}=64\). The number of configurations analyzed and quark masses for each \(\beta\) are listed in Table 2. The temperature was determined from the relation \(T=1/N_{\tau}a(\beta)\), where \(a(\beta)\) is the lattice spacing at a given gauge coupling \(\beta\), using the updated parametrization for the function \(af_{\text{K}}(\beta)\) given in Ref. [22]. The strange quark mass \(m_{s}\) was set to its physical value using the parametrization provided in Ref. [19]. The light quark mass \(m_{\ell}\) was set to \(m_{s}/20\), corresponding to a nearly physical pion mass of 160 MeV at \(T=0\).
The gauge configurations were generated using the (2+1)-flavor HISQ/tree action [49; 50; 51]. The Bielefeld RHMC GPU code was used to generate the configurations [52]. The configurations were generated using the leapfrog evolution with molecular dynamics step size 0.2 and trajectory length of 5 steps keeping the acceptance rate between 65% and 80%, with every \(10^{\text{th}}\) configuration being saved.
On each saved configuration, we calculated the derivatives of the correlator, which we call correlator-like operators, as well as the derivatives of the fermion determinant, which we call the trace-like operators in Appendix B. The former were calculated by using 8 point sources per configuration, which we placed at \(n_{i}=0\) or \(N_{\sigma}/2\) for \(i\in\{x,y,z\}\) keeping \(n_{t}=0\). The latter were estimated stochastically using 1000 Gaussian noise vectors per configuration.
We compare the finite temperature results for \(\Gamma(\hat{z})\) and \(\Sigma(\hat{z})\) to the corresponding free theory expressions in Figure 4 (top left) and Figure 4 (top right) respectively. In each plot, we plot the results for both \(T=2.24\) GeV and \(T=2.90\) GeV. Such high temperatures were considered as our ansatz was obtained by comparison with free theory expression and thus is more applicable at higher temperatures. Although both \(\Gamma(\hat{z})\) and \(\Sigma(\hat{z})\) at finite temperatures show a polynomial-like behavior similar to the corresponding free theory expressions, they differ by as much as 30-40% from the corresponding free theory correlators, with the fourth derivative \(\Sigma\) being comparatively farther from the free theory limit than the second derivative \(\Gamma\). Furthermore, both \(\Gamma\) and \(\Sigma\) seem to approach the corresponding free theory results as the temperature is increased, although the approach to the free theory is very slow. We note that a similar slow approach to the free theory limit has also recently been observed in the case of the zero chemical potential screening masses [53]. Thus we expect the polynomial coefficients in Equation 4.6 and Equation 4.7 to differ significantly from their free theory values (Table 1). This is seen more clearly in Figure 4 (bottom), in which
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\beta\) & \(T\) [GeV] & \(N_{\sigma}\) & \(m_{s}\) & configurations \\ \hline
9.670 & 2.90 & 32 & 0.002798 & 12700 \\ & & 64 & 0.002798 & 6000 \\ \hline
9.360 & 2.24 & 64 & 0.003691 & 6000 \\ \hline \end{tabular}
\end{table}
Table 2: The list of configurations used for the finite temperature. All the configurations used here have \(N_{\tau}=8\).
we plot \(\Gamma/\Gamma_{\rm free}\) and \(\Sigma/\Sigma_{\rm free}\) given by
\[\frac{\Gamma}{\Gamma_{\rm free}}=\frac{\alpha_{2}\hat{z}^{2}+\alpha_{1}\hat{z}+ \alpha_{0}}{-4\hat{z}^{2}+4\hat{z}/\pi-2/\pi^{2}}\,,\qquad\frac{\Sigma}{\Sigma_ {\rm free}}=\frac{\beta_{4}\hat{z}^{4}+\beta_{3}\hat{z}^{3}+\beta_{2}\hat{z}^{ 2}+\beta_{1}\hat{z}+\beta_{0}}{16\hat{z}^{4}-32\hat{z}^{3}/\pi+16\hat{z}^{2}/ \pi^{2}}\,, \tag{4.9}\]
as functions of \(\hat{z}\) for all the temperatures and volumes given in Table 2. For large \(\hat{z}\) however, both \(\Gamma/\Gamma_{\rm free}\) and \(\Sigma/\Sigma_{\rm free}\) curves are seen to slowly decrease to approach the corresponding asymptotic values of \(-\alpha_{2}/4\) and \(\beta_{4}/16\) according to Equation 4.9. The deviation from the free theory value of unity indicates that the finite temperature polynomial coefficients differ significantly from the free theory ones.
Based on our results for the two volumes considered for \(T=2.90\) GeV, we conclude that there are no significant finite-volume effects for these temperatures. However, both \(\Gamma/\Gamma_{\rm free}\) and \(\Sigma/\Sigma_{\rm free}\) curve upwards for all the data sets as \(\hat{z}\to N_{\sigma}/(2N_{\tau})\), indicating the presence of boundary effects. These boundary effects do not seem to affect the point at \(z=N_{\sigma}/2\). The upward deviation starts at around \(\hat{z}=1.5\) for \(N_{\sigma}=32\) and around \(\hat{z}_{\rm max}=3.5\) for \(N_{\sigma}=64\). Due to this, we set an upper limit of \(\hat{z}=3.25\) for our fits on the \(N_{\sigma}=64\) lattices while also however including the \(\hat{z}=N_{\sigma}/(2N_{\tau})\) point.
Similar to the free theory, we plot \(\Gamma/\hat{z}^{2}\) and \(\Sigma/\hat{z}^{4}\) in Figure 5 (left) and Figure 5 (right) respectively to understand the approach of \(\Gamma\) and \(\Sigma\) to their respective asymptotic limits. We rewrite Equation 4.6 and Equation 4.7 as
\[\frac{\Gamma}{\hat{z}^{2}}=-|\alpha_{2}|-\frac{|\alpha_{1}|}{\hat{z}}+\frac{ \alpha_{0}}{\hat{z}^{2}}\qquad\text{and}\qquad\frac{\Sigma}{\hat{z}^{4}}=\beta _{4}+\frac{\beta_{3}}{\hat{z}}-\frac{|\beta_{2}|}{\hat{z}^{2}}+\mathcal{O} \left(\frac{1}{\hat{z}^{3}}\right)\,, \tag{4.10}\]
The boundary effects mentioned earlier are observable near \(\hat{z}=N_{\sigma}/2N_{\tau}\) as the data points curve downwards for both temperatures although as noticed earlier, the \(\hat{z}=N_{\sigma}/2N_{\tau}\) point seems unaffected by the boundary effects. While the data do seem to approach a constant value asymptotically,
Figure 4: (Top left) \(\Gamma\) and (top right) \(\Sigma\) for T= 2.24 GeV and T= 2.90 GeV along with the free theory expression Equation 3.6 plotted against \(n_{z}\) measured on \(64^{3}\times 8\) lattice. (Bottom) \(\Gamma/\Gamma_{\rm free}\) and \(\Sigma/\Sigma_{\rm free}\) plotted against \(\hat{z}\) for the two temperatures as well as the two volumes given in Table 2.
however unlike the free theory, we observe that the finite temperature curves attain a minimum for \(\Gamma/\hat{z}^{2}\) and a maximum for \(\Sigma/\hat{z}^{4}\). In anticipation of this behavior, we have made \(\alpha_{2}\), \(\alpha_{1}\) and \(\beta_{2}\) in Equation 4.10 negative. By comparing the results for the two temperatures, we see that these minima and maxima shift to larger \(\hat{z}\) as the temperature is increased, with \(\hat{z}\to\infty\), i.e. vanishing, as \(T\to\infty\).
From Equation 4.10, we can identify the location of the extrema of \(\Gamma/\hat{z}^{2}\) and \(\Sigma/\hat{z}^{4}\) as
\[\hat{z}_{\Gamma}=-2\,\frac{\alpha_{0}}{\alpha_{1}}\,,\qquad\qquad\qquad\hat{z} _{\Sigma}=-2\,\frac{\beta_{2}}{\beta_{3}}\,. \tag{4.11}\]
Instead of keeping all three coefficients as fit parameters, the extremum points \(\hat{z}_{\Gamma}\) and \(\hat{z}_{\Sigma}\) were located for each jackknife sample using spline fittings, and their values were used to re-express the lowest order coefficients \(\alpha_{0}\) and \(\beta_{2}\) in terms of \(\alpha_{1}\) and \(\beta_{3}\). Reducing the number of fit parameters
Figure 5: (left) \(\Gamma/\hat{z}^{2}\) and (right) \(\Sigma/\hat{z}^{4}\) data plotted against \(n_{z}\) for the two temperatures as well as the two volumes given in Table 2.
Figure 6: (top left) \(\alpha_{2}\), (top right) \(\alpha_{1}\),(bottom left) \(\beta_{4}\), and (bottom right) \(\beta_{3}\) coefficients obtained using fitting the \(\Gamma/\hat{z}^{2}\) and \(\Sigma/\hat{z}^{4}\) respectively for T=2.24 GeV and T=2.90 GeV on \(64^{3}\times 8\) lattice. The upper window is fixed to \(\hat{z}_{\rm max}=3.25\) while the lower window \(\hat{z}_{\rm min}\) is varied.
from three to two resulted in better fits to the data. The fits were carried out for various fit windows \([\hat{z}_{\rm min},\hat{z}_{\rm max}]\). The fit results showed very little variation for changing \(\hat{z}_{\rm max}\) and thus the upper fit window was fixed to \(\hat{z}_{\rm max}=3.25\). Subsequently, we sought to obtain stable results for the fit coefficients by varying \(\hat{z}_{\rm min}\). Our results for \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{4}\), and \(\beta_{3}\) are plotted in Figure 6. We note that our procedure yields stable values for \(\alpha_{1}\) and \(\alpha_{2}\), whereas the plateaus for \(\beta_{4}\) and \(\beta_{3}\) are not reached for \(T=2.90\) GeV. We need to consider larger lattices in order to get more reliable values of these coefficients, especially at higher temperatures. For our estimates of \(\beta_{4}\) and \(\beta_{3}\) at \(T=2.90\) GeV, we approximated their values from fits with windows keeping \(\hat{z}_{\rm min}=2.625\).
By fitting a constant value to the plateaus, we obtain the best-fit values for each coefficient, which we present in Table 3. In quoting the errors on these fits, we note that there are two separate sources of error. In the first place, the fitter itself returns an error on each fit parameter. Second, there is also the variation of the fit parameter itself from one jackknife sample to the next. Since we found that the former was always much smaller than the latter, we have only quoted the jackknife errors in Table 3.
We tabulate our results for all the fit parameters, namely \(\hat{z}_{\Gamma}\), \(\hat{z}_{\Sigma}\), \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{3}\) and \(\beta_{4}\), in Table 3. From the Table, we see that the highest coefficients \(\alpha_{2}\) and \(\beta_{4}\) have the same sign for both the free theory and finite temperature. The second highest coefficients \(\alpha_{1}\) and \(\beta_{3}\) have different signs for the free theory and finite temperature, as we have already seen earlier. We also note that all the coefficients are far from the free theory values, although they seem to slowly approach the free theory as the temperature is increased.
By substituting the results for the fit parameters into Equation 8, we obtain the lowest order corrections to the screening mass, namely \(M_{I}^{\prime}\) and \(M_{R}^{\prime\prime}\). We have also listed our results for both these quantities in Table 3. In the same Table, we have also listed the values of the screening masses \(M_{R}(0)\) obtained for each temperature. These were calculated by fitting the pseudoscalar correlator to one-state and two-state fits and using the Akaike Information Criterion (corrected) (AICc) to obtain the mass plateau from the two-state fit. The procedure is identical to that used in Ref. [22] for the calculation of the \(\mu=0\) screening masses, and we refer the reader to that paper and the references therein for further details.
We find that the screening mass \(M_{R}(0)\) values are larger than the free theory value of \(2\pi\) at the temperatures of our analysis. We also see that \(M_{R}^{\prime\prime}(0)\) is around \(4\%\) of \(M_{R}(0)\) for both temperatures. Assuming higher-order corrections to be negligible, this suggests that \(M_{R}(\mu_{\ell})\) differs from \(M_{R}(0)\) by only about \(2\%\) for \(\hat{\mu}_{\ell}=1\) near \(T\sim 2.5\) GeV. Note however that our results for \(M_{I}^{\prime}(0)\) differ by as much as \(25\)-\(30\%\) from the free theory value of \(2\). Both \(M_{R}^{\prime\prime}(0)\) and \(M_{I}^{\prime}(0)\) seem to approach
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Temp (T) & \(\hat{z}_{\Gamma}\) & \(\alpha_{2}\) & \(\alpha_{1}\) & \(\hat{z}_{\Sigma}\) & \(\beta_{4}\) & \(\beta_{3}\) \\ \hline
2.24 GeV & 2.269(23) & -2.034(13) & -1.955(57) & 2.860(50) & 5.383(218) & 10.091(1255) \\ \hline
2.90 GeV & 2.500(16) & -2.117(18) & -2.175(87) & 3.125(25) & 5.815(365) & 10.667(2321) \\ \hline Free th. & & \(-4\) & \(4/\pi\approx 1.273\) & & 16 & \(-32/\pi\approx-10.186\) \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|} \hline Temperature & \(M_{R}(\hat{\mu}_{\ell}=0)\) & \(\hat{M}_{R}^{\prime\prime}\) & \(\hat{M}_{I}^{\prime}\) \\ \hline
2.24 GeV & 6.337(1) & 0.263(169) & 1.426(5) \\ \hline
2.90 GeV & 6.352(1) & 0.172(328) & 1.455(6) \\ \hline Free theory & \(2\pi\approx 6.283\) & 0 & 2 \\ \hline \end{tabular}
\end{table}
Table 3: Values listed for extremum point \(\hat{z}_{\Gamma}(\hat{z}_{\Sigma})\) for the function \(\Gamma/\hat{z}^{2}(\Sigma/\hat{z}^{4})\) along with the obtained values for fit parameters \(\alpha_{2}\) and \(\alpha_{1}\) (\(\beta_{4}\) and \(\beta_{3}\)) for two temperatures on lattices with volume \(64^{3}\times 8\). Two-state fit screening mass \(\hat{M}_{R}(\hat{\mu}_{\ell}=0)\), along with \(\hat{M}_{R}^{\prime\prime}\) and \(\hat{M}_{I}^{\prime}\) obtained using Equation 8 are listed for the two temperatures obtained on lattices with volume \(64^{3}\times 8\).
the respective free theory values with increasing temperature, although the rate of approach is very slow. While this is surprising, we note that a recent determination of the pseudoscalar screening mass at \(\mu=0\) too found a similar exponentially slow approach to the free theory value [53].
## 5 Conclusions
In this paper, we introduced a new way to calculate the finite-\(\mu_{\ell}\) corrections to the pseudoscalar screening mass. Our approach is based on the method of Taylor expansions, in which both the screening correlator and the screening mass are expanded in a Taylor series in \(\hat{\mu}_{\ell}\equiv\mu_{\ell}/T\). In the free theory, the finite density screening correlator manifests oscillatory behavior in addition to decaying exponentially with increasing separation [42]. In this work, we showed that these oscillations can be taken into account by making the screening mass complex viz. \(M=M_{R}+iM_{I}\). The real part \(M_{R}(\mu_{\ell})\) is the familiar screening mass in the \(\mu_{\ell}\to 0\) limit while the imaginary part \(M_{I}\) vanishes in the same limit. However, the imaginary part is responsible for the oscillations at \(\mu_{\ell}\neq 0\) and hence it is necessary to incorporate it into our formalism in order to obtain a reliable estimate of \(M_{R}^{\prime\prime}(0)\) at finite temperature.
We expanded the free correlator in a Taylor series in \(\hat{\mu}_{\ell}\) and calculated the first four Taylor coefficients on an \(80^{3}\times 8\) lattice using HISQ fermions. Our results showed very good agreement with the theoretical expressions Equation 3, down to \(\hat{z}\sim 0.3\). By combining the results of different orders, we were also able to show that the correlator displays the expected oscillations as a function of \(\hat{\mu}_{\ell}\) (Figure 1). We also showed that the ratios of Taylor coefficients \(\Gamma(\hat{z})\) and \(\Sigma(\hat{z})\) should behave as quadratic and quartic polynomials respectively in the large-\(\hat{z}\) limit (Equation 3). Our fits to the free theory results confirmed the expected behavior, while also indicating that one had to go to \(\hat{z}\gtrsim 2\)-\(3\) to observe this asymptotic behavior. We also showed how the screening mass corrections \(M_{R}^{\prime\prime}(0)\) and \(M_{I}^{\prime}(0)\) could be extracted from the coefficients of these polynomials (Equation 4). Finally, we applied our formalism to screening correlators at two temperatures \(T=2.24\) GeV and \(T=2.90\) GeV calculated on \(64^{3}\times 8\) lattices with the HISQ/tree action. We extracted results for \(M_{R}^{\prime\prime}(0)\) and \(M_{I}^{\prime}(0)\) for both these temperatures. In both cases, the screening mass correction \(M_{R}^{\prime\prime}(0)\) was positive and around \(4\%\) of \(M_{R}(0)\). We also found significant differences from the free theory values, for all quantities but especially \(M_{I}^{\prime}(0)\), and a very slow approach to the free theory as the temperature was increased.
We thank Frithjof Karsch, Anirban Lahiri, Sourendu Gupta, Saumen Datta, and Rishi Sharma for helpful discussions and suggestions. The data for this project were generated at the Centre for High Energy Physics of the Indian Institute of Science, Bengaluru. The Bielefeld RHMC GPU code was used to generate the gauge configurations, and a modification of the same code was used in calculating the Taylor coefficients.
|
2308.10589 | Engineering of intelligent reflecting surfaces: Reflection locality and
angular stability | Reconfigurable intelligent surfaces (RISs) are electromagnetically passive
controllable structures, deflecting the incident wave beam in directions
predefined by the control signal. A usual way to design RIS based on
metasurfaces (MSs) is based on the application of the approximation in which
the reflective properties of a uniform MS are attributed to a unit cell of the
non-uniform one. We call this approximation the reflection locality. In the
present paper, we show that this approximation may result in heavy errors. We
also find a condition under which this approximation is applicable for a wide
range of incidence and deflection angles. This condition is the angular
stability of the reflection phase of a uniform MS based on which the
non-uniform one is generated. We present an approximate analytical proof of the
equivalence of the reflection locality and angular stability. As an example, we
report theoretical and experimental results we obtained for a binary RIS whose
generic uniform analogue has the angular stability. Meanwhile, for its
counterpart without angular stability (the so-called mushroom MS) the same
model fails. | Javad Shabanpour, Vladimir Lenets, Geoffroy Lerosey, Sergei Tretyakov, Constantin Simovski | 2023-08-21T09:41:42Z | http://arxiv.org/abs/2308.10589v1 | # Engineering of intelligent reflecting surfaces: Reflection locality and angular stability
###### Abstract
Reconfigurable intelligent surfaces (RISs) are electromagnetically passive controllable structures, deflecting the incident wave beam in directions predefined by the control signal. A usual way to design RIS based on metasurfaces (MSs) is based on the application of the approximation in which the reflective properties of a uniform MS are attributed to a unit cell of the non-uniform one. We call this approximation the reflection locality. In the present paper, we show that this approximation may result in heavy errors. We also find a condition under which this approximation is applicable for a wide range of incidence and deflection angles. This condition is the angular stability of the reflection phase of a uniform MS based on which the non-uniform one is generated. We present an approximate analytical proof of the equivalence of the reflection locality and angular stability. As an example, we report theoretical and experimental results we obtained for a binary RIS whose generic uniform analogue has the angular stability. Meanwhile, for its counterpart without angular stability (the so-called mushroom MS) the same model fails.
Reconfigurable Intelligent Surface (RIS), Wireless communication, Metasurface, Channel reciprocity, Angular stability, Smart environment.
## I Introduction and problem formulation
Recent years have witnessed a remarkable growth in attention toward the transition to high-frequency wireless networks. Reasons such as higher peak data rates, more accurate localization of users and objects, and much denser connectivity made this transition indisputable [1, 2]. The millimeter-wave and THz spectra provide a much larger bandwidth and huge data rates to innovate communication architecture [3]. For the fifth-generation (5G) and sixth-generation (6G) cellular networks, the frequencies below 10 GHz cannot support the needed data rates
and low latency communication as well as the increasing number of new applications such as virtual/augmented reality (VR/AR), autonomous driving, and Internet of Things [4]. Besides, optical wireless communications have some inherent drawbacks such as the impact of atmospheric and water absorption, low transmission power budget due to eye safety, and high diffusion losses on rough surfaces. Therefore, millimeter-wave and THz ranges are commonly recognized to be optimal for prospective wireless telecommunications [5]. Meanwhile, millimeter-wave communications networks still suffer from some basic shortcomings, such as high path loss, that demands high-power transmitters, and obstacles blocking communication paths [6]. To manage with these challenges, the concept of Reconfigurable Intelligent Surfaces (RISs) has been introduced. It is based on the phenomenon of anomalous reflection.
The anomalous reflection from a periodically non-uniform metasurface has become a popular topic since 2011 when paper [7] was published in which the generalized reflection law initially revealed in [25] was rediscovered. In the following decade, the idea of controllable anomalous reflection with minimized scattering losses was developed in numerous works, reviewed in [8]. As a means of enabling future smart wireless networks, the concept of RIS-based wireless communication was proposed, in which the RIS is claimed to be a strongly controllable anomalous reflector [9]. After 2019, the development of RISs has become even more extensive, (see e.g. in [1, 2, 3, 4, 9, 10, 11, 12, 13, 14]) in view of the extreme importance of RISs for 5G and 6G networks. A RIS in its modern understanding is a 2D array of reflecting elements that play the role of an electromagnetically passive but electronically (or optically) tunable relay station between the transmitter and receiver [9, 10]. With advantages such as consuming little power, requiring less hardware complexity, and adapting to the wireless channels, a RIS becomes an ideal candidate for smart radio environment [11]. Several studies have focused on RIS from a variety of angles, including multi-modulation schemes [12], passive beamforming [13], and MIMO-assisted networks [14].
As it was explained in [10], RISs can be implemented in three ways: as a MS (deeply subwavelength unit cells), as a meta-grating (scattering elements are deeply subwavelength but the gaps between them are substantial), and as a phased array, whose element has the resonant size (close to \(\lambda/2\), where \(\lambda\) is the wavelength). Following to [25] we call such RISs periodic reflectarrays. Each approach has its own advantages and disadvantages, which were discussed in [1, 3, 9, 10, 17, 18, 19, 20]. In this paper, we concentrate on the RISs based on metasurfaces. A metasurface (MS) can be defined as an electromagnetically thin composite layer with an
electromagnetically dense arrangement of strongly subwavelength constitutive elements [15, 16, 22]. The advantageous operation of a MS making this structure useful for applications when it is resonant as a whole, whereas its unit cells are deeply subwavelength [23, 24, 22].
The most common approach for engineering the anomalous reflection in a MS is to change the reflection phase linearly along the trace of the incidence plane [25, 7]. This method was discussed in [24] in the context of the usual approximation when one assumes that the reflection coefficient at any point \(A\) of the non-uniform MS is equal to the reflection coefficient calculated for a uniform MS, i.e. as if all unit cells around the reference unit cell were the same as the reference one. Indeed, in a non-uniform MS all unit cells are same only geometrically, they differ from one another by the values of the lumped loads connecting the metal elements. The approximation we call the reflection locality (RL) neglects this difference. This approximation is often mixed up with the approximation of physical optics (PO). In the literature, PO is defined as the approximation utilizing the ray optics so that to find the currents and fields on the scattering surface [21]. Indeed, the approximation of RL is not as demanding since it only implies that the difference of the loads in the surrounding unit cells (with which the reference unit cell interacts) does not change the reflection coefficient on the surface of the reference unit cell. Meanwhile, the approximation of PO for a periodically non-uniform MS (PNUMS) implies that the reference unit cell would not interact with the surrounding unit cells at all. So, PO is a more restrictive approximation than RL and in this paper we do not concern it. We consider the applicability of RL which offers an easy tool to engineer the wave deflection for a PNUMS assuming that the local reflection phase \(\Phi_{R}(x)\) is equal to \(\Phi_{R}\) corresponding to the uniform MS that we call the generic one. The main purpose of the present paper is to reveal the conditions of the applicability of this design tool. This applicability is considered in the context of RISs capable to operate at frequencies of the order of 20 GHz or higher for non-polarized waves and in a broad operation band (up to 20%).
Though the application of RL for the design of PNUMSs is very popular, if the PNUMS is dedicated to operate with large angles of incidence and deflection, this approximation may result in serious errors. For example, in [26] the application of the PO resulted in a wrong model for varactor-based RISs. With this model, the authors of [26] studied channel reciprocity and found that it is violated for large \(\theta_{r}\) when \(\theta_{i}\) are small and vice versa. Being sure in their model (formulated as a set of theorems), the authors claimed that the angular stability of the local reflection phases versus the incidence angle is necessary to preserve the reciprocity, which is
otherwise violated. Angular stability means that the reflection phase \(\Phi_{R}\) in each of the states of the uniform MS is nearly the same for whatever incidence angle in a broad interval (practically, up to \(60-70^{\circ}\)). Indeed, the property of angular stability has nothing to do with the reciprocity. The reciprocity can be broken only in few cases: employing nonlinearities together with structural asymmetries, using nonreciprocal elements such as magnets biased by the dc field, and changing the MS in time [27]. A few years after the publication of [26], the same authors probably came to the same conclusion because in their recent overview [28] their work [26] was not mentioned.
In this paper, we show the possibility to apply RL for a certain class of MSs. We prove it analytically for MSs with slow spatial variations of the unit cell response but numerically and experimentally we show it for binary MSs, in which the unit cell properties vary step-wise. In binary RISs two substantially different values of the load are used being uniform in two half-periods of the binary MS. Being easier in the design and fabrication compared to gradually varying MSs, binary MSs became a hot-spot topic in their applications for RISs [29, 30, 31, 32, 33]. In the literature, there are several types of binary MSs, but their common point is the approximation of RL applied to the their generic (uniform) version. First, one considers a uniform MS and obtains for one value of the unit cell lumped load the reflection state '0' and for the other value of the load the reflection state '1'. These states are clearly distinguished when the reflection phases differ by nearly \(180^{\circ}\)[26, 29, 30, 31, 32]. In practice the phase difference \((180\pm 40)^{\circ}\) is still allowed. The approximation of RL means that one supercell (half-period) of the binary MS is claimed to have the local reflection phase \(\Phi_{R}\) identified as '0', and another half-period - the local reflection phase identified as '1' [26, 29, 30, 34]. Since each unit cell can be in one of two states, the period \(D\) and, therefore, the deflection directions are controllable.
In works [26, 29, 30] one described binary MSs operating either in a narrow frequency band or only for small deviation angles \(|\theta_{r}-\theta_{i}|<\pi/4\). In work [34] both broadband operation of a RIS and large deviation angles \(|\theta_{r}-\theta_{i}|=(60-70)^{\circ}\) were reported for a binary MS. In this case, the generic (uniform) MS possessed the angular stability in three states '0' and '\(\pm\)1' and the PNUMS reported in [34] was not binary but ternary. The authors of [34] have shown that reflection locality holds for their PNUMS if it is illuminated by waves with TM polarization. For TE polarization their ternary PNUMS was not suitable. Meanwhile, in advanced mobile communication systems, the dual polarization operation is very important [41, 42, 43]. Another drawback of the PNUMS suggested in [34] is the low-frequency operation band \(3\pm 0.5\) GHz that can be hardly scaled to mm waves because the substrate comprises very dense linear arrays
of metal via mimicking vertical compartment walls.
In our previous paper [40] we developed a uniform MS which consisted of a grid of copper Jerusalem crosses located on top of a metal-backed dielectric substrate with very small but not negligible losses. In the present paper, we theoretically and experimentally study a binary PNUMS whose generic version was developed in [40]. We numerically and experimentally show that the RL is really adequate for it in a wide sheer of incidence and deviation angles, in the broad range of frequencies, and for both TE and TM polarizations of the incident wave. The obtained operation parameters of our PNUMS are better than those previously reported for RISs in the available literature.
However, the demonstration that the MS suggested in [40] operates properly only complements our main claim. Our main claim is as follows: a uniform MS with angular stability can be considered really generic for a PNUMS because the concept of angular stability for a uniform MS and the concept of RL for its non-uniform analogue are equivalent. We present an approximate analysis of this equivalence complemented by explicit examples of the MSs with and without angular stability. The angularly dependent MS is so-called mushroom MS, the most known type of so-called _high-impedance surfaces_. The mushroom MS has angular stability for one (TE) polarization of the incident wave and has no this property for the TM-polarized waves. We show that for the TE-waves the uniform mushroom MS is generic and for the TM-waves it is not generic. Both examples - our PNUMS and the mushroom PNUMS - convincingly confirm our analysis.
## II Equivalence of Reflection Locality and Angular Stability
### _Preliminary remarks_
Deflection of planes wave by a periodic surface is described by the Floquet theory of phase diffraction gratings. When a plane wave illuminates an infinite periodic structure, a discrete set of spatial harmonics is created, some of them can be propagating waves, while the others are evanescent waves [18]. The number of the propagating waves (open diffraction channels) at a given frequency is specified by the period of the structure and the incidence angle. The reflection and incidence angles are related as follows (e.g., [23]):
\[\sin\!\theta_{r,M}=\sin\theta_{i}+M\lambda/D\ \ (M\in Z) \tag{1}\]
Here, \(D\) is the grating period, \(\lambda\) is the wavelength in free space, and \(M\) is the spatial harmonic number. If \(M=\pm 1\), Eq. (1) results in the relation \(|\sin\theta_{r,\pm 1}-\sin\theta_{i}|D=\lambda\) that we have referred above. The use of phase diffraction gratings allows one to engineer the amplitude of \(M\)-th harmonic higher than that of the specular reflection [53, 54, 55]. Practically, because for \(M\gg 1\) the requirement \(|\sin\theta_{r,M}|<1\) cannot be respected if the angles \(\theta_{r,M}\) and \(\theta_{i}\) are not very small. For the single-user regime one engineers the period \(D\) so that a harmonic with \(|M|=1\) is dominant. For the multi-user regime several harmonics with \(M=0,\pm 1\) or \(\pm 2\) may dominate.
In the case of a reconfigurable MS, one period comprises a sufficient number of scattering elements that are deeply subwavelength in size. For different incidence and deflection angles these periods can be different, but all differences between the unit cells of a PNUMS and those of the uniform MS are in the values of the loading impedances. So, the PNUMS is the same initially uniform MS in which one changes the loads (periodically along the trace \(x\) of the incidence plane). This initial uniform MS can be called generic for PNUMSs with different periods \(D\).
The RL approximation replaces the reflection phase \(\Phi_{R}(x)\) at the reference unit cell loaded by the lumped impedance \(Z(x)\) by the reflection coefficient of the generic MS whose unit cells are all loaded by the same impedance \(Z(x)\). This is so for a gradually non-uniform MS. For a binary MS the reflection phase of the generic MS whose unit cells are loaded by the impedance \(Z_{0}\) (state '0') is attributed (in this approximation) to one half-period, and the reflection phase of the generic MS whose unit cells are loaded by the impedance \(Z_{1}\) (state '1') is attributed to another half-period. First, let us we prove that this approximation is not suitable for PNUMS whose generic MS has no angular stability.
### _Angular instability disables the approximation of reflection locality_
Let us consider a uniform MS without angular stability of the reflection phase. We assume that there is a set of \(N\) lumped loads \(Z_{1},\,Z_{2},\ldots Z_{N}\) with which the uniform MS offers different \(\Phi_{R}\) for the normal incidence (\(\theta_{i}=0\)). Let \(\Phi_{R}^{(m)}(\theta_{i}=0)\) corresponding to \(Z_{m}\) (\(m=2,\ldots N-1\)) differ from \(\Phi_{R}^{(m\pm 1)}(\theta_{i}=0)\) by \(\pm 2\pi/N\). Then the set \(\Phi_{R}^{(m)}(\theta_{i}=0)\) covers the whole range \([0,2\pi]\). If we want to deflect the normally incident wave to the angle \(\theta_{r}=\pi/3\) we need, in accordance to (1), the period \(D=1.16\lambda\). Postulating the reflection locality, we engineer this period for the PNUMS taking \(N\) loads and having for them \(\Phi_{R}^{(m)}(0)-\Phi_{R}^{(m\pm 1)}(0)=\pm 2\pi/N\).
No angular stability means that the values \(\Phi_{R}^{(m)}(\theta_{i})\) for a substantial incidence angle, such as \(\theta_{i}=\pi/3\), are noticeably different from \(\Phi_{R}^{(m)}(0)\). It worth notice, that the concept of angular
stability makes sense namely for substantial incidence angle \(\theta_{i}\) and for similarly substantial deviation angles. For small incidence angles (practically for \(\theta_{i}<\pi/4\)) the majority of reflecting MSs, called high-impedance surfaces, have \(\Phi_{R}(\theta_{i})\approx\Phi_{R}(0)\). This is not surprising because for these angles the period \(D\) in (1) is electromagnetically large and the reflection phase gradient in a PNUMS is small.
Consider the case when the reflection phase of the generic MS for \(\theta_{i}=\pi/3\) is twice larger than that for the normal incidence. Then for the same loads \(Z_{m}\) and \(Z_{m\pm 1}\) as we used above we will have \(\Phi_{R}^{(m)}(\pi/3)-\Phi_{R}^{(m\pm 1)}(\pi/3)=\pm 4\pi/N\). In this case, postulating the reflection locality for the PNUMS we obtain the period equal to \(D=1.16\lambda\) for the normal incidence and \(D=0.58\lambda\) for the incidence under the angle \(\theta_{i}=\pi/3\). Since the period of the PNUMS turns out to be subwavelength, this PNUMS does not possess anomalous reflection. No power will be deflected under the angle \(\theta_{r}=0\), though the normally incident wave deflects under the angle \(\theta_{r}=\pi/3\).
Consider the case when the reflection phase of the generic MS for \(\theta_{i}=\pi/3\) is twice smaller than that for the normal incidence. Then we have for the phase difference \(|\Phi_{R}^{(m)}(\pi/3)-\Phi_{R}^{(m\pm 1)}(\pi/3)|=\pi/N\) is also twice smaller than it was in the case of the normal incidence. The same periodicity \(D=1.16\lambda\) is engineered in this case, but the reflection phase varies in the range \([0,\pi]\) instead of \([0,2\pi]\). It means that the coordinate gradient of the reflection phase is not constant along the PNUMS and the deflection from the angle \(\theta_{i}=\pi/3\) to the angle \(\theta_{r}=0\) is impossible again.
Thus, for both manifestations of angular instability - either noticeably larger values of \(\Phi_{R}(\theta_{i}>\pi/4)\) compared to \(\Phi_{R}(0)\) or noticeably smaller values - the transmittance from the channel \(\theta=0\) to the channel \(\theta=\pi/3\) and the reciprocal transmittance from the channel \(\pi/3\) to the channel \(0\) are essentially different. This violation of reciprocity evidently results from the assumed reflection locality in absence of angular stability. Therefore, reflection locality cannot hold for a PNUMS if the generic MS does not possess angular stability. The same proof can be easily rewritten in terms of a binary MS.
_Relations of the angular stability property with response locality and the approximation of physical optics_
Now, let us prove that the angular stability of the generic MS requires locality of unit cell response for uniform or nearly uniform infinite arrays. In that case, the locality of unit cell response is the same as the physical optics approximation. In this proof, we demand that the
basic requirements to the RIS realized as a phase-gradient reflector are respected: in the locally periodic approximation, the magnitude of the local reflection coefficient is equal unity and the reflection phase can be varied in the whole \(2\pi\) range. We stress that more advanced non-local MS or metagrating designs cannot in principle be modeled under this approximation, and we do not consider them here.
For uniform and slightly non-uniform MSs the locality of reflection is the same as the locality of excitation of individual unit cells. The last one means that the excitation of each unit cell is determined only by the incident field at the position of this unit cell and its own properties, assuming that all surrounding unit cells are all the same.
The excitation locality, as a prerequisite for independence of array response on the propagation factor along the array, was discussed in [56]. In that paper, arrays of individual dipolar scatters were considered, and it was proved: if the near-field (reactive) interactions are negligible, the array response does not depend on the propagation constant along the array. In [56], the array period was subwavelength, and only excitation of evanescent modes was of interest, but the results hold also for propagating harmonics, in which case the independence from the tangential propagation constant is equivalent to the angular stability of response. Importantly, this condition for locality of response and applicability of the approximation of reflection locality does not mean that the unit cell interactions are negligible. The power scattered by one unit cell in free space (as a spherical wave) and the power scattered by one unit cell in a periodic array (as a contribution to plane-wave reflection) are different. However, the required smallness of _reactive-field_ interactions ensures that the frequency response of each unit cell (its resonance frequency, etc.) does not depend on the presence and excitation of the other cells, ensuring local control of the reflection phase.
To use the results of paper [56] for proving the requirement of the response locality for angular stability we exploit the equivalence of a generic MS depicted in Fig. 1(a) and a planar electromagnetically dense array of resonant bianisotropic scatterers sketched in Fig. 1(b). In Fig. 1(a) a generic MS illuminated by a plane wave is shown. The wave of an arbitrary polarization can be decomposed into TE and TM waves incident under the same angle \(\theta\) (since there is no deflection from a uniform array with q subwavelength period, the incidence angle is denoted simply as \(\theta\)). The building blocks of the planar grid are shown in blue, and the lumped loads are green. Notice that the lumped loads in this drawing are arranged horizontally, because in the case of a vertical arrangement they would have no impact to the reflection phase for the
normal incidence. The concept of the MS implies that \(kh\ll\pi\) and \(ka\ll\pi\), where \(h\) is the substrate thickness, \(a\) is the grid period, and \(k\) is the wavenumber of free space. These two inequalities form the prerequisite of MS homogenization, allowing representing it as an effective sheet of electric and/or magnetic surface current. Conditions \(\max(ka,kh)\ll\pi\) practically mean that \(\max(ka,kh)\) should be smaller than unity. Assume that the reflection phase of the generic MS does not depend on the incidence angle until \(\theta=\theta_{\max}=\pi/3\). Specifying \(\theta_{\max}=\pi/3\) as an example, we have in mind our work [40]. The limit of angular stability is different for different generic MSs, though it makes sense only for \(\theta_{\max}\geq\pi/4\).
The scatterers in Fig. 1(b) are uniaxial omega particles, whose example shape is shown in the inset. This equivalence for the normal incidence was proven in [52]. It is a conditional equivalence: at different frequencies, the equivalent omega particles may be different not only in size, but also in shape. However, this equivalence is instructive because the electromagnetic interactions in optically dense (\(a<\lambda/2\)) uniaxial bianisotropic grids were thoroughly studied in works [48, 49, 61, 63, 64, 65, 66, 67].
Let the reference particle with electric and magnetic dipole moments \(\mathbf{p}\) and \(\mathbf{m}\) be centered at the coordinate origin \(A\), as shown in Fig. 1(b). If we number the particles of the array by \(l\) along the \(x\)-axis and by \(s\) along the \(y\)-axis, we can write
\[\mathbf{p}_{ls}=\mathbf{p}e^{-jk_{x}la-jk_{y}sa},\quad\mathbf{m}_{ls}=\mathbf{ m}e^{-jk_{x}la-jk_{y}sa}. \tag{2}\]
Fig. 1: Illustration to the proof of the requirement of the response locality for angular stability of the reflection coefficient. (a) A generic reflecting MS is formed by an electromagnetically dense (\(ka<1\)) planar grid of metal elements over a metal plane. The elements (blue) are connected to controllable lumped loads (green). (b) The equivalent MS is a planar array of uniaxial omega-particles (schematically shown in the inset). Since there is no deflection, the incidence angle is denoted simply as \(\theta\).
Here, \(k_{x}=k\sin\theta\cos\phi\) and \(k_{y}=k\sin\theta\sin\phi\) (\(\phi\) is the angle between the incidence plane and the axis \(x\)). The interaction electromagnetic field (whose electric component can be denoted as \(\mathbf{E}_{\rm int}\) and magnetic one as \(\mathbf{H}_{\rm int}\)) is the sum of the fields produced by all the particles but the reference one at \(A\). Due to relations (2), \(\mathbf{E}_{\rm int}\) and \(\mathbf{H}_{\rm int}\) are proportional to \(\mathbf{p}\) and \(\mathbf{m}\), and we can write
\[\mathbf{E}_{\rm int}=\overline{\overline{\beta}}_{ee}\cdot\mathbf{p}+ \overline{\overline{\beta}}_{em}\cdot\mathbf{m},\quad\mathbf{H}_{\rm int}= \overline{\overline{\beta}}_{mm}\cdot\mathbf{m}+\overline{\overline{\beta}}_{ me}\cdot\mathbf{p}, \tag{3}\]
where three independent dyadic parameters \(\overline{\overline{\beta}}_{ee}\), \(\overline{\overline{\beta}}_{mm}\) and \(\overline{\overline{\beta}}_{me}=\overline{\overline{\beta}}_{em}^{T}\) (the last equivalence follows from reciprocity) are called electric, magnetic, and magneto-electric interaction factors, respectively. In the general case these values are tensors, however, the array (as well as the generic MS) is practically isotropic in the horizontal plane and its electric and magnetic polarizations have no vertical components. In this case the interaction factors do not depend on the angle \(\phi\) and each of them splits into two scalar values, one corresponding to the TM-incidence and another one to the TE-incidence [61].
The individual polarizabilities of reciprocal particles \(\overline{\overline{\alpha}}_{ee}\), \(\overline{\overline{\alpha}}_{mm}\), and \(\overline{\overline{\alpha}}_{em}=\overline{\overline{\alpha}}_{me}^{T}\) do not depend on the presence or absence of other particles. They are defined through the local electric and magnetic fields acting on the particle:
\[\mathbf{p}=\overline{\overline{\alpha}}_{ee}\cdot\mathbf{E}_{\rm loc}+ \overline{\overline{\alpha}}_{em}\cdot\mathbf{H}_{\rm loc},\quad\mathbf{m}= \overline{\overline{\alpha}}_{mm}\cdot\mathbf{H}_{\rm loc}+\overline{\overline {\alpha}}_{me}\cdot\mathbf{E}_{\rm loc}. \tag{4}\]
The individual polarizabilities in the TE and TM-cases can be treated as scalar values, different for the TE and TM cases. Since only the tangential components of the local electromagnetic field are relevant for the array polarization, we may write for the local fields scalar relations \(E_{t,{\rm loc}}=E_{ti}+E_{\rm int}^{(t)}\), \(H_{t,{\rm loc}}=H_{ti}+H_{\rm int}^{(t)}\)[61, 63]. In the TE-case, the incident electric field is tangential: \(E_{ti}=E_{i}\), whereas \(H_{ti}=H_{i}\cos\theta\). Vector \(\mathbf{p}\) is parallel to \(\mathbf{E}_{i}\), and vector \(\mathbf{m}\) lies in the incidence plane. The TM-case is dual to the TE-case. After these specifications, formulas (4) can be written as scalar equations
\[p=\alpha_{ee}E_{t,{\rm loc}}+\alpha_{em}H_{t,{\rm loc}},\quad m= \alpha_{mm}H_{t,{\rm loc}}+\alpha_{me}E_{t,{\rm loc}}. \tag{5}\]
The same conclusion is valid also for so-called collective polarizabilities \(\hat{\alpha}_{ee,mm,em,me}\) (where \(\hat{\alpha}_{em}=\hat{\alpha}_{me}\)) that relate the electric and magnetic moments of the reference particle with the incident fields. Being different for the TE- and the TM-cases, they can be also defined in the scalar form [48, 60, 66, 67]:
\[p=\hat{\alpha}_{ee}E_{ti}+\hat{\alpha}_{em}H_{ti},\quad m=\hat{ \alpha}_{me}E_{ti}+\hat{\alpha}_{mm}H_{ti}. \tag{6}\]
Formulas for the reflection coefficient of an arbitrary reciprocal bianisotropic array at oblique incidence were derived in [48] (formulas (2-4)). In the present case (in-plane isotropy, uniaxial omega particles) these formulas simplify and the reflection coefficients for the TE and TM cases, respectively, take the form
\[R^{TE}=\frac{\omega}{2ja^{2}}\left[\eta\frac{\hat{\alpha}_{ee}}{\cos\theta}- \frac{\hat{\alpha}_{mm}}{\eta}\cos\theta+2\hat{\alpha}_{em}\right], \tag{7}\]
\[R^{TM}=\frac{\omega}{2ja^{2}}\left[\eta\hat{\alpha}_{ee}\cos\theta-\frac{\hat{ \alpha}_{mm}}{\eta\cos\theta}+2\hat{\alpha}_{em}\right]. \tag{8}\]
Here, \(\eta=\sqrt{\mu_{0}/\varepsilon_{0}}\) is the free-space impedance. The requirement of the angular stability demands that
\[\hat{\alpha}_{ee}^{TE,TM}(\theta)=\hat{\alpha}_{ee}^{(0)}\cos^{\pm 1}\theta, \qquad\hat{\alpha}_{mm}^{TE,TM}=\hat{\alpha}_{mm}^{(0)}\cos^{\mp 1}\theta, \quad\hat{\alpha}_{em,me}^{TE,TM}=\hat{\alpha}_{em}^{(0)}. \tag{9}\]
Here, index \((0)\) corresponds to the normal incidence. Importantly, relations (9) mean that angularly stable arrays must be spatially dispersive, since the required collective polarizabilities depend on the angle of incidence. In work [67], for any uniform lossless MS operating at the frequency of the magnetic-wall (parallel) resonance one derived the following conditions for the collective polarizabilities:
\[\eta\hat{\alpha}_{ee}^{(0)}=\hat{\alpha}_{mm}^{(0)}/\eta=\hat{\alpha}_{me}^{ (0)}\equiv\hat{\alpha}. \tag{10}\]
Eqs. (10) together with (9) ensure that reflection at any incidence angle is total. If the operational frequency is close to that of the parallel resonance but not exactly equal to it, the collective polarizabilities are complex-valued and are only approximately balanced. For the absolute value of the balanced polarizability \(\hat{\alpha}\) we obtain \(|\hat{\alpha}|\approx a^{2}/\omega\), whereas its phase \(\Phi_{\hat{\alpha}}\) can be arbitrary. Taking into account (9) and (10), formulas (7) and (8) yield
\[\Phi_{R}^{TE,TM}\approx-\frac{\pi}{2}+\Phi_{\hat{\alpha}}=\mathrm{const}( \theta). \tag{11}\]
The resonance of the balanced collective polarizability is parallel. Therefore, changing the unit cell load in the generic MS, i.e., shifting the resonance frequency of the equivalent array with respect to \(\omega\), we can vary the phase of \(\hat{\alpha}\) from \(\Phi_{\hat{\alpha}}=-\pi\) (electric-wall reflection) to \(\Phi_{\hat{\alpha}}=0\) (magnetic wall) via \(\Phi_{\hat{\alpha}}<0\) (capacitive walls) and from the magnetic-wall reflection again to the electric wall via \(\Phi_{\hat{\alpha}}>0\) (inductive walls).
The relations between the collective and individual polarizabilities were obtained in [48, 66]. All three interaction factors \(\beta_{ee,mm,me}\) enter the expressions \(\hat{\alpha}_{ee,mm,me}^{TE,TM}\) derived in [66] and
[48]. From [48] it is clear that the contributions of the real parts of the interaction factors into collective polarizabilities are strongly dependent on \(\theta\). Formulas (23) of [48] can be presented sharing \(\mathrm{Re}(\beta^{TM}_{ee,em})\) as
\[\mathrm{Re}\beta^{TM}_{ee})=\mathrm{Re}(\beta^{(0)}_{ee})\left[F_{0}(\theta)+F_{ 1}(\theta)\frac{\sin^{2}\theta}{\cos\theta}\right],\,\mathrm{Re}(\beta^{TM}_{ em})=\mathrm{Re}(\beta^{(0)}_{em})\left[F_{2}(\theta)+F_{3}(\theta)\frac{1-\cos \theta}{\cos\theta}\right], \tag{12}\]
where \(F_{0,1,2,3}\) are values of the order of unity slowly varying with \(\theta\). In the TE-case these formulas modify as follows: the function \((1-\cos\theta)/\cos\theta\) enters \(\mathrm{Re}(\beta^{TE}_{ee})\) and \(\sin^{2}\theta/\cos\theta\) enters \(\mathrm{Re}(\beta^{TE}_{em})\). We also have \(\mathrm{Re}(\beta^{TM,TE}_{mm})=\mathrm{Re}(\beta^{TM,TE}_{ee})/\eta^{2}\). If the contributions of \(\mathrm{Re}(\beta^{TE,TM})\) into collective polarizabilities are important, \(\hat{\alpha}^{TE,TM}_{me}\) is essentially angle-dependent even in the interval \(|\theta|<\pi/3\). Two other collective polaizabilities also cannot depend on \(\theta\) in the needed way. If we cannot neglect \(\mathrm{Re}(\beta^{TE,TM}_{ee,me,mm})\) in collective polarizabilities, it is impossible to satisfy (9) even approximately. The only way to achieve the angular stability at least until the angle \(\theta_{\mathrm{max}}=\pi/3\) is to engineer the individual polarizabilities of the omega-particles so that their contribution into the collective polarizabilities would sufficiently dominate over the contribution of the real parts of the interaction factors. It is evident: if we achieve it for the normal incidence, we achieve it for any angle \(\theta<\theta_{\mathrm{max}}\). Thus, we see that the angular stability in the subwavelength array requires the same condition of negligible near-field interactions that is required for applicability of the RL approximation. This observation concludes our main proof. It only remains to show that the smallness of the near-field interactions is achievable for our MS, is compatible with the required tunability of the reflection phase and allows the application of the RL for a PNUMS.
Systems of equations (5, 6) of [48] relating the collective and individual polarizabilities are recursive, whereas explicit formulas (42-44) of [66] are involved. Therefore, here we only present the result of their analysis - conditions on which the contributions of the real parts of interaction factors into collective polarizabilities can be neglected:
\[\mathrm{Re}[\beta^{(0)}_{ee}]\ll\left|\frac{\alpha^{(0)}_{mm}+\alpha^{(0)}_{ em}\eta}{\alpha^{(0)}_{ee}\alpha^{(0)}_{mm}+v^{2}}\right|,\,\mathrm{Re}[\beta^{(0)}_{ em}]\ll\frac{|\alpha^{(0)}_{em}|}{|\alpha^{(0)}_{em}|^{2}+v^{2}},\,\mathrm{Re}[\beta^{(0)}_{ mm}]\ll\left|\frac{\alpha^{(0)}_{ee}+\alpha^{(0)}_{em}/\eta}{\alpha^{(0)}_{ee} \alpha^{(0)}_{mm}+v^{2}}\right|, \tag{13}\]
where it is denoted \(v=a^{2}/2\omega\). Deriving conditions (13) from formulas (42-44) of [66] we used the relations [64, 65]
\[\mathrm{Im}\left[\frac{1}{\alpha^{(0)}_{ee}}-\beta^{(0)}_{ee}\right]=\frac{ \eta}{v},\qquad\mathrm{Im}\left[\frac{1}{\alpha^{(0)}_{em}}-\beta^{(0)}_{em} \right]=\frac{1}{v},\quad\mathrm{Im}\left[\frac{1}{\alpha^{(0)}_{mm}}-\beta^{( 0)}_{mm}\right]=\frac{1}{\eta v}. \tag{14}\]
Formulas (14) allow us to keep only the real parts of the interaction factors, since their imaginary parts cancel out. Notice, that these imaginary parts describe the reflective properties of the array
[50]. Therefore, neglecting the contributions of \(\mathrm{Im}(\beta)\) into the reflection coefficient is possible only because these imaginary parts cancel out with the imaginary parts of the inverse polarizabilities. This is so because the left-hand sides of formulas (14) directly enter the expressions for the reflection coefficient [48, 66] (see also in [60, 67] where the normal incidence case was considered).
From formulas (13) we explicitly see that the angular stability demands the relative smallness only of the near-field interactions, described by the real parts of the interaction factors [50]. The far-field interactions expressed by their imaginary parts are never negligible. If conditions (13) are respected, for a lossless array we obtain
\[\hat{\alpha}_{em}^{(0)}\approx\mathrm{Re}[\alpha_{em}^{(0)}],\quad\hat{\alpha }_{ee,mm}^{(0)}\approx\frac{\mathrm{Re}[\alpha_{ee,mm}^{(0)}]}{1-jk\mathrm{Re }[\alpha_{ee,mm}^{(0)}]/2\varepsilon_{0}a^{2}}. \tag{15}\]
These identities express the approximation that we called _excitation locality_. The unit cell of our MS is excited not like an isolated unit cell in free space, it feels the infinite array but it feels it only through the plane wave created by it. The far-field coupling is expressed by the imaginary term in the denominator of \(\hat{\alpha}_{ee,mm}^{(0)}\). The near-field interactions for the normal incidence are weak and are absent in (15) (if (13) hold). According to (12) it implies their weakness also for the oblique incidence, i.e. the excitation locality keeps valid in the same range of angles in which the angular stability holds.
Relations (15) hold for the normal incidence. For oblique incidence, the far-field coupling term in the second relation contains factor \(\cos^{\pm 1}\theta\). Considering the case of TE incidence and electric polarizability, we have
\[\hat{\alpha}_{ee}(\theta)\approx\frac{\mathrm{Re}[\alpha_{ee}(\theta)]}{1-jk \mathrm{Re}[\alpha_{ee}(\theta)]/(2\varepsilon_{0}a^{2}\cos\theta)}. \tag{16}\]
We see that if one can engineer the individual (single-inclusion) polarizability \(\alpha_{ee}(\theta)\) to behave as \(\alpha_{ee}(\theta)=\alpha_{ee}^{(0)}\cos\theta\), the dependence of the collective polarizability on the incidence angle becomes \(\hat{\alpha}_{ee}^{(0)}\cos\theta\), as required for the angular stability of the array. The same conclusion holds for the magnetic polarizability if one engineers \(\alpha_{mm}(\theta)=\alpha_{mm}^{(0)}/\cos\theta\) and for the TM polarization. Thus, we see that the spatial dispersion of the array resulting in the reflection locality can be realized by engineering the spatial dispersion of only one single array element. Now, let us discuss if this possible for omega particles.
Actually, these angular dependencies are impossible for an omega particle made of a solid wire that sketched in Fig. 1(b). However, this angular dependence can be approximately achieved for
the polarizabilities of an omega particle made of two planar and not identical metal elements with a small gap between them [49]. The angular dependence \(\alpha_{mm}^{TM}\sim\cos\theta\) for this particle can be qualitatively explained very simply. The response of any reciprocal particle to the local magnetic field is in fact the response to the spatial variation of the external electric field [62]. For the TM-case the magnetic moment induced in the effective omega particle is maximal when the wave incidence is normal, because in this case the electric field of the wave is polarized horizontally and both metal elements of the are maximally excited. Grazing incidence corresponds to a vertical electric field, when the currents in the elements are not induced. The explanation of other angular dependencies is more difficult and is related with the phase relations between the currents in two planar elements that depend on the incidence angle in a different way for the TE- and TM-cases. These phase shifts are essentially not equal to those of the incident wave because the particle described in [49] experiences electric and magnetic resonances whose bands overlap. The angular dependencies \(\hat{\alpha}_{ee}^{TE,TM}(\theta)=\hat{\alpha}_{ee}^{(0)}\cos^{\pm 1}\theta\) and \(\hat{\alpha}_{mm}^{TE,TM}(\theta)=\hat{\alpha}_{ee}^{(0)}\cos^{\mp 1}\theta\) can be approximately engineered using omega-particles described in [49] for the angles smaller than \(\theta_{\max}\approx\pi/3\). As to the magneto-electric response of such omega-particles, it does not depend on \(\theta\) in this range of angles [49].
To finalize the study of a uniformly periodic MS let us show that the generic MS, sketched in Fig. 1(a), is really equivalent to an array of uniaxial omega particles operating in the band of their resonance. For it we will express the collective polarizabilities of the array via the parameters of the generic MS - the sheet impedance of the grid \(Z_{g}=jX_{g}\) (it should be reactive to avoid absorption in the grid) and the substrate thickness \(h\). In the general case, the substrate is also characterized by permittivity \(\varepsilon\) that can be a tensor if the substrate is anisotropic. However, for our purposes it is not a relevant parameter, and for simplicity of the proof we replace the substrate material by free space.
To express the current \(J_{m}\) flowing on the metal plane and the homogenized current \(J_{g}\) flowing on the grid through the amplitude \(E_{0}\) of the external electric field we may write two boundary conditions for the tangential component \(E_{t}\) of the total electric field. One condition holds on the metal plane \(z=0\) at which \(E_{t}=0\), another one - in the grid plane \(z=h\) where \(E_{t}=jX_{g}J_{g}\). To find \(\hat{\alpha}_{ee}\) and \(\hat{\alpha}_{me}\) we excite the MS by a standing wave and locate the center of the MS (plane \(z=h/2\)) at the node of the magnetic field. To find \(\hat{\alpha}_{mm}\) (and to check that \(\hat{\alpha}_{em}=\hat{\alpha}_{me}\)) we set the node of the electric field at \(z=h/2\). this way we realize excitations by electric and
magnetic external fields. For these two excitations we have, respectively,
\[E_{0}\cos\frac{kh}{2}-\frac{\eta J_{g}}{2}-\frac{\eta J_{m}e^{-jkh}}{ 2}=jX_{g}J_{g},\quad E_{0}\cos\frac{kh}{2}-\frac{\eta J_{g}e^{-jkh}}{2}-\frac{ \eta J_{m}}{2}=0, \tag{17}\] \[jH_{0}\sin\frac{kh}{2}-\frac{J_{g}}{2}-\frac{J_{m}e^{-jkh}}{2}=j \frac{X_{g}}{\eta}J_{g},\quad-jH_{0}\sin\frac{kh}{2}-\frac{J_{g}e^{-jkh}}{2}- \frac{J_{m}}{2}=0. \tag{18}\]
In accordance to [67], the collective polarizabilities of the equivalent uniaxial omega-array for the normal incidence in the case of the electric excitation are as follows:
\[\hat{\alpha}_{ee}^{(0)}=\frac{a^{2}(J_{g}+J_{m})}{j\omega E_{0}},\quad\hat{ \alpha}_{me}^{(0)}=\frac{a^{2}\mu_{0}h(J_{g}-J_{m})}{2E_{0}}, \tag{19}\]
and in the case of the magnetic excitation we have:
\[\hat{\alpha}_{mm}^{(0)}=\frac{a^{2}\eta\mu_{0}h(J_{g}-J_{m})}{2H_{0}},\quad \hat{\alpha}_{em}^{(0)}=\frac{a^{2}(J_{g}+J_{m})}{j\omega H_{0}}. \tag{20}\]
The condition \(\eta\hat{\alpha}_{ee}^{(0)}=\hat{\alpha}_{me}^{(0)}\) after substitutions of (19) into (17) and some algebra results in the system of two real-valued equations \(\cos kh+kh/2=1\) and \(X_{g}=-\eta\sin kh/(1+kh/2)\). The solution of this system is presented in [67] for the case \(kh\ll 1\). However, this solution is not relevant for us, because leaves no freedom for the reflection phase control. Instead of the exact balance of the collective polarizabilities \(\eta\hat{\alpha}_{ee}^{(0)}=\hat{\alpha}_{me}^{(0)}\) we will search for a condition of their approximate balance \(|\eta\hat{\alpha}_{ee}^{(0)}-\hat{\alpha}_{me}^{(0)}/\hat{\alpha}_{me}^{(0)}|\ll 1\), which is mathematically equivalent to the requirement
\[\left|\frac{J_{g}+J_{m}}{kh(J_{g}-J_{m})}\right|^{2}\ll\mathrm{Re}\left(\frac {J_{g}+J_{m}}{J_{g}-J_{m}}\right). \tag{21}\]
In the vicinity of the parallel resonance where \(X_{g}=-\eta\tan kh\), we have \(|J_{g}+J_{m}|\ll kh|J_{g}-J_{m}|\). If \(kh<1\), the approximate balance condition (21) is satisfied when \(|X_{g}+\eta kh|\ll\eta kh\), i.e., for a small detuning from the parallel resonance frequency. We see that the approximate balance of two polarizabilities \(\eta\hat{\alpha}_{ee}^{(0)}\approx\hat{\alpha}_{me}^{(0)}\) is compatible with arbitrary variations of the reflection phase offered by small variations of \(X_{g}\).
From Eqs. (18) and (20) we can deduce \(\hat{\alpha}_{mm}^{(0)}\) and see that \(\eta\hat{\alpha}_{me}^{(0)}\approx\hat{\alpha}_{mm}^{(0)}\) when \(|X_{g}+\eta kh|\ll\eta kh\). This means that all three polarizabilities are approximately balanced in the vicinity of the parallel resonance of our MS. The absolute value of the balanced polarizability is approximately equal to \(a^{2}/\omega\). It is possible to show that the conditions of the excitation locality (13) can be satisfied in the whole resonance band with the properly engineered frequency dispersion of \(X_{g}\). For it one may use the electric and magnetic interaction factors deduced in [50]:
\[\mathrm{Re}(\beta_{ee,mm}^{(0)})=\frac{\eta^{\pm 1}}{v}\left(\frac{\cos k \rho}{k\rho}-\sin k\rho\right)\approx\frac{\eta^{\pm 1}c}{4a^{2}\rho}, \tag{22}\]
where \(\rho\approx a/1.438\).
To sum up: in this proof we have postulated the angular stability of a uniform MS and have seen that it can be achieved only in case of the excitation locality expressed by relations (15), that is, when interactions of the unit cells of the array via reactive fields are negligible. The proof is valid for uniform arrays. More exactly, when the array is uniform at the wavelength scale, so that the physical optics approximation is valid. We can conclude that angular stability of reflection phase is achievable when two conditions are satisfied: 1) near-field interactions between the array elements are negligible and 2) the array is uniform at the wavelength scale. We have also seen that the required nonlocal (spatially dispersive) polarizabilities can be engineered by a proper choice of the top grid.
In the previous subsection, we proved that the absence of angular stability disables the applicability of RL and also physical optics for a MS with anomalous reflection. In this subsection, we have proven that the angular stability demands the validity of RL for the generic MS at arbitrary incidence. Additionally, we proved that the concept of the angular stability and excitation locality are compatible with complete tunability of the reflection phase. For a generic MS the reflection phase is controlled by tuning of all unit cells from the parallel resonance. For a PNUMS a spatially periodical detuning of the unit cells should vary the phase \(\Phi_{R}(x)\) along the \(x\)-axis in accordance to the physical optics (generalized reflection law) which should hold for such PNUMS.
Thus, it seems that the angular stability of the generic MS and the applicability of the PO approximation to the PMUMS are, in some sense, equivalent. However, it is not a strict theorem. The proof was made for a uniform array, and the above speculations about a non-uniform MS are not supported quantitatively because our model does not allow us to determine the maximal allowed deviations in the adjacent unit cells which keep the same reflection phase at the reference point. Our analysis only tells that the angular stability of the generic MS requires locality of the unit-cell response in the PNUMS, and that its absence leaves no chances for the validity of the physical optics model. For every explicit PNUMS the applicability of this approximation needs to be verified. To support our theoretical expectations, below we consider two explicit examples of PNUMSs based on two different generic MSs - those with and without angular stability. Below we will see that performance estimations based on the model of local reflection coefficient (surface impedance) is successful for a PNUMS with angular stability (that based on Jerusalem crosses). In this case, the close to \(\pi\) reflection phase difference between the two states
of the uniform MS keeps basically the same for two half-periods of the binary MS, and the model of impedance boundaries with the same difference in reflection phases is adequate. Meanwhile, for a PNUMS based on a mushroom MS it is not so, i.e., the local-response (impedance) model for the binary mushroom MS does not hold.
## III Binary Metasurfaces With and Without Angular Stability
### _Parameters of generic metasurfaces_
In paper [40] we suggested a MS with angular stability of the reflection phase. The suggested MS represents a high-impedance surface formed by a planar grid located on top of a metal-backed dielectric layer. The grid is formed by metal Jerusalem crosses mutually connected by switchable lumped capacitors. In state '0' the lumped capacitance is zero, while in state '1' it is nonzero and partially shunts the capacitance of the parallel stems of two adjacent crosses. In both states '0' and '1' the angular stability of the uniform MS was achieved for both TE and TM polarizations in the frequency range \(4.0-5.2\) GHz. Of course, the angular stability cannot be ideal up to the grazing incidence and for all frequencies in the targeted 20% operation band. The practical requirement for the closeness of \(\Delta\Phi_{R}\) to ideal \(180^{\circ}\) was defined as the maximal allowed deviation of \(\Delta\Phi_{R}\) from \(180^{\circ}\) equal to \(\pm 40^{\circ}\). The studies of [40] have shown that this
Fig. 2: (a) Geometrical structure of the example metasurface. (b) Results of numerical simulations of the reflection phase frequency dispersion in the two states of the uniform MS and the difference \(\Delta\Phi_{R}\) for the case \(\theta=\)0. (c, d) The same for \(\theta=45^{\circ}\) in the TE-case (c) and TM-case (d).
practical angular stability holds for \(\theta_{i}<\theta_{\max}\approx 60^{\circ}\) for both TE and TM-cases in the 20% frequency band.
In [40] we also compared this MS with a "mushroom" MS, which is often used as a generic MS for microwave RISs (see references in [40]). We optimized a uniform mushroom MS for the same operation frequency band and for two wave polarizations, but no angular stability was achieved for TM-waves in the needed frequency band. Either the frequency band turns out to be narrow (\(<10\)%), or the angle \(\theta_{\max}\) turns out to be small (\(<30^{\circ}\)). This result agrees with earlier works [35, 36, 37, 38, 39].
Here, in line with our analytical proof, our purpose is to show that the angular stability allows the locality of reflection for non-uniform metasurfaces. We do it for MSs of Jerusalem crosses and for a mushroom MS illuminated by TE waves. First, we redesign a generic MS for higher frequencies, namely for \(15-22\) GHz. This range is chosen due to the restrictions of our experimental facilities. Scaling all geometric parameters and keeping the same permittivity of the substrate we can redesign the MS for mm-waves. With the use of optical microlithography, manufacturing is feasible up to 100 GHz. In our experiments we do not use electronically controllable loads of the unit cells and create a binary MS using two values of the structural capacitance in the generic MS of Jerusalem crosses. In other words, two states of the uniform MS correspond to two MSs - MS A and MS B - which differ from one another by the value of the gap \(g\) between the stems of the adjacent Jerusalem crosses and the step length \(d\).
The geometric parameters of grid A are as follows: \(a=2.3\) mm, \(d=0.5\) mm, \(w=0.1\) mm, and \(g=0.3\) mm. For grid B we have the same \(a,w\) with \(g=0.1\) mm, and \(d=1.3\) mm. Fig. 2(a) shows all these notations. The substrate is chosen as Meteorwave 8300 with the relative permittivity of \(\varepsilon_{r}=3(1-j0.0025)\) and the thickness of \(h=0.5\) mm. Our study (see Appendix 1 of [40]) shows that low values of the substrate permittivity grant a wider operation band keeping the same angular stability (the same \(\theta_{\max}\)). Therefore, we introduced an air gap of thickness \(h_{1}=1.5\) mm between the dielectric layer and the ground plane.
Figure 2(b)-(d) presents the results of CST numerical simulations of the two-state reflection phase frequency dispersion (RPFD) for TE- and TM-cases for normal incidence and for \(\theta_{i}=45^{\circ}\). A small (about 30%) change of the structural capacitance grants the needed difference between MS A and MS B, i.e., allows us to satisfy the practical criterion of the angular stability (see above) in the frequency band of 20%. For this MS there is no difference in \(\Delta\Phi_{R}\) for TE- and TM-polarizations and for different incidence planes. Figure 3 shows the reflection amplitude for the two states, which is affected by the frequency dispersion and material losses. We see that the MS reflects more than 99% of the incident power. The plot corresponds to the TE-polarization. For the TM-polarization the reflectance also exceeds 99%. This is an important result because our proof of the equivalence of reflection locality and angular stability is valid only for totally reflecting MSs. In work [40] we achieved a broadband angular stability in the range of incidence angles \(\theta<\pi/3\) namely by the price of nonzero absorption (\(|R^{TE,TM}|=0.95-0.99\) for all incidence angles and operation frequencies). In the MS re-designed for higher frequencies, the absorption is even smaller, and there is no doubts that the validity of the theoretical model should not suffer of it.
For large incidence angles, it is instructive to compare the properties of the designed angle-stable MS and the mushroom MS designed in [40]. Using the same analytical model as in [40], we have optimized a generic (uniform) version for operation in the same (\(16-20\) GHz) frequency band. The unit cell of the mushroom MS has the same in-plane size of the unit cell as in the MS based on Jerusalem crosses. For this mushroom MS the angular stability is not good in the TE-case, when the deviation of \(\Delta\Phi_{R}\) from \(180^{\circ}\) exceeds \(40^{\circ}\)), but it is nearly uniform and reasonably small for all angles: \(\pi/2<\Delta\Phi_{R}<3\pi/2\). In the TM-case, the situation is
Fig. 3: The reflection amplitude in two states ’0’ and ’1’ for \(\theta=0^{\circ}\) and \(\theta=45^{\circ}\) (TE polarization).
opposite. Quite good angular stability holds for the incidence angles not exceeding \(45^{\circ}\), whereas for \(\theta_{i}\geq 60^{\circ}\) the states '0' and '1' cannot be properly engineered. For these angles \(\Delta\Phi_{R}\) does not exceed \(\pi/2\) in the whole operation band, that corresponds to the situation qualitatively illustrated by Fig. 1.
### _Finite binary metasurface: Diffraction patterns_
For realistic scenarios, we should numerically simulate our MS as a finite-size sample. The induced surface electric and magnetic currents are obviously perturbed by the edges of the metasurface, which evidently results in some coupling of evanescent harmonics and propagating ones, affecting the side-lobe pattern. With this in mind we still expect that the angular stability will allow us to obtain an agreement between the numerical results and predictions based on the locality of reflection.
The total size of the finite MSs in numerical simulations was selected as \(92\times 92\,\mathrm{mm}^{2}\), corresponding to \(40\times 40\) unit cells. Depending on the needed deflection angle one supercell \(L(D_{x}/2,D_{y}/2)\) can comprise from \(5\times 5\) to \(10\times 10\) identically loaded unit cells. One supercell (two half-periods) is identified with the state '0' (MS A) or '1' (MS B) previously engineered for a uniform and infinite MS. So, each period of our PNUMS is formed by two uniform sections of equal size. The case when the half-period includes five unit cells corresponds to \(D=1.3\lambda_{0}\), where \(\lambda_{0}\) is the free-space wavelength at \(17\) GHz.
We start our report for deviation angle (\(|\theta_{i}-\theta_{r}|=15^{\circ}\)) to show the reciprocity and polarization stability of our structure. For better visibility we assign sign minus to the angles corresponding to the half-space of specular reflection, and in this notation \(|\theta_{i}+\theta_{r}|=15^{\circ}\).
Upon illuminating by an obliquely incident plane wave with \(\theta_{i}=15^{\circ}\) (TE-case in Fig. 4(a) and TM-case in Fig. 4(b) or \(\theta_{i}=30^{\circ}\) (Fig. 4(c) and Fig. 4(d)), the main diffraction lobes are oriented along two reflection angles \(\theta_{r}=-30^{\circ}\) in the first case and \(\theta_{r}=-15^{\circ}\) in the second case. These directions exactly coincide with the predictions of the theoretical model. In this example, the MS exposes two open channels corresponding to spatial harmonics of \(m=0,\,-1\). Besides these two propagating modes, the rest of the spatial spectrum, including \(m=1\) is evanescent. As depicted in Fig. 4, the power radiated to side lobes is very small. The existence of these lobes is related not with an error of the reflection locality approximation of the infinite binary MS but with the finite size of the simulated one.
As depicted in Figs. 4(a),(c) and 4(b),(d) in the two reciprocal situations, we have \(\theta_{r,0}\)= \(-15^{\circ}\) or \(\theta_{r,0}\)= \(-30^{\circ}\) as the specular reflection angle. In both cases the specular reflection is noticeably lower than the deflection in the desired directions. For the quantitative analysis in Fig. 4, we also plot the bistatic radar cross-sections of our MS versus the scattering angle \(\theta\) in the \(xoz\) plane together with the corresponding 3D far-field scattering patterns. From Figs. 4(a)-(d) we see that the polarization stability of our MS is respected with excellent accuracy.
In the following numerical example, we conduct a comparison between a binary MS (angle-stable MS) and a binary mushroom MS (angle-dependent MS) with identical overall sizes and periods. The main goal is to compare the full-wave simulations of the actual structures with simple analytical models based on the local input impedance approximation. As a reference structure, we use a binary reflector formed by PEC and PMC strips, because in this case we have the same nearly \(\pi\) difference of the local reflection phases and these reflection phases do not depend on the incident angle. It should be noted that setting a specific value for the local reflection coefficient in full-wave simulators is extremely challenging. However, we are able
Fig. 4: Full-wave simulations of the considered binary MS diffraction patterns under illumination by (a) TE-polarized incident wave with \(\theta_{i}=15^{\circ}\), (b) TM-polarized wave with \(\theta_{i}=15^{\circ}\), (c) TE-polarized wave with \(\theta_{i}=30^{\circ}\) and (d) TM-polarized wave with \(\theta_{i}=30^{\circ}\). Each subfigure comprises the 2D pattern plotted in the \(xoz\) plane. The incident waves are shown by blue lines with arrows.
to compare them with a PEC/PMC binary MS of the same size. Indeed, the local reflection coefficient for PEC and PMC is always \(-1\) and \(+1\), respectively, with a phase difference of \(180^{\circ}\) between these two states.
Figure 5 shows scattering pattern for the TE polarization, where the incident angle is set to \(\theta_{i}=15^{\circ}\). The 2D bi-static RCS pattern depicted in Figs. 5(a),(b) demonstrates that the scattering pattern of the PEC/PMC binary MS closely resembles that of the angle-stable MS. Particularly, the maximum in the anomalous reflection towards the first diffraction order (the desired direction) coincides with the corresponding maximum for the PEC/PMC binary MS. However, the results for the mushroom structure (Fig. 5b) as an angle-dependent MS exhibit a significant deviation from the reflection locality approximation. In this scenario, the specular reflection (undesired direction) is even greater than the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, this example clearly illustrates that the scattering pattern of the angle-stable MS adheres to the approximation of reflection locality, thereby validating our analytical proof.
It is important to note that, in order to ensure a fair comparison with the PEC/PMC binary MS, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved. The maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\). Thus, the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ}\) is achieved by the maximum in the anomalous reflection towards \(\theta_{r,-1}=-30^{\circ
MS, the reflection phase difference (in the two states of the corresponding generic MS) of both the angle-stable and angle-dependent MS should be \(180^{\circ}\). This is precisely why we selected a small incident angle of \(\theta_{i}=15^{\circ}\) in the previous example. At this small incident angle, both the mushroom MS and our designed angle-stable MS exhibit the desired \(180^{\circ}\) phase difference in their generic MS for states "0" and "1". However, for larger incident angles, the mushroom MS fails to meet the aforementioned criteria due to angular dependency of its response. Consequently, a fair comparison should be conducted specifically for small incident angles. Nevertheless, the angle-stable MS still allows for a valid comparison of the scattering pattern with the PEC/PMC binary MS up to \(\theta_{i}=30^{\circ}\), where the phase difference between the "0" and "1" states of the generic MS remains close to \(180^{\circ}\). Therefore, within this range, we can confidently evaluate and compare the performance of our designed MS against the PEC/PMC binary MS. This comparison is illustrated by Fig. 6 where the result for the mushroom MS is also shown. This figure confirms the observations we have made analysing Fig. 5.
Figure 6(a) provides a comprehensive comparison between the scattering patterns of the
Fig. 6: a) A comparison of 2D scattering patterns plotted versus \(\theta\) in the \(xz\) plane for the designed angle-stable MS with the same sized PEC/PMC binary MS under illumination at \(\theta_{i}=30^{\circ}\); b) The same for a binary mushroom structure; c) 3D scattering patterns for the designed angle-stable MS, the corresponding mushroom structure, and the PEC/PMC binary MS.
designed angularly stable MS and the PEC/PMC binary MS that offers the reflection locality reference. In this case, the incident angle is set to \(\theta_{i}=30^{\circ}\). The comparison yields an exceptionally good agreement, particularly evident in the maxima of anomalous reflection towards the desired direction (\(\theta_{r,-1}=-15^{\circ}\)). These examples provide compelling evidence that verify the analytical proof we have put forth regarding the equivalence of angular stability and reflection locality. Based on the information provided in the preceding paragraph, it would be unfair to compare the angularly dependent binary mushroom structure scattering pattern with that of the PEC/PMC binary structure at an incident angle of \(\theta_{i}=30^{\circ}\). However, in order to provide a quantitative demonstration of the behavior of the mushroom structure, we present the corresponding pattern in Figure 6(b).
## IV Experiment
In this section, we present the experimental results validating the theoretically predicted scattering pattern for the binary PNUMS engineered, simulated and discussed above (formed by two alternating finite-size MSs A and B). We are going to demonstrate that the design approach
Fig. 7: a) a Schematic 3D view of the measurement setup, b) the remote-controlled rotating platform with the designed MS, and with d) the ground plane. c) The fabricated binary MS with four periods.
based on the reflection locality works for this PNUMS very well even for large incidence and deviation angles. We measured the diffraction pattern in the symmetrically located vertical plane at the Ku/K band (\(15-22\) GHz) and compared it with the predictions of the model. Each supercell (half periods formed either by MS A or MS B) contains \(5\times 5\) unit cells (\(10\) unit cells per period \(D\)). To measure the diffraction pattern for varying incidence angle we used the NRL Arc setup [71]. Two linearly polarized horn antennas covering the frequency range of \(15-22\) GHz were used as transmitters and receivers. As depicted in the 3D model view of our experimental structure (see Fig. 7(a)), the transmitting horn antenna and the MS are located on a remotely controlled rotation stage. Two stepper motors precisely adjusted the location and mutual angular orientation of the MS and antennas. TX and RX horn antennas are connected to the two ports of the vector network analyzer (VNA), measuring both phase and amplitude of the scattered field for all available observation angles \(\theta\). The distance between the MS and the transmitting antenna was equal to 50 cm. We did not use the lenses since they were bulky and not effective for the aperture of our size. Indeed, we are working in the intermediate zone, close to far-field region. In order to exclude the mutual coupling effect between the antennas and reflections from the setup and surrounding objects, a time-gating procedure [59] was performed. As a phase reference for time-gating, we used a copper plate of the same size as the MS.
Figs. 7(b)-(d) show the measurement setup and the sample: a binary MS with four periods (8 supercells in the incidence plane). Fig. 8 shows the color map of the scattered power density for varying incidence angle \(-75^{\circ}<\theta_{i}<75^{\circ}\) for three frequencies. These color maps were reconstructed from the post-processed data. The results are captured for 16, 17, and 18 GHz for both TE (subfigures (a)-(c)) and TM polarizations (subfigures (d)-(f)). The vertical sections of this color map show the diffraction pattern versus \(\theta\) for given \(\theta_{i}\). The diagonal white strip corresponds to the blind area we left to prevent the horn antennas from collision.
The green dotted lines correspond to the propagating Floquet harmonics calculated for the infinite MS. Maxima of measured intensity exactly coincide with these lines. The straight dotted line shows the specular reflection, while the curved green ones correspond to \(M=\pm 1\) and \(M=\pm 2\) in (1). We can see that the scattering patterns fit the simplistic analytical predictions very well. It is worth noting that by dividing the results into two areas (with respect to the specular direction line), the symmetries of the scattered power are visible. It stresses the channel reciprocity property. For the comparison between full-wave simulations of scattering from the manufactured sample and experimental results, we report the 2D bi-static RCS pattern in
Fig. 8: Measured normalized refracted intensity color maps for TE polarized incident wave at a) 16 GHz, b) 17 GHz and c) 18 GHz. (d)-(f) show the same results for the TM polarized incident wave. The dotted green lines show the analytical predictions. White areas show blind spots.
Fig. 9: A comparison of 2D scattering patterns plotted versus \(\theta\) in the \(xz\) plane between the experimental and numerical results under illumination of TE-polarized wave with a) \(\theta_{i}=0^{\circ}\), and b) \(\theta_{i}=62^{\circ}\)
Figs. 9(a,b) for two incident angles of \(\theta_{i}=0^{\circ}\) and \(\theta_{i}=62^{\circ}\), respectively. The experimental results validate that the model based on the reflection locality works well for the angular-stable MS. Note that discontinuities in the experimental curves correspond to the blind spots. Other experimental results also align with numerical simulations and generally validate the applicability of the reflection locality for the design of the binary PNUMS if the reflection of the generic MS does not depend on the angle in the broad angular range.
## V Conclusions
In this work, we studied the relation between the angular stability of a generic MS used for the design of a RIS and the applicability of the approximation of reflection locality to its non-uniform counterpart - non-uniform periodical MS. Majority of researchers utilize this approximation in the context of the generalized reflection law, not considering its applicability limits. This can result in serious errors which arise when the incidence or deviation angles are large. However, there are MSs to which the reflection locality approximation is applicable even for large angles. Only in these cases the unit cells of a non-uniform MS behave as the unit cells of metasurfaces designed using the locally periodical approximation.
The main message of this paper is twofold:
1. The approximation of reflection locality commonly adopted by specialists developing RISs (see e.g. in [28, 29, 30, 31, 32, 33, 34, 30]) may be not applicable for large incidence and deviation angles.
2. However, if the corresponding generic (uniform) RIS has angular stability of the reflection phase, the reflection locality approximation is applicable to non-uniform RISs, even if this RIS is strongly non-uniform, e.g., binary.
This message was supported by analytical studies, numerical examples, and an experiment. We demonstrated the predicted operation up to the incidence and deviation angles \(\theta_{\max}=75^{\circ}\) in the 20% frequency band for both polarizations of the incident wave.
To make the conclusions more convincing we numerically studied the so-called mushroom MS, that we optimized for the same frequency band and both polarizations. We observed that the diffraction pattern has significant deviations from the research locality approximation predictions. This is because for mushroom MS the reflection phase is not angularly stable.
We believe that the RIS based on Jerusalem crosses is practically useful. To achieve tunability, the loading capacitances should be replaced either by electronically biased pin-diodes or by optically biased photosensitive elements such as metal-insulator diodes [70].
## Acknowledgment
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 956256.
|
2307.09885 | Test-takers have a say: understanding the implications of the use of AI
in language tests | Language tests measure a person's ability to use a language in terms of
listening, speaking, reading, or writing. Such tests play an integral role in
academic, professional, and immigration domains, with entities such as
educational institutions, professional accreditation bodies, and governments
using them to assess candidate language proficiency. Recent advances in
Artificial Intelligence (AI) and the discipline of Natural Language Processing
have prompted language test providers to explore AI's potential applicability
within language testing, leading to transformative activity patterns
surrounding language instruction and learning. However, with concerns over AI's
trustworthiness, it is imperative to understand the implications of integrating
AI into language testing. This knowledge will enable stakeholders to make
well-informed decisions, thus safeguarding community well-being and testing
integrity. To understand the concerns and effects of AI usage in language
tests, we conducted interviews and surveys with English test-takers. To the
best of our knowledge, this is the first empirical study aimed at identifying
the implications of AI adoption in language tests from a test-taker
perspective. Our study reveals test-taker perceptions and behavioral patterns.
Specifically, we identify that AI integration may enhance perceptions of
fairness, consistency, and availability. Conversely, it might incite mistrust
regarding reliability and interactivity aspects, subsequently influencing the
behaviors and well-being of test-takers. These insights provide a better
understanding of potential societal implications and assist stakeholders in
making informed decisions concerning AI usage in language testing. | Dawen Zhang, Thong Hoang, Shidong Pan, Yongquan Hu, Zhenchang Xing, Mark Staples, Xiwei Xu, Qinghua Lu, Aaron Quigley | 2023-07-19T10:28:59Z | http://arxiv.org/abs/2307.09885v1 | # Test-takers have a say: understanding the implications of the use of AI in language tests
###### Abstract
Language tests measure a person's ability to use a language in terms of listening, speaking, reading, or writing. Such tests play an integral role in academic, professional, and immigration domains, with entities such as educational institutions, professional accreditation bodies, and governments using them to assess candidate language proficiency. Recent advances in Artificial Intelligence (AI) and the discipline of Natural Language Processing have prompted language test providers to explore AI's potential applicability within language testing, leading to transformative activity patterns surrounding language instruction and learning. However, with concerns over AI's trustworthiness, it is imperative to understand the implications of integrating AI into language testing. This knowledge will enable stakeholders to make well-informed decisions, thus safeguarding community well-being and testing integrity. To understand the concerns and effects of AI usage in language tests, we conducted interviews and surveys with English test-takers. To the best of our knowledge, this is the first empirical study aimed at identifying the implications of AI adoption in language tests from a test-taker perspective. Our study reveals test-taker perceptions and behavioral patterns. Specifically, we identify that AI integration may enhance perceptions of fairness, consistency, and availability. Conversely, it might incite mistrust regarding reliability and interactivity aspects, subsequently influencing the behaviors and well-being of test-takers. These insights provide a better understanding of potential societal implications and assist stakeholders in making informed decisions concerning AI usage in language testing.
Keywords: language test, fairness, reliability, transparency, artificial intelligence, automated scoring
Introduction
Language tests (LTs) have played an important, and at times contentious part in society, serving multiple roles across sectors, such as education [10], immigration [4], or citizenship [26]. For example, many Australian universities necessitate that non-native English speakers attain a certain score in an accredited English language test to satisfy admission criteria [10]. Similarly, permanent residency in several countries including the United Kingdom, Canada, and Australia typically hinges on the completion of an English language test [45]. In addition, the United States (U.S.) government specifies that "_applicants must demonstrate a basic understanding of English, including an ability to read, write, and speak the language_" to obtain U.S. citizenship [26].
LTs are "_the practice and study of evaluating the proficiency of an individual in using a particular language effectively_" [54]. Specifically, these tests aim to assess the language proficiency of candidates at specific levels. Educational institutions, professional accreditation bodies, and governments employ LTs to evaluate whether nominees have sufficient language skills to study abroad, engage in professional work, immigate, or pursue naturalization. Consequently, LTs could directly influence the areas of immigration, education, and employment, thereby affecting the well-being of individuals and, by extension, entire communities [45]. For example, LTs have been employed as "gatekeepers" to selectively deny entry to people at borders in the past.
Artificial Intelligence (AI) simulates human intelligence by utilizing computer systems and has become increasingly vital in contemporary society [32, 51]. AI is applied in sophisticated search engines [5, 27], recommendation systems [23, 52], natural language understanding [7, 18], and autonomous vehicles [6, 46]. In language learning, developers incorporate AI models into their software applications to enhance pronunciation and vocabulary skills [49], review spelling and grammar mistakes [21], or assist in crafting articles for clients [22]. However with LTs, the process of human scoring can be laborious and time-consuming [57, 14], and it often presents the challenge of eliminating human bias [16, 17, 42]. Therefore, an automated scoring system is a sought-after solution to reduce costs and minimize human error. AI has demonstrated its utility across various real-world applications, leading language test organizers to increasingly adopt AI algorithms in constructing their automated scoring systems. For example, Pearson Language Tests,1 which validate the proficiency of non-native English speakers, have implemented an automated scoring system to grade test-takers. The PTE test provider asserts that its automated scoring system, built upon a vast array of real responses from test-takers, is precise, consistent, and unbiased.2
Footnote 1: [https://www.pearsonpte.com/](https://www.pearsonpte.com/)
Footnote 2: [https://www.pearsonpte.com/scoring/automated-scoring](https://www.pearsonpte.com/scoring/automated-scoring)
While researchers often highlight the efficiency of AI applications, they may overlook other crucial aspects, such as AI trustworthiness and AI ethics. _AI trustworthiness_ involves assessing the safety and reliability of AI models based on pre-defined criteria, i.e., fairness, explainability, accountability, reliability,
and acceptance [29]. This assessment process is vital and can significantly influence the future adoption of AI applications. Over the past decade, AI trustworthiness has received considerable scholarly attention [29, 40, 3, 33]. For example, Ashoori and Weisz [3] evaluated various factors, i.e., decision stakes, decision authority, and model trainers, that could influence trust in AI. Recently, Li et al. [33] argued that enhancing the trustworthiness of AI models necessitates concerted efforts at multiple stages of an AI product's lifecycle, including data preparation and algorithm design. On the other hand, _AI ethics_ focuses on examining ethical challenges such as privacy issues and data bias during the design and development of AI models. Given that AI applications are now impacting diverse aspects of our lives, including healthcare [60], transportation [48], and business [1], the exploration of AI's ethical implications is a pressing need. The literature on AI ethics is substantial [28, 8, 24, 30]. For instance, Jobin et al. [28] outlined a set of principles and guidelines for AI systems' ethics. Following this work, Kazim and Koshiyama [29] proposed three research directions for AI ethics--principles, processes, and ethical consciousness--as critical focal points to address ethical issues in AI systems.
Several studies have examined the fairness of automated scoring systems, employing AI models in Language Tests (LTs) [25, 34]. However, to our knowledge, no prior work has thoroughly explored the trustworthiness, transparency, consistency, or explainability of these systems from the test-takers' perspective. Neglecting these factors could influence the employment of AI in LTs and the derived benefits for test-takers. In this paper, we undertake an empirical analysis to assess the implications of AI usage in English LTs. Specifically, we engage with test-takers of globally recognized English exams, including the TOEFL, International English Language Testing System3 (IELTS), PTE, and Duolingo English Test4 (DET). The insights garnered will assist researchers and developers in better understanding the effective application of AI models in their evaluation systems and uncover the broader implications of current AI usage for all LT stakeholders. Our investigation seeks to answer the following research questions:
Footnote 3: [https://www.ielts.org/](https://www.ielts.org/)
Footnote 4: [https://englishtest.duolingo.com/](https://englishtest.duolingo.com/)
* What concerns do test-takers have regarding various types of language tests?
* What is the impact of using AI in language tests on test-takers?
Our paper makes the following key contributions:
* We conduct an empirical study to uncover the implications of using AI in language tests, with a specific focus on the perspective of test-takers. Our interviews and online surveys offer concrete empirical evidence to underpin our findings.
* We outline six distinct categories of concerns and two types of impacts experienced by language test-takers.
* We explain the potential consequences for stakeholders involved in language tests. Furthermore, we offer guidance on incorporating AI techniques into language tests in a fair, effective, and seamless manner.
## 2 Background and Related Work
This section provides an overview of the history of English language tests, delves into the concept of automated scoring, and reviews the existing literature pertaining to the implications of AI implementation in these language tests.
### Language Tests
The first formal language test was initiated by the Michigan Language Assessment in 1941 with the objective of evaluating the English proficiency of foreign students at Michigan and other universities in the United States (U.S.) [53]. Subsequently, in 1964, the Test of English as a Foreign Language (TOEFL) was introduced by the Educational Testing Service (ETS). This test was designed to assess a wide range of English skills, including reading, listening, speaking, and writing, for students seeking admission to U.S. and Canadian universities [2]. Since its inception, TOEFL has gained global recognition and evolved into an internet-based format, known as TOEFL iBT.5
Footnote 5: [https://www.ets.org/toef1/test-takers/ibt.html](https://www.ets.org/toef1/test-takers/ibt.html)
With the increasing demand for language proficiency assessment, numerous language test organizations have emerged to cater to this need. For example, the International English Language Testing System (IELTS), introduced in 1989 by the British Council, is globally recognized as a reliable English language proficiency test [12]. Moreover, IELTS is accepted by most English-speaking academic institutions and numerous international professional organizations.6 In some English-speaking countries, such as the United Kingdom, Australia, and New Zealand, the IELTS certificate is also employed for visa applications.7 Differing from the TOEFL or IELTS, which offer both paper-based and computer-based formats, the Pearson Language Tests (PTE) are exclusively computer-based.8 In addition, the PTE employs an automated scoring system to evaluate candidate performance. In 2016, the education company Duolingo launched its own language proficiency assessment, namely the Duolingo English Test (DET) [56]. DET offers a convenient remote testing experience, allowing candidates to complete their English assessment at any time via their computers, eliminating the need for travel to physical test centers. Given the laborious and time-intensive nature of manual language test evaluations, the development of automated scoring systems has been prioritized to reduce costs and minimize human error. Several language test organizations, including ETS, Pearson, and Duolingo, have integrated AI models into their scoring systems. This integration
aims to enhance the accuracy, reliability, and efficiency of evaluating test-takers' language proficiency.
### Automated Scoring
The exploration of automated systems in language test scoring has been an area of interest for many decades, particularly in the field of Automated Essay Scoring (AES). The initial spark for AES research was ignited by Project Essay Grade, which aimed at evaluating the quality of written essays [38]. The subsequent attention from the Educational Testing Service (ETS) fueled the intensive study of language test scoring. Specifically, ETS implemented an AES system ('e-rater') for the evaluation of Graduate Record Examinations essays, offering early insights into the limitations and effectiveness of AES [41]. Rudner et al. [47] introduced a two-score-point AES system based on Bayes' theorem, while Foltz et al. [19] proposed the Intelligent Essay Assessor, which employed Latent Semantic Analysis to evaluate essay quality through language similarities. The advent of deep learning techniques resulted in the integration of deep neural networks into AES. A noteworthy example is the method proposed by Dong et al. [13], which combined an attention mechanism with a recurrent convolutional neural network, demonstrating superior performance compared to traditional AES methods. Recently, Ramesh et al. offered [43] a systematic review of AES research, summarizing available datasets, features, metrics, and techniques used in the field.
Due to the success of deep neural networks, the development of Automatic Speech Recognition techniques largely accelerated the automated scoring in Speaking tests [58]. ETS has been at the forefront of this innovation, actively creating automated scoring systems for various types of tests and incorporating diverse features, including fluency, grammar, content, and structure [15, 14, 61]. Duolingo has also developed automated scoring systems, applying an adaptive testing approach [50]. Their system generates scores based on both the performance of the test-taker and the difficulty level of the question. Duolingo also designs their speaking tests in an elicited speech format [55].
### Implications of the Use of AI in Language Tests
While numerous studies have delved into various aspects of AI applications in Language Tests (LTs), no existing research has comprehensively examined the implications such as consistency, transparency, or explainability--of AI use from the perspective of the test-takers. We highlight a few notable works that are relevant to our study. Xi et al. [59] conducted an empirical investigation into the perception of Chinese users towards the automated scoring system in the ETS TOEFL Practice Online (TPO) service. Hamid et al. [25], leveraging responses from 430 IELTS test-takers, underscored that a significant proportion of test-takers felt their language proficiency was not accurately reflected in their test scores, thereby raising concerns about the fairness of language evaluation systems. Loutina et al. [34] assessed the fairness of automated scoring systems
by considering various dimensions of test-takers, such as different groups and languages. However, these previous studies only focused on a few narrow angles, and have overlooked the broader implications of AI use in LTs, which could influence AI's adoption in LTs and limit the potential benefits for test-takers. In our paper, we aim to unravel these implications, providing valuable insights for developers to more effectively integrate AI models into their automated scoring systems.
## 3 Methodology
In this section, we present an overview of our analytical framework, designed to assess the implications of AI use in English language tests (LTs) from the test-takers' perspective. Given that the majority of questions in the Listening and Reading sections of LTs are multiple-choice and hence easily marked, **our study focuses on the Speaking and Writing sections of these tests.**
### Framework Overview
In this study, we categorize language tests (LTs) into two types: Human-based and AI-based. _Human-based_ language tests (HLTs) rely on human evaluators to score the performance of test-takers, whereas _AI-based_ language tests (ALTs) use automated systems, eliminating the need for human intervention during the evaluation process. Our research focuses on four major English LTs: TOEFL, IELTS, PTE, and DET. The basic information of these tests is listed in Table 1. By our classification, TOEFL and IELTS fall under HLTs, while PTE and DET are classified as ALTs. Note that although TOEFL incorporates AI in its scoring process, due to the presence of at least one human grader involved in every question and the primary decision-making role humans play in the scoring phase, we classify TOEFL as a Human-based language test.
Figure 1 presents the comprehensive framework of our empirical study. Primarily, we conduct interviews with language test-takers and subsequently construct an online survey to explore their perceptions regarding the use of AI in language testing. Our framework encompasses three stages, which are detailed below:
1. _Planning and Preparation._ In this initial stage, our goal is to devise a comprehensive list of questions designed to glean insights into the test-takers' backgrounds and experiences with language tests.
\begin{table}
\begin{tabular}{c c c c c c} \hline Name of LT & Marking mode & Official AI-assisted practice tool & Exam cost & Result release time & Remarking \\ \hline IELTS & Human & None & \$ 255 (USA) & 3-5 days & Yes (\$ 120) \\ TOEFL & Hybrid & TOEFL Practice Online & \$ 255 (USA) & 4-8 days & Yes (\$ 80-160) \\ PTE & AI & PTE Practice Tests & \$ 235 (USA) & 2 days & Yes (\$ 120-270) \\ DET & AI & Duolingo Practice Tests & \$ 59 (USA) & 2 days & No \\ \hline \end{tabular}
\end{table}
Table 1: Basic information of four language tests in the study.
1. _Interview._ This stage is characterized by the conduction of interviews with test-takers. The objective here is to capture detailed, qualitative feedback on their perspectives toward LTs.
2. _Online Survey._ In the final stage, we create and distribute an online survey to a broader range of test-takers. This stage aims to capture wide-scale data on test-taker experiences and sentiments toward LTs.
We present the details of each stage in the following subsections.
### Stage Zero: Planning and Preparation
In this stage, we formulate two categories of questions: demographic and open-ended. The demographic questions are intended to gather information about the test-takers' educational background and level of English proficiency. The open-ended questions are divided into two sections: general experience and AI experience. The general experience questions are designed to delve into the test-takers' previous encounters with English tests, asking which language tests they have taken and for what reasons. The AI experience questions aim to understand the test-takers' views on the use of AI in language tests, particularly concerning aspects, such as fairness, reliability, and consistency. We have secured ethics approval for this empirical study, ensuring that the test-takers retain the right to access their data and understand the process of our investigation.
### Stage One: Interview
We introduce the interview stage of our study, designed to understand insights from test-takers' perspectives, such as consistency, transparency, and explain
Figure 1: The overview framework of our empirical study for revealing the implications of the use of AI in language tests.
ability, regarding English language tests (LTs). In particular, the interview stage includes the following steps.
**Pilot Interview & Protocol Refinement.** Initially, we conducted pilot interviews with three individuals who had taken English LTs. We refined the interview protocol based on their feedback before launching the formal interviews.
**Participant Criteria.** We required our participants to have experience with at least one English language test, such as TOEFL, IELTS, PTE, or DET. In addition, participants needed to have functional English proficiency to comprehend the interview questions.
**Participant Recruitment.** We employed social media platforms, including Twitter, Facebook, LinkedIn, and WeChat, to recruit participants. Additionally, we invited students from language schools and language test centers to join our empirical study. In the end, we have 16 participants from four different English LTs: TOEFL, IELTS, PTE, and DET. Notably, 13 participants (81.25%) had experience with at least two different English LTs, such as IELTS and PTE. We identified the participants as I1 to I16. Table 2 provides details about our participants.
**Transcribing and Coding.** We began by recording the interviews. The first author transcribed the audio, while the second author reviewed the transcripts for any English errors, i.e., mishearing, misspelling, grammar mistakes, or incorrect punctuation. Both authors then performed thematic coding on the three pilot interview transcripts for qualitative analysis.
**Statement Extraction.** We manually extracted statements from the transcripts, eventually deriving 25 consolidated statements representing five different stages of English LTs: test preparation, test administration, testing, test scoring, and test results. We categorized these statements into various themes, such as fairness, consistency, and robustness. A comprehensive list of these themes is provided in Section 4.
### Stage Two: Online Survey
We conduct an online survey to gain a broader understanding of test-takers' experiences with English language tests (LTs). Based on the prior work [31], we design our online survey according to the following principles: setting clear objectives, selecting the appropriate design, formulating the survey questions, and gathering valid data. Notably, our survey responses are anonymous, ensuring that the information collected is non-identifiable. Additionally, human verification is also employed to prevent fraudulent responses, i.e., the responses that are not generated by English test-takers. The subsequent steps involved in this stage are listed as follows.
**Survey Design and Pilot Study.** We craft our online survey questions drawing from the findings obtained in the first stage interviews. This is done with the goal of achieving a broader comprehension of English test-takers' experiences.
We utilize Qualtrics,9 an online survey platform, to distribute our survey. The survey comprises three question types: multiple choice, Likert scale, and rank order. We initially engage five participants for a pilot study, and their feedback subsequently informs the refinement of our survey's design and questions.
Footnote 9: [https://www.qualtrics.com/](https://www.qualtrics.com/)
**Participant Recruitment.** In parallel with the first stage, we employ social media platforms for participant recruitment and also invite students to participate in our empirical study. In total, we collected 99 valid responses. Among these, 61 participants (61.6%) had taken the IELTS, 29 (29.3%) the TOEFL, 28 (28.3%) the PTE, and nine participants (9.1%) had experience with the DET. Our participants cover a broad demographic spectrum, providing diverse experiences. Specifically, the participants' native languages include Mandarin (67.7%), Hindi (19.2%), English (10.1%), Cantonese (7.1%), Spanish (2.0%), and German (1.0%). The motives for taking these tests span Education (89.9%), Immigration (25.3%), and Work (10.1%).
**Online Survey.** Our online survey was open from November 1, 2022, to December 15, 2022. Throughout this period, we accumulated 165 responses. However, following the best practices from previous work [11], we manually filtered out responses that were deemed invalid due to factors such as duplicate IP addresses, unusually fast response times (completed within one minute), and inconsistent answers. As a result, our final dataset comprises 99 valid responses.
**Data Analysis.** We proceed to analyze the frequency of each option selected by the participants. Our objective is to identify potential relationships between the choices made across various questions and language tests. The findings from this analysis are presented in Section 4.
\begin{table}
\begin{tabular}{c c c c c c} ID & Tests Taken & Purposes of Tests & Roles & Gender & Native Languages \\ \hline I1 & IELTS, PTE & Edu & Test-taker & Female & Mandarin, Cantonese \\ I2 & IELTS, PTE & Edu, Work, Immi & Test-taker & Male & Mandarin, Cantonese \\ I3 & TOEFL, IELTS, PTE & Edu, Immi & Test-taker & Male & Vietnamese \\ I4 & IELTS, PTE & Edu & Test-taker & Male & Mandarin \\ I5 & IELTS, PTE & Edu, Immi & Test-taker & Male & Spanish \\ I6 & IELTS, DET & Edu & Test-taker & Male & Mandarin \\ I7 & TOEFL, IELTS, PTE & Edu, Work, Immi & Test-taker, Teacher, Examiner & Female & Mandarin \\ I8 & IELTS, PTE & Edu, Work & Test-taker & Female & Mandarin \\ I9 & IELTS, PTE & Edu, Immi & Test-taker & Female & Mandarin, Cantonese \\ I10 & IELTS, PTE & Edu, Immi & Test-taker & Female & Mandarin \\ I11 & TOEFL, IELTS, PTE & Edu, Work, Immi & Test-taker, Teacher & Female & Korean \\ I12 & DET & Edu & Test-taker & Female & Mandarin \\ I13 & TOEFL & Edu & Test-taker & Female & Mandarin \\ I14 & TOEFL & Edu & Test-taker & Female & Mandarin \\ I15 & TOEFL, IELTS, DET & Edu & Test-taker & Male & Mandarin \\ I16 & IELTS, PTE & Edu, Immi & Test-taker & Female & Cantonese \\ \hline \end{tabular}
\end{table}
Table 2: Information about interview participants. Education and Immigration in Purposes of Tests are denoted as Edu and Immi, respectively.
Results
In this section, we delve into the findings of our empirical study. Initially, we shed light on English test-takers' concerns regarding AI-based and Human-based language tests (LTs). Subsequently, we scrutinize how the implementation of AI in LTs influences these test-takers.
**RQ1. What concerns do test-takers have regarding various types of language tests?**
To address this research question, we offer our insights drawn from interviews and survey results, representing various perspectives on English language tests. Our findings indicate that test-takers possess distinct perceptions about LTs, particularly based on whether AI is utilized in the scoring process. However, opinions and concerns about LTs within the same category can vary significantly. Specifically, test-takers expressed concerns about the reliability of AI-based language tests (ALTs) and the fairness of Human-based language tests (HLTs). In this study, we classify PTE and DET as ALTs due to their dependence on AI for scoring. Conversely, other language tests, including TOEFL and IELTS, are referred to as HLTs, given their reliance on human effort for score determination.
**RQ1.1. Fairness and Consistency.** In our interview stage (refer to Section 3.3), 11 out of the 16 participants who took ALTs were of the opinion that the implementation of AI in automated scoring systems enhances the _fairness_ of LTs. This perception was corroborated by our online survey stage (refer to Section 3.4), wherein 77.8% of the ALT participants characterized these tests as "unbiased." However, some interviewees did identify instances of perceived AI bias. For instance, participants I4, I5, and I10 (refer to Table 2) relayed the experiences of their acquaintances who consistently received low English test scores. The individuals referenced by I4 and I5 are high-pitched female test-takers from Asian countries, while those mentioned by I10 are male speakers with strong Spanish accents. These participants postulated that the AI models utilized in ALTs may struggle to recognize their specific pronunciations. Additionally, participants I7 and I8, who are Asian females, reported modifying their vocal pitch (deepening it) to secure higher scores on the PTE Speaking Test. I7 further speculated that the datasets used to train automated scoring systems may be imbalanced, causing a potential skew in fairness among diverse ethnic groups. This could imply preferential treatment for test-takers from certain ethnic groups over others.
In contrast, participants adopted a more critical view of the _fairness_ of HLTs during the interview phase. They voiced their apprehensions from various angles. For instance, I2 expressed concerns about the potential influence of human markers' moods on scoring, stating: _"Even though the markers are highly trained and professional, their status during the marking may still potentially cause bias."_ Concurrently, I3 suggested that certain accents might not be favored by human markers, commenting: _"From my own experience, if I keep my accent, the examiner will deduct a mark from my speaking test."_ These perceptions of HLT fairness were mirrored in our online survey. Specifically, only
31.1% and 50% of the participants deemed IELTS and TOEFL, respectively, as "unbiased."
During our interview stage, the majority of participants expressed a clear preference for the consistency provided by ALTs over HLTs. For instance, I8, who had experience with both test types, pointed out that while HLTs results varied significantly, ALTs outcomes remained stable within an acceptable range. This sentiment was echoed by I2, I6, I12, and I15, who also highlighted the consistency of AI-based LTs. Although a few participants maintained that HLTs were fair and consistent, the majority of those who had taken HLTs voiced their frustration regarding the fluctuation in their scores. For example, I5 and I14 characterized the results of HLTs as being _"subjective and random"_ and _"unpredictable,"_ respectively. Additionally, I10 and I16 shared their experiences about the inconsistent outcomes of HLTs across different English test centers. Our online survey results further corroborated the test-takers' perception that ALTs are more consistent than HLTs. Specifically, 32.15% of survey participants who took ALTs perceived these language tests as consistent, in contrast to a mere 4.25% of survey participants who took HLTs.
In summary, our findings suggest that participants perceive ALTs as fairer and more consistent in comparison to HLTs. Prior studies indicate that AI models employed in automated scoring systems are trained on a broad spectrum of responses sourced from English test-takers with varied educational backgrounds, nationalities, and accents [50, 39]. Consequently, these AI models could potentially enhance overall fairness and consistency. Nevertheless, achieving balance among diverse groups within the training dataset remains a formidable challenge, which may contribute to an unavoidable bias in ALTs [35]. Conversely, professional human markers' assessments, subjected to their individual judgments and potential influence from external factors related to test-takers [16, 17, 42], can result in inevitable bias within HLTs.
**Finding RQ1.1.** Concerning fairness, our study's participant data generally suggests a perception of AI-based Language Tests (ALTs) as being more impartial and consistent than Human-based Language Tests. However, ALTs appear to demonstrate more bias in edge cases involving factors such as high-pitched voices, heavy accents, or specific ethnic groups.
**RQ1.2. Cost, Availability, and Result Release Time.** In our interview stage, a majority of the participants expressed satisfaction with the cost, availability, and result release time of ALTs over HLTs. Considering the cost, ALTs such as PTE and DET are priced at $235 and $59, respectively, while HLTs like TOEFL and IELTS require a minimum payment of $255, varying based on location. Regarding availability, PTE stands out as it is offered daily, in contrast to the fortnightly schedule of TOEFL and IELTS. When considering result release time, ALTs usually take a maximum of two days, whereas HLTs can take up to a week. Participants also expressed their viewpoints on the availability and result
release times of ALTs. For instance, I6 remarked that _"booking an IELTS test and getting the result take very long, so I took DET to at least have a language test result on hand for university applications."_ Similarly, I7 emphasized that _"the high availability and fast result release of PTE are very attractive for those who have limited time to get a satisfactory language test result."_ Supporting these views, our online survey indicated that 66.07% of respondents were satisfied with the cost of ALTs, with only 24.36% agreeing with the fee for HLTs. When assessing availability, 83.93% favored ALTs, while 50.35% were satisfied with HLTs. In terms of result release time, a significant 96.34% of respondents were content with ALTs, compared to the lesser satisfaction level of 39.54% for HLTs.
Our findings underscore the benefits of incorporating AI in LTs. From a cost perspective, the expense associated with developing, deploying, and maintaining AI models could potentially be more economical than the cost of training and employing professional human markers for scoring LTs. Moreover, ALTs offer a high degree of availability and produce results quickly, features that differentiate them from HLTs. Given that test-takers are sensitive to these factors, the majority of our study participants showed a preference for ALTs over HLTs.
**Finding RQ1.2.** AI-based Language Tests (ALTs), with their relatively lower costs, hold a significant appeal for English test-takers. Furthermore, in aspects such as availability and speed of result release, ALTs outperform Human-based Language Tests, making them a preferable choice.
**RQ1.3. Robustness and Reliability.** Prior research [9] reveals that often adopt diverse strategies aimed at achieving higher scores. This practice can affect the reliability of tests and compromise the integrity of the evaluation systems. In the context of Language Tests (LTs), we observe a similar pattern where English test-takers employ various tactics to enhance their scores without necessarily demonstrating their genuine language proficiency.
During the interview stage, some participants disclosed employing various _strategies_ in AI-based Language Tests (ALTs) to secure higher scores. These strategies, popular among test-takers and language tutors, were mentioned by I1, I3, I4, I5, I6, I7, I8, I10, I12, and I16 as effective ways to improve their test scores. I3 noted, _"applying tricks in PTE can largely improve the scores."_ I8 sincerely remarked, _"I enrolled in a crash course simply to learn how to trick AI. If such tricks were not taught, I would directly drop the course."_ I4 and I10 expressed the effectiveness of these strategies, stating that AI is still _"far from perfect"_ and _"unable to comprehend,"_ respectively. Furthermore, our online survey showed that strategies for ALTs focused on enhancing speaking fluency (26.93%), spoken discourse (15.28%), written discourse (12.09%), and speaking & written content (15.28%).
Our research reveals that participants taking HLTs also tend to utilize specific strategies to improve their scores; however, their views on the efficacy of
these tactics are varied. During the interview stage, I4 suggested that human markers could potentially identify these strategies and consequently downgrade the test scores.10 Despite this, a majority of the participants (I1, I3, I4, I6, I11, I14, and I16) taking HLTs are still inclined towards using these methods to enhance their English results. The data gathered from our online survey showed a higher propensity among ALT participants to employ strategies for better English scores. Notably, 55.36% of ALT participants admitted using various strategies, compared to just 7% of those taking HLTs.
Footnote 10: [https://www.scmp.com/comment/insight-opinion/united-states/article/2177403/how-english-testing-failing-chinese-students](https://www.scmp.com/comment/insight-opinion/united-states/article/2177403/how-english-testing-failing-chinese-students)
We summarize the strategies utilized by participants in both ALTs and HLTs as follows:
* _AI-based language tests:_
* **Templates for essay writing (I1, I2, I3, I4, I6, I7, I8, I10, I12, I16).** Test-takers have gathered essay-writing templates that consist of pre-prepared content, including specific sentences, clauses, and logical connectives. These templates are memorized and deployed in the essay writing sections to enhance written discourse and content.
* **Templates for open-response speaking (I1, I3, I4, I5, I7, I8, I11, I12).** Test-takers prepare templates, incorporating key points, transitions, and reasons for open-ended speaking queries.11 During the exam, test-takers simply substitute the placeholders in these templates with words or sentences pertinent to the questions. By leveraging this pre-constructed content, test-takers can prevent pauses associated with formulating grammatically correct clauses or substantial content. The primary objective of this strategy is to enhance spoken discourse, speaking fluency, and speaking content. Footnote 11: [https://webberz.in/blog/pte-describe-image-templates-to-achieve-high-score/](https://webberz.in/blog/pte-describe-image-templates-to-achieve-high-score/)
* **Non-stop talking (I4, I5, I7, I11, I12, I16).** In an attempt to secure higher scores, test-takers reiterate their responses during speaking tests, a tactic designed to mislead AI models of automated scoring systems. This strategy aims to elevate the speaking content.
* **No self-correction (I2, I7, I8, I10, I12).** In speaking tests, when a slip of the tongue occurs, test-takers deliberately overlook these errors, avoiding self-correction to sustain the appearance of fluency. This strategy is geared towards improving speaking fluency.
* **Spitting keywords fluently (I7, I8, I9).** Test-takers consistently articulate keywords with fluidity while disregarding grammatical accuracy and logical consistency. This strategy is intended to enhance both speaking content and fluency.
* **Single-sentence response (I7, I8).** Despite the requirement to recite an entire passage during spoken language reproduction tasks, test-takers
intentionally limit themselves to reading only a single sentence.12 This strategy aims to enhance speaking fluency. Footnote 12: [https://www.youtube.com/watch?v=bRgc5CHKKp0](https://www.youtube.com/watch?v=bRgc5CHKKp0)
* **Re-taking via disconnection (I6).** Test-takers manipulate the provision designed to address technical issues in DET tests.13 Specifically, if they perceive their performance on prior questions as inadequate, they intentionally disrupt their internet connection to initiate a test restart, thereby increasing their chances of obtaining higher scores. Footnote 13: [https://go.duolingo.com/securitywhitepaper](https://go.duolingo.com/securitywhitepaper)
* _Human-based language tests:_
* **Templates for essay writing (I2, I4, I7, I8, I9, I11, I13, I14).** This strategy is similar to ALTs.
* **Hitting word count (I6, I7, I11, I13, I14).** Test-takers endeavor to produce as much text as feasible while adhering to the maximum word count in an effort to secure higher scores. The goal of this strategy is to enhance the assessment of content in writing.
* **Templates for open-response speaking (I1, I3, I4, I5, I7, I8, I11, I12).** This strategy is similar to ALTs.
Participants sourced their strategies from various platforms, including language schools (I1, I8, I10), fellow test-takers (I5 and I13), and online resources (I6, I9, I12, I14, and I16). Additionally, I7, an English tutor, routinely took English tests to verify the effectiveness of these strategies. In summary, our findings reveal the widespread adoption of multiple strategies in both ALTs and HLTs to enhance test scores. Templates are a prevalent approach for both ALTs and HLTs. Nonetheless, ALTs have unique strategies, such as continuous speaking and avoidance of self-correction. These tactics target the vulnerabilities of AI models in automated scoring systems, specifically their robustness and reliability, thereby aiding test-takers in achieving superior AI-based language test scores.
**Finding RQ1.3.** Test-takers of both AI-based language tests (ALTs) and Human-based language tests implement various strategies to optimize their performance, particularly in the speaking and writing sections. Furthermore, test-takers of ALTs specifically endeavor to mislead the AI models used in automated scoring systems, intending to secure higher scores.
**RQ1.4. Transparency and Explainability.** Participants discussed the _transparency_ and _explainability_ of language tests (LTs), focusing on marking metrics, credibility, and the clarity of explanations. Despite the availability of published details on marking metrics from the test organizers, some participants (I4 and I11) expressed a lack of understanding of these metrics. Most
interviewees obtained information about the marking metrics from third-party sources, such as language schools (I1, I8, I12, I13, and I14), online resources (I2, I3, I6, I9, I10, I11, I12, I13, I14, and I16), friends (I4 and I5), and books (I5). The trustworthiness of the LTs was also questioned by participants, even in the face of assurances from test organizers.14\({}^{,}\)15\({}^{,}\)16\({}^{,}\)17 In the case of ALTs, some participants (I4 and I10) were unsure about the workings of automated scoring systems, despite the availability of this information in the public domain through academic papers [50] or online media.18 Test-takers of both ALTs and HLTs expressed a desire for their results to include more explanatory details to help them improve their language proficiency. However, the feedback they received was typically general, showing only broad score ranges without personalized input. This lack of detailed feedback led some participants (I3, I5, I11, and I12) to dismiss the information as "useless," while others (I5 and I6) expressed disappointment about the scanty explanation provided in their test results.
Footnote 14: [https://www.ielts.org/-/media/publications/quality-and-fairness/quality-and-fairness-2015-uk.ashx](https://www.ielts.org/-/media/publications/quality-and-fairness/quality-and-fairness-2015-uk.ashx)
Footnote 15: [https://www.ets.org/toefl/research/reliability-validity.html](https://www.ets.org/toefl/research/reliability-validity.html)
Footnote 16: [https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/efficacy-and-research/reports/PTE-Academic-Assessment-Efficacy-Report-2019.pdf](https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/efficacy-and-research/reports/PTE-Academic-Assessment-Efficacy-Report-2019.pdf)
Footnote 17: [https://blog.duolingo.com/fairness/](https://blog.duolingo.com/fairness/)
Footnote 18: [https://www.pearsonpte.com/scoring/automated-scoring](https://www.pearsonpte.com/scoring/automated-scoring)
Furthermore, some participants expressed confusion regarding the scoring methods used in AI-based language tests (ALTs). For example, PTE and DET employ unique scoring methodologies instead of a simple summation of all section scores. In PTE, each question evaluates multiple communicative skills simultaneously, with the performance on each question contributing to the final scores of different communicative skill sections. Additionally, the overall score is not a mere average of all communicative skill section scores. Participant I1 reported confusion when her overall score was lower than the average of all section scores. Similar experiences have also been documented online, with individuals sharing screenshots showing an overall score lower than any individual communicative skill section score.19 On the other hand, DET uses an adaptive testing method [36], where the final scores of the test-takers are dependent not only on the correctness of their answers but also on the difficulty level of each question. The difficulty level adapts based on the test-taker's performance on previous questions. This adaptive testing approach, as participants I6 and I12 observed, is perplexing and can induce anxiety and uncertainty during the tests due to its fluid nature.
Footnote 19: [https://www.xiaohongshu.com/explore/6380e146000000018013afb](https://www.xiaohongshu.com/explore/6380e146000000018013afb)
In summary, our findings reveal a prevalent lack of transparency and explainability in both ALTs and HLTs. Despite the availability of detailed information on the websites or guidelines of language test organizers, test-takers often find it challenging to understand these materials. Furthermore, there is a clear expectation among test-takers for well-explained test results. They believe such comprehensive feedback is essential for their understanding and improvement in language proficiency.
**Finding RQ1.4.** Test-takers anticipate both transparency and explainability in AI-based and Human-based language tests. However, the current level of information provided often falls short of these expectations.
**RQ1.5. Contestability and Accountability.** In both AI-based and Human-based language tests, official processes exist for test-takers to challenge their results. However, the experiences of our participants varied considerably when submitting an appeal. Some participants (I3, I6, I9, and I14) expressed reluctance to appeal due to the potential costs, the likelihood of unchanged outcomes, or even the fear of point reduction. I3 mentioned, _"the score was remarkably lower than I expected, but according to the rules of IELTS, even if the score were fixed, it would be increased by 0.5 at most, which was still not ideal to me, so I just didn't do it."_ Both I5 and I6 reported that their appeals did not lead to any changes in their scores. Overall, these findings suggest that test-takers harbor reservations about the fairness and contestability of language tests. The potential financial burden and fear of point reduction discourage them from contesting the results, thereby reducing the likelihood of test organizers being held accountable.
**Finding RQ1.5.** Due to factors such as substantial costs, perceived low probability of success, and potential score reductions, test-takers are often disinclined to challenge their results or hold language test organizers accountable.
**RQ1.6. Other Concerns.** During our interview stage, participants expressed several additional concerns, particularly related to the usage of the collected information. For instance, interviewee I1 highlighted that PTE used to include a "first-time test taker" label on the result report, this label even appears on the official sample score report.20 The participants voiced confusion about the purpose of this label and whether it led to differential treatment of test-takers. Further concerns were raised about the potential for language test providers to leverage algorithms to induce a "near-miss" effect [44], thereby encouraging test-takers to retake the tests more frequently.
Footnote 20: [https://www.pearsonpte.com/ctf-images/yqwtwibiobs4/STkz5xNp0H67FtzmLbP8yX/5d9493a87e90e1cbdbe6311Tb479a564/score_report.png](https://www.pearsonpte.com/ctf-images/yqwtwibiobs4/STkz5xNp0H67FtzmLbP8yX/5d9493a87e90e1cbdbe6311Tb479a564/score_report.png)
Additional participants, I2 and I12, expressed concerns about the profit-oriented nature of test providers. They shared the prevalent belief among test-takers and tutoring organizations that achieving comparable scores on the PTE or DET is apparently easier compared to the IELTS and TOEFL. These interviewees were concerned that this might be a marketing strategy designed to sway test-takers toward these tests. Interviewee I8 recounted a similar experience, describing how a conversation with a fellow passenger on a flight led them to switch from the IELTS to the PTE. They stated: _"On the airplane, I talked with the girl next to me about the language tests we took. I felt very
frustrated about taking IELTS. She told me that if I had taken PTE, I would not feel frustrated. I later took PTE and felt she was right. It is too easy to get high scores in PTE."_ Moreover, test-takers highlighted the non-interactive aspect of the speaking component in ALTs and the TOEFL. While certain participants (I3 and I4) favored these non-interactive speaking tests due to shyness, others maintained that _"language is about communicating with other humans"_ (I11). Therefore, they felt that speaking to a computer without engaging in authentic dialogue was not representative of real-world language use (I2, I11, and I16).
**Finding RQ1.6.** Participants in both AI-based and Human-based language tests expressed additional concerns, including the profit-driven motives of test providers, the validity of test results, and the lack of interactivity during the test.
**RQ2. What is the impact of using AI in language tests on test-takers?**
As an increasing number of institutions, organizations, and governments accept language tests (LTs), test-takers have more freedom to select their preferred test. In this research question, we aim to explore how the incorporation of AI impacts test-takers and how these individuals enhance their language proficiency during their test preparation period.
**RQ2.1. Choices of Tests.** From our online survey, we observed that 24 out of 99 participants indicated having taken multiple LTs. Figure 2 illustrates the reasons and dynamics behind participants' shifting language test choices, as derived from our survey data. We found that a substantial number of participants who switched tests transitioned from IELTS to PTE (17 participants). Factors such as result release time, test difficulty, and test availability significantly influenced this change. Furthermore, our survey revealed a preference for ALTs, such as PTE, among participants intending to immigrate, with only three out of the 14 potential immigrants choosing HLTs.
ALTs can partially alleviate the financial and time constraints for test-takers, but this could potentially spark more intense competition among them. Given that spots in highly respected universities are often limited, these institutions typically operate on a merit-based, first-come, first-served basis. The advantages of ALTs could exacerbate this competitive atmosphere. As an illustration, one of our interviewees, I7, an English tutor, highlighted this predicament, noting that students were compelled to _"test early, frequently, and repeatedly."_
**Finding RQ2.1.** Test-takers are increasingly opting for AI-based language tests, prompted by factors, such as faster result release times, perceived difficulty, and broad availability.
**RQ2.2. Language Improvement.** Test-takers typically use their language test scores as indicators to improve their language proficiency. While they expect the test results to guide their language proficiency improvement, many express dissatisfaction with the quality of feedback provided in the test results. Figure 3
displays the participants' consensus on the usefulness of the language test results for their language enhancement. Participants in ALTs, such as PTE and DET, generally provide positive feedback about the benefits of the test results. In contrast, participants from HLTs, including TOEFL and IELTS, present a more neutral viewpoint.
Figure 4 presents practice methods, including an official AI practice marker, a third-party AI practice marker, a human tutor, and self-feedback, that participants utilize in preparation for LTs in our online survey stage. Note that IELTS has no official AI practice marker. The figure reveals that participants preparing for ALTs and HLTs significantly rely on the human tutor and self-feedback. In particular, when rating on a 5-point Likert scale (where 4 or 5 indicates strong agreement), 80.74% and 88.89% of participants in ALTs and HLTs believe that
Figure 3: The participants’ agreement on the usefulness of test reports for language improvement
Figure 2: Sankey diagram on the change of test choices based on the results of online survey. The left column shows the departure tests, and the right column shows the destination tests. The middle column shows their reasons of changing the test choices.
the human tutor significantly enhances their test scores, respectively. Specifically, the human tutor scored the highest (3.95 for ALTs and 4.11 for HLTs), surpassing other methods, such as the official AI practice marker, third-party AI practice marker, and self-feedback.
During our interview stage, participants (I2, I7, I8, and I13) shared that employing the human tutor or acquiring the official AI practice marker was financially burdensome, leading them to opt for the third-party AI practice marker--a preference also illustrated in Figure 4. However, the third-party AI practice marker's extensive usage could negatively affect language learning effectiveness due to its misalignment with actual language proficiency (as noted by I1 and I2). Furthermore, I7 pointed out that if English test-takers implement the strategies discussed in RQ1 to enhance their scores--which may involve unnatural usage--this could potentially detriment their actual language proficiency.
**Finding RQ2.2.** Based on findings from participants and an online survey, a human tutor and an official AI practice marker proved beneficial for enhancing test-takers' language proficiency. However, their high cost leads test-takers to opt for a third-party AI marker, a choice that may yield less benefit or even produce adverse effects.
## 5 Discussion
In this section, we discuss the primary implications derived from our empirical findings related to the use of AI in language testing.
Our empirical study reveals that AI-based Language Tests (ALTs) have gained popularity among test-takers for various reasons, including consistency, rapid result release times, cost-effectiveness, and widespread availability. Furthermore, educational establishments, professional accreditation bodies, and governments are increasingly accepting ALT results. Specifically, as of February 1, 2023, Canada joined countries such as Australia, New Zealand, and the United Kingdom in accepting PTE scores for immigration purposes,21 signal
Figure 4: The participants’ choices of practice methods
ing the growing acceptance of ALTs. However, Human-based Language Tests (HLTs) still hold the edge over ALTs in terms of robustness and reliability. Despite considerable research into the trustworthiness of AI models over the past decade, many issues remain unresolved. Consequently, automated scoring systems' AI models may inherit these problems in language testing, leading to societal implications. For instance, imbalanced training data might exacerbate biases towards individuals from specific groups, undermining societal diversity, equity, and inclusion. Furthermore, test-taking strategies designed to maximize scores could compromise the validity of language tests as a merit-based assessment tool. As AI models of ALTs are often operated on a large scale, their impact on the community may be more profound and systematic compared to HLTs.
Footnote 10: [https://github.com/hLTs/](https://github.com/hLTs/)
The trustworthiness of language tests is crucial; however, limited transparency restricts access to such information. Even though AI-based language test organizers disclose papers or audit reports about their tests, their models and training data largely remain undisclosed, possibly due to commercial interests. By releasing datasheets [20] or model cards [37] without necessarily making them public, they could help inform relevant stakeholders about their trustworthiness. Furthermore, based on participant feedback, official AI practice markers have proven beneficial for test preparation and familiarization with language tests alongside traditional human tutoring. These AI practice markers also aid test-takers in comprehending the trustworthiness of AI models in automated scoring systems. However, their high cost deters test-takers from utilizing the official AI practice markers, leading them to resort to third-party AI practice markers, which could potentially impede their language learning progress. Our finding reveals that AI-based and Human-based language test organizers have a significant distance to cover in maintaining transparency and fostering trust among test-takers.
With the recent advancements in natural language understanding, more sophisticated technologies, specifically large language models (LLMs), have been developed for various applications, including practice interviews,11 AI assistants,12 and language learning assistants.13 Given the close correlation between language testing and language models, there is considerable potential for these technologies to be integrated into ALTs. Since the reliability of language tests carries significant societal implications, LLMs are not exempt from issues such as bias and a lack of explainability. Therefore, these LLMs must undergo comprehensive evaluations before deployment in practice.
Footnote 11: [https://github.com/hLTs/](https://github.com/hLTs/)
Footnote 12: [https://github.com/hLTs/](https://github.com/hLTs/)
Footnote 13: [https://github.com/hLTs/](https://github.com/hLTs/)
Footnote 14: [https://www.speak.com/](https://www.speak.com/)
## 6 Threats to Validity
While we adhered to established practices in designing our empirical study, the clarity of our questions might not have been optimal, potentially skewing the analysis of our dataset. In addition, we had no control over the authenticity of participant responses. To mitigate this problem, we manually filtered out invalid responses; however, some might remain. To facilitate a wide range of participant experiences and backgrounds, we employed various social media platforms to recruit English test-takers for our study. Nevertheless, the test-taker composition was imbalanced across various spectrums, leading to inherent biases in our collected dataset.
Our empirical study focused on the four major English language tests, including TOEFL, IELTS, PTE, and DET, as of February 2023. Therefore, our findings may not be generalizable beyond these specific language tests and timeframes. Moreover, our dataset, collected exclusively from English test-takers, may not reflect the perspectives of other stakeholders in the language testing domain.
## 7 Conclusion
Educational institutions, professional accreditation bodies, and governments frequently utilize language tests to gauge candidates' proficiency, informing merit-based selections or decisions. Recent advancements in AI technology have prompted test providers to leverage AI capabilities in developing automated scoring systems. As the trustworthiness of AI increasingly comes under scrutiny, the implications of AI usage in language tests require thorough exploration to comprehend their impacts on individuals and the larger community. To our knowledge, this research represents the first empirical study exploring these implications from the perspective of test-takers.
Our findings suggest that AI-based language tests (ALTs) are generally perceived as fairer and more consistent than Human-based language tests (HLTs), although they may exhibit more bias in certain scenarios, such as with high-pitched voices or strong accents. The allure of ALTs lies in their cost-effectiveness, superior accessibility, and speedy result release times, making them the preferred choice for many test-takers. In terms of reliability and robustness, test-takers of both ALTs and HLTs utilize a variety of strategies to attain higher scores, with AI-based language test-takers more likely to employ these strategies to mislead AI models within automated scoring systems. Despite a clear desire for transparency and explainability among test-takers, such information is often insufficiently provided. While human tutors and official AI practice markers can aid in improving test-takers' language proficiency, their high costs often lead test-takers to resort to third-party AI markers, which could potentially hinder their learning progress. In an era where AI is disrupting traditional patterns of language testing and learning, we present empirical evidence of relevant issues, raising awareness of their potential societal implications. We also advocate for
stakeholders to fully comprehend these implications prior to embracing the use of AI.
|
2305.19237 | Stabilized immersed isogeometric analysis for the
Navier-Stokes-Cahn-Hilliard equations, with applications to binary-fluid flow
through porous media | Binary-fluid flows can be modeled using the Navier-Stokes-Cahn-Hilliard
equations, which represent the boundary between the fluid constituents by a
diffuse interface. The diffuse-interface model allows for complex geometries
and topological changes of the binary-fluid interface. In this work, we propose
an immersed isogeometric analysis framework to solve the
Navier-Stokes-Cahn-Hilliard equations on domains with geometrically complex
external binary-fluid boundaries. The use of optimal-regularity B-splines
results in a computationally efficient higher-order method. The key features of
the proposed framework are a generalized Navier-slip boundary condition for the
tangential velocity components, Nitsche's method for the convective
impermeability boundary condition, and skeleton- and ghost-penalties to
guarantee stability. A binary-fluid Taylor-Couette flow is considered for
benchmarking. Porous medium simulations demonstrate the ability of the immersed
isogeometric analysis framework to model complex binary-fluid flow phenomena
such as break-up and coalescence in complex geometries. | Stein K. F. Stoter, Tom B. van Sluijs, Tristan H. B. Demont, E. Harald van Brummelen, Clemens V. Verhoosel | 2023-05-30T17:27:22Z | http://arxiv.org/abs/2305.19237v1 | Stabilized immersed isogeometric analysis for the Navier-Stokes-Cahn-Hilliard equations, with applications to binary-fluid flow through porous media
###### Abstract
Binary-fluid flows can be modeled using the Navier-Stokes-Cahn-Hilliard equations, which represent the boundary between the fluid constituents by a diffuse interface. The diffuse-interface model allows for complex geometries and topological changes of the binary-fluid interface. In this work, we propose an immersed isogeometric analysis framework to solve the Navier-Stokes-Cahn-Hilliard equations on domains with geometrically complex external binary-fluid boundaries. The use of optimal-regularity B-splines results in a computationally efficient higher-order method. The key features of the proposed framework are a generalized Navier-slip boundary condition for the tangential velocity components, Nitsche's method for the convective impermeability boundary condition, and skeleton- and ghost-penalties to guarantee stability. A binary-fluid Taylor-Couette flow is considered for benchmarking. Porous medium simulations demonstrate the ability of the immersed isogeometric analysis framework to model complex binary-fluid flow phenomena such as break-up and coalescence in complex geometries.
keywords: Navier-Stokes-Cahn-Hilliard, Immersed method, Isogeometric analysis, Binary-fluid flow, Diffuse interface, Porous media
## 1 Introduction
Binary-fluid flows, in which the two fluid components are separated by a molecular transition layer, are omnipresent in science and engineering. This article primarily focuses on imbibition processes in porous media - which occur in such diverse applications as inkjet printing, groundwater flows and reservoir engineering - but the same flow physics is relevant in numerous other contexts, including free-surface flows (_e.g._, waves, jets) and bubbly flows (_e.g._, reactor cooling, steam generation). The complex physical behavior of such flows makes the use of computational methods for their study indispensable [1].
The complexity of the domain and the loading conditions makes high-fidelity simulations of binary-fluid flows often extremely challenging, even when considering state-of-the-art computational techniques. The challenges related to such simulations are of both physical and numerical nature, _viz._: _(i)_ The interface layer separating the two fluids is subject to break-up and coalescence, changing not only the shape, but also the topology of the domain of each constituent; _(ii)_ The dynamics of the contact lines, _i.e._, the intersection of the interface layers with the boundary of the solid domain, is of essential importance and must be emulated accurately by the computational model; _(iii)_ The geometry of the binary-fluid flow domain can be very complex (_e.g._, scan-based micro-structures), rendering automatic high-quality mesh generation for the construction of (higher-order) stable approximation spaces nearly impossible.
To capture topological changes of the interface layer due to, _e.g._, break-up and coalescence, we employ a diffuse (or smeared) interface model. Diffuse-interface models describe the fluid components by a continuous phase field, which varies gradually over a thin-but-finite transition zone representing the interface layer. In contrast to sharp (or discrete) interface models, diffuse models have the intrinsic ability to accommodate topological changes without the need for explicit interface reconstruction [2]. Diffuse-interface models for two immiscible incompressible fluid species are generally described by the Navier-Stokes-Cahn-Hilliard equations [3; 4; 5; 6]. In this work we build upon the model by Abels, Garcke and Grun [6], on account of its thermodynamic consistency, incompressibility, and consistent reduction to the underlying single-fluid Navier-Stokes equations in pure species.
To properly model the dynamics of contact lines, we make use of the generalized Navier boundary condition [7; 8] for the velocity components tangent to the contact surface. This boundary condition circumvents the traction singularities [9] which are associated with the no-slip condition in sharp interface models [10]. The Navier-Stokes-Cahn-Hilliard model implicitly introduces a slip length [11] related to the diffuse interface thickness parameter (via the mobility), thereby in principle avoiding stress singularities. This intrinsic slip length is hard to control, however, and results in increasing stress concentrations and, ultimately, unbounded shear forces as the diffuse-interface thickness vanishes. A Navier-Stokes-Cahn-Hilliard model augmented with the generalized Navier boundary condition conjecturally permits a physically relevant sharp interface limit, although the nature and behavior of this limit are still open research questions [12; 13; 14].
To construct a suitable approximation space on domains that are complex both in terms of geometry and topology, we base our work on immersed isogeometric analysis. This analysis technique combines the favorable approximation properties of splines inherent to isogeometric analysis [15], with the topological flexibility of immersed methods (_e.g._, the Finite Cell Method [16; 17; 18] and CutFEM [19; 20; 21]). The combination of isogeometric analysis with immersed methods has been explored for a wide range of applications [22; 23; 24; 25; 26; 27; 28; 29]. Compared to boundary-fitted isogeometric analysis (and finite element techniques in general), the immersed analysis approach requires dedicated integration techniques for cut cells [30; 31], special treatment of essential boundary conditions [21], which is particularly non-trivial in the context of non-linear and multi-field equations [32; 33], and stabilization techniques to circumvent problems associated with small volume-fraction cut cells [34; 35]. In this work
we adopt the integration technique developed in Refs. [27; 36], making it possible to numerically evaluate integrals over both the immersed domain and its boundaries. In the context of multi-phase problems, immersed techniques have been considered previously in the restricted scope of the Cahn-Hilliard equations separately [37]. The Navier-Stokes-Cahn-Hilliard system considered in this work requires dedicated treatment of boundary-condition imposition and stabilization. For this, we draw inspiration from the skeleton-stabilized immersogeometric analysis framework for the (Navier-)Stokes equations developed in Refs. [38; 39].
In this contribution we propose a novel immersed isogeometric analysis formulation for the simulation of binary-fluid flows governed by the Navier-Stokes-Cahn-Hilliard equations. To impose boundary conditions on the immersed boundary, we derive a Nitsche formulation to control the normal component of the velocity. For the tangential components we propose the generalized Navier boundary condition, to regularize the traction singularity associated with interface pinning in the sharp-interface limit. We propose to discretize the formulation using equal-order optimal-regularity splines for all field variables. To stabilize this formulation, the skeleton- and ghost-stabilization techniques are extended to the problem considered here, focusing on minimizing the required number of penalty parameters. To focus on the novelties in terms of stabilized formulation and immersed isogeometric finite element discretization, our presentation is restricted to two-dimensional scenarios. The presented novelties are, to a large extent, not specific to this setting, but the adaptivity and high-performance computing aspects that are necessary to enable three-dimensional simulations are beyond the scope of this work.
We dedicate this contribution to Thomas J.R. Hughes for his lifetime achievements in computational mechanics. This contribution is strongly influenced by the seminal work conducted by Tom on all key aspects, _viz_. isogeometric analysis, multi-phase flows and phase-field modeling, and stabilization techniques. We also draw inspiration from the elegance with which Tom develops and combines sophisticated computational techniques to solve complex, multifaceted, problems, acknowledging the importance of the usability of advanced methods in engineering workflows.
The remainder of this article is structured as follows. In Section 2 we establish the diffuse-interface model equations appropriate for describing moving contact lines in binary-fluid flows. In Section 3 we introduce the necessary technology components to perform immersed finite element simulations, and in Section 4 we develop an immersed isogeometric analysis formulation for the diffuse-interface model. We demonstrate our immersed isogeometric analysis framework in Section 5 for a number of numerical experiments, including a binary-fluid Taylor-Couette flow benchmark case, and the binary-fluid flow through porous media. Conclusions are finally presented in Section 6.
## 2 The diffuse-interface model
We examine porous media imbibition of an isothermal binary-fluid system consisting of two immiscible incompressible Newtonian fluids, which we label as 1 and 2. The two prevalent approaches for modeling binary-fluid systems are by a sharp-interface representation and by a diffuse-interface representation, both illustrated in Fig. 1. On account of
he prominent occurrence of interface coalescence and break-up in porous media imbibition processes, we consider a Navier-Stokes-Cahn-Hilliard diffuse-interface model.
### Navier-Stokes-Cahn-Hilliard equations
In diffuse-interface models, the interface between the two immiscible fluids is characterized by a layer of finite thickness consisting of a mixture of both fluids. This layer represents a gradual transition between fluid 1 and fluid 2. The fluid-constituents in a diffuse-interface model are typically described by an order, or phase-field, parameter \(\varphi\in[-1,1]\), where \(\varphi=1\) signifies pure species 1 and \(\varphi=-1\) pure species 2. We utilize the Navier-Stokes-Cahn-Hilliard equations to describe the evolution of the mixture in terms of the volume-averaged velocity, \(\mathbf{u}\), the pressure, \(p\), and the newly introduced order parameter, \(\varphi\). Specifically, we adopt the model developed by Abels, Garcke, and Grun [6], on the basis of its thermodynamic consistency and its capacity to reduce to the single-fluid Navier-Stokes equations in scenarios involving pure species. The balance laws for this model are given by
\[\partial_{t}\left(\rho\mathbf{u}\right)+\nabla\cdot\left(\rho\mathbf{u} \otimes\mathbf{u}\right)+\nabla\cdot\left(\mathbf{u}\otimes\mathbf{J}\right)-\nabla\cdot \mathbf{\tau}-\nabla\cdot\mathbf{\zeta}+\nabla p =0, \tag{1a}\] \[\nabla\cdot\mathbf{u} =0,\] (1b) \[\partial_{t}\varphi+\nabla\cdot\left(\varphi\mathbf{u}\right)-\nabla \cdot\left(m\nabla\mu\right) =0. \tag{1c}\]
The closure relations for the relative mass flux \(\mathbf{J}\), the viscous stress \(\mathbf{\tau}\), the capillary stress \(\mathbf{\zeta}\) and the chemical potential \(\mu\) are given as
\[\mathbf{J} \coloneqq-\frac{\rho_{1}-\rho_{2}}{2}m\nabla\mu\,, \tag{2a}\] \[\mathbf{\tau} \coloneqq\eta\left(\nabla\mathbf{u}+\left(\nabla\mathbf{u}\right)^{T} \right)\,,\] (2b) \[\mathbf{\zeta} \coloneqq-\sigma\varepsilon\nabla\varphi\otimes\nabla\varphi+\bm {I}\left(\frac{\sigma\varepsilon}{2}|\nabla\varphi|^{2}+\frac{\sigma}{ \varepsilon}\Psi\right)\,,\] (2c) \[\mu =-\sigma\varepsilon\Delta\varphi+\frac{\sigma}{\varepsilon}\Psi^{ \prime}\,, \tag{2d}\]
Figure 1: Schematic of a porous medium binary-fluid system, modeled with either a diffuse interface representation or a sharp-interface representation.
in which \(\Psi=\Psi(\varphi)\) is the mixture energy density, for which we make use of the typical double well function
\[\Psi(\varphi)\coloneqq\frac{1}{4}\left(\varphi^{2}-1\right)^{2}\,. \tag{3}\]
The model parameters in Eqs. (1) and (2) involve the interface thickness parameter \(\varepsilon>0\) and the mobility parameter \(m>0\). The former is the diffuse interface length-scale, and the latter controls the diffusive time scale as well as the model-intrinsic wall-slip length scale [11; 40].
The material parameters involved in these model equations are the fluid-fluid surface tension \(\sigma_{12}=\frac{2\sqrt{2}}{3}\sigma\), the mixture density \(\rho\), and the mixture viscosity \(\eta\). The mixture density and viscosity generally depend on \(\varphi\). To ensure positive densities even for the non-physical scenario \(\varphi\notin[-1,1]\), we adopt the density extension [41]
\[\rho(\varphi)=\left\{\begin{array}{ll}\frac{1}{4}\rho_{2},&\varphi\leq-1-2 \lambda\,,\\ \frac{1}{4}\rho_{2}+\frac{1}{4}\rho_{2}\lambda^{-2}\left(1+2\lambda+\varphi \right)^{2},&\varphi\in\left(-1-2\lambda,-1-\lambda\right),\\ \frac{1+\varphi}{2}\rho_{1}+\frac{1-\varphi}{2}\rho_{2},&\varphi\in\left[-1- \lambda,1+\lambda\right],\\ \rho_{1}+\frac{3}{4}\rho_{2}-\frac{1}{4}\rho_{2}\lambda^{-2}\left(1+2\lambda- \varphi\right)^{2},&\varphi\in\left(1+\lambda,1+2\lambda\right),\\ \rho_{1}+\frac{3}{4}\rho_{2},&\varphi\geq 1+2\lambda\,,\end{array}\right. \tag{4}\]
with \(\lambda=\rho_{2}/\left(\rho_{1}-\rho_{2}\right)\). Such a density extension is particularly important when considering constituents of widely varying densities, such as water and air.
For the viscosity interpolation, we make use of the Arrhenius mixture-viscosity model [42]
\[\log\eta(\varphi)=\frac{\left(1+\varphi\right)\Lambda\log\eta_{1}+\left(1- \varphi\right)\log\eta_{2}}{\left(1+\varphi\right)\Lambda+\left(1-\varphi \right)}\,, \tag{5}\]
where \(\Lambda=\frac{\rho_{1}M_{2}}{\rho_{2}M_{1}}\) is the intrinsic volume ratio (with \(M_{1}\) and \(M_{2}\) the molar masses). For all future results, we use \(\Lambda=1\), for which the denominator in Eq. (5) is guaranteed to remain positive irrespective of \(\varphi\) (see _Remark 2_ in Ref. [14] for further details).
### Fluid-solid boundary condition
The behavior of a binary fluid in contact with a solid surface and, in particular, of the contact line corresponding to the intersection of the fluid-fluid interface with the solid surface, has been the subject of extensive investigation over several decades, and contemporary understanding is still incomplete. For sharp-interface models, the standard no-slip condition at fluid-solid interfaces leads to a non-integrable stress singularity at the contact line [10]. This stress singularity is removed by the introduction of slip [9]. Diffuse-interface models provide an intrinsic slip mechanism and, hence, regularization of the traction singularity [11]. However, in combination with a no-slip condition, the traction at the contact line still degenerates in the sharp-interface limit \(\varepsilon\rightarrow\,+\,0\).
To avoid the degeneration of the fluid traction at the contact line in the sharp-interface limit, in the present work we employ a generalized Navier boundary condition [8]. The generalized Navier boundary condition is an extension of the classical Navier slip condition [7],
including capillary effects. In the diffuse-interface setting, the generalized Navier boundary condition is given by
\[\mathbf{P}_{\Gamma}\big{(}\alpha_{\text{GN}}(\mathbf{u}-\mathbf{u}_{\text{wall}})+(\mathbf{ \tau}\mathbf{n}+\mathbf{\zeta}\mathbf{n})\big{)}-\nabla_{\Gamma}\sigma_{\text{\tiny SF}}( \varphi)=0\,, \tag{6}\]
with the generalized Navier model parameter \(\alpha_{\text{GN}}>0\) as the relaxation coefficient, \(\mathbf{P}_{\Gamma}(\cdot)=\mathbf{n}\times(\cdot)\times\mathbf{n}\) the tangential projection onto the solid surface, \(\nabla_{\Gamma}(\cdot)=\mathbf{P}_{\Gamma}\nabla(\cdot)\) the surface gradient, and \(\sigma_{\text{\tiny SF}}\) the solid-fluid surface tension according to
\[\sigma_{\text{\tiny SF}}(\varphi)=\frac{1}{4}(\varphi^{3}-3\varphi)(\sigma_{ \text{\tiny S2}}-\sigma_{\text{\tiny S1}})+\frac{1}{2}(\sigma_{\text{\tiny S1} }+\sigma_{\text{\tiny S2}})\,, \tag{7}\]
where \(\sigma_{\text{\tiny S1}}\geq 0\) and \(\sigma_{\text{\tiny S2}}\geq 0\) denote the solid-fluid surface tensions of fluid species 1 and 2, respectively; see also Refs. [43, 44, 11, 45]. Essentially, the generalized Navier boundary condition insists that the tangential slip velocity, \(\mathbf{P}_{\Gamma}(\mathbf{u}-\mathbf{u}_{\text{wall}})\), is proportional to the tangential component of the fluid traction including the capillary stress, according to \(\mathbf{P}_{\Gamma}(\mathbf{\tau}\mathbf{n}+\mathbf{\zeta}\mathbf{n})\), and the Marangoni traction emerging from the non-uniform fluid-solid surface tension, \(\nabla_{\Gamma}\sigma_{\text{\tiny SF}}(\varphi)\).
The description of the wetting behavior of the two fluid components at the solid surface provided by (6) is complemented by the dynamic contact angle condition
\[\alpha_{\text{DW}}(\partial_{t}\varphi+\mathbf{u}\cdot\nabla_{\Gamma}\varphi)+ \sigma\varepsilon\,\nabla\varphi\cdot\mathbf{n}+\sigma^{\prime}_{\text{\tiny SF}} (\varphi)=0\,, \tag{8}\]
with the dynamic wetting model parameter \(\alpha_{\text{DW}}\geq 0\) as a relaxation coefficient. For \(\alpha_{\text{DW}}=0\), the boundary condition (8) is referred to as the static contact-angle condition, and one can infer that (8) imposes the equilibrium contact angle \(\theta_{\text{\tiny E}}\) between the diffuse interface and the solid surface (interior to the liquid) in accordance with \(\sigma_{12}\cos(\theta_{\text{\tiny E}})+\sigma_{\text{\tiny S1}}-\sigma_{ \text{\tiny S2}}=0\). In the present work we restrict ourselves to stationary neutral wetting scenarios, _i.e._, \(\alpha_{\text{DW}}=0\) and \(\theta_{\text{\tiny E}}=\pi/2\). The latter implies that \(\sigma_{\text{\tiny S1}}=\sigma_{\text{\tiny S2}}=\sigma_{\text{\tiny SF}}\). The contact angle condition (8) then reduces to the homogeneous Neumann condition
\[\nabla\varphi\cdot\mathbf{n}=0\,. \tag{9}\]
We introduce this provision to avoid the trace terms in (8), which would severely complicate the weak formulation, requiring its own dedicated study. It is to be noted that the neutral wetting assumption also implies that the Marangoni term \(\nabla_{\Gamma}\sigma_{\text{\tiny SF}}(\varphi)\) in (6) vanishes.
In conjunction with the generalized Navier boundary condition and the contact angle condition, we consider the impermeability conditions
\[\mathbf{u}\cdot\mathbf{n} =0\,, \tag{10}\] \[-m\nabla\mu\cdot\mathbf{n} =0\,. \tag{11}\]
These conditions respectively impose that the convective and diffuse transport into the solid surface vanish.
## 3 Immersed isogeometric analysis
To evaluate the Navier-Stokes-Cahn-Hilliard model on complex domains, such as our prototypical porous medium domain shown in Fig. 2, we propose an immersed isogeometric analysis approach. In this section, we summarize the key technology components required to perform such an analysis. These are the construction of analysis-suitable spline spaces over non-boundary-fitted domains, discussed in Section 3.1, and the algorithms for evaluating volumetric and surface integrals on cut elements, discussed in Section 3.2.
### Non-boundary-fitted B-spline basis functions
The physical domain, \(\Omega\), with non-overlapping boundary segments \(\partial\Omega_{\text{in}}\cup\partial\Omega_{\text{out}}\cup\partial\Omega_{ \text{wall}}=\partial\Omega\) is immersed in an ambient domain, \(\mathcal{A}\supset\Omega\), which is typically cuboid; see Fig. 2a. The ambient domain is partitioned by a rectilinear mesh, \(\mathcal{T}^{h}_{\mathcal{A}}\), with elements \(K\), where the superscript \(h\) refers the size of the elements. Elements that do not intersect with the physical domain can be omitted, resulting in the (active) background mesh
\[\mathcal{T}^{h}:=\{K\,|\,K\in\mathcal{T}^{h}_{\mathcal{A}},K\cap\Omega\neq \emptyset\}. \tag{12}\]
The ambient domain mesh and (active) background mesh are illustrated in Fig. 2b. The mesh consisting of elements that are trimmed to the physical domain is denoted by
\[\mathcal{T}^{h}_{\Omega}:=\{K\cap\Omega\,|\,K\in\mathcal{T}^{h}\} \tag{13}\]
Figure 2: Illustration of the different domains and meshes used in the immersed setting, considering an artificially constructed porous medium as a prototypical example of a complex geometry.
and the corresponding mesh for the immersed boundary by
\[\mathcal{T}^{h}_{\partial\Omega}:=\{E\subset\partial\Omega\,|\,E=\partial K\cap \partial\Omega,\,K\in\mathcal{T}^{h}_{\Omega}\}. \tag{14}\]
The considered formulation, which we will detail in Section 4, incorporates stabilization terms formulated on the edges of the background mesh, which we refer to as the skeleton mesh
\[\mathcal{F}^{h}_{\text{skeleton}}:=\{F=\partial K\cap\partial K^{\prime}\,| \,\,K,K^{\prime}\in\mathcal{T}^{h},K\neq K^{\prime}\}. \tag{15}\]
Note that the faces, \(F\in\mathcal{F}^{h}_{\text{skeleton}}\), which are illustrated in Fig. 2c, can be partially outside of the domain, \(\Omega\), and that the boundary of the background mesh is not part of the skeleton mesh.
We also define the ghost mesh, illustrated in Fig. 2d, as the subset of the skeleton mesh composed of faces that belong to an element intersected by the domain boundary, _i.e._,
\[\mathcal{F}^{h}_{\text{ghost}}:=\{F\cap\partial K\,|\,F\in\mathcal{F}^{h}_{ \text{skeleton}},K\in\mathcal{G}\}, \tag{16}\]
where \(\mathcal{G}:=\{K\in\mathcal{T}^{h}\,\,|\,\,K\cap\partial\Omega\neq\emptyset\}\) is the collection of elements in the background mesh that are crossed by the immersed boundary.
The rectilinear mesh constructed over the ambient domain, \(\mathcal{T}^{h}_{\mathcal{A}}\), allows for the construction of multivariate B-spline basis functions as tensor products of univariate B-spline functions [46]. We herein consider optimal-regularity B-splines of order \(k\) and regularity \(k-1\), as illustrated in Fig. 3 for univariate B-splines, \(\{B_{i}\in C^{k-1}\}\), of various orders defined over an ambient domain with \(N_{\text{elem}}=5\) elements. Due to the \(C^{k-1}\)-continuity of optimal-regularity B-splines, the number of basis functions is generally substantially smaller than that of the corresponding Lagrange basis. For the illustrated univariate case, the number of optimal-regularity B-splines is equal to \(N_{\text{elem}}+k\), whereas the corresponding \(C^{0}\)-continuous Lagrange basis has \(k\cdot N_{\text{elem}}+1\) basis functions.
To restrict the approximation space to the physical domain, \(\Omega\), only the basis functions with support over the physical domain are considered. We denote this set of functions by
\[\mathcal{S}^{k}_{k-1}=\left\{N\in\{B_{i}\}:\text{supp}(N)\cap\Omega\neq \emptyset\right\} \tag{17}\]
and the corresponding \(N_{\text{dofs}}\)-dimensional approximation space by
\[\mathcal{V}^{h}=\text{span}\left(\mathcal{S}^{k}_{k-1}\right). \tag{18}\]
Figure 3: Optimal-regularity B-splines of various orders, \(k\), constructed over an ambient mesh with five elements. The physical domain is shown in blue and the part of the basis functions without support in the physical domain is printed by dotted lines.
**Remark 3.1** (Local refinements): _A limitation of tensor-product B-splines, to which the discussions in this article are restricted, is that these cannot be refined locally. In the context of immersed isogeometric analysis we have found the extension with local refinement capabilities through (truncated) hierarchical B-splines particularly convenient [14, 47]. Locally refined meshes can then be constructed by sequential bisectioning of selections of elements in the mesh, starting from a rectilinear mesh, after which a truncated hierarchical B-spline basis can be constructed over the immersed domain. This procedure is, e.g., detailed in Ref. [48] in the context of error estimation and adaptivity._
### Quadrature rules for non-boundary-fitted elements
The procedure used to construct integration quadrature on (a polygonal approximation of) the cut elements and their boundaries is illustrated in Fig. 4. This procedure builds on the octree subdivision integration strategy described in Ref. [17], which is a widely used approach due to its simplicity and robustness. In the octree procedure, elements in the background mesh that intersect the boundary of the computational domain are bisected into \(2^{d}\) integration sub-cells. If a sub-cell lies entirely within the domain, it is retained in the partitioning of the cut-element, whereas it is discarded if it lies entirely outside the domain. This bisectioning procedure is recursively applied to all the sub-cells that intersect the boundary. This recursion is terminated at a specified recursion depth by using the tessellation procedure proposed in Ref. [27] (see Refs. [31, 49] for implementation details). Through agglomeration of quadrature points on all of the sub-cells, cut-element integration rules can be constructed for both volumetric integrals (green squares in Fig. 4) and immersed boundary integrals (orange spheres in Fig. 4).
Figure 4: Volumetric (green squares) and surface (orange circles) quadrature rules obtained by the octree integration procedure with tessellation at the lowest bisectioning level.
## 4 Stabilized finite element formulation
The development of a Galerkin formulation of the model equations from Section 2, suitable for the discretization framework of Section 3, requires careful consideration on two fronts:
1. Boundary condition enforcement: strong imposition is no-longer an option at immersed boundaries, where boundary conditions must instead be enforced weakly.
2. Stabilization of the equal-order velocity-pressure pair and of basis functions with small support: when left untreated, small-cut elements lead to severely ill-conditioned (tangent) stiffness matrices.
In Sections 4.1 and 4.2 we propose a weak formulation where all boundary conditions are treated weakly, and in Section 4.3 this formulation is supplemented with edge stabilization terms to ensure inf-sup stability and small-cut insensitivity.
### Weak formulation
To arrive at a weak formulation where the majority of the boundary conditions can be enforced naturally, we choose to use a mixed formulation of the Cahn-Hilliard equations, introducing the chemical potential \(\mu\), as defined in equation (2d), as an additional unknown field. If we assume that the solution fields that satisfy the strong form of Eqs. (1) and (2d) are sufficiently smooth, then they also satisfy the following weighted residual statement, obtained by multiplying each of Eqs. (1) and (2d) by a test function, integrating over the domain and applying integration by parts wherever appropriate:
\[\int\limits_{\Omega}\Big{\{}\partial_{t}(\rho\mathbf{u})\cdot\mathbf{v}+ \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u})\cdot\mathbf{v}-\mathbf{J}\cdot\nabla\mathbf{v}\cdot \mathbf{u}+\mathbf{\tau}:\nabla^{s}\mathbf{v}+\mathbf{\zeta}:\nabla^{s}\mathbf{v}-p\nabla\cdot\mathbf{v }\Big{\}}\,\mathrm{d}V\] \[\qquad\qquad\qquad+\int\limits_{\partial\Omega}\Big{\{}\mathbf{J} \cdot\mathbf{n}\,\mathbf{u}\cdot\mathbf{v}+\big{(}-\mathbf{\tau}\mathbf{n}-\mathbf{\zeta}\mathbf{n}+p\mathbf{ n}\big{)}\cdot\mathbf{v}\Big{\}}\,\mathrm{d}S=0\,, \tag{19a}\] \[\int\limits_{\Omega}q\nabla\cdot\mathbf{u}\,\mathrm{d}V=0\,,\] (19b) \[\int\limits_{\Omega}\Big{\{}\partial_{t}\varphi\,\lambda+ \nabla\cdot(\varphi\mathbf{u})\nabla\lambda+m\nabla\mu\cdot\nabla\lambda\Big{\}}\, \mathrm{d}V-\int\limits_{\partial\Omega}m\nabla\mu\cdot\mathbf{n}\,\lambda\, \mathrm{d}S=0\,,\] (19c) \[\int\limits_{\Omega}\Big{\{}\mu\,\omega-\sigma\varepsilon\nabla \varphi\cdot\nabla\omega-\frac{\sigma}{\varepsilon}\Psi^{\prime}\omega\Big{\}} \,\mathrm{d}V+\int\limits_{\partial\Omega}\sigma\varepsilon\nabla\varphi\cdot \mathbf{n}\,\omega\,\mathrm{d}S=0\,. \tag{19d}\]
We may now substitute the generalized Navier boundary condition Eq. (6), the contact angle condition Eq. (8), and the diffusive impermeability condition Eq. (11) directly in the boundary integrals. The impermeability condition \(\mathbf{u}\cdot\mathbf{n}=0\) from Eq. (10) remains an essential condition, for which weak imposition requires specialized treatment (to be discussed in Section 4.2). In the discrete form this impermeability condition will then not be satisfied pointwise along the immersed boundaries. To avoid spurious in- or outflow of mass and kinetic energy ensuing from discretization errors, some manipulation of the weak formulation
is still in order. We negate the spurious kinetic energy flux through impermeable domain boundaries by introducing the skew-symmetric form of the non-linear advective term:
\[\begin{split}\int\limits_{\Omega}&\nabla\cdot(\rho \boldsymbol{u}\otimes\boldsymbol{u})\cdot\boldsymbol{v}\,\mathrm{d}V=\int \limits_{\Omega}\Big{\{}\frac{1}{2}\nabla\cdot(\rho\boldsymbol{u}\otimes \boldsymbol{u}\cdot\boldsymbol{v})-\frac{1}{2}\rho\boldsymbol{u}\cdot\nabla \boldsymbol{v}\cdot\boldsymbol{u}+\frac{1}{2}\nabla\rho\cdot\boldsymbol{u} \,\boldsymbol{u}\cdot\boldsymbol{v}\\ &+\frac{1}{2}\rho\nabla\cdot\boldsymbol{u}\,\boldsymbol{u}\cdot \boldsymbol{v}+\frac{1}{2}\rho\boldsymbol{u}\cdot\nabla\boldsymbol{u}\cdot \boldsymbol{v}\Big{\}}\,\mathrm{d}V=\int\limits_{\partial\Omega_{\mathrm{out} }}\boldsymbol{u}\cdot\boldsymbol{n}\,\frac{1}{2}\rho\boldsymbol{u}\cdot \boldsymbol{v}\,\mathrm{d}S\\ &+\int\limits_{\Omega}\Big{\{}-\frac{1}{2}\rho\boldsymbol{u} \cdot\nabla\boldsymbol{v}\cdot\boldsymbol{u}+\frac{1}{2}\nabla\rho\cdot \boldsymbol{u}\,\boldsymbol{u}\cdot\boldsymbol{v}+\frac{1}{2}\rho\boldsymbol{ u}\cdot\nabla\boldsymbol{u}\cdot\boldsymbol{v}\Big{\}}\,\mathrm{d}V\,.\end{split} \tag{20}\]
In the last line, the divergence free condition \(\nabla\cdot\boldsymbol{u}=0\) is substituted, as well as \(\boldsymbol{v}=\boldsymbol{0}\) on (conforming) inflow boundaries and \(\boldsymbol{u}\cdot\boldsymbol{n}=0\) on (immersed) domain boundaries representing impermeable walls.
Similarly, spurious mass flux through immersed boundaries can be negated by manipulation of the convective term in the order-parameter transport equation
\[\int\limits_{\Omega}\nabla\cdot(\varphi\boldsymbol{u})\lambda\,\mathrm{d}V=- \int\limits_{\Omega}\varphi\boldsymbol{u}\cdot\nabla\lambda\,\mathrm{d}V+\int \limits_{\partial\Omega_{\mathrm{in}}\cup\partial\Omega_{\mathrm{out}}} \boldsymbol{u}\cdot\boldsymbol{n}\,\varphi\lambda\,\mathrm{d}S\,, \tag{21}\]
where \(\boldsymbol{u}\cdot\boldsymbol{n}\) is also substituted on impermeable domain boundaries.
Combining Eqs. (19) to (21) and substituting all appropriate boundary conditions then leads to:
For a.e. \[t\in(0,T)\] (22a) \[\begin{split} r_{q}(U,q)&=\int\limits_{\Omega}q \nabla\cdot\boldsymbol{u}\,\mathrm{d}V=0\\ r_{\omega}(U,\omega)&=\int\limits_{\Omega}\Big{\{} \partial_{t}\varphi\,\lambda-\varphi\boldsymbol{u}\cdot\nabla\lambda+m\nabla \mu\cdot\nabla\lambda\Big{\}}\,\mathrm{d}V+\int\limits_{\partial\Omega_{ \mathrm{in}}\cup\partial\Omega_{\mathrm{out}}}\boldsymbol{u}\cdot\boldsymbol{n} \,\varphi\lambda\,\mathrm{d}S=0\\ r_{\lambda}(U,\lambda)&=\int\limits_{\Omega}\Big{\{} \mu\,\omega-\sigma\varepsilon\nabla\varphi\cdot\nabla\omega-\frac{\sigma}{ \varepsilon}\Psi^{\prime}\omega\Big{\}}\,\mathrm{d}V-\int\limits_{\partial \Omega_{\mathrm{wall}}}\sigma^{\prime}_{\mathrm{SF}}(\varphi)\,\omega\, \mathrm{d}S=0\end{split}\] (22b)
Let us note that in (19) we have adopted a formal functional setting, to avoid the many technical complications associated with a rigorous formulation.
In the above formulation, the tangential projection operators are removed from the generalized Navier boundary terms since the normal-flow essential condition is imposed directly on the function spaces, as is the inflow condition, per
\[\mathbf{V}_{\mathbf{g}}(\Omega)=\left\{\mathbf{v}\in\mathbf{H}^{1}(\Omega):\mathbf{v}= \mathbf{g}\text{ on }\partial\Omega_{\text{in}},\,\mathbf{v}\cdot\mathbf{n}=0\text{ on }\partial\Omega_{\text{ wall}}\right\}, \tag{23a}\] \[V_{g}(\Omega)=\left\{\omega\in H^{1}(\Omega):\omega=g\text{ on } \partial\Omega_{\text{in}}\right\}. \tag{23b}\]
**Remark 4.1**: _It should be noted that the test function \(\lambda\) for the transport equation for the phase field (22c), pairs with the trial function for the chemical potential, \(\mu\), and the test function \(\omega\) for the chemical-potential closure relation (22d), in fact pairs with the trial function for the order parameter, \(\varphi\). This apparent asymmetry in the formulation is consistent with the fact that (22c) (resp. (22d)) contains the principal part of the operator acting on \(\mu\) (resp. \(\varphi\)). This becomes important when considering the spaces which accommodate the variables for which essential conditions are imposed strongly: \(\varphi,\omega\in V_{\bullet}(\Omega)\) and \(\mu,\lambda\in H^{1}(\Omega)\). These choices of spaces then permit to select \(\lambda=1\), for which Eq. (22c) reduces to \(\partial_{t}\int_{\Omega}\varphi\,\mathrm{d}V=-\int_{\partial\Omega_{\text{ in}}\cup\partial\Omega_{\text{out}}}\mathbf{u}\cdot\mathbf{n}\,\varphi\,\mathrm{d}S\), which, even in discrete form, signifies exact conservation._
### Nitsche's method for imposition of convective impermeability
We discretize the weak formulation (22) in space using generally non-boundary-fitted optimal-regularity B-splines for all field variables, that is
\[\mathbf{u}^{h}\in[\mathcal{V}^{h}]^{d}_{\mathbf{u}_{\text{in}}},\qquad \qquad p^{h}\in\mathcal{V}^{h},\qquad\qquad\varphi^{h}\in\mathcal{V}^{h}_{ \varphi_{\text{in}}},\qquad\qquad\mu^{h}\in\mathcal{V}^{h}, \tag{24}\]
with the space \(\mathcal{V}^{h}\) as defined in equation (18). We assume that the inflow boundary does, in fact, align with the background mesh, whereby the essential inflow conditions \(\mathbf{u}=\mathbf{u}_{\text{in}}\) and \(\varphi=\varphi_{\text{in}}\)_can_ be imposed strongly:
\[[\mathcal{V}^{h}]^{d}_{\mathbf{g}} =\left\{\mathbf{v}\in[\mathcal{V}^{h}]:\mathbf{v}=\mathbf{g}\text{ on } \partial\Omega_{\text{in}}\right\}, \tag{25a}\] \[\mathcal{V}^{h}_{g} =\left\{\omega\in\mathcal{V}^{h}:\omega=g\text{ on }\partial\Omega_{\text{in}}\right\}. \tag{25b}\]
For the remaining essential condition, \(\mathbf{u}\cdot\mathbf{n}=0\) on \(\partial\Omega_{\text{wall}}\), we propose a variant of Nitsche's method.
Nitsche's method combines penalty enforcement of a condition with a consistency term and an appropriate symmetry term. For multi-field non-linear equations, care must be taken to assure that the penalty term provides a sufficient bound on the consistency and symmetry terms, relating to the inf-sup stability of the resulting form [33] as well as energy dissipation rates in the thermodynamic analysis [5; 32]. The consistency term arises naturally in the weak form when the test functions do not vanish on the immersed boundary where the constraint condition is prescribed. From Eq. (19), it may be inferred that the consistency term comprises normal components of the traction vector due to the diffusive, capillary, and pressure stresses, and the relative mass flux. The penalty and consistency terms then
become:
\[s^{\rm pen}_{\mathbf{v}}((\mathbf{u}^{h},p^{h},\mu^{h},\varphi^{h}),\mathbf{v}) :=\int\limits_{\partial\Omega_{\rm wall}}\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
### Ghost- and skeleton-penalty stabilization
Since the equal-order spaces (24) are not inf-sup stable for the velocity-pressure pair, without further stabilization, spurious pressure oscillations will occur. We therefore augment the Galerkin formulation with the skeleton-stabilization operator proposed in Ref. [39], which reads
\[s_{q}^{\text{skeleton}}(p^{h},q)=\int\limits_{\mathcal{F}_{\text{ skeleton}}^{h}}\gamma_{\text{skeleton}}\,h^{2k+1}\eta^{-1}\llbracket\partial_{n}^{k}p^{h} \rrbracket\llbracket\partial_{n}^{k}q\rrbracket\,\mathrm{d}S. \tag{30}\]
In this expression, \(\partial_{n}^{k}(\cdot)\), represents the \(k\)-th order normal derivative, and \(\llbracket\cdot\rrbracket\) is the interface-jump operator. Note that, for optimal-regularity B-splines, all derivative-orders lower than \(k\) are continuous across the skeleton interfaces; see Fig. 5. The pressure stabilization (30), which acts on the complete skeleton, \(\mathcal{F}_{\text{skeleton}}^{h}\), penalizes jumps in higher-order pressure gradients, and can be regarded as the higher-order continuous version of the interior penalty method proposed in Ref. [52]. To ensure stability and optimality, the operator (30) must scale with the size of the faces as \(h^{2k+1}\). The parameter \(\gamma_{\text{skeleton}}\) should be set large enough to suppress pressure oscillations, and small enough to limit its influence on the accuracy of the solution.
To avoid stability problems associated with small or unfavorably-cut elements, ghost-penalty stabilization is applied to all field variables, that is,
\[s_{\boldsymbol{v}}^{\text{ghost}}(\boldsymbol{u}^{h},\boldsymbol {v}) :=\int\limits_{\mathcal{F}_{\text{shoot}}^{h}}\gamma_{\text{ghost }}\,h^{2k-1}\eta\,\llbracket\partial_{n}^{k}\boldsymbol{u}^{h}\rrbracket \cdot\llbracket\partial_{n}^{k}\boldsymbol{v}\rrbracket\,\mathrm{d}S, \tag{31a}\] \[s_{\omega}^{\text{ghost}}(\varphi^{h},\omega) :=\int\limits_{\mathcal{F}_{\text{ghost}}^{h}}\gamma_{\text{ghost }}\,h^{2k-1}\sigma\epsilon\llbracket\partial_{n}^{k}\varphi^{h}\rrbracket \llbracket\partial_{n}^{k}\omega\rrbracket\,\mathrm{d}S,\] (31b) \[s_{\lambda}^{\text{ghost}}(\mu^{h},\lambda) :=\int\limits_{\mathcal{F}_{\text{ghost}}^{h}}\gamma_{\text{ghost }}\,h^{2k-1}\,m\,\llbracket\partial_{n}^{k}\mu^{h}\rrbracket\llbracket \partial_{n}^{k}\lambda\rrbracket\,\mathrm{d}S. \tag{31c}\]
Note that, since the ghost mesh is a subset of the skeleton mesh, the pressure field is already stabilized through the operator (30).
Figure 5: Derivatives of the cubic (\(k=3\)) optimal-regularity B-spline basis shown in Fig. 3. Only the highest-order derivative is discontinuous over the skeleton mesh, which in this univariate illustration corresponds to the interior element boundaries.
The ghost-penalty operators (31) control the \(k^{\text{th}}\)-order normal derivative jumps over the interfaces of the elements which are intersected by the domain boundary. Since we consider splines of degree \(k\) with \(C^{k-1}\) continuity, only the jump in the \(k^{\text{th}}\) normal derivative is non-vanishing at the ghost mesh. This leads to a single penalization term, contrasting the case of Lagrange elements, where all derivatives are non-vanishing and require penalization. The ghost-stabilization terms are scaled with the size of the faces as \(h^{2k-1}\). They are also scaled with the physical parameters of their corresponding (vector)-Laplace terms in the Galerkin problem (32), which is also the reason for the \(\varphi\)-ghost-operator to be subtracted in (32) and relates to Remark 4.1. The pairing with the Laplace operators allows for the application of the same ghost-penalty parameter, \(\gamma_{\text{ghost}}\), for all fields. Appropriate selection of this parameter assures the stability of the formulation independent of the cut-cell configurations. To avoid loss of accuracy, the ghost-penalty parameter, \(\gamma_{\text{ghost}}\), should also not be too large [53].
### Concluding immersed isogeometric finite element formulation
In summary, we propose the following stabilized immersed isogeometric finite element formulation:
\[\begin{cases}\text{For a.e. }t\in(0,T),\,\text{find}\,\,U^{h}=(\mathbf{u}^{h},p^{h}, \varphi^{h},\mu^{h})\in[\mathcal{V}^{h}]^{d}_{\mathbf{u}_{\text{in}}}\times \mathcal{V}^{h}\times\mathcal{V}^{h}_{\varphi_{\text{in}}}\times\mathcal{V}^{h }\text{ s.t.:}\\ r_{\mathbf{v}}(U^{h},\mathbf{v})+s^{\text{nitsche}}_{\mathbf{v}}(U^{h},\mathbf{v})+s^{\text{ ghost}}_{\mathbf{v}}(\mathbf{u}^{h},\mathbf{v})=0&\forall\mathbf{v}\in[\mathcal{V}^{h}]^{d}_{\mathbf{0}} \\ r_{q}(p^{h},q)-s^{\text{sym}}_{q}(\mathbf{u}^{h},q)+s^{\text{skeleton}}_{q}(p^{h},q)= 0&\forall q\in\mathcal{V}^{h}\\ r_{\lambda}(U^{h},\lambda)+s^{\text{ghost}}_{\lambda}(\mu^{h},\lambda)=0&\forall \lambda\in\mathcal{V}^{h}\\ r_{\omega}(U^{h},\omega)-s^{\text{ghost}}_{\omega}(\varphi^{h},\omega)=0&\forall \omega\in\mathcal{V}^{h}_{0}\end{cases} \tag{32}\]
In this formulation, the (generally nonlinear) operators \(r_{\mathbf{v}}\), \(r_{q}\), \(r_{\lambda}\) and \(r_{\omega}\) follow directly from the weak formulation (22), the operators \(s^{\text{nitsche}}_{\mathbf{v}}\) and \(s^{\text{sym}}_{q}\) from Eqs. (26a), (27) and (28), and the stabilization operators \(s^{\text{ghost}}_{\mathbf{v}}\), \(s^{\text{skeleton}}_{q}\), \(s^{\text{ghost}}_{\lambda}\) and \(s^{\text{ghost}}_{\omega}\) from Eqs. (30) and (31).
## 5 Numerical experiments
In this section we study the proposed stabilized immersed isogeometric analysis formulation for a series of test cases. In Section 5.1 we consider a binary-fluid Taylor-Couette flow, which we use to benchmark the framework against a conventional boundary-fitted finite element analysis. In Section 5.2 an idealized porous medium with a periodic microstructure is considered to demonstrate the modeling capabilities of the proposed framework. Finally, in Section 5.3 a porous medium application is considered to demonstrate its ability to handle complex geometries.
Unless specified otherwise, the parameters of the Navier-Stokes-Cahn-Hilliard model as listed in Table 1 are used for all test cases. These parameters - which represent a water-air flow - have a strong influence on the mesh and time resolution required to accurately evaluate the model. In particular, the interface thickness, \(\varepsilon\), has a strong impact on the element size, and the mobility, \(m\), has a strong influence on the time step size. The model parameters
are selected such that stable and accurate results can be obtained on uniform background meshes with a moderate number of elements and with a moderate number of time steps. This enables studying the proposed immersed isogeometric analysis framework at an acceptable computational expense using various mesh and time step sizes, as well as studying its robustness with respect to cut-element configurations. The adaptive solution strategy for the NSCH equations presented in Ref. [40] can be tailored to our immersed framework (a theoretical basis for this is provided in Ref. [48]) to substantially reduce the computational effort associated with the uniform discretization considered here, but this extension is beyond the scope of this work. The test cases are restricted to the two-dimensional case and are implemented in the Python-based (isogeometric) finite element framework Nutils [54]. Although there are no fundamental obstacles in extending the work to three dimensions - both in terms of the formulation and in terms of the discretization method - the computational burden associated with three-dimensional simulations would necessitate the usage of an adaptive solution strategy and the implementation in a high-performance computing framework (iterative solvers with preconditioners, scalable parallel implementation, _etc._).
The computational domains for the upcoming test cases are constructed by trimming the domain through the octree-tessellation procedure discussed in Section 3.2 based on analytical level set functions. The octree depth for all cases is set to 3, meaning that the octree recursion is terminated after 3 element bisections. Fifth-order Gaussian quadrature rules are selected on the integration sub-cells, as they are found to yield a suitable balance between minimizing computational expense and the impact of integration errors on the presented results. It is
\begin{table}
\begin{tabular}{l r r l} \hline \hline \multicolumn{4}{c}{_Fluid-flow parameters_} \\ \hline Mass densities & \(\rho_{1}\) & \(1000\) & kg/m\({}^{3}\) \\ & \(\rho_{2}\) & \(1.3\) & kg/m\({}^{3}\) \\ Dynamic viscosities & \(\eta_{1}\) & \(1\cdot 10^{-3}\) & Pa s \\ & \(\eta_{2}\) & \(1.813\cdot 10^{-5}\) & Pa s \\ Surface tension & \(\sigma_{12}\) & \(72.8\cdot 10^{-3}\) & N/m \\ Generalized Navier parameter & \(\alpha_{\text{GN}}\) & \(100\) & Pa s/m \\ \hline \hline \multicolumn{4}{c}{_Phase-field parameters_} \\ \hline Interface thickness & \(\varepsilon\) & \(0.78125\cdot 10^{-6}\) & m \\ Mobility & \(m\) & \(3.0487\cdot 10^{-10}\) & m s\({}^{2}\)/kg \\ \hline \hline \multicolumn{4}{c}{_Stabilization parameters_} \\ \hline Nitsche penalty & \(\beta\) & \(100\) & – \\ Skeleton penalty & \(\gamma_{\text{skeleton}}\) & \(0.01\) & – \\ Ghost penalty & \(\gamma_{\text{ghost}}\) & \(0.01\) & – \\ \hline \hline \end{tabular}
\end{table}
Table 1: Properties of the Navier-Stokes-Cahn-Hilliard model used for the numerical experiments.
noted, however, that optimized integration rules can be used to enhance the performance of the framework [31].
For all immersed isogeometric simulations presented below, cubic (\(k=3\)) B-splines with optimal regularity are employed for all field variables. For the selection of the various stabilization parameters, we follow the empirical rules proposed in Ref. [39]. The Nitsche parameter is set to 100 and both the skeleton-penalty parameter and the ghost-penalty parameter are set to 0.01 (see Table 1). These parameters are found to provide a good balance between model stability (large enough) and impact on the accuracy of the approximation (small enough), for all considered simulations.
### Taylor-Couette flow
We consider the binary-fluid Taylor-Couette flow between two parallel plates moving in opposite directions, as illustrated in Fig. 6, and studied in Refs. [8; 55]. The fluid constituents are separated by a vertical interface in the center of the domain. To maintain symmetry, for this benchmark the mass density and viscosity are taken the same for both constituents, _i.e._, \(\rho_{1}=\rho_{2}=\rho=1000\)kg/m\({}^{3}\) and \(\eta_{1}=\eta_{2}=\eta=1\cdot 10^{-3}\)Pa s. Moreover, the mobility is decreased to \(m=3.0487\cdot 10^{-11}\)m s\({}^{2}\)/kg. The speed of the plate is increased gradually over the first second per \(u_{\rm wall}=\frac{1}{2}\big{(}1-\cos(\pi\,t)\big{)}10\) m/s, after which it is kept constant at \(u_{\rm wall}=10\)m/s until a steady state solution is obtained. In the initial state, the vertical diffuse interface is prescribed using an analytical approximation in accordance with the selected model parameters. On the left and right (far field) boundaries, a linear velocity profile is imposed, where \(u_{\rm slip}=(1+\frac{2\eta}{\alpha_{\rm GN}H})^{-1}u_{\rm wall}\). Such a profile corresponds to the far-field (pure species) Taylor-Couette steady state solution compatible with the generalized Navier boundary condition. The phase field is set to +1 on the left boundary, and to -1 on the right boundary.
A boundary-fitted mesh can straightforwardly be constructed on the rectangular domain considered here (Fig. 7a), making this test case suitable for benchmarking against a classical finite element discretization (Section 5.1.1). To study the proposed immersed isogeometric analysis method, the rectangular domain is immersed in a rotated ambient domain, as illustrated in Fig. 7b. By varying the rotation of the ambient domain, \(\theta\), the robustness with respect to cut-element configurations can be studied (Section 5.1.2).
To provide intuition for the studies presented in the remainder of this section, in Fig. 8
Figure 6: Illustration of the domain and boundary conditions for the binary-fluid Taylor-Couette benchmark case.
we illustrate the evolution of the phase field (left column) and velocity field (right column) obtained using an ambient domain mesh with element size \(h=0.3125\mu\)m, which is rotated by \(\theta=\frac{\pi}{8}\)rad in counter clock-wise direction. The top row shows the solution in the initial state, where the diffuse interface is vertical and the velocity is zero. At \(t=0.5\)s (second row), the plates are moving, which results in the interface being dragged along with the plates, resulting in a (mildly) curved interface. The generalized Navier boundary condition is observable through the mismatch between the speed of the fluid at the plate boundaries (approximately 1.7m/s away from the interface) and that of the plates (\(u_{\text{wall}}=5\)m/s). As time progresses, and the plate velocity is increased up to the maximum speed at \(t=1.0\)s (third row), the curvature of the interface increases, until a steady-state solution is obtained (bottom row). Note that the slip-velocity-based Reynolds number \(\text{Re}=33\) is sufficiently small to allow for a steady state solution, and that convective effects are small enough as not to require the use of convection-stabilization. In this steady state, the maximum rotation of the interface relative to its (vertical) initial position is 0.23rad, and the tangential stresses due to capillary effects induce sufficient slip to equilibrate the meniscus-to-wall contact point. Away from this contact point, the classical wedge-shape Huh-Scriven velocity profiles are retrieved [10].
#### 5.1.1 Benchmarking against boundary-fitted finite elements
To benchmark the proposed immersed isogeometric formulation, we compare the discretization described in Sections 3 and 4 to a boundary-fitted simulation, for which all essential boundary conditions can be imposed strongly: since the plates are aligned with the horizontal direction, the normal component of the velocity can be constrained by removal of the basis functions corresponding to the vertical velocity component. Taylor-Hood \(C^{0}\)-continuous finite elements with cubic velocity functions and quadratic pressure functions are considered to obtain a stable velocity-pressure discretization. The phase field and chemical potential field are discretized using cubic finite elements. In this boundary-fitted finite element setting, no additional stabilization of the weak form (22) is required.
In Fig. 9, the steady-state phase field obtained using traditional boundary-fitted finite element analysis (FEA) (left column) and the proposed immersed isogeometric analysis (IGA) (right column) are compared. In the top row, an element size of \(h=0.625\mu\)m is used, re
Figure 7: Illustration of the computational meshes used for the binary-fluid Taylor-Couette benchmark.
Figure 8: Time evolution of the phase field \(\varphi\) (left) and the velocity field \(\mathbf{u}\) (right) for the Taylor-Couette test case. The presented results are obtained using immersed isogeometric analysis with a \(h=0.3125\mu\)m ambient domain mesh rotated by \(\theta=\frac{\pi}{8}\)rad.
sulting in a mesh with \(80\times 16=1280\) elements for the boundary-fitted case. The bottom row presents the results for an element size of \(h=0.3125\mu\)m, resulting in a boundary-fitted mesh with \(160\times 32=5120\) elements. For both meshes, the obtained phase-field solutions for the boundary-fitted and immersed simulations are virtually indistinguishable, demonstrating the consistency of the stabilized immersed IGA formulation. The maximum interface rotation angle for the immersed case is within a few percent of the FEA reference result. This difference can be attributed to the fact that the far-field in/out-flow conditions are imposed at a finite distance from the interface. The error associated with this inconsistent application of the far-field condition differs between the FEA case and IGA case on account of the size and rotation of the ambient domain (_i.e._, the boundaries are located in a different position). Due to the optimal-regularity B-splines used in the isogeometric analysis, the number of degrees of freedom associated with these simulations is substantially (approximately 6 times) smaller than that for the FEA case.
Fig. 10 compares the divergence of the velocity field between the traditional FEA case and the immersed IGA, considering the same meshes as for the phase-field solutions in Fig. 9. Although, like the phase field, the velocity fields (not shown here) are virtually indistinguishable between the two methods, notable differences can be observed in the divergence of the velocity field. Since the mass conservation balance (22b) is solved weakly for both the FEA case and the immersed IGA case, discretization errors result in a non-zero divergence for both analyses, in particular in the vicinity of the interface. For both FEA and immersed IGA the error in the mass conservation decreases under mesh refinement, although the errors observed using traditional FEA are substantially smaller than for the immersed IGA case. We attribute this increased error in the velocity-divergence for the immersed IGA case to the reduced number of degrees of freedom in combination with the skeleton-penalty term required for stabilization. Although this reduction does not noticeably affect the phase field and the velocity field, it does impact the local conservation properties. Furthermore, it is observed that while the errors in the divergence are essentially restricted to the interface in the boundary-fitted case, in the immersed case errors are also observed along the immersed boundaries. This is due to the penalized and conflicting nature of the divergence-free condition and the convective impermeability condition in the cut elements. In our numerical experiments we have not experienced any negative consequences due to this phenomenon, but it should be recognized as a potential source of instabilities.
#### 5.1.2 Robustness with respect to cut-element configurations
To study the robustness of the proposed formulation in terms of how elements are cut, various rotations of the ambient domain mesh are considered. Fig. 11 shows the phase field (left column) and velocity field (right column) for three different orientations of the ambient mesh. The considered mesh size is \(h=0.3125\mu\)m. The top row concerns a very small rotation angle of 0.001rad. This mesh rotation leads to very thin sliver cuts along the boundaries. The presented results are virtually indistinguishable from the reference solution discussed above. The absence of instabilities associated with the Nitsche boundary condition and ill-conditioning problems [34] conveys that the ghost-penalty stabilization is effective. The middle and bottom rows of Fig. 11 pertain to the case of a \(\frac{\pi}{8}\)rad and \(\frac{\pi}{4}\)rad rotation
Figure 10: Comparison between a boundary fitted FEA and an immersed IGA of the steady state velocity-divergence solution, \(\nabla\cdot\mathbf{u}(t=\infty)\), for a coarse and a fine mesh. Note that the color bars have different ranges.
Figure 9: Comparison between a boundary fitted FEA and an immersed IGA of the steady state phase-field solution, \(\varphi(t=\infty)\), for a coarse and a fine mesh.
of the ambient mesh, respectively. Also for these orientations, the obtained solutions are indistinguishable from the reference solution, despite the different cut-element shapes caused by the different rotations.
### Lattice of circular inclusions
To demonstrate the capability of the immersed isogeometric analysis framework to handle arbitrary geometries, we consider a water-air binary-fluid system (Table 1) flowing through a lattice of circular inclusions (Fig. 12). The simulated unit cell has a width of \(40\mu\)m and a height of \(20\mu\)m. The circular inclusions have a radius of \(10\mu\)m and an offset of \(40\mu\)m in both the horizontal and vertical direction. In the immersed isogeometric analysis framework, this problem requires a rectangular ambient domain corresponding to the bounding box of the unit cell. Construction of a boundary-fitted isogeometric analysis mesh is possible, but would require a multi-patch description.
The flow through the lattice is forced by means of a constant, horizontally oriented, body force \(F_{\mathrm{b}}=1\cdot 10^{9}\)N/m\({}^{3}\). In line with the lattice interpretation of this problem, we consider periodic boundary conditions for the problem on the left and right boundaries, meaning that the field variables attain identical values on corresponding points on the inflow and outflow boundaries. In our simulations, this periodic behavior is enforced by constructing an ambient domain mesh with periodic B-splines in the horizontal direction. Note that this construction is trivial on account of the rectangular ambient domain. On the top and bottom boundaries of the domain, symmetry boundary conditions are considered, meaning that the vertical velocity vanishes, as do the horizontal traction and the normal gradients of the phase field and chemical potential.
Figure 11: Steady-state phase-field, \(\varphi(t=\infty)\), and velocity field, \(\mathbf{u}(t=\infty)\), solutions computed with the immersed IGA framework on ambient domain meshes with a mesh size of \(h=0.3125\mu\)m and various mesh rotations \(\theta\).
We discretize the ambient domain with elements of size \(h=0.25\mu\)m. After trimming, this results in a discrete problem with 43,085 degrees of freedom. The initial time step size is set to \(\Delta t=0.5\mu\)s. With interface velocities as high as approximately 70m/s, the use of time step adaptation is required. In the presented results this is implemented by dividing the time step by two when the nonlinear problem fails to converge, and restoring to the original time step after 8 converged time steps using the smaller step size; see also Ref. [40]. In the initial state we consider the two phases to be distributed as vertically oriented lamellae, with the interfaces positioned at the shortest distances between the obstacles, as illustrated in Fig. 12. The horizontal body force drives these lamellae past the obstacles, requiring them to break up and eventually to realign with the flow direction.
Fig. 13 illustrates the transient behavior of the lamellae passing through the unit cell. Initially, the water lamella (corresponding to \(\varphi=1\), shown in dark blue) enters the domain from the left. As the flow velocity is stalled by the obstacles, the fluids move fastest near the center of the free space between the inclusions. This causes the interfaces to curve, as observed already in Fig. 13a. In this figure, the air-water interface approaches the upper-right obstacle. Due to the low velocities close to the obstacle boundary, a thin layer of water remains, eventually resulting in the trapped water droplet shown in Fig. 13c. Due to the Ostwald ripening effect, this satellite droplet is quickly re-absorbed by the inflowing water lamella, see Fig. 13e. Fig. 13g shows the phase field after the lamellae have moved through two complete cycles of the periodic unit-cell. At this time instance, the water lamella coalesces with the detaching tail of the upstream lamella. After the isolation and re-absorption of a few more droplets, which may be observed in Figs. 13g and 13i, the fully re-segregated and realigned state shown in Fig. 13k is realized after approximately 5 cycles. The steady state, illustrated in Fig. 14, is reached shortly after. In their new horizontal orientation, no further topological changes to the lamellae are induced by the obstacles.
Figure 12: Illustration of the domain and boundary conditions for the binary-fluid flow through a lattice of circular inclusions.
Figure 13: Time evolution of the phase field \(\varphi\) (left) and velocity field \(\mathbf{u}\) (right), showing the motion of the lamellae past the obstacles.
Figure 13: Time evolution of the phase field \(\varphi\) (left) and velocity field \(\mathbf{u}\) (right), showing the motion of the lamellae past the obstacles.
### Porous medium
To illustrate the capabilities of the immersed isogeometric analysis framework, we study the evolution of a water-air binary-fluid system propagating through the example porous medium depicted in Fig. 15. The width and height of the domain are \(200\mu\)m and \(50\mu\)m, respectively. On the left boundary of the domain, the velocity is prescribed by a horizontal flow with a parabolic profile which satisfies the generalized Navier condition. The maximum velocity of the inflow profile is set to \(5\)m/s. On the inflow boundary, the phase field is constrained to \(\varphi=1\), representing the (dark blue) water phase. On the top boundary, symmetry conditions are imposed, similar to the lattice problem discussed above. Outflow boundary conditions are considered on the bottom and right boundaries, meaning that the traction is set to zero. We make use of a mesh size of \(h=0.625\mu\)m, leading to 185,618 degrees-of-freedom systems of equations, and a time step size of \(\Delta t=0.05\mu\)s.
Fig. 16a shows the initial condition, where the diffuse interface is positioned at the
Figure 14: The phase field \(\varphi\) (left) and velocity field \(\mathbf{u}\) (right), showing the steady state of the flow in a porous medium with circular inclusions.
Figure 15: Illustration of the domains and inflow conditions for the binary flow through an example porous medium.
Figure 16: Time evolution of the phase field \(\varphi\) (left) and velocity field \(\mathbf{u}\) (right) for the porous medium test case.
Figure 16: Time evolution of the phase field \(\varphi\) (left) and velocity field \(\mathbf{u}\) (right) for the porous medium test case.
Figure 16: Time evolution of the phase field \(\varphi\) (left) and velocity field \(\mathbf{u}\) (right) for the porous medium test case.
inflow boundary. At \(t=9\mu\)s the water-front, \(\varphi=1\), has moved inside the domain and meets the strong curvature of the bottom boundary of the solid domain. This stagnates the motion of the interface along that boundary. At \(t=10\mu\)s, in Fig. 16e, the inflow overcomes this stagnation, resulting in high velocities and rapid change of the interface shape. In Fig. 16g, the inflow phase branches between the leftmost outflow channel and the top channel, following the motion of the fluid. At \(t=25.9\mu\)s two trapped air bubbles (corresponding to \(\varphi=-1\), shown in light blue) emerge at the top symmetry boundary. The small air bubble is quickly re-absorbed by the larger bubble, as illustrated in the progression snapshots of Figs. 16i, 16k and 16m. Since the velocity in this region is very small, this isolated region becomes stable in time. The high radius-of-curvature of the solid wall at the top effectively pins the water-front from Fig. 16i until Fig. 16q, where the second outflow is reached, creating a second entrapment of air in the porous domain.
## 6 Conclusion
We have developed an immersed isogeometric analysis (IGA) framework to simulate binary-fluid flows on complex domains, such as encountered in, _e.g._, imbibition processes in porous media. A stabilized Galerkin formulation is proposed to robustly discretize the Navier-Stokes-Cahn-Hilliard diffuse-interface model, which describes the behavior of the binary-fluid flow by means of a velocity field, a pressure field, a phase field, and a chemical potential field. In this formulation, only the normal component of the velocity field is treated as an essential boundary condition, which is weakly enforced by Nitsche's method. For the tangential components of the velocity field, the use of a generalized Navier boundary condition is proposed. This model addresses the contact-line pinning problem related to the no-slip boundary condition and can be treated as a natural boundary condition in the immersed IGA formulation. All the boundary conditions for the phase field and chemical potential field are also natural conditions and do not require special treatment.
The proposed framework uses optimal-regularity B-splines of the same order for all fields, resulting in a higher-order discretization with relatively few degrees of freedom compared to its Lagrange finite element counterpart. The construction of this basis is straightforward in the immersed setting, as the ambient mesh in which the computational domain is embedded is rectilinear. To obtain stable results when using this equal-order spline basis, the Galerkin formulation is amended with two forms of stabilization. First, to ensure inf-sup stability of the velocity-pressure pair, use is made of skeleton stabilization. This stabilization assigns a penalty to jumps in the higher-order non-vanishing normal derivative of the pressure field across the faces of the background mesh. Furthermore, to ameliorate problems associated with small or unfavorably cut elements, the remaining fields are stabilized through a ghost-penalty term. This ghost penalty takes the same form as the skeleton-stabilization penalty, but only needs to be applied on faces in the vicinity of the immersed boundary. The use of optimal-regularity B-splines reduced the number of stabilization parameters to two, _i.e._, one skeleton-stabilization parameter and one ghost-penalty parameter.
We have demonstrated the developed immersed isogeometric analysis framework using a range of test cases. The first test case focuses on benchmarking, by considering the well
understood binary-fluid Taylor-Couette flow between two parallel plates. The computational domain for this problem is rectangular and can hence straightforwardly be meshed, facilitating the comparison with a traditional mixed finite element formulation. The immersed framework is tested by rotation of the background mesh, which gives insights on the robustness of the method with respect to cut-element configurations. From the results we conclude that the immersed IGA framework reproduces the finite element benchmark results well, regardless of the encountered cut-element configurations, and despite its modest number of degrees of freedom. The results also show that the immersed framework has an increased error in the divergence of the velocity field, albeit that this error reduces under mesh refinement. For the considered cases, the divergence errors are not detrimental to the simulations.
Besides the Taylor-Couette flow, two porous medium test cases with complex immersed geometries are considered. The results for these cases show that the water-air flow behavior can be captured well by the immersed IGA framework. In particular, the motion of the diffuse interface along the solid boundary is observed to be influenced in a physically sound way by the geometry of the boundary. As the considered simulations pertain to uniform meshes, the overall number of degrees of freedom is strongly influenced by the interface thickness. This implies that the considered simulations, although two-dimensional, are computationally intensive. Extension to three-dimensional cases warrants the use of mesh-adaptivity with local refinements [31, 40] and requires implementation in a high-performance computing environment.
The results presented in this work focus on demonstrating the binary-fluid flow modeling capabilities of the immersed IGA framework. In future work, the mesh dependency study of the current work should be detailed, including a study of the influence of the stability parameters on the accuracy. As future work, the ideas behind the developed stabilized formulation can be carried over to a broader class of multi-physics and phase-field problems.
## Acknowledgement
S.K.F. Stoter, T.B. van Sluijs and T.H.B. Demont gratefully acknowledge the financial support through the Industrial Partnership Program _Fundamental Fluid Dynamics Challenges in Inkjet Printing_ (_FIP_), a joint research program of Canon Production Printing, Eindhoven University of Technology, University of Twente, and the Netherlands Organization for Scientific Research (NWO). All simulations have been performed using the open source software package Nutils [54].
|
2306.07559 | Marking anything: application of point cloud in extracting video target
features | Extracting retrievable features from video is of great significance for
structured video database construction, video copyright protection and fake
video rumor refutation. Inspired by point cloud data processing, this paper
proposes a method for marking anything (MA) in the video, which can extract the
contour features of any target in the video and convert it into a feature
vector with a length of 256 that can be retrieved. The algorithm uses YOLO-v8
algorithm, multi-object tracking algorithm and PointNet++ to extract contour of
the video detection target to form spatial point cloud data. Then extract the
point cloud feature vector and use it as the retrievable feature of the video
detection target. In order to verify the effectiveness and robustness of
contour feature, some datasets are crawled from Dou Yin and Kinetics-700
dataset as experimental data. For Dou Yin's homogenized videos, the proposed
contour features achieve retrieval accuracy higher than 97% in Top1 return
mode. For videos from Kinetics 700, the contour feature also showed good
robustness for partial clip mode video tracing. | Xiangchun Xu | 2023-06-13T06:16:49Z | http://arxiv.org/abs/2306.07559v1 | # Marking anything: Application of point cloud in extracting video target features
###### Abstract
Extracting retrievable features from video is of great significance for structured video database construction, video copyright protection and fake video rumor refutation. Inspired by point cloud data processing, this paper proposes a method for marking anything (MA) in the video, which can extract the contour features of any target in the video and convert it into a feature vector with a length of 256 that can be retrieved. The algorithm uses YOLO-v8 algorithm, multi-object tracking algorithm and PointNet++ to extract contour of the video detection target to form spatial point cloud data. Then extract the point cloud feature vector and use it as the retrievable feature of the video detection target. In order to verify the effectiveness and robustness of contour feature, some datasets are crawled from Dou Yin and Kinetics-700 dataset as experimental data. For Dou Yin's homogenized videos, the proposed contour features achieve retrieval accuracy higher than 97% in Top1 return mode. For videos from Kinetics 700, the contour feature also showed good robustness for partial clip mode video tracing.
Video Feature Extraction MOT Point Cloud
## 1 Introduction
Designing retrievable video features is necessary. With the development of the mobile Internet, anyone can publish videos on streaming media platforms. Massive videos bring higher management requirements. As shown in Fig.1 a)-b), there are many videos with homogeneous content on the Internet at present. Constructing unique ID information for these videos is beneficial to video creators and streaming media Operators. Moreover, the current short video platform is full of many rumored videos, the most classic way of spreading rumors is shown in Fig.1 c), which is to make misleading edits to old videos and replace titles in order to attract attention. Therefore, videos with homogeneous content can be retrieved based on video content extraction features, which is convenient for creators and operation platforms to manage videos. It is also possible to trace the source of false videos as a basis for quickly dispelling rumors.
According to the retrieval mode, the retrieval can be divided into video retrieval based on video, image, text[1]. Quite a lot of research has been done on text and image retrieval in academia and industry. Whether it is by matching the voice and subtitles of the video, or by matching the key frames of the video, the existing database technology can meet the precise retrieval of the target video. However, there are few related literature reports on video retrieval by video, or in other words, video retrieval based on video content. The main reason is that compared with words and graphics, videos not only contain rich spatial information, but also rich temporal information. At the same time, for better fluency, the frame rate is also constantly increasing with the advancement of photography technology. Therefore, how to effectively and robustly encode feature vectors for excessively redundant video information is a key step in realizing video retrieval.
At present, the management of many video files is usually to store the video in the file system, and the retrieval of the target video is generally to retrieve the additional information of the video, such as the description of the video, the storage time of the video, and the cover of the video. These types of retrieval method are difficult to achieve accurate video retrieval, because the video additional information is not a direct description of the content of the video itself. In many application scenarios, such attachment information is often missing or not representative. With the development of computer vision technology, there are also many documents or patents that extract video key frames and use neural
method has a small amount of calculation, it does not require too much storage content. However, this method does not make full use of all the information of the video and loses a huge amount of information in the time dimension, therefore, its accuracy often falls short of the ideal level.
With the update of computer hardware, the computing power of current computers has been greatly improved compared with the past. With this advantage, it is possible to use related methods of visual deep learning to analyze video content frame by frame, and to realize vectorized feature extraction of videos based on video content.
Inspired by laser point cloud data processing technology and visual algorithms, this paper proposes an algorithm for extracting feature vectors based on video content. The algorithm first uses the YOLO-v8 model to realize the semantic segmentation of the video frame by frame, so as to obtain the mask information of the detection target; then realizes the tracking of the motion trajectory of the detection target on the video time axis through the multi-object tracking algorithm; Based on these data, the spatial point cloud representation of the detection target is obtained through the boundary extraction technology; finally, the feature encoding of the detection target point cloud data is realized through the PointNet++ network, and this is used as the retrieval basis of the point cloud data.
The first chapter of this paper introduces the current technical solutions and problems faced by video retrieval technology, and briefly introduces the algorithm for extracting video feature vectors based on video content proposed in this paper; the second chapter introduces the visual algorithm and The related work of point cloud algorithm; the third chapter introduces the algorithm framework and training method proposed in this paper in detail; the fourth chapter introduces the validity and robustness verification of the proposed algorithm on real-world data.
## 2 Related Work
### MOT algorithm
The video multi-object tracking algorithm (MOT) is a classic task in computer vision. The main task is to track multiple detection targets in the video and output the position and inter-frame motion trajectory of all detection targets in the video frame. The main processing flow of this type of algorithm for video is divided into two steps, which are intra-frame semantic segmentation and inter-frame motion vector prediction.
Intra-frame semantic segmentation refers to the use of algorithms to determine the pixel position of the detection target frame by frame. Among them, the semantic segmentation model based on YOLO or transformer mechanism is a common algorithm[2, 3]. Among of them, the YOLO model is a target recognition model based on convolutional neural networks, which can realize end-to-end target recognition of images. Specifically, the algorithm transforms the target recognition classification task into a regression problem and realizes target detection by predicting the bounding box position and target category for each cell in the image. After updating in recent years, the YOLO model has made great progress in detection speed and accuracy. Currently, the YOLO-v8 model has been released.
Inter-frame tracking refers to predicting the position of a detection target in the current frame in the next frame. According to the number of moving targets to be tracked at one time, inter-frame target tracking is generally divided
Figure 1: Homogenized video and deceptive video editing.
into single target tracking and multi-object tracking (MOT). The multi-object tracking algorithm can track and detect multiple objects in the video at one time, and can process data efficiently, so it is a commonly used algorithm. SORT is a simple multi-object tracking algorithm that can be processed in real time[4; 5]. It uses the Kalman filter to predict the position of the object and uses the Hungarian algorithm to match the detection target in the current frame with the detection target in the previous frame. At the same time, in order to improve the accuracy of matching, the SORT algorithm also uses the IOU (Intersection over Union) algorithm to measure the degree of overlap between two objects. The SORT algorithm is fast and can calculate and track in real time, but it is more suitable for static shots, and the tracking effect for dynamic shots is not idea. The DeepSORT algorithm is further improved based on the SORT algorithm[6]. It uses the convolutional neural network to calculate the characteristics of the detection target and calculates the similarity of the detection target between frames through the cosine distance. At the same time, the model introduces an appearance model to deal with object occlusion and appearance changes, which further improves the robustness of the tracking model. DeepSORT also further enhances the ability to detect objects in dynamic shots.
The OC-SORT model also requires two processes of target detection and tracking, but based on the previous model, the target center tracking mode is added, which calculates the center of the target as the tracking method instead of tracking the outline of the target[7]. This method further improves the robustness of the target tracking algorithm and reduces the probability of target loss due to occlusion or deformation of the target. The Deep-OC-SORT algorithm is further improved based on the OC-SORT model to obtain higher tracking accuracy and robustness[8]. The difference from the former is that the model chooses to use the CNN model to extract the features of the detected target as the basis for target matching and tracking.
### Point cloud algorithm
In real life, point cloud data is a common form of data representation in radar signal processing. The processing methods for point cloud data are usually acquisition, filtering, down sampling, segmentation, feature extraction, registration, etc[9; 10]. Generally, different algorithm combinations are selected according to the needs of the task[11]. For common point cloud data processing tasks, including but not limited to point cloud recognition, segmentation and clustering, etc., traditional algorithms generally use the extracted point cloud features as the pre-input of these tasks. For example, in point cloud recognition, scholars often use descriptors that characterize the shape of point clouds as the input feature vectors of the model. According to the scale of feature extraction, point cloud descriptors are generally divided into local descriptors and global descriptors[12; 13].
Andrew E et al. proposed that the spin image feature (Spin Image) is a local point cloud feature extraction method based on image processing[14]. The basic idea is to convert the point cloud into a two-dimensional grayscale image and then perform statistics on it. processing and analysis. In the specific implementation, each point in the point cloud is regarded as a central point, the distribution of the nearest point is calculated, and the information is encoded into a vector of uniform length to describe the characteristics of the point cloud. The SHOT (Signature of Histograms of Orientations) feature is a local point cloud feature extraction method proposed by Tombari et al[15]. It extracts point cloud features by calculating the relative position and normal vector of the surrounding points of a point in the point cloud. FPFH (Fast Point Feature Histogram) is also a local point cloud feature descriptor, which is optimized for the PFH (Point Feature Histogram) algorithm[16]. In comparison to the PFH algorithm, the FPFH algorithm can greatly reduce the amount of calculation while maintaining accuracy and is suitable for real-time processing and large-scale point cloud data processing. The features of Spin Image, SHOT, and FPFH are all point cloud local feature extractions, which have good rotation invariance and scale invariance. Based on predecessors, RoPS and GASD descriptors further improve the rotation and scale invariance of local features through rotation statistics or the introduction of a fixed coordinate system[17; 18]. However, point cloud data generally does not only calculate the local features of a point, and usually needs to randomly sample multiple points and calculate their features. This leads to a large amount of calculation required for this type of algorithm in the process of practical application. However, point cloud global features can better reduce the amount of computation.
With the development of deep learning technology, neural networks for point cloud data processing have also been widely reported[19; 20]. In order to learn the surface features of point cloud data, Charles R. Qi et al. proposed the PointNet neural network in 2017[21]. PointNet is an end-to-end deep learning model that can directly process point cloud data without converting point clouds into voxels, images, or extracting other types of features. The model processes each point in the point cloud through a fully connected neural network, and then extracts the feature representation of the point cloud through the pooling layer. Although PointNet is an end-to-end model that can quickly classify and segment point cloud data, this model does not consider the local characteristics of point clouds, so the recognition effect in some point clouds with complex results is not ideal. PointNet++ introduces a hierarchical feature extraction method based on PointNet[22]. Specifically, the PointNet++ model divides the point cloud data, then learns local features for different parts of the point cloud data, and then realizes the fusion of point cloud features in different
regions through the fully connected layer. Therefore, compared with the former, the PointNet++ model has a better learning effect on the local features of point cloud data, and it also performs ideally in point cloud classification and segmentation tasks.
In recent years, there are also many models that combine the traditional features of point clouds with PointNet++ and have achieved good results in fields such as face recognition[19, 23]. But for the video retrieval task faced in this paper, the PointNet++ model itself has low computational complexity and simple structure, so it is more suitable as a video content feature extraction model.
## 3 Marking Anything Algorithm Description
Fig.2 shows the overall flow of the Marking anything algorithm. The algorithm model can divided into three modules, which are the target detection module shown in Fig.2-I, the point cloud generation module shown in Fig.2-II, and the point cloud feature extraction module shown in Fig.2-III.
The main function of the target detection module in Part I is to obtain the mask information of the detected target in each frame of the video, which includes the correspondence between the detected targets in the frame. As shown in Fig.2-a), the module loads the video at the input and obtains the video frame sequence \(\{f_{1},f_{2},f_{3},\ldots,f_{n}\}\). Module I first uses YOLO-v8 to perform target detection and pixel-level semantic segmentation frame by frame to obtain the mask information sequence of each detected target. The mask data of each frame is shown in Fig.2-a) blue mask. Then, as shown in Fig.2-b), module I uses the Deep-OC-SORT model to achieve the corresponding matching between multiple target frames, to obtain the motion vector information of the object between video frames. In summary, after being processed by module I, the mask sequences of all detected targets in the current video are obtained, as shown in Formula-(1), (2).
\[Video\rightarrow\{C_{1},C_{2},C_{3},\ldots,C_{m}\} \tag{1}\]
\[C_{i}=\{mask_{1},mask_{2},mask_{3},\ldots,mask_{n}\} \tag{2}\]
Among them, \(C_{i}\) represents the \(i_{th}\) target detected in the video, \(mask_{j}\in\Re^{H\times W}\) represents the \(j_{th}\) frame mask data map of the \(i_{th}\) detected target, each element of which is a Boolean value, H and W respectively indicate the height and width of the mask image.
After being processed by module I, the mask information of each detected target in each frame of the video is output. Module II receives the information and processes it to obtain the point cloud outline of each target in the time dimension of the video. Specifically, for a single detection target, as shown in Fig.2-c), module II uses the Canny edge algorithm to extract the contour of a single target mask data \(mask_{j}\), obtain a single target boundary map \(canny_{j}\), and then obtain all the boundaries of the detection target Graph sequence \(\{canny_{1},canny_{2},canny_{3},\ldots,canny_{n}\}\),\(canny_{j}\in\Re^{H\times W}\). Next, it is necessary to convert the boundary sequence information of the detection target \(C_{i}\) into the three-dimensional
Figure 2: MA Algorithm Flowchart.
coordinate data of the point cloud. The conversion relationship is shown in Formula-(3).
\[Mask_{(H,W,N)}\to PCD_{(X,Y,Z)} \tag{3}\]
Among them, \(Mask\) represents the mask sequence coordinate system, and \(H\), \(W\), and \(N\) represent the height, width, and sequence number of the mask image, respectively. PCD represents the point cloud data, and \(X\), \(Y\), and \(Z\) represent the three coordinate systems of the point cloud, corresponding to the three coordinate systems of \(H\), \(W\), \(N\), respectively.
The converted point cloud data is the spatial point cloud representation of a single detected object in the video, as shown in Fig.2-d). At this point, the point cloud expressions \(\{PCD_{1},PCD_{2},PCD_{3},\dots,PCD_{m}\}\) of all detected targets are obtained. Because there are too many points in a single point cloud data, in order to simplify the subsequent calculation complexity, this paper uses the method of down sampling the farthest point and normalizing the coordinates to simplify the point cloud data, and obtain a new multi-object point cloud expression \(\{PCD^{{}^{\prime}}_{1},PCD^{{}^{\prime}}_{2},PCD^{{}^{\prime}}_{3},\dots,PCD^{{ }^{\prime}}_{m}\}\). The simplified point cloud data is shown in Fig.2-e).
The PointNet++ network in Part III is mainly used to extract the surface feature expression vector of point cloud data. The model is mainly used for the classification and segmentation of point cloud data. In order to adapt to the goal of extracting the contour feature vector of point cloud data, this paper uses the Modelnet40 data set to pre-train the model for classification tasks. The output vector of the last fully connected layer of the model is used as the expression vector of the point cloud shape feature, which is obtained from \(\{PCD^{{}^{\prime}}_{1},PCD^{{}^{\prime}}_{2},PCD^{{}^{\prime}}_{2},\dots,PCD^{ {}^{\prime}}_{m}\}\) through the PointNet++ network \(\{feature_{1},feature_{2},feature_{3},\dots,feature_{m}\},feature_{i}\in \Re^{1\times 256}\).
As mentioned above, after the processing of the three modules, the contour feature expression of all detection targets in the video is obtained, and the expression vector is the retrieval basis of the video detection target.
## 4 Experiments
### A simple searchable video database
Before starting to verify the validity of contour features, it is necessary to design the retrieval process and validity verification parameters of video, that is, to design several simple and retrievable video databases. As shown in Fig.3, the matching process for a detection target needs to be divided into two parts, namely building an offline database and online target recognition. Specifically, the offline database uses the MA algorithm proposed in this paper to extract the contour features of multiple targets from all the collected videos and store them in the database. Online target recognition is to extract the contour features of a single target in a single video, and then use this feature to calculate the corresponding Euclidean distance with the features of all targets in the offline database. Then sort the calculated Euclidean distance in ascending order and return the previous group or the first five groups of video ID numbers that match the most. These videos contain similar detection targets to the detected video.
The retrieval accuracy of the experimental data set design in this paper refers to the ratio of the correctly retrieved videos in top1 or top5 in the data set to the total number of data sets.
Figure 3: A searchable video database flow chart.
### The performance of contour feature on video retrieval
In order to verify the effectiveness of detecting the contour features of objects, this paper selects video clips with high content similarity from DouYin. The detected targets and video numbers of the three datasets are shown in Tab.1. The detection targets are people, cats, and dogs. The screenshots of different video clips are shown in Fig.1 a)-b). In these video clips, the silhouettes, actions, and positions of people, cats, and dogs have high Similarity, so these videos can be used to test the effectiveness of the contour features proposed in this paper for extracting features from similar video content.
Use the experimental data set retrieval process proposed in 4.1 to extract the contour features of the corresponding detection target from the above video data. In order to verify the impact of point cloud point count on retrieval accuracy, this paper retains different points for each category of point cloud when down sampling, which are 128 points, 256 points, 512 points, 1024 points, and 3072 points. Note that because down sampling selects the method of sampling the farthest point, there is a certain degree of randomness. Therefore, according to the retrieval method in 4.1, the process of online recognition and offline database down sampling for the point cloud of the same target is required twice. Finally, the retrieval accuracy of Top1 and Top5 three types of target videos is shown in Fig.4.
As shown in Fig.4, whether it is the Top1 or Top5 retrieval method, the overall retrieval accuracy increases with the increase of the number of retained sampling points. Among them, when the number of points reaches 3072 points, the retrieval accuracy of the three types of objects reaches the maximum, and the specific accuracy is shown in Tab.2. Among the collected videos, the similarity between person-related videos is greater than that of cat and dog videos. However, when this type of video has 3072 sampling points, the Top1 accuracy reaches 97.4%, and the Top5 accuracy reaches 99.6%. The retrieval accuracy of cat and dog videos is also close to 100.0% under the Top1 retrieval method. The above data show that the video object contour feature proposed in this paper is effective for constructing video content retrievable features. At the same time, the number of sampling points is 3072 points, which can ensure better retrieval accuracy.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Target Type & Person & Cat & Dog \\ \hline Number & 730 & 356 & 60 \\ \hline \hline \end{tabular}
\end{table}
Table 1: DouYin dataset
Figure 4: Influence of the point count of point cloud on the accuracy of contour feature retrieval, a) Top1 best matching accuracy, b) Top5 best matching accuracy.
### The performance of contour feature on edited video retrieval
In order to further verify the effectiveness of the contour feature and expand the application scenarios of this feature. This paper randomly selects 1000 groups of videos containing humans from the Kinetics-700 dataset, and edits these videos, including changing the video aspect ratio, flipping the frame order of the video playback, cropping the video length, mirroring the video, and cropping the video to obtain a partial screen, play video at 2x speed, play video at 0.5x speed, and rotate video 90 degrees clockwise. In order to simplify the verification process, this paper selects the person with the longest appearance time as the only retrieval target for a video with multiple persons.
According to the MA algorithm proposed in this paper, the above-mentioned original video and the edited video are extracted from the contour features of the persons in the video for traceability retrieval. In order to verify the validity and robustness of the retrieval features of the contour features proposed in this paper after the video is edited. The retrieval accuracy of Top1 and Top5 is shown in Tab.3. For videos with changed aspect ratios and videos played in reverse order, the features proposed in this paper still have certain retrieval significance in the Top1 retrieval mode. In the Top5 retrieval mode, videos edited along the time dimension and mirrored videos also have retrieval significance.
However, the features proposed in this paper do not have effective practical value for retrieval for videos played at variable speed and rotated videos. This is because the MA algorithm does not have good rotation invariance for the features extracted from the point cloud in the PointNet++ part.
It is worth noting that for the retrieval of unedited videos, the Top1 retrieval accuracy reaches 99.7%, which is higher than the Top1 retrieval accuracy of 97.4% for the person videos in Section 4.2. This is because the contours, motions and other features of the targets in the person videos extracted by the Kinetics dataset are not similar, and the videos selected in Section 4.2 are high similar.
## 5 Conclusion
This paper proposes an algorithm for marking anything in the video, which can extract the contour features of any target in the video and convert it into a feature vector with a length of 256 that can be retrieved through the point cloud feature extraction model PointNet++. At the same time, this paper selects some videos with high content similarity and deliberately edited videos and constructs a simple and retrievable video database through the MA algorithm, which verifies that the contour features proposed in this paper have high effectiveness and robustness.
The contour feature extraction method proposed in this paper can be used to build a video database to manage and trace the video. In practical application scenarios, tracing the source of similar videos can not only mark the video with unique id information, but also be used to trace the source of false videos, which has quickly realized the rumor refusing. However, the current PointNet++ does not have a good rotation invariance, so the traceability of the rotated video is not ideal. In the future, we will continue to further improve the robustness of the algorithm by adding features with rotation invariance or strengthening the training data.
|
2303.15783 | On Causal Equivalence by Tracing in String Rewriting | We introduce proof terms for string rewrite systems and, using these, show
that various notions of equivalence on reductions known from the literature can
be viewed as different perspectives on the notion of causal equivalence. In
particular, we show that permutation equivalence classes (as known from the
lambda-calculus and term rewriting) are uniquely represented both by trace
graphs (known from physics as causal graphs) and by so-called greedy multistep
reductions (as known from algebra). We present effective maps from the former
to the latter, topological multi-sorting TM, and vice versa, the proof term
algebra [[ ]]. | Vincent van Oostrom | 2023-03-28T07:52:24Z | http://arxiv.org/abs/2303.15783v1 | # On Causal Equivalence by Tracing in String Rewriting
###### Abstract
We introduce proof terms for string rewrite systems and, using these, show that various notions of equivalence on reductions known from the literature can be viewed as different perspectives on the notion of causal equivalence. In particular, we show that _permutation equivalence_ classes (as known from the \(\lambda\)-calculus and term rewriting) are uniquely represented both by _trace graphs_ (known from physics as causal graphs) and by so-called _greedy multistep_ reductions (as known from algebra). We present effective maps from the former to the latter, _topological multi-sorting_\(\mathsf{TS}\), and vice versa, the _proof term algebra_\(\llbracket\rrbracket\).
## 1 Introduction
We are interested in all aspects of computations as modelled by rewrite systems. Here, we are interested in _finite_ computations 'doing the same work up to the order of the tasks performed'. This can be analysed from the perspective of _causality_ with the idea that it is exactly the causally independent tasks that can be reordered. In [35, Chapter 8] we presented five conceptually distinct ways to mathematically model _causal_ equivalence [35, Section 8.1.3] of computations in term rewriting, based on _permutation_, _labelling_, _standardisation_, _extraction_ and _projection_, respectively, and showed them to coincide.
Though coincidence of the above five perspectives gave us confidence in having captured causal equivalence within term rewriting, at the time we failed to relate them to the further perspective put forward there based on _tracing_[35, Section 8.6]. The problem2 to do so resides in term rewrite rules that are _non-linear_. For example, consider a reduction \(f(a)\to g(a,a)\to g(b,a)\to g(b,c)\) in the term rewrite system having rules \(a\to b\), \(a\to c\) and \(f(x)\to g(x,x)\), with the last rule non-linear (it _copies_\(x\)). The single occurrence of the symbol \(a\) in \(f(a)\)_traces3_ to both occurrences of \(a\) in \(g(a,a)\), hence _causes_ both subsequent steps. However, the fact that \(a\) was _copied_ by the first step is _not_ captured by tracing; \(f\) traces to neither copy of \(a\), though the first step rewriting \(f\) is the _cause_ of having the two further steps rewriting the copies of \(a\), in the first place. In this paper we show that only non-linearity is problematic for tracing. More precisely, we show that for string rewriting, which is inherently _linear_, causal equivalence does have a simple characterisation based on tracing, namely by _tragrs_.4
Footnote 2: For the _trace_ relations presented in [35, Section 8.6.1]; see Definitions 8.6.7, 8.6.17, Lemma 8.6.14, and Proposition 8.6.18.
Footnote 3: Tragr is short for _trace_ graph and pronounced as _tracker_, with the idea that a tragr _tracks_ what happens to symbols.
In Section 2 we adapt the theory of _permutation_ equivalence of finite computations as developed in [35, Chapter 8] for term rewriting, to the case of string rewriting at hand. To represent, interpret, and prove properties about, finite computations by algebraic and inductive means, we adapt _proof_ terms [35, Chapter 8] to represent the finite reductions of a string rewrite system. In Section 3 we introduce tragrs as a formalisation of Wolfram's notion of _causal graph_[39]. We relate proof terms to tragrs by on the one hand giving an algebra \(\llbracket\rrbracket\) interpreting proof terms as tragrs preserving permutation equivalence. and on the other hand presenting a topological multi-sorting algorithm \(\mathsf{TS}\) mapping tragrs back to proof
terms. In Section 4 we show that any proof term may be transformed, by repeatedly swapping _loath_ pairs,4 into a permutation equivalent _greedy_[8] multistep reduction, and that the latter are in bijective correspondence to tragrs via the maps \(\llbracket\rrbracket\) and \(\mathsf{TS}\). This then allows us to wrap up and conclude that tragrs (and greedy multistep reductions) serve as unique representatives of permutation equivalence classes.
Footnote 4: A loath pair intuitively is a pair of consecutive steps that are causally independent, so that the second step could have been in parallel to the first, but it was too _loath_ to do so.
**Remark 1**.: _This short paper was provoked by a remark made to me5 in 2020 by Jan Willem Klop, that Wolfram's causal graphs [40] should characterise permutation equivalence, for string rewriting. Having followed Wolfram's physics project myself and having observed that its developments frequently ran parallel to those in [4, Chapter 8], in particular that there was a close connection between causal graphs and the trace relations in [4, Section 8.6.1] discussed above, allowed me to reply immediately, confirming Klop's intuition, referring him to [4, Chapter 8] and drawing Figure 2. At the time I was reluctant to develop that further, as the idea was simple and did not solve the problem for the non-linear case left open in [4]. Later, in 2022,6 I realised that a single picture was too cryptic, and that only because the results are simple here do we entertain hope to extend them to the complex non-linear case._
Footnote 5: While employed as UniversitässassistentIn in the Computational Logic group at the University of Innsbruck.
Footnote 6: While employed as Research Associate in the Mathematical Foundations of computation group at the University of Bath.
## 2 Proof terms for string rewriting
We adapt the theory of permutation equivalence from term rewriting [4, Chapter 8] to string rewriting, guided by that strings can be represented as terms so that extant theory for term rewriting can be adapted to string rewriting. String rewriting affords better properties than term rewriting due to linearity: whereas term rewrite steps may be _non-linear_ as they can _replicate_, erase or copy, subterms of arbitrary sizes, string rewrite steps cannot do so; they are _linear_. We moreover forbid left- and right-hand sides of string rewrite rules to be the empty string. This restriction makes sense from the perspective of causality [41] as it entails all steps being _ex materia_ (forbidding _ex nihilo_ steps) and having _bounded_ causes; cf. the 4\({}^{\text{th}}\) item of Remark 9. By imposing these (linearity and non-emptiness) restrictions, we are in a sweet spot; the resulting string rewrite systems have _sufficient_ structure to _express_ the different perspectives on causal equivalence mentioned in the introduction, and these perspectives can in turn be proven equivalent in a _simple_ way due to the absence of replication. As in [4, Chapter 8], to state and prove results _proof_ terms are our tool of choice for representing the reductions of a string rewrite system.
The usual definition of the finite strings over an alphabet \(\Sigma\) as the free monoid over \(\Sigma\) is abstract. To be able to deal with matters of representation, we instead will be concrete here.
**Definition 1**.: _A term rewrite system is oudenadic if all rule and function symbols have arity \(0\), it has a nullary symbol \(\varepsilon\) (empty string) and a binary composition symbol \(\cdot_{h}\), and terms are considered modulo \(\equiv_{M}\) induced by the monoid laws, i.e. \(\varepsilon\cdot_{h}s=s\), \(s\cdot_{h}\varepsilon=s\), and \((s\cdot_{h}t)\cdot_{h}u=s\cdot_{h}(t\cdot_{h}u)\)._
**Remark 2**.: _Our terminology oudenadic is an attempt to highlight that the representation employed here associates nullary function symbols to letters, and to contrast it with the usual monadic representation, which associates a unary function symbol to each letter; cf. [4, Section 3.4.4] for an account of both._
_In our modelling, strings are closed oudenadic terms over the alphabet modulo the monoid laws. Uniquely representing such equivalence classes can itself be achieved by term rewriting: orienting the above monoid laws from left to right yields a complete (confluent and terminating) term rewrite system, having as normal forms strings of shape either \(\varepsilon\) or \(a_{1}\ldots a_{n}\) for some \(n\geq 1\)._
We refer to \(\cdot_{h}\) as _horizontal_ composition to distinguish it from _vertical_ composition \(\cdot_{v}\) below (see Definition 2), based on that we adhere to a convention of drawing strings horizontally and reductions vertically in figures. That meshes well with thinking of strings as being extended in space (1-dimensional, horizontally) and of reductions as extended in time (vertically); cf. Figure 1. We assume \(\cdot_{h}\) is infix and right-associative and that it is left implicit, i.e. is represented by juxtaposition. To that end, we assume \(\Sigma\) has _unique reading_: if \(a_{1}\ldots a_{n}=b_{1}\ldots b_{m}\) for \(a_{i},b_{j}\in\Sigma\), then \(n=m\) and \(a_{k}=b_{k}\) for all \(1\leq k\leq n\), cf. [36].
**Example 1**.: _The alphabet \(\Sigma:=\{A,B\}\) has unique reading. Per our conventions \(\mathit{ABAAB}\) abbreviates the term \(A\cdot_{h}(B\cdot_{h}(A\cdot_{h}(A\cdot_{h}B)))\), which is closed and in normal form with respect to the monoid rules, so serves as the unique representative of the string (an \(\equiv_{M}\)-equivalence class containing, e.g., \((\mathit{AB})(\mathit{AA})B\))._
_The alphabet \(T:=\{\mathit{AB},B,A\}\) does not have unique reading, e.g. \(\mathit{ABB}\) can be viewed as being composed of the two letters \(\mathit{AB}\) and \(\mathit{B}\), and alternatively of the three letters \(A\), \(\mathit{B}\) and \(\mathit{B}\)._
Concretely, a _string_ rewrite system _over_ an alphabet \(\Sigma\) is an oudenadic term rewrite system having the letters in \(\Sigma\) as nullary function symbols, with sources and targets of rules being nonempty strings (cf. the introduction), and steps taking place modulo \(\equiv_{M}\). We use \(a,b,\ldots\) as variables for letters, \(A,B,\ldots\) as concrete letters, and \(A,B\) for the concrete letters of our running example, as in Example 1.
We consider term rewrite systems in the sense of [35, Chapters 8 and 9], meaning that rules themselves will feature as _symbols_ whose arity (0 for the oudenadic systems we consider here) is the number of variables in the rule, and rules come equipped with source / target functions mapping them to their lhs / rhs. This enables expressing reductions, and more generally proofs in rewrite logic [21], as _proof_ terms [35], terms over a signature comprising the letters, the rules, and a binary composition symbol \(\cdot_{v}\) representing the transitivity inference rule of rewrite logic [21]. In turn, this allows us to represent the key notion of this paper, the notion of causal equivalence, as an equivalence on reductions / proof _terms_.
**Definition 2**.: _Consider for an oudenadic term rewrite system \(\langle\Sigma,P\rangle\), the extension of the oudenadic signature for \(\Sigma\) by the rules \(\rho\) in \(P\) as nullary symbols and the binary composition symbol \(\cdot_{v}\). Proof terms are a subset of the terms over this signature defined inductively, together with source and target functions src and tgt to strings, in Table 1, where we use \(\gamma\colon s\geqslant t\) to denote that \(\gamma\) is a proof term having string \(s\) as source and string \(t\) as target, and employ \(\gamma,\delta,\zeta,\eta,\ldots\) to range over proof terms._
We abbreviate vertical composition \(\cdot_{v}\) to \(\cdot\), assume it is right-associative, and that it binds weaker than horizontal composition \(\cdot_{h}\) / juxtaposition.
**Remark 3**.: _By the vertical composition being on strings the target of \(\gamma\) is only required to be equivalent modulo the monoid laws to the source of \(\delta\) in (transitivity). We have \(t\colon t\geqslant t\) for every oudenadic term \(t\)._
The name proof term for such terms is justified by that they can be viewed as a _proof_ that their target string is _reachable_ from their source string by using the rewrite rules. Building on Example 1, we take the following as a running example to illustrate concepts and results.
\begin{table}
\begin{tabular}{l l l l} (empty) & \(\varepsilon\colon\) & \(\varepsilon\geqslant\varepsilon\) & \\ (letter) & \(a\colon\) & \(a\geqslant a\) & for each letter \(a\) \\ (rule) & \(\rho\colon\) & \(\ell\geqslant r\) & for each rule \(\rho\colon\ell\to r\) \\ (juxtaposition) & \(\gamma_{1}\gamma_{2}\colon s_{1}s_{2}\geqslant t_{1}t_{2}\) & if \(\gamma_{i}\colon s_{i}\geqslant t_{i}\) \\ (transitivity) & \(\gamma\cdot\delta\colon\) & \(s\geqslant u\) & if \(\gamma\colon s\geqslant t\), and \(\delta\colon t\geqslant u\) \\ \end{tabular}
\end{table}
Table 1: Proof terms for string rewriting
**Example 2**.: _Let \(\langle\Sigma,P\rangle\) be the string rewrite system having rules \(P:=\{\alpha:BB\to A,\beta:AAB\to BAAB\}\). The proof term \(\gamma:=AB\beta\cdot A\alpha AAB\cdot A\beta\cdot BAAB\cdot B\beta AAB\cdot \alpha ABAAB\cdot A\beta AAB\) proves the reachability statement \(ABABAB\geqslant ABAABAAB\). An alternative witness to that statement is the proof term \(\gamma:=AB\beta\cdot A\alpha\beta\cdot BAAB\cdot B\beta BAAB\cdot\alpha BAAB\). For the (vertical) compositions in these proof terms to be well-defined it is essential to work modulo the monoid laws. For instance, although the target \((BAAB)AAB\) of \(\beta AAB\) and the source \(B(AAB)AAB\) of \(\beta BAAB\) are distinct as oudenadic terms, they are both represented by the string \(BAABAAB\), allowing their vertical composition in \(\gamma\)._
Although in the example the proof terms \(\gamma\) and \(\gamma^{\prime}\) intuitively do 'the same amount of work', the latter is shorter than the former. This is due to that the former is maximally sequentialised, performing one step at the time, whereas the latter is maximally concurrent, performing steps as soon as possible as concurrency permits.
**Definition 3**.: _A multistep is a proof term without vertical compositions. It is empty / a (single) step if it has no / one occurrence of a rule. A reduction (a multistep reduction) either is an empty multistep or a vertical composition, associated to the right, of steps (resp. nonempty multisteps). Permutation equivalence\(\equiv\) between proof terms is induced by the laws in Table 2, where the sides of the laws are restricted to proof terms, i.e. sources and targets of the proof terms \(\gamma,\delta,\zeta,\eta\) and the oudenadic terms \(s,t\) are assumed to match appropriately._
We use \(\Phi,\Psi,X,\ldots\) to range over multisteps, and \(\phi,\psi,\chi,\ldots\) to range over steps. Observe that the source / target of the left- and right-hand side of each law in Table 2 are the same (as strings).
**Remark 4**.: _The requirements on the sources and targets in the laws of Table 2 boil down to working in a typed algebraic structure [29]. For instance, in that setting a category is a typed monoid, allowing composition of morphisms only if their sources and targets match. In Table 2 the requirements are most prominent in the (exchange) law. For example, for rules \(\alpha:A\to A^{\prime}C,\alpha^{\prime}:A^{\prime}\to A^{\prime\prime},\beta:B \to B^{\prime},\beta^{\prime}:CB^{\prime}\to B^{\prime\prime}\), though we do have \(\alpha\beta\cdot\alpha^{\prime}\beta^{\prime}:AB\geqslant A^{\prime\prime}B^{ \prime\prime}\), the expression \((\alpha\cdot\alpha^{\prime})(\beta\cdot\beta^{\prime})\) is not even a proof term since, e.g., the target \(A^{\prime}C\) of \(\alpha\) does not match the source \(A^{\prime}\) of \(\alpha^{\prime}\)._
**Remark 5**.: _Our reductions, as proof terms of a specific shape, are formally distinct from the classical notion of a reduction, as a finite sequence of steps, in rewriting [2, 35]. However, since there is an obvious bijection between both we feel the confusion is acceptable. For instance, the proof term \(\gamma:=AB\beta\cdot A\alpha AAB\cdot A\beta\cdot BAAB\cdot B\beta AAB\cdot \alpha AABAAB\cdot A\beta AAB\cdot BAAB\geqslant ABAABAAB\) corresponds to:_
\[ABAA\beta\to ABBAAB\to AA\Delta AB\to BAABAB\to BA\Delta BAAB\to\underline{BBAABAB} \to AA\Delta BAAB\to ABAABAAB\]
_Similarly, the proof term \(\gamma^{\prime}:=AB\beta\cdot A\alpha\beta\cdot BAAB\cdot B\beta AAB\cdot \alpha BAAB\) corresponds to the following sequence of multisteps, where we employ the notation \(\multimap\) of [35, Chapter 8] for multisteps:_
\[ABAA\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
**Lemma 1** (Logicality).: _If \(\gamma\colon s\geqslant t\) for some proof term \(\gamma\), then there is a reduction \(\gamma^{\prime}\colon s\geqslant t\) with \(\gamma\equiv\gamma^{\prime}\)._
Proof.: By induction and cases on \(\gamma\).
* the empty string \(\epsilon\) is an empty reduction;
* a single letter \(a\) is an empty reduction;
* a single rule \(\rho\) is a single step reduction from its lhs to its rhs;
* suppose we have a proof term \(\gamma\mathrel{\mathop{:}}=\gamma_{1}\gamma_{2}\colon s_{1}s_{2}\geqslant t _{1}t_{2}\) with \(\gamma_{i}\colon s_{i}\geqslant t_{i}\). By the IH we have reductions \(\gamma_{i}^{\prime}\colon s_{i}\geqslant t_{i}\) with \(\gamma_{i}\equiv\gamma_{i}^{\prime}\). Set \(\gamma^{\prime}\) to \(\gamma_{1}^{\prime}\langle s_{2}\rangle\cdot\langle t_{1}\rangle\gamma_{2}^{\prime}\), where for a reduction \(\zeta\) and string \(u\), \(\zeta\langle u\rangle\) denotes the reduction obtained by suffixing each step of \(\zeta\) by \(u\), and symmetrically for \(\langle u\rangle\zeta\). One easily verifies \(\gamma^{\prime}\colon s_{1}s_{2}\geqslant t_{1}t_{2}\), and also that \(\gamma\equiv\gamma^{\prime}\) by using (exchange) and vertical units repeatedly. Then by repeated vertical associativity applied to \(\gamma^{\prime}\) we obtain a reduction, except in case one or both of the \(\gamma_{i}^{\prime}\) is the empty reduction in which case we conclude by eliding one such by a horizontal unit.
* by vertically composing the reductions obtained by the IH for the constituent proof terms, possibly followed by associating to right and eliding empty reductions as before.
The proof is effective, transforming proof terms into reductions witnessing the same reachability.
**Example 3**.: _The procedure underlying the proof of Lemma 1 transforms the proof term (in fact a multistep reduction) \(\gamma^{\prime}\) of Example 2 into the reduction \(\gamma\). To see this it suffices, since vertical compositions transform homomorphically, to note that the multisteps \(A\alpha\beta\) and \(\alpha\beta AAB\) in \(\gamma^{\prime}\) are transformed into the (two-step) reductions \(A\alpha AAB\cdot AA\beta\) and \(\alpha AABAAB\cdot A\beta AAB\) in \(\gamma\), respectively._
**Remark 6**.: _Logicality is the raison d'etre for the field of rewriting [36, 37], allowing to reduce reachability to reducibility. Cf. [22, Lemma 3.6] for the corresponding logicality result for term rewriting._
Although the logicality lemma allows to represent any proof term by a reduction, the latter is in general far from unique (up to permutation equivalence). For instance, in Example 3 we could have chosen to transform the multistep \(A\alpha\beta\) in \(\gamma^{\prime}\) into the two step reduction \(ABB\beta\cdot A\alpha BAAB\) instead, giving rise to a reduction permutation equivalent to \(\gamma^{\prime}\) but distinct from \(\gamma\). Intuitively this is caused by that factorising a proof term into a sequence of steps forces to order steps in _some_ (arbitrary) way even though they may be causally independent. For instance, \(\alpha\) and \(\beta\) in the multistep \(A\alpha\beta\) are concurrent / causally independent, but still must be ordered to obtain a reduction; both orders will do. Such a representation favours sequentiality over concurrency and length over width, so to speak. In the next sections we will go in the opposite direction, maximally favouring concurrency over sequentiality and width over length.
From that perspective, the proof term \(\gamma\mathrel{\mathop{:}}=AB\beta\cdot A\alpha AAB\cdot AA\beta\cdot BAB\cdot B \beta AAB\cdot\alpha AABAB\cdot A\beta AAB\) is a proof of the reachability statement \(ABAAB\geqslant ABABAABAB\) that is wasteful in two ways:
* This can be remedied by proceeding greedily [8], employing proper _multi_steps instead of steps. For instance, the \(2^{\text{nd}}\) and \(3^{\text{rd}}\) steps \(A\alpha AAB\cdot A\alpha B\cdot ABBAAB\geqslant AABAAB\) in \(\gamma\) can be combined into the single multistep \(A\alpha\beta\cdot ABBAAB\geqslant AABAAB\). Proceeding greedily, combining as many of the single steps into multisteps as possible, and as early as possible, turns \(\gamma\) into the shorter _greedy_ multistep reduction \(\gamma^{\prime}\mathrel{\mathop{:}}=AB\beta\cdot A\alpha\beta\cdot BAB\cdot B \beta AAB\cdot\alpha BAB\). As we will show, greedy multistep reductions may serve as _unique_ representatives of permutation equivalence classes.
* (Multi)steps not only represent what changes (via the rules in it) but also what _does not change_ (via the letters in it); cf. the frame problem [32]. As a consequence, in general proof terms predominantly consist of letters; this holds true in particular both for \(\gamma\) and \(\gamma^{\prime}\). _Causal graphs_[39]
(cf. Figure 2 left) remedy this by eliding letters, only keeping the causal dependencies between rule symbols. This suffices, as we will show, to let causal graphs serve as _unique_ representatives of permutation equivalence classes.
To express and relate both remedies we will employ a bit of _residual_ theory (going back to [7]) for multi-steps below. To avoid things becoming too heavy for this short paper, we only develop the residual theory necessary here and in an ad hoc informal fashion, referring the reader to Chapter 8 of [36] in general and to Section 8.7 in particular, for background on (from the perspective of permutation equivalence) and a formal treatment of, residuation.
**Definition 4**.: _For multisteps \(\Phi\), \(\Psi\) having the same source, we write \(\Phi\subseteq\Psi\) to denote that \(\Phi\) is contained in \(\Psi\), meaning that \(\Phi\) is obtained from \(\Psi\) by mapping some occurrences of rule symbols to their source. In that case, we denote by \(\Psi/\Phi\) the residual of \(\Psi\) after \(\Phi\), that is, the multistep obtained from \(\Psi\) by mapping the other occurrences of rules (the complement of those selected for \(\Phi\subseteq\Psi\)) to their target._
**Example 4**.: \(ABBAAB\)_, \(ABB\beta\), \(A\alpha A\) and \(A\alpha\beta\) are the four multisteps contained in \(A\alpha\beta\) in Example 2. We have, e.g., \(A\alpha\beta/ABB\beta=A\alpha BAAB\) and \(A\alpha\beta/A\alpha AAB=A\alpha\beta\). Observe that if \(\Phi\subseteq\Psi\) and \(\Phi\) is nonempty, then fewer rule symbols occur in \(\Psi/\Phi\) than in \(\Psi\) by linearity of string rewriting._
## 3 Trace graphs by proof term algebra
We give a proof term algebra \(\llbracket\rrbracket\) into tragrs, trace graphs, based on causal graphs [39]. The algebra is shown to model permutation equivalence in that it maps permutation equivalent proof terms to the same tr
Looking at Figure 1 the correspondence between evolutions and reductions is clear though informal. Evolutions nicely illustrate the point argued above in (too large) that letters (the white and black boxes representing \(A\) and \(B\)) add nothing to the representation; the source and target strings and the causal dependencies (represented by directed edges) between the rule symbols would suffice to read back the multistep reduction \(\gamma^{\prime}\) (permutation equivalent to \(\gamma\)) from the evolution. That idea will be formalised below using the notion of _tragr_, short for _trace graph_, illustrated for \(\gamma\,\gamma^{\prime}\) in Figure 2.
**Remark 7**.: _The book [38] being intended for a general audience, causal graphs are not sufficiently formalised there to state our results here; in particular, causal graphs lack what we call below an interface (dags of the source and target strings). Tragrs are our way to overcome that deficiency. We believe that if Wolfram were to formalise his notion of causal graph, he would end up with something similar to tragrs._
**Definition 5**.: _Given a string rewrite system \((\Sigma,P)\), a tragr from string \(s\) to string \(t\) is a port graph [34] (see also [3, 16, 34, 25]). comprising the following three parts, as visualised in:_
* _the dag of source string_ \(s\) _having for every occurrence of a letter_ \(b\) _in_ \(s\) _a node labelled_ \(b\)_, having (in clockwise order) an input port of type_7__\(*\)_, an output port of type_ \(*\)_, and an output port of type_ \(b\)_. The nodes are connected in a straight line by edges of type_ \(*\)_, terminated by a node labelled_ \(\epsilon\) _having an input port of type_ \(*\)_, and an output port of type_ \(\epsilon\)_._ Footnote 7: Types serve here only to enable indicating / visualising connections between ports and edges conveniently (cf. Example 6).
* _a dag, the_ causal _graph, of nodes labelled by rule symbols_ \(\rho\) _having (in clockwise order) as input ports the letters of the source string of_ \(\rho\) _and as output ports the letters of (the reverse of) the target string, with each port having the type of its letter;_ Footnote 8: The dag of target string \(t\), as for the source string but in reverse direction, i.e. with input and output port of type_ \(*\)_swapped._
* _the dag of target string_ \(t\)_, as for the source string but in reverse direction, i.e. with input and output port of type_ \(*\)_swapped._
Figure 2: Causal graph (left) and tragr from _ABAAB_ to _ABAABAAB_ (right)
_The tragr is required to be a planar dag, to only have edges from input to output ports of the same type, and to have exactly two ports without edges, both of type \(*\): the first input port of the source string and the first output port of the target string. (See Remark 10 for more on the planarity requirement.)_
We indicate _the_ input / output ports of a tragr by dangling edges, and refer to the dags of the source and target strings combined as its _interface_.
**Example 6**.: _The graph on the right in Figure 2 is a tragr with the types of edges being indicated by color._
**Remark 8**.: _Tragrs are not (too large) in the sense discussed above; letters only feature in the interface but not in the causal graph of a tragr; cf. the text below [36, Def. 8.6.17]._
**Definition 6**.: _For a string rewrite system \((\Sigma,P)\) the proof term algebra \(\llbracket\rrbracket\) on tragrs, interpreting each proof term \(\gamma\colon s\geqslant t\) as a tragr \(\llbracket\gamma\rrbracket\) from \(s\) to \(t\), is given by:_
* _(letter and empty)_ \(\llbracket a\rrbracket\) _and_ \(\llbracket\varepsilon\rrbracket\) _are the tragrs:_
* _(rule)_ \(\llbracket\rho\rrbracket\) _is a tragr having the straight line dags for its source and target as interface, comprising a single rule node connected to the interface in an orderly way, illustrated for rules_ \(\alpha\) _and_ \(\beta\) _by:_
* (juxtaposition)_ \(\llbracket\gamma\delta\rrbracket\) _is obtained from_ \(\llbracket\gamma\rrbracket\) _and_ \(\llbracket\delta\rrbracket\) _by removing the_ \(\varepsilon s\) _from the former, and redirecting the input and output of the latter accordingly:_
* (transitivity)_ _The tragr_ \(\llbracket\gamma\cdot\delta\rrbracket\) _is obtained from_ \(\llbracket\gamma\rrbracket\) _and_ \(\llbracket\delta\rrbracket\) _by connecting the output of the former to the input of the latter, and subsequently eliding the intermediate interface:_
* (rule)_ \(\varepsilon\)_
_where elision from the middle to the right is achieved by normalising with respect to the rules:_
Observe that if \(\gamma\colon s\geqslant t\) then \(\llbracket\gamma\rrbracket\) indeed is a tragr from \(s\) to \(t\).
**Example 7**.: _The tragrs \(\llbracket\gamma\rrbracket\) and \(\llbracket\gamma\rrbracket\) of the permutation equivalent \(\gamma,\gamma^{\prime}\) are as on the right in Figure 2._
**Remark 9**.:
* _Elision_ \(\Rightarrow\) _is complete: terminating because the number of nodes decreases in each step and confluent because elision can be viewed as an interaction net rule_ _[_17_]__._
* \(\llbracket\gamma\rrbracket\) _is finite so that all maximal paths in it lead from its input to its output, using that nodes have at least one input / output port, by the assumption that left- and right-hand sides are non-empty._
* _We modelled trace graphs, tragrs, after the_ trace relations _of_ _[_36_, Definition 8.6.17 / Figure 8.37]_ _with the main difference between both being that the latter do not allow_ parallel _edges between the same two nodes. That makes the latter unsuitable for our purposes here; only knowing_ that _a rule causally depends on another not_ how_, is in general not sufficient to read back proof terms. For instance, for rules_ \(A\to BBB\)_,_ \(BB\to B\) _and_ \(BB\to C\)_, the reductions_ \(A\to BBB\to BB\to C\) _and_ \(A\to BBB\to BB\to CB\) _induce the same trace relation, despite not being permutation equivalent._8 Footnote 8: It is interesting to compute their respective tragrs and see that / how they differ.
* _If we were to allow right-hand sides to be empty as in_ \(\alpha:A\to\varepsilon\)_, then the number of occurrences of_ \(\alpha\) _that may cause the left-hand side of another rule may be unbounded as illustrated by the multisteps of shape_ \(B\alpha^{n}B:BA^{n}B\geqslant BB\) _for rule_ \(\beta:BB\to\ldots\) _and any_ \(n\in\mathbb{N}\)_, despite that none of the As trace to the rule_ \(\beta\) _(its left-hand side BB)._9 _Swapping left- and right-hand sides in this example illustrates the problem with allowing left-hand sides to be empty._ Footnote 9: Cf. [36, Definition 8.6.64] for a hack to overcome (by reifying ‘emptyiness’) the problem caused by such _collapsing_ rules.
* _The algebra_ \(\llbracket\rrbracket\) _illustrates that horizontal and vertical composition are closely related to parallel and series composition of graphs._
We show that \(\llbracket\rrbracket\) maps permutation equivalent proof terms to the same tragr,10 see Example 7, but defer showing the converse to the next section (see Theorems 1 and 2).
Footnote 10: Stated differently, we show the proof term algebra \(\llbracket\rrbracket\) is a _model_ (in an appropriate typed sense of [36, Definition 2.5.1(v)]) of permutation equivalence given in Table 2. The proof for trace graphs follows that for trace relations [36, Lemma 8.6.14].
**Lemma 2**.: \(\llbracket\rrbracket\) _maps permutation equivalent proof terms to the same11 tragr._
Footnote 11: Formally, the same up to graph isomorphism.
Proof.: We show for each law in Table 2 its left- and right-hand sides are mapped to the same tragr by \(\llbracket\rrbracket\):
* For the monoid laws (h-left unit), (h-right unit) and (h-associativity) for horizontal composition, the former two follow from that the parallel composition of a tragr with \(\llbracket\varepsilon\rrbracket\) on either side, amounts to first introducing and then immediately removing \(\varepsilon\)s. Associativity holds since removing the \(\varepsilon\)s and redirecting the respective input and output edges are local and independent actions.
* For the monoid laws (v-left unit), (v-right unit) and (v-associativity) for vertical composition, the former two follow from that for any string \(s\), \(\llbracket s\rrbracket\) is a _ladder_, a tragr only comprising the straight line graphs of its source and target string, each the reverse of the other. For the sequential
composition with a ladder on either side, elision amounts to the immediate removal of (the reverse of) the ladder. Associativity holds since elision is complete (confluent and terminating) and can be postponed until after connecting the respective input and output ports, which are local and independent actions.
* The (exchange) law holds by combining the reasoning in the previous two items; combining removal of \(\epsilon\)s with elision \(\Rightarrow\) is complete and can be postponed until after redirecting the input and output edges, which are local and independent actions; 12 see Figure 3.
Footnote 12: For the reasoning to apply it is essential that _both_ sides of (exchange) law are proof terms, i.e. that sources and targets of the constituting proof terms should match appropriately so that both sides indeed yield tragrs; cf. Remark 4.
We conclude this section with showing that any tragr can be read back into a multistep reduction, by means of a procedure that we dub topological _multi_sorting, which is analogous to topological sorting but selects in each stage _all_ minimal elements, instead of just a _single_ one, cf. [29].
**Definition 7**.: _The topological multi-sorting function \(\mathsf{TS}\) mapping a tragr from \(s\) to \(t\) to a multistep reduction having \(s\) as source and \(t\) as target, is defined by induction on size and cases on its causal graph._
_If the causal graph is empty, planarity of tragrs dictates the tragr is a ladder (as in the Proof of Lemma 2; cf. the bottom-right of Appendix A), so we have \(s=t\) and may return the empty multistep \(s\)._
_If the causal graph is non-empty, let its minimal layer \(\mathsf{M}\) comprise its minimal nodes w.r.t. the partial order induced by (taking the reflexive-transitive closure of) the \(\mathsf{dag}\). To construct the multistep \(\Phi\) we juxtapose, starting from the input of the tragr, the labels (letters) of nodes in the \(\mathsf{dag}\) for \(s\) not covered by nodes in \(\mathsf{M}\), interspersed with the labels (rule symbols) of those covering nodes in \(\mathsf{M}\). Let \(s^{\prime}\) be the target of \(\Phi\), and consider the tragr obtained by replacing for every node labelled by some rule \(\rho\) in \(T\) the source \(\mathsf{dag}\) of \(\rho\) by its target \(\mathsf{dag}\). By planarity it follows (cf. the top row of Appendix A) that the resulting tragr is from \(s^{\prime}\) to \(t\). Therefore, it suffices to vertically compose \(\Phi\) with the \(\mathsf{TS}\)-image of this tragr, which exists by the IH._
_In both cases, we obtain a vertical composition of multisteps having \(s\) as source and \(t\) as target, giving rise to a multistep reduction after removing a trailing empty multistep._
**Example 8**.: _Applying topological multi-sorting \(\mathsf{TS}\) to the tragr on the right in Figure 2 gives rise to the \(6\) successive stages displayed in Appendix A._
**Remark 10**.: _Note how the planarity requirement on tragrs of Definition 5 was used in Definition 7 to guarantee that all tragrs having an empty causal graph read back as the empty multistep on their source (and target). In particular, planarity disallows'rewirings' having crossing edges._
**Lemma 3**.: \(\llbracket\rrbracket\) _after \(\mathsf{TS}\) is the identity on tragrs._
Figure 3: Tragr illustrating (exchange)
Proof.: By induction on size and cases on the causal graph of a tragr from \(s\) to \(t\).
If the causal graph is empty then by the observation in Definition 7 and Remark 10, the tragr is a ladder, \(s=t\), and it is mapped to the empty multistep \(s\). Then \(\llbracket s\rrbracket\) is _the_ ladder from \(s\) to \(s\) again, as follows by induction on the length of \(s\) and Definition 6, using (letter) / (empty) in the base case and (juxtaposition) in the induction step.
If the causal graph is not empty, then per the construction in Definition 7, it is obtained by (transitivity) from the tragr from \(s\) to \(s^{\prime}\) of its minimal layer \(\mathsf{M}\), and the tragr from \(s^{\prime}\) to \(t\) of its remaining nodes / causal graph \(\mathsf{R}\). We conclude by that the former is obtained from \(\llbracket\Phi\rrbracket\) for \(\Phi\colon s\geqslant s^{\prime}\) the multistep constructed from \(\mathsf{M}\) in Definition 6, and by the induction hypothesis for the latter. To see the former, one proceeds as for the empty causal graph additionally using (rule) in the base case.
**Remark 11**.: _In fact, any way to transform a tragr into a proof term by repeatedly decomposing the tragr by means of the inverses of (juxtaposition) and (transitivity), mirrored by composing the corresponding proof terms by means of horizontal and vertical composition respectively, and transforming the base cases (letter), (empty) and (rule) in the natural way, will preserve the result (Lemma 3). The particular such transformation \(\mathsf{TS}\) was chosen here because it does not just yield any proof term but a greedy multistep reduction, which will be essential for the unique representation purposes of the next section._
## 4 Greedy multistep reductions
We first give a standard algorithm for transforming a proof term into a permutation equivalent _greedy_ one [8], and next show there is a bijection between such greedy multistep reductions and tragrs. From this we conclude, in a semantic way, that both constitute unique representatives of permutation equivalence classes of proof terms.
We give a novel description of greediness and the greedy algorithm of [8], based on the analogy with sorting and standardisation [16, 36] in the literature. In sorting, (adjacent) _inversions_ are consecutive elements that are out-of-order, and in term rewriting, _anti-standard_ pairs [16, 21] are consecutive steps in a reduction such that the latter is outside (to the left of) the former. Such pairs of out-of-order elements are of interest since they provide a _local_ characterisation both of _being sorted_, i.e. the _absence_ of such pairs, and of bringing the list / reduction _closer to being sorted_, by _permuting_ the out-of-order pair. This makes both processes amenable to a rewriting approach, with bubblesort being an example for sorting and the extraction of the leftmost-contracted-redex being an example for standardisation [16, 14, 14, 36, 21, 6]. To make the greedy algorithm fit the mould, we define _loath_ pairs as consecutive multisteps where some rule symbol in the \(2^{\mathsf{nd}}\) is _not caused_ by the \(1^{\mathsf{st}}\), so may be permuted up front, signalling non-greediness. This is phrased in terms of residuation; see Definition 4.
**Definition 8**.: _A proof term is greedy if it is a multistep reduction without loath pairs, where a pair \(\Phi\cdot\Psi\) of consecutive multisteps is loath if there is a step \(X\) co-initial with \(\Phi\) such that \(\Phi\subseteq X\) and having residual step \(\psi:=X/\Phi\) with \(\psi\subseteq\Psi\). Swapping \(X\) for \(\Phi\cdot\Psi\) then results in \(X\cdot(\Psi/\psi)\). Exhaustive swapping followed by removing trailing empty multisteps yields a greedy decomposition._
**Example 9**.: _The multistep reduction \(\gamma^{\prime}\) is greedy, but \(\gamma\) isn't as is clear from \(\Delta\beta\cdot\overline{\alpha\alpha\Delta\beta\cdot A\Delta\beta}\cdot\beta A\Delta B\cdot\overline{\beta A\Delta B\cdot\overline{\alpha A\Delta\beta A}\cdot\Delta\beta A\Delta B}\), where we have overlined its loath pairs, and underlined the rule symbols and their left-hand sides involved in swapping. The loath pair \(A\alpha\underline{A}\Delta\beta\cdot A\Delta\beta\) swaps into \(A\alpha\underline{\beta}\cdot A\Delta\beta\underline{A}\Delta\), and \(\alpha\underline{A}\Delta\underline{A}\Delta\Delta B\cdot\Delta\Delta\Delta\) swaps into \(\alpha\underline{\beta A}\cdot\Delta\underline{A}\Delta\Delta\Delta B\). As one may verify, exhaustive swapping yields \(\gamma^{\prime}\cdot\Delta\underline{A}\Delta\underline{A}\Delta\Delta B \cdot\overline{\beta A}\Delta\Delta B\), hence a greedy decomposition of \(\gamma\) is \(\gamma^{\prime}\). Intuitively, this is as desired since \(\gamma^{\prime}\) exhibits maximal concurrency while performing the same tasks performed in \(\gamma\)._
By standard residual theory [36, Chapter 8], swapping yields a pair of consecutive multisteps permutation equivalent to the original pair, as in the example. Moreover, the size (number of rule symbols) of the \(2^{\text{nd}}\) multistep decreases per construction, so swapping decreases the _Sekar-Ramakrishnan measure_[36, Definition 8.5.17], measuring a multistep reduction by the lexicographic product of the sizes of the multisteps in it from tail to head. Since if necessary we may first transform a proof term into a permutation equivalent (single step hence multistep) _reduction_ by the Logicality Lemma 1, we have:
**Lemma 4**.: _A proof term can be transformed into a permutation equivalent greedy multistep reduction._
**Remark 12**.: _To give an idea how residual theory [36, Table 8.5 in Section 8.7.3] may be employed to show swapping preserves permutation equivalence, first note that \(\Phi\subseteq X\) entails \(\Phi/X\) is an empty multistep. Therefore, by commutativity of join \(X\equiv X\cdot(\Phi/X)\equiv\Phi\cdot(X/\Phi)\). Similarly, \(X/\Phi=\psi\subseteq\Psi\) entails \(\Psi\equiv(X/\Phi)\cdot(\Psi/(X/\Phi))\). By combining both \(\Phi\cdot\Psi\equiv\Phi\cdot(X/\Phi)\cdot(\Psi/(X/\Phi))\equiv X\cdot(\Psi/( X/\Phi))=X\cdot(\Psi/\psi)\)._
**Remark 13**.: _An efficient procedure for searching for loath pairs can be based on the observation that due to linearity of string rewrite systems, an occurrence of either a source or target of a rule can be identified with a pattern in the sense of [36, Definition 8.6.21], i.e. with a convex set of positions in the tree of the string having vertices as boundary. Following the main idea of [25], to see whether \(\Phi\cdot\Psi\) is loath, it therefore suffices to check whether each pattern of a source of a rule occurring in \(\Psi\) has overlap with some target of a rule occurring in \(\Phi\). Since a pattern in a string simply is an interval, characterised by the two vertices constituting its boundary, a single top-down pass through both string-trees checking disjointness of intervals via their boundaries, suffices. If for some pattern there is no overlap, we obtain a loath pair by setting \(X\) to \(\Phi\) in which the pattern was replaced by the rule. For example, see Figure 4,
_using underlining to indicate patterns, that \(\alpha\)\(\alpha\)\(\alpha\)\(\beta
We show that when computing the \(\mathsf{TS}\)-image of a tragr, consecutive stages yield multisteps that are not loath pairs, by induction on the number of stages. There is only something to show when there is more than one stage. So suppose \(\mathsf{TS}\) yields a composition \(\Phi\cdot\gamma\) with \(\Phi\) obtained from the minimal layer of rule nodes \(\mathsf{M}\) of the tragr, and \(\gamma\) from its remaining nodes / causal graph \(\mathsf{R}\), non-empty by assumption. By the IH \(\gamma\) is greedy, and non-empty so has some first multistep, say \(\Psi\), constructed from the minimal layer, say \(\mathsf{N}\), of \(\mathsf{R}\). Per definition of \(\mathsf{TS}\) each of the nodes in \(\mathsf{N}\) is reachable from some node in \(\mathsf{M}\). Since there are no edges between the nodes in a single layers, this entails that for each of the nodes in \(\mathsf{N}\) there is an edge to it from some node in \(\mathsf{M}\). As a consequence, cf. Remark 13, the corresponding pair \(\Phi\cdot\Psi\) of consecutive multisteps is greedy / not loath. Thus, \(\llbracket\rrbracket\) maps to greedy multistep reductions.
* That \(\mathsf{TS}\) after \(\llbracket\rrbracket\) is the identity on greedy multistep reductions, we show by induction on the length of such a reduction. We employ the no(ta)tions of Definition 7, in particular we employ \(\mathsf{M}\) to denote the layer of minimal elements of (the causal graph of) a tragr. For the empty and single-multistep reductions this is trivial. Otherwise, the reduction has shape \(\Phi\cdot\gamma\). By definition \(\llbracket\Phi\cdot\gamma\rrbracket\) is the serial composition of \(\llbracket\Phi\rrbracket\) and \(\llbracket\gamma\rrbracket\) and we claim that by greediness the steps in the minimal layer of the tragr \(\llbracket\Phi\cdot\gamma\rrbracket\) are those of \(\llbracket\Phi\rrbracket\), i.e. \(\mathsf{M}(\llbracket\Phi\cdot\gamma\rrbracket)=\mathsf{M}(\llbracket\Phi \rrbracket)\). Then, \(\Phi\) is the result of the first stage of \(\mathsf{TS}\) and \(\mathsf{TS}(\llbracket\Phi\cdot\gamma\rrbracket)=\Phi\cdot\mathsf{TS}( \llbracket\gamma\rrbracket)=\Phi\cdot\gamma\) by the IH for \(\gamma\). It remains to prove the claim that \(\mathsf{M}(\llbracket\Phi\cdot\gamma\rrbracket)=\mathsf{M}(\llbracket\Phi \rrbracket)\) for a greedy multistep reduction of shape \(\Phi\cdot\gamma\), so with \(\gamma\) non-empty. Since \(\mathsf{M}(\llbracket\Phi\cdot\gamma\rrbracket)\supseteq\mathsf{M}(\llbracket \Phi\rrbracket)\) trivially holds, for arbitrary multistep reductions, suppose for a proof by contradiction that \(\mathsf{M}(\llbracket\Phi\cdot\gamma\rrbracket)\subseteq\mathsf{M}( \llbracket\Phi\rrbracket)\) does not hold, for \(\Phi\cdot\gamma\) of minimal length. Then there must be some node in \(\mathsf{M}(\llbracket\gamma\rrbracket)\) in \(\mathsf{M}(\llbracket\Phi\cdot\gamma\rrbracket)\), per construction of \(\llbracket\Phi\cdot\gamma\rrbracket\) as the serial composition of \(\llbracket\Phi\rrbracket\) and \(\llbracket\gamma\rrbracket\). By minimality this node must in fact be in \(\mathsf{M}(\llbracket\Psi\rrbracket)\) for \(\Psi\) the first multistep of \(\gamma\), with the node corresponding to, say, step \(\psi\subseteq\Psi\). But then \(\Phi\cdot\Psi\) would be a loath pair, as it allows swapping the join of \(\Phi\) with \(\psi\).14 This contradicts the assumed greediness of \(\Phi\cdot\gamma\). Footnote 14: More precisely, the join of \(\Phi\) with the origin of \(\psi\) along the converse of \(\Phi\), which is a step acting on an interval in the dag of the source string of \(\Phi\), as observed in Remark 13. Note our reasoning would fail if rules were allowed to have empty left- or right-hand sides: If \(\psi\) were due to a rule with an empty left-hand side, or if \(\Phi\) were to contain a rule with an empty right-hand side, then \(X\) might not be swappable.
* The converse direction, that \(\llbracket\rrbracket\) after \(\mathsf{TS}\) is the identity on tragrs, follows from Lemma 3.
We can now establish our main result, that one may compute a greedy multistep reduction, unique modulo permutation equivalence, for any proof term by first evaluating into its tragr / causal graph (using \(\llbracket\rrbracket\)), followed by the topological multi-sort (using \(\mathsf{TS}\)) yielding the greedy multistep reduction.
**Theorem 2**.: _For every proof term \(\gamma\), there exists a unique greedy multistep reduction \(\gamma^{\prime}\) such that \(\gamma\equiv\gamma^{\prime}\)._
Proof.: Lemma 4 shows existence. To show uniqueness, consider greedy multistep reductions \(\gamma^{\prime}\) and \(\gamma^{\prime\prime}\) both permutation equivalent to \(\gamma\). By Lemma 2, \(\llbracket\gamma^{\prime}\rrbracket\) and \(\llbracket\gamma^{\prime\prime}\rrbracket\) are the same tragr. Therefore, \(\gamma^{\prime}=\mathsf{TS}(\llbracket\gamma^{\prime}\rrbracket)=\mathsf{ TS}(\llbracket\gamma^{\prime\prime}\rrbracket)=\gamma^{\prime\prime}\) by \(\mathsf{TS}\) being inverse to \(\llbracket\rrbracket\) on greedy multistep reductions by Theorem 1..
**Remark 14**.:
* _The proof only employs one direction (the first item in the proof) of Theorem_ 1_._
* _As a consequence, using that the greedy multistep reductions_ are _the normal forms w.r.t. swapping, we have that swapping is confluent on multistep reductions. This could alternatively be established via Newman's Lemma, using that swapping is terminating and showing local confluence._
**Example 10**.: _The greedy multistep reduction \(\gamma^{\prime}\) is the unique representative of the permutation equivalence class of \(\gamma\). Both are mapped to the tragr on the right in Figure 2 by the proof term algebra \(\llbracket\rrbracket\), and topological multi-sorting._
## 5 Conclusions
We have shown that Levy's notion of permutation equivalence [19] as known from _term rewriting_[36] corresponds, after specialising it to _string rewriting_, to the notion of causal equivalence as employed by Wolfram in his _physics_ project [39, 40]. This we achieved by introducing trace graphs, tragrs refining Wolfram's notion of causal graph, as representatives of permutation equivalence classes of reductions. Representing reductions as terms themselves, so-called proof terms [22], allowed us to specify the representation map, from reductions to tragrs, effectively by means of a (proof term) algebra that models permutation equivalence. To show that representatives are unique, we gave a map back from tragrs to so-called greedy multistep reductions as known from Dehornoy's work in _algebra_[8], using a topological multi-sorting procedure, showing both maps to be inverse to each other.
The study of _causality_ spans all the sciences, cf. [28], hence it is no surprise that it has been discussed and mathematically modelled in many ways; to mention a few [23, 19, 5, 34, 38, 39, 13, 22, 12, 18, 17, 20, 19, 15, 10, 7, 30]. From that point of view our results can be seen as linking models of causality known from _rewriting_[19] (permutation equivalence), _algebra_[8] (greedy multistep reductions) and _physics_[40] (causal graphs), respectively. In general, we think that linking different perspectives on the _same_ notion, as we did for causal equivalence here but also before in [36, Chapter 8], is important. Therefore we find it surprising that in the literature mentioned cross-references beyond the borders of the specific field (rewriting, algebra, physics, category theory, proof theory, concurrency theory,...) of a paper, are few and far between. We hope that our short paper can contribute to creating at least some awareness of that, in our opinion unfortunate, situation for causal equivalence, and the interest in overcoming it.
**Remark 15**.: _Not being a physicist, I am not in a position to assess the potential relation of the various causal models to physics, cf. [40, Section 8]. However, I do think it already methodologically interesting to see how far one can push causal models, which phenomena can or cannot be reconstructed from them. For example, in [9] it is argued that time cannot be reconstructed from purely causal models (such as rewriting), as the latter fail to explain synchronicity.16 That makes one wonder whether adding natural structure (think of a notion of strategy or a metric) could overcome that, could make time emergent._
Footnote 16: Roughly: Why are two identical but independent clocks seen to be in the same state?
The concepts and techniques developed and employed here are simple and natural.17 For instance, topological multi-sorting could be easily presented in undergraduate Discrete Mathematics or Data Structures and Algorithms courses. We view this as a strength rather than as a weakness. Only _because_ the results are simple and natural do we entertain the hope to extend them to more complex cases. In particular, we hope that the results developed here for _string_ rewriting can serve as a stepping stone for tackling the problem, left open in [36, Chapter 8] (see Remark 1), of giving a characterisation of permutation equivalence for _term_ rewriting18 by _tracing_. Interesting (classes of) term rewrite systems to target are:
Footnote 17: This could be the reason for the observed disjointedness of the literature on causal equivalence: natural notions are likely to be developed autonomously multiple times. _Therefore_ we think it worthwhile to (try to) link such notions.
Footnote 18: But also other types of _structured_ rewrite systems such as graph rewrite systems come to mind.
* _Linear_ systems. In the first-order case linearity can be brought about by requiring for each rule that every variable occurs either zero or one time in _both_ its sides. The characterisation should cover not only string rewrite systems via their _monadic_ embedding (see Remark 2), and _chemical_ systems [36, Example 8.6.1], but also rules like \(x+0\to x\).19 In the higher-order case [36, Chapter 11] linearity could be brought about by restricting to a _linear_ substitution calculus [33].
* _Non-linear_ systems. As illustrated in the introduction, the problem in the non-linear case is that the replicating effect of a non-linear term rewrite step is not represented in the term structure, so cannot be traced. One could hope this can be overcome by reifying replication (so it becomes _traceable_), by making the _substitution calculus_[27] suitably explicit. _Sharing graphs_ as known from the theory of optimal reduction [2, 26] suggest themselves, in both the first- and higher-order cases, with Combinatory Logic and the \(\lambda\beta\)-calculus, respectively, concrete systems to try potential characterisations on.
All our results are effective and constructive, but we did not study their complexity. However, we do hope that the concrete representations of permutation equivalence classes by means of tragrs (certain graphs) and greedy multistep reduction (certain terms) could be useful for such, cf. [8].
AcknowledgmentsWe thank Jan Willem Klop for making the initial remark, Nao Hirokawa and the reviewers and participants of Termgraph 2022 in Haifa for feedback, and the reviewers for thorough reading and many helpful comments and suggestions.
|
2307.07670 | Efficient Adversarial Attacks on Online Multi-agent Reinforcement
Learning | Due to the broad range of applications of multi-agent reinforcement learning
(MARL), understanding the effects of adversarial attacks against MARL model is
essential for the safe applications of this model. Motivated by this, we
investigate the impact of adversarial attacks on MARL. In the considered setup,
there is an exogenous attacker who is able to modify the rewards before the
agents receive them or manipulate the actions before the environment receives
them. The attacker aims to guide each agent into a target policy or maximize
the cumulative rewards under some specific reward function chosen by the
attacker, while minimizing the amount of manipulation on feedback and action.
We first show the limitations of the action poisoning only attacks and the
reward poisoning only attacks. We then introduce a mixed attack strategy with
both the action poisoning and the reward poisoning. We show that the mixed
attack strategy can efficiently attack MARL agents even if the attacker has no
prior information about the underlying environment and the agents' algorithms. | Guanlin Liu, Lifeng Lai | 2023-07-15T00:38:55Z | http://arxiv.org/abs/2307.07670v1 | # Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning
###### Abstract
Due to the broad range of applications of multi-agent reinforcement learning (MARL), understanding the effects of adversarial attacks against MARL model is essential for the safe applications of this model. Motivated by this, we investigate the impact of adversarial attacks on MARL. In the considered setup, there is an exogenous attacker who is able to modify the rewards before the agents receive them or manipulate the actions before the environment receives them. The attacker aims to guide each agent into a target policy or maximize the cumulative rewards under some specific reward function chosen by the attacker, while minimizing the amount of manipulation on feedback and action. We first show the limitations of the action poisoning only attacks and the reward poisoning only attacks. We then introduce a mixed attack strategy with both the action poisoning and the reward poisoning. We show that the mixed attack strategy can efficiently attack MARL agents even if the attacker has no prior information about the underlying environment and the agents' algorithms.
## 1 Introduction
Recently reinforcement learning (RL), including single agent RL and multi-agent RL (MARL), has received significant research interests, partly due to its many applications in a variety of scenarios such as the autonomous driving, traffic signal control, cooperative robotics, economic policy-making, and video games [1, 2, 3, 4, 5, 6, 7, 8]. In MARL, at each state, each agent takes its own action, and these actions jointly determine the next state of the environment and the reward of each agent. The rewards may vary for different agents. In this paper, we focus on the model of Markov Games (MG) [9]. In this class of problems, researchers typically consider learning objectives such as Nash equilibrium (NE), correlated equilibrium (CE) and coarse correlated equilibrium (CCE) etc. A recent line of works provide non-asymptotic guarantees for learning NE, CCE or CE under different assumptions [10, 11, 12, 13, 14, 15, 16].
As RL models, including single agent RL and MARL, are being increasingly used in safety critical and security related applications, it is critical to developing trustworthy RL systems. As a first step towards this important goal, it is essential to understand the effects of adversarial attacks on RL systems. Motivated by this, there have been many recent works that investigate adversarial attacks on single agent RL under various settings [17, 18, 19, 20, 21, 22, 23].
On the other hand, except the ones that will be reviewed below, existing work on adversarial attacks on MARL is limited. In this paper, we aim to fill in this gap and systematically investigate the impact of adversarial attacks on online MARL. We consider a setting in which there is an attacker sits between the agents and the environment, and can monitor the states, the actions of the agents and the reward signals from the environment. The attacker is able to manipulate the feedback or action of the agents. The objective of the MARL learner is to learn an equilibrium. The attacker's goal is to force the agents to learn a target policy or to maximize the cumulative rewards under some specific reward function chosen by the attacker, while minimizing the amount of the manipulation on feedback and action. Our contributions are follows.
### Contributions
1) We propose an adversarial attack model in which the attacker aims to force the agent to learn a policy selected by the attacker (will be called target policy in the sequel) or to maximize the cumulative rewards under some specific reward function chosen by the attacker. We use loss and cost functions to evaluate the effectiveness of the adversarial attack on MARL agents. The cost is the cumulative sum of the action manipulations and the reward manipulations. If the attacker aims to force the agents to learn a target policy, the loss is the cumulative number of times when the agent does not follow the target policy. Otherwise, the loss is the regret to the policy that maximizes the attacker's rewards. It is clearly of interest to minimize both the loss and cost.
2) We study the attack problem in three different settings: the white-box, the gray-box and the black-box settings. In the white-box setting, the attacker has full information of the underlying environment. In the gray-box setting, the attacker has no prior information about the underlying environment and the agents' algorithm, but knows the target policy that maximizes its cumulative rewards. In the black-box setting, the target policy is also unknown for the attacker.
3) We show that the effectiveness of action poisoning only attacks and reward poisoning only attacks is limited. Even in the white-box setting, we show that there exist some MGs under which no action poisoning only Markov attack strategy or reward poisoning only Markov attack strategy can be efficient and successful. At the same time, we provide some sufficient conditions under which the action poisoning only attacks or the reward poisoning only attacks can efficiently attack MARL algorithms. Under such conditions, we introduce an efficient action poisoning attack strategy and an efficient reward poisoning attack strategy, and analyze their cost and loss.
4) We introduce a mixed attack strategy in the gray-box setting and an approximate mixed attack strategy in the black-box setting. We show that the mixed attack strategy can force any sub-linear-regret MARL agents to choose actions according to the target policy specified by the attacker with sub-linear cost and sub-linear loss. We further investigate the impact of the approximate mixed attack strategy attack on V-learning [15], a simple, efficient, decentralized algorithm for MARL.
### Related works
**Attacks on Single Agent RL:** Adversarial attacks on single agent RL have been studied in various settings [17; 18; 19; 20; 21; 22; 23]. For example, [17; 20; 24] study online reward poisoning attacks in which the attacker could manipulate the reward signal before the agent receives it. [25] studies online action poisoning attacks in which the attacker could manipulate the action signal before the environment receives it. [24] studies the limitations of reward only manipulation or action only manipulation in single-agent RL.
**Attacks on MARL:**[26] considers a game redesign problem where the designer knows the full information of the game and can redesign the reward functions. The proposed redesign methods can incentivize players to take a specific target action profile frequently with a small cumulative design cost. [27; 28] study the poisoning attack on multi-agent reinforcement learners, assuming that the attacker controls one of the learners. [29] studies the reward poisoning attack on offline multi-agent reinforcement learners.
**Defense Against Attacks on RL:** There is also recent work on defending against adversarial attacks on RL [30; 31; 32; 33; 34; 35]. These work focus on the single-agent RL setting where an adversary can corrupt the reward and state transition.
## 2 Problem setup
### Definitions
To increase the readability of the paper, we first introduce some standard definitions related to MARL that will be used throughout of the paper. These definitions mostly follow those defined in [15]. We denote a tabular episodic MG with \(m\) agents by a tuple \(\text{MG}(\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{m},H,P,\{R_{i}\}_{i=1}^{m})\), where \(\mathcal{S}\) is the state space with \(|\mathcal{S}|=S\), \(\mathcal{A}_{i}\) is the action space for the \(i^{\text{th}}\) agent with \(|\mathcal{A}_{i}|=A_{i}\), \(H\in\mathbb{Z}^{+}\) is the number of steps in each episode. We let \(\mathbf{a}:=(a_{1},\cdots,a_{m})\) denote the joint action of all the \(m\) agents and \(\mathcal{A}:=\mathcal{A}_{i}\times\cdots\times\mathcal{A}_{m}\) denote the joint action space. \(P=\{P_{h}\}_{h\in[H]}\) is a collection of transition matrices. \(P_{h}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the probability transition function that maps state-action-state pair to a probability, \(R_{i,h}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) represents the reward function for the \(i^{\text{th}}\) agent in the step \(h\). In this paper, the probability transition functions and the reward functions can be different at different steps. We note that this MG model incorporates both cooperation and competition because the reward functions of different agents can be arbitrary.
**Interaction protocol:** The agents interact with the environment in a sequence of episodes. The total number of episodes is \(K\). In each episode \(k\in[K]\) of MG, the initial states \(s_{1}\) is generated randomly by a distribution \(P_{0}(\cdot)\). Initial states
may be different between episodes. At each step \(h\in[H]\) of an episode, each agent \(i\) observes the state \(s_{h}\) and chooses an action \(a_{i,h}\) simultaneously. After receiving the action, the environment generates a random reward \(r_{i,h}\in[0,1]\) for each agent \(i\) derived from a distribution with mean \(R_{i,h}(s_{h},\mathbf{a}_{h})\), and transits to the next state \(s_{h+1}\) drawn from the distribution \(P_{h}(\cdot|s_{h},\mathbf{a}_{h})\). \(P_{h}(\cdot|s,\mathbf{a})\) represents the probability distribution over states if joint action \(\mathbf{a}\) is taken for state \(s\). The agent stops interacting with environment after \(H\) steps and starts another episode. At each time step, the agents may observe the actions played by other agents.
**Policy and value function:** A Markov policy takes actions only based on the current state. The policy \(\pi_{i,h}\) of agent \(i\) at step \(h\) is expressed as a mappings \(\pi_{i,h}:\mathcal{S}\rightarrow\Delta_{\mathcal{A}_{i}}\). \(\pi_{i,h}(a_{i}|s)\) represents the probability of agent \(i\) taking action \(a_{i}\) in state \(s\) under policy \(\pi_{i}\) at step \(h\). A deterministic policy is a policy that maps each state to a particular action. For notation convenience, for a deterministic policy \(\pi_{i}\), we use \(\pi_{i,h}(s)\) to denote the action \(a_{i}\) which satisfies \(\pi_{i,h}(a_{i}|s)=1\). We denote the product policy of all the agents as \(\pi:=\pi_{1}\times\cdots\times\pi_{m}\). We also denote \(\pi_{-i}:=\pi_{1}\times\cdots\times\pi_{i-1}\times\pi_{i+1}\times\cdots\times \pi_{m}\) to be the product policy excluding agent \(i\). If every agent follows a deterministic policy, the product policy of all the agents is also deterministic. We use \(V^{\pi}_{i,h}:\mathcal{S}\rightarrow\mathbb{R}\) to denote the value function of agent \(i\) at step \(h\) under policy \(\pi\) and define \(V^{\pi}_{i,h}(s):=\mathbb{E}\left[\sum_{h^{\prime}=h}^{H}r_{i,h^{\prime}}|s_{ h}=s,\pi\right]\). Given a policy \(\pi\) and step \(h\), the \(i^{\text{th}}\) agent's \(Q\)-function \(Q^{\pi}_{i,h}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) of a state-action pair \((s,\mathbf{a})\) is defined as: \(Q^{\pi}_{i,h}(s,\mathbf{a})=\mathbb{E}\left[\sum_{h^{\prime}=h}^{H}r_{i,h^{\prime} }|s_{h}=s,\mathbf{a}_{h}=\mathbf{a},\pi\right]\).
**Best response:** For any policy \(\pi_{-i}\), there exists a best response of agent \(i\), which is a policy that achieves the highest cumulative reward for itself if all other agents follow policy \(\pi_{-i}\). We define the best response of agent \(i\) towards policy \(\pi_{-i}\) as \(\mu^{\dagger}(\pi_{-i})\), which satisfies \(\mu^{\dagger}(\pi_{-i}):=\arg\max_{\pi_{i}}V^{\pi_{i}^{\perp}\pi_{-i}}_{i,h}(s)\) for any state \(s\) and any step \(h\). We denote \(\max_{\pi_{i}}V^{\pi_{i}^{\perp}\pi\times\pi_{-i}}_{i,h}(s)\) as \(V^{\dagger,\pi_{-i}}_{i,h}(s)\) for notation simplicity. By its definition, we know that the best response can always be achieved by a deterministic policy.
Nash Equilibrium (NE) is defined as a product policy where no agent can improve his own cumulative reward by unilaterally changing his strategy.
**Nash Equilibrium (NE) [15]:** A product policy \(\pi\) is a NE if for all initial state \(s\), \(\max_{i\in[m]}(V^{\dagger,\pi_{-i}}_{i,1}(s)-V^{\pi}_{i,1}(s))=0\) holds. A product policy \(\pi\) is an \(\epsilon\)-approximate Nash Equilibrium if for all initial state \(s\), \(\max_{i\in[m]}(V^{\dagger,\pi_{-i}}_{i,1}(s)-V^{\pi}_{i,1}(s))\leq\epsilon\) holds.
**General correlated policy:** A general Markov correlated policy \(\pi\) is a set of \(H\) mappings \(\pi:=\{\pi_{h}:\Omega\times\mathcal{S}\rightarrow\Delta_{\mathcal{A}}\}_{h\in[H]}\). The first argument of \(\pi_{h}\) is a random variable \(\omega\in\Omega\) sampled from some underlying distributions. For any correlated policy \(\pi=\{\pi_{h}\}_{h\in[H]}\) and any agent \(i\), we can define a marginal policy \(\pi_{-i}\) as a set of \(H\) maps \(\pi_{i}=\{\pi_{h,-i}:\Omega\times\mathcal{S}\rightarrow\Delta_{\mathcal{A}_{-i }}\}_{h\in[H]}\), where \(\mathcal{A}_{-i}=\mathcal{A}_{1}\times\cdots\times\mathcal{A}_{i-1}\times \mathcal{A}_{i+1}\times\cdots\times\mathcal{A}_{m}\). It is easy to verify that a deterministic joint policy is a product policy. The best response value of agent \(i\) towards policy \(\pi_{-i}\) as \(\mu^{\dagger}(\pi_{-i})\), which satisfies \(\mu^{\dagger}(\pi_{-i}):=\arg\max_{\pi_{i}}V^{\pi_{i}\times\pi_{-i}}_{i,h}(s)\) for any state \(s\) and any step \(h\).
**Coarse Correlated Equilibrium (CCE)[15]:** A correlated policy \(\pi\) is an CCE if for all initial state \(s\), \(\max_{i\in[m]}(V^{\dagger,\pi_{-i}}_{i,1}(s)-V^{\pi}_{i,1}(s))=0\) holds. A correlated policy \(\pi\) is an \(\epsilon\)-approximate CCE if for all initial state \(s\), \(\max_{i\in[m]}(V^{\dagger,\pi_{-i}}_{i,1}(s)-V^{\pi}_{i,1}(s))\leq\epsilon\) holds.
**Strategy modification:** A strategy modification \(\phi_{i}\) for agent \(i\) is a set of mappings \(\phi_{i}:=\{(\mathcal{S}\times\mathcal{A})^{h-1}\times\mathcal{S}\times\mathcal{ A}_{i}\rightarrow\mathcal{A}_{i}\}_{h\in[H]}\). For any policy \(\pi_{i}\), the modified policy (denoted as \(\phi_{i}\circ\pi_{i}\)) changes the action \(\pi_{i,h}(\omega,s)\) under random sample \(\omega\) and state \(s\) to \(\phi_{i}((s_{1},\mathbf{a}_{1},\ldots,s_{h},a_{i,h}),\pi_{i,h}(\omega,s))\). For any joint policy \(\pi\), we define the best strategy modification of agent \(i\) as the maximizer of \(\max_{\phi_{i}}V^{(\phi_{i}\circ\pi_{i})\odot\pi_{-i}}_{i,1}(s)\) for any initial state \(s\).
**Correlated Equilibrium (CE)[15]:** A correlated policy \(\pi\) is an CE if for all initial state \(s\), \(\max_{i\in[m]}\max_{\phi_{i}}(V^{(\phi_{i}\circ\pi_{i})\odot\pi_{-i}}_{i,1}(s)-V^ {\pi}_{i,1}(s))=0\). A correlated policy \(\pi\) is an \(\epsilon\)-approximate CE if for all initial state \(s\), \(\max_{i\in[m]}\max_{\phi_{i}}(V^{(\phi_{i}\circ\pi_{i})\odot\pi_{-i}}_{i,1}(s)-V^ {\pi}_{i,1}(s))\leq\epsilon\) holds.
In Markov games, it is known that an NE is an CE, and an CE is an CCE.
**Best-in-hindsight Regret:** Let \(\pi^{k}\) denote the product policy deployed by the agents for each episode \(k\). After \(K\) episodes, the best-in-hindsight regret of agent \(i\) is defined as \(\text{Reg}_{i}(K,H)=\max_{\pi^{\prime}_{i}}\sum_{k=1}^{K}[V^{\pi^{\prime}_{i}, \pi^{k}_{-i}}_{i,1}(s^{k}_{1})-V^{\pi^{k}}_{i,1}(s^{k}_{1})]\).
### Poisoning attack setting
We are now ready to introduce the considered poisoning attack setting, in which there is an attacker sits between the agents and the environment. The attacker can monitor the states, the actions of the agents and the reward signals from the environment. Furthermore, the attacker can override actions and observations of agents. In particular, at each episode \(k\) and step \(h\), after each agent \(i\) chooses an action \(a_{i,h}^{k}\), the attacker may change it to another action \(\widetilde{a}_{i,h}^{k}\in\mathcal{A}_{i}\). If the attacker does not override the actions, then \(\widetilde{a}_{i,h}^{k}=a_{i}\). When the environment receives \(\widetilde{a}_{h}^{k}\), it generates random rewards \(r_{i,h}^{k}\) with mean \(R_{i,h}(s_{h}^{k},\widetilde{\mathbf{a}}_{h}^{k})\) for each agent \(i\) and the next state \(s_{h+1}^{k}\) is drawn from the distribution \(P_{h}(\cdot|s_{h}^{k},\widetilde{\mathbf{a}}_{h}^{k})\). Before each agent \(i\) receives the reward \(r_{i,h}^{k}\), the attacker may change it to another reward \(\widetilde{r}_{i,h}^{k}\). Agent \(i\) receives the reward \(\widetilde{r}_{i,h}^{k}\) and the next state \(s_{h+1}^{k}\) from the environment. Note that agent \(i\) does not know the attacker's manipulations and the presence of the attacker and hence will still view \(\widetilde{r}_{i,h}^{k}\) as the reward and \(s_{h+1}^{k}\) as the next state generated from state-action pair \((s_{h}^{k},\mathbf{a}_{h}^{k})\).
In this paper, we call an attack as _action poisoning only attack_, if the attacker only overrides the action but not the rewards. We call an attack as _reward poisoning only attack_ if the attacker only overrides the rewards but not the actions. In addition, we call an attack as _mixed attack_ if the attack can carry out both action poisoning and reward poisoning attacks simultaneously.
The goal of the MARL learners is to learn an equilibrium. On the other hand, the attacker's goal is to either force the agents to learn a target policy \(\pi^{\dagger}\) of the attacker's choice or to force the agents to learn a policy that maximizes the cumulative rewards under a specific reward function \(R_{\dagger,h}:\mathcal{S}\times\mathcal{A}\rightarrow(0,1]\) chosen by the attacker. We note that this setup is very general. Different choices of \(\pi^{\dagger}\) or \(R_{\dagger,h}\) could lead to different objectives. For example, if the attacker aims to reduce the benefit of the agent \(i\), the attacker's reward function \(R_{\dagger,h}\) can be set to \(1-R_{i,h}\), or choose a target policy \(\pi^{\dagger}\) that is detrimental to the agent \(i\)'s reward. If the attacker aims to maximize the total rewards of a subset of agents \(\mathcal{C}\), the attacker's reward function \(R_{\dagger,h}\) can be set to \(\sum_{i\in C}R_{i,h}\), or choose a target policy \(\pi^{\dagger}=\arg\max\sum_{i\in C}V_{i,1}^{\pi}(s_{1})\) that maximizes the total rewards of agents in \(\mathcal{C}\). We assume that the target policy \(\pi^{\dagger}\) is deterministic and \(R_{i,h}(s,\pi^{\dagger}(s))>0\). We measure the performance of the attack over \(K\) episodes by the total attack cost and the attack loss. Set \(\mathbbm{1}(\cdot)\) as the indicator function. The attack cost over \(K\) episodes is defined as \(\text{Cost}(K,H)=\sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{i=1}^{m}\Big{(}1(\widetilde{ a}_{i,h}^{k}\neq a_{i,h}^{k})+|\widetilde{r}_{i,h}^{k}-r_{i,h}^{k}|\Big{)}\).
There are two different forms of attack loss based on the different goals of the attacker.
If the attacker's goal is to force the agents to learn a target policy \(\pi^{\dagger}\), the attack loss over \(K\) episodes is defined as \(\text{Loss1}(K,H)=\sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{i=1}^{m}\mathbbm{1}\left(a _{i,h}^{k}\neq\pi_{i,h}^{\dagger}(s_{i,h}^{k})\right)\).
If the attacker's goal is to force the agents to maximize the cumulative rewards under some specific reward function \(R_{\dagger}\) chosen by the attacker, the attack loss over \(K\) episodes is defined as \(\text{Loss2}(K,H)=\sum_{k=1}^{K}[V_{\dagger,1}^{\pi^{*}}(s_{1}^{k})-V_{\dagger,1 }^{\pi^{k}}(s_{1}^{k})]\). Here, \(V_{\dagger,1}^{\pi}(s)\) is the expected cumulative rewards in state \(s\) based on the attacker's reward function \(R_{\dagger}\) under product policy \(\pi\) and \(V_{\dagger,1}^{\pi^{*}}(s)=\max_{\pi}V_{\dagger,1}^{\pi}(s)\). \(\pi^{k}\) denote the product policy deployed by the agents for each episode \(k\). \(\pi^{*}\) is the optimal policy that maximizes the attacker's cumulative rewards. We have \(\text{Loss2}(K,H)\leq H*\text{Loss1}(K,H)\).
Denote the total number of steps as \(T=KH\). In the proposed poisoning attack problem, we call an attack strategy _successful_ if the attack loss of the strategy scales as \(o(T)\). Furthermore, we call an attack strategy _efficient and successful_ if both the attack cost and attack loss scale as \(o(T)\).
The attacker aims to minimize both the attack cost and the attack loss, or minimize one of them subject to a constraint on the other. However, obtaining optimal solutions to these optimization problems is challenging. As the first step towards understanding the attack problem, we show the limitations of the action poisoning only or the reward poisoning only attacks and then propose a simple mixed attack strategy that is efficient and successful.
Depending on the capability of the attacker, we consider three settings: the white-box, the gray-box and the black-box settings. The table below summarizes the differences among these settings.
## 3 White-box attack strategy and analysis
In this section, to obtain insights to the problem, we consider the white-box model, in which the attacker has full information of the underlying MG \((\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{m},H,P,\{R_{i}\}_{i=1}^{m})\). Even in the white-box attack model, we show that there exist some environments where the attacker's goal cannot be achieved by reward poisoning only attacks or action
poisoning only attacks in Section 3.1. Then, in Section 3.2 and Section 3.3, we provide some sufficient conditions under which the action poisoning attacks alone or the reward poisoning attacks alone can efficiently attack MARL algorithms. Under such conditions, we then introduce an efficient action poisoning attack strategy and an efficient reward poisoning attack strategy.
### The limitations of the action poisoning attacks and the reward poisoning attacks
As discussed in Section 2, the attacker aims to force the agents to either follow the target policy \(\pi^{\dagger}\) or to maximize the cumulative rewards under attacker's reward function \(R_{\dagger}\). In the white-box poisoning attack model, these two goals are equivalent as the optimal policy \(\pi^{*}\) on the attacker's reward function \(R_{\dagger}\) can be calculated by the Bellman optimality equations. To maximize the cumulative rewards under attacker's reward function \(R_{\dagger}\) is equivalent to force the agents follow the policy \(\pi^{\dagger}=\pi^{*}\).
Existing MARL algorithms [14, 15] can learn an \(\epsilon\)-approximate {NE, CE, CCE} with \(\widetilde{\mathcal{O}}(1/\epsilon^{2})\) sample complexities. To force the MARL agents to follow the policy \(\pi^{\dagger}\), the attacker first needs to attack the agents such that the target policy \(\pi^{\dagger}\) is the unique NE in the observation of the agents. However, this alone is not enough to force the MARL agents to follow the policy \(\pi^{\dagger}\). Any other distinct policy should not be an \(\epsilon\)-approximate CCE. The reason is that, if there exists an \(\epsilon\)-approximate CCE \(\pi\) such that \(\pi(\pi^{\dagger}(s)|s)=0\) for any state \(s\), the agents, using existing MARL algorithms, may learn and then follow \(\pi\), which will lead the attack loss to be \(\mathcal{O}(T)=\widetilde{\mathcal{O}}(KH)\). Hence, we need to ensure that any \(\epsilon\)-approximate CCE stays in the neighborhood of the target policy. This requirement is equivalent to achieve the following objective: for all \(s\in\mathcal{S}\), and policy \(\pi\),
\[\max_{i\in[m]}(\widetilde{V}^{\dagger,\pi^{\dagger}_{i-1}}_{i,1} (s)-\widetilde{V}^{\pi^{\dagger}_{i}}_{i,1}(s))=0; \tag{1}\] \[\text{if }\pi\text{ is a product policy and }\pi\neq\pi^{\dagger},\text{ then }\max_{i\in[m]}( \widetilde{V}^{\dagger,\pi_{-i}}_{i,1}(s)-\widetilde{V}^{\pi}_{i,1}(s))>0;\] \[\text{if }\pi(\pi^{\dagger}(s^{\prime})|s^{\prime})=0\text{ for all }s^{\prime},\text{ then }\max_{i\in[m]}( \widetilde{V}^{\dagger,\pi_{-i}}_{i,1}(s)-\widetilde{V}^{\pi}_{i,1}(s))>\epsilon,\]
where \(\widetilde{V}\) is the expected reward based on the post-attack environments.
We now investigate whether there exist efficient and successful attack strategies that use action poisoning alone or reward poisoning alone. We first show that the power of action poisoning attack alone is limited.
**Theorem 1**: _There exists a target policy \(\pi^{\dagger}\) and a MG \((\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{m},H,P,\{R_{i}\}_{i=1}^{m})\) such that no action poisoning Markov attack strategy alone can efficiently and successfully attack MARL agents by achieving the objective in (1)._
We now focus on strategies that use only reward poisoning. If the post-attack mean reward \(\widetilde{R}\) is unbounded and the attacker can arbitrarily manipulate the rewards, there always exists an efficient and successful poisoning attack strategy. For example, the attacker can change the rewards of non-target actions to \(-H\). However, such attacks can be easily detected, as the boundary of post-attack mean reward is distinct from the boundary of pre-attack mean reward. The following theorem shows that if the post-attack mean reward has the same boundary conditions as the pre-attack mean reward, the power of reward poisoning only attack is limited.
**Theorem 2**: _If we limit the post-attack mean reward \(\widetilde{R}\) to have the same boundary condition as that of the pre-attack mean reward \(R\), i.e. \(\widetilde{R}\in[0,1]\), there exists a MG \((\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{m},H,P,\{R_{i}\}_{i=1}^{m})\) and a target policy \(\pi^{\dagger}\) such that no reward poisoning Markov attack strategy alone can efficiently and successfully attack MARL agents by achieving the objective in (1)._
\begin{table}
\begin{tabular}{l l l l} \hline \hline & white-box attacker & gray-box attacker & black-box attacker \\ \hline MG & Has full information & No information & No information \\ \(\pi^{\dagger}\) & Can be calculated if \(R_{\dagger}\) given & Required and given & Not given \\ \(R_{\dagger}\) & Not required if \(\pi^{\dagger}\) given & Not required if \(\pi^{\dagger}\) given & Required and given \\ Loss1 & Suitable by specify \(\pi^{\dagger}\) & Suitable & Not suitable \\ Loss2 & Suitable if \(R_{\dagger}\) given & Suitable if \(R_{\dagger}\) given & Suitable \\ \hline \hline \end{tabular}
\end{table}
Table 1: Differences of the white/gray/black-box attackers
The proofs of Theorem 1 and Theorem 2 are provided in Appendix C. The main idea of the proofs is as follows. In successful poisoning attacks, the attack loss scales as \(o(T)\) so that the agents will follow the target policy \(\pi^{\dagger}\) in at least \(T-o(T)\) times. To efficiently attack the MARL agents, the attacker should avoid to attack when the agents follow the target policy. Otherwise, the poisoning attack cost will grow linearly with \(T\). The proofs of Theorem 1 and Theorem 2 proceed by constructing an MG and a target policy \(\pi^{\dagger}\) where the expected rewards under \(\pi^{\dagger}\) is always the worst for some agents if the attacker avoids to attack when the agents follow the target policy.
### White-box action poisoning attacks
Even though Section 3.1 shows that there exists MG and target policy such that the action poisoning only attacks cannot be efficiently successful, here we show that it can be efficient and successful for a class of target policies. The following condition characterizes such class of target policies.
**Condition 1**: _For the underlying environment MG \((\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{m},H,P,\{R_{i}\}_{i=1}^{m})\), the attacker's target policy \(\pi^{\dagger}\) satisfies that for any state \(s\) and any step \(h\), there exists an action \(\mathbf{a}\) such that \(V_{i,h}^{\pi^{\dagger}}(s)>Q_{i,h}^{\pi^{\dagger}}(s,\mathbf{a}),\) for any agent \(i\)._
Under Condition 1, we can find a worse policy \(\pi^{-}\) by
\[\pi_{h}^{-}(s)= \operatorname*{arg\,max}_{\mathbf{a}\in\mathcal{A}}\min_{i\in[m]} \left(V_{i,h}^{\pi^{\dagger}}(s)-Q_{i,h}^{\pi^{\dagger}}(s,\mathbf{a})\right)\ s.t.\forall i\in[m],V_{i,h}^{\pi^{ \dagger}}(s)>Q_{i,h}^{\pi^{\dagger}}(s,\mathbf{a}). \tag{2}\]
Under this condition, we now introduce an effective white-box action attack strategies: \(d\)-portion attack. Specifically, at the step \(h\) and state \(s\), if all agents pick the target action, i.e., \(\mathbf{a}=\pi_{h}^{\dagger}(s)\), the attacker does not attack, i.e. \(\widetilde{\mathbf{a}}=\mathbf{a}=\pi_{h}^{\dagger}(s)\). If some agents pick a non-target action, i.e., \(\mathbf{a}\neq\pi_{h}^{\dagger}(s)\), the \(d\)-portion attack sets \(\widetilde{\mathbf{a}}\) as
\[\widetilde{\mathbf{a}}=\begin{cases}\pi_{h}^{\dagger}(s),\text{with probability }d_{h}(s,\mathbf{a})/m\\ \pi_{h}^{-}(s),\text{with probability }1-d_{h}(s,\mathbf{a})/m,\end{cases} \tag{3}\]
where \(d_{h}(s,\mathbf{a})=m/2+\sum_{i=1}^{m}\mathbb{1}(a_{i}=\pi_{i,h}^{\dagger}(s))/2\).
**Theorem 3**: _If the attacker follows the \(d\)-portion attack strategy on the MG agents, the best response of each agent \(i\) towards the target policy \(\pi_{-i}^{\dagger}\) is \(\pi_{i}^{\dagger}\). The target policy \(\pi^{\dagger}\) is an [NE, CE, CCE] from any agent's point of view. If every state \(s\in\mathcal{S}\) is reachable at every step \(h\in[H]\) under the target policy, \(\pi^{\dagger}\) is the unique [NE, CE, CCE]._
The detailed proof can be found in Appendix D.1. Theorem 3 shows that the target policy \(\pi^{\dagger}\) is the unique {NE, CE, CCE} under the \(d\)-portion attack. Thus, if the agents follow an MARL algorithm that is able to learn an \(\epsilon\)-approximate {NE, CE, CCE}, the agents will learn a policy approximate to the target policy. We now discuss the high-level idea why the \(d\)-portion attack works. Under Condition 1, \(\pi^{-}\) is worse than the target policy \(\pi^{\dagger}\) at the step \(H\) from every agent's point of view. Thus, under the \(d\)-portion attack, the target action strictly dominates any other action at the step \(H\), and \(\pi^{\dagger}\) is the unique {NE, CE, CCE} at the step \(H\). From induction on \(h=H,H-1,\cdots,1\), we can further prove that the \(\pi^{\dagger}\) is the unique {NE, CE, CCE} at any step \(h\). We define \(\Delta_{i,h}^{\dagger-}(s)=Q_{i,h}^{\pi^{\dagger}}(s,\pi_{h}^{\dagger}(s))-Q_{i,h}^{\pi^{\dagger}}(s,\pi_{h}^{-}(s))\) and the minimum gap \(\Delta_{min}=\min_{h\in[H],s\in\mathcal{S},i\in[m]}=\Delta_{i,h}^{\dagger-}(s)\). In addition, any other distinct policy is not an \(\epsilon\)-approximate CCE with different gap \(\epsilon<\Delta_{min}/2\). We can derive upper bounds of the attack loss and the attack cost when attacking some special MARL algorithms.
**Theorem 4**: _If the best-in-hindsight regret \(\text{Reg}(K,H)\) of each agent's algorithm is bounded by a sub-linear bound \(\mathcal{R}(T)\) for any MG in the absence of attack, and \(\min_{s\in\mathcal{S},i\in[m]}\Delta_{i,h}^{\dagger-}(s)\geq\sum_{h^{\prime}= h+1}^{H}\max_{s\in\mathcal{S},i\in[m]}\Delta_{i,h^{\prime}}^{\dagger-}(s)\) holds for any \(h\in[H]\), then \(d\)-portion attack will force the agents to follow the target policy with the attack loss and the attack cost bounded by_
\[\mathbb{E}[\text{LossI}(K,H)]\leq 2m^{2}\mathcal{R}(T)/\Delta_{min},\ \mathbb{E}[\text{Cost}(K,H)]\leq 2m^{3} \mathcal{R}(T)/\Delta_{min}. \tag{4}\]
### White-box reward poisoning attacks
As stated in Theorem 2, the reward poisoning only attacks may fail, if we limit the post-attack mean reward \(\widetilde{R}\) to satisfy the same boundary conditions as those of the pre-attack mean reward \(R\), i.e. \(\widetilde{R}\in[0,1]\). However, similar to the case with action poisoning only attacks, the reward poisoning only attacks can be efficiently successful for a class of target policies. The following condition specifies such class of target policies.
**Condition 2:**_For the underlying environment MG \((\mathcal{S},\{\mathbf{A}_{i}\}_{i=1}^{m},H,P,\{R_{i}\}_{i=1}^{m})\), there exists constant \(\eta>0\) such that for any state \(s\), any step \(h\), and any agent \(i\), \((R_{i,h}(s,\pi^{\dagger}(s))-\eta)/(H-h)\geq\Delta_{R}>0\) where \(\Delta_{R}=[\max_{s\times a\times h^{\prime}}R_{i,h^{\prime}}(s,a)-\min_{s\times a \times h^{\prime}}R_{i,h^{\prime}}(s,a)]\)._
We now introduce an effective white-box reward attack strategies: \(\eta\)-gap attack. Specifically, at the step \(h\) and state \(s\), if agents all pick the target action, i.e., \(\mathbf{a}=\pi_{h}^{\dagger}(s)\), the attacker does not attack, i.e. \(\widetilde{r}_{i,h}=r_{i,h}\) for each agent \(i\). If agent \(i\) picks a non-target action, i.e., \(\mathbf{a}\neq\pi_{h}^{\dagger}(s)\), the \(\eta\)-gap attack sets \(\widetilde{r}_{i,h}=R_{i,h}(s,\pi^{\dagger}(s))-(\eta+(H-h)\Delta_{R})\mathbb{ 1}\left(a_{i}\neq\pi_{i,h}^{\dagger}(s)\right)\) for each agent \(i\). From Condition 2, we have \(\widetilde{r}_{i,h}\in[0,1]\).
**Theorem 5**: _If the attacker follows the \(\eta\)-gap attack strategy on the MG agents, the best response of each agent \(i\) towards any policy \(\pi_{-i}\) is \(\pi_{i}^{\dagger}\). The target policy \(\pi^{\dagger}\) is the {NE, CE, CCE} from any agent's point of view. If every state \(s\in\mathcal{S}\) is reachable at every step \(h\in[H]\) under the target policy, \(\pi^{\dagger}\) is the unique {NE, CE, CCE}._
The detailed proof can be found in Appendix E.1. Theorem 5 shows that the target policy \(\pi^{\dagger}\) is the unique {NE, CE, CCE} under the \(\eta\)-gap attack. Thus, if the agents follow an MARL algorithm that is able to learn an \(\epsilon\)-approximate {NE, CE, CCE}, the agents will learn a policy approximate to the target policy. Here, we discuss the high-level idea why the \(\eta\)-gap attack works. \(\Delta_{R}\) is the difference between the upper bound and the lower bound of the mean rewards. Condition 2 implies that each action is close to other actions from every agent's point of view. Although we limit the post-attack mean reward \(\widetilde{R}\) in \([0,1]\), the target policy can still appear to be optimal by making small changing to the rewards. Under Condition 2 and the \(\eta\)-gap attacks, the target actions strictly dominates any other non-target actions by at least \(\eta\) and any other distinct policy is not an \(\epsilon\)-approximate CCE with different gap \(\epsilon<\eta\). Thus, \(\pi^{\dagger}\) becomes the unique {NE, CE, CCE}. In addition, we can derive upper bounds of the attack loss and the attack cost when attacking MARL algorithms with sub-linear best-in-hindsight regret.
**Theorem 6**: _If the best-in-hindsight regret \(\text{Reg}(K,H)\) of each agent's algorithm is bounded by a sub-linear bound \(\mathcal{R}(T)\) for any MG in the absence of attack, then \(\eta\)-gap attack will force the agents to follow the target policy with the attack loss and the attack cost bounded by_
\[\mathbb{E}\left[\text{Loss}I(k,H)\right]\leq m\mathcal{R}(T)/\eta,\ \mathbb{E} \left[\text{Cost}(K,H)\right]\leq m^{2}\mathcal{R}(T)/\eta. \tag{5}\]
We note that proposed sufficient conditions (namely Condition 1 and Condition 2), under which the action poisoning only attacks or the reward poisoning only attacks can be efficient and successful, may be strict. They may not always hold in practice. This motivates us to investigate mixed attack strategy to be discussed in the sequel.
## 4 Gray-box attack strategy and analysis
In the gray-box attack setting, the attacker has no prior information about the underlying environment and the agents' algorithm, and it only observes samples generated when the agents interact with the environment. However, the attacker is given the target policy \(\pi^{\dagger}\). Since the \(\eta\)-gap reward attack strategy and \(d\)-portion action attack strategy described in Section 3 for the white-box setting rely on the information of the underlying environment, these two attack strategies are not applicable in the gray-box setting. In addition, without the information of the underlying environment, the attacker cannot check whether the action poisoning attack alone or the reward poisoning attack alone can be efficiently successful. Building on insights obtained from the white-box attack strategies, we develop a mixed attack strategy for MG in the gray-box attack setting.
In the proposed mixed attack strategy, at the step \(h\) and state \(s\), if agent \(i\) picks the target action, i.e., \(a_{i,h}=\pi_{i,h}^{\dagger}(s)\), the attacker does not override the action and the reward, i.e. \(\widetilde{a}_{i,h}=a_{i,h}\) and \(\widetilde{r}_{i,h}=r_{i,h}\). If agent \(i\) picks a non-target action, i.e., \(a_{i,h}\neq\pi_{i,h}^{\dagger}(s)\), the attacker overrides its action \(\widetilde{a}_{i,h}=\pi_{i,h}^{\dagger}(s)\) and then overrides the reward \(\widetilde{r}_{i,h}=0\).
**Theorem 7**: _If the attacker follows the mixed attack strategy the best response of each agent \(i\) towards any product policy \(\pi_{-i}\) is \(\pi_{i}^{\dagger}\). The optimal policy \(\pi^{\dagger}\) is the unique {NE, CE, CCE}._
The detailed proof can be found in Appendix F.1. Here, we discuss the high-level idea why the mixed attack works. Under the mixed attacks, the state transitions are the same over the different actions and the reward of the non-target actions is worse than the target action. Thus, in the post-attack environment, the target policy is better than any other policy from any agent's point of view, and any other distinct policy is not an \(\epsilon\)-approximate CCE with different gap \(\epsilon<R_{min}\), where \(R_{min}=\min_{h\in[H]}\min_{s\in\mathcal{S}}\min_{i\in[m]}R_{i,h}(s,\pi_{h}^{ \dagger}(s))\). Thus, \(\pi^{\dagger}\) is the unique {NE, CE, CCE}. In addition, we can derive upper bounds of the attack loss and the attack cost when attacking some special MARL algorithms.
**Theorem 8**: _If the best-in-hindsight regret \(\text{Reg}(K,H)\) of each agent's algorithm is bounded by a sub-linear bound \(\mathcal{R}(T)\) for any MG in the absence of attacks, then the mixed attacks will force the agents to follow the target policy \(\pi^{\dagger}\) with the attack loss and the attack cost bounded by_
\[\mathbb{E}\left[\text{Loss1}(K,H)\right]\leq m\mathcal{R}(T)/R_{min},\ \mathbb{E}[\text{Cost}(K,H)]\leq 2m \mathcal{R}(T)/R_{min}. \tag{6}\]
## 5 Black-box attack strategy and analysis
In the black-box attack setting, the attacker has no prior information about the underlying environment and the agents' algorithm, and it only observes the samples generated when the agents interact with the environment. The attacker aims to maximize the cumulative rewards under some specific reward functions \(R_{\dagger}\) chosen by the attacker. But unlike in the gray-box case, the corresponding target policy \(\pi^{\dagger}\) is also unknown for the attacker. After each time step, the attacker will receive the attacker reward \(r_{\dagger}\). Since the optimal (target) policy that maximizes the attacker's reward is unknown, the attacker needs to explore the environment to obtain the optimal policy. As the mixed attack strategy described in Section 4 for the gray-box setting relies on the knowledge of the target policy, it is not applicable in the black-box setting.
However, by collecting observations and evaluating the attacker's reward function and transition probabilities of the underlying environment, the attacker can perform an approximate mixed attack strategy. In particular, we propose an approximate mixed attack strategy that has two phases: the exploration phase and the attack phase. In the exploration phase, the attacker explores the environment to identify an approximate optimal policy, while in the attack phase, the attacker performs the mixed attack strategy and forces the agents to learn the approximate optimal policy. The total attack cost (loss) will be the sum of attack cost (loss) of these two phases.
```
1:Stopping time \(\tau\). Set \(B(N)=(H\sqrt{S}+1)\sqrt{\log(2AH\tau/\delta)/(2N)}\).
2:Initialize \(\overline{Q}_{\dagger,h}(s,\mathbf{a})=\overline{V}_{\dagger,h}(s,\mathbf{a})=H\), \(\underline{Q}_{\dagger,h}(s,\mathbf{a})=\underline{V}_{\dagger,h}(s,\mathbf{a})=0\), \(\overline{V}_{\dagger,H+1}=\underline{V}_{\dagger,H+1}=\mathbf{0}\), \(\Delta=\infty\), \(N_{0}(s)=N_{h}(s,\mathbf{a})=N_{h}(s,\mathbf{a},s^{\prime})=0\) and \(\hat{R}_{\dagger,h}(s,\mathbf{a})=0\) for any \((s,s^{\prime},\mathbf{a},i,h)\).
3:for episode \(k=1,\ldots,\tau\)do
4:for step \(h=H,\ldots,1\)do
5:for each \((\mathbf{s},\mathbf{a})\in\mathcal{S}\times\mathcal{A}\) with \(N_{h}(s,\mathbf{a})>0\)do
6: Update \(\overline{Q}_{\dagger,h}(s,\mathbf{a})=\min\{\hat{R}_{\dagger,h}+\hat{\mathbb{P}} _{h}\overline{V}_{\dagger,h+1}(s,\mathbf{a})+B(N_{h}(s,\mathbf{a})),H\}\) and \(\underline{Q}_{\dagger,h}(s,\mathbf{a})=\max\{\hat{R}_{\dagger,h}+\hat{\mathbb{P} }_{h}\underline{V}_{\dagger,h+1}(s,\mathbf{a})-B(N_{h}(s,\mathbf{a})),0\}\).
7:endfor
8:for each \(s\in\mathcal{S}\) with \(N_{h}(s,\mathbf{a})>0\)do
9: Update \(\pi_{h}(s)=\max_{\mathbf{a}\in\mathcal{A}}\overline{Q}_{\dagger,h}(s,\mathbf{a})\).
10: Update \(\overline{V}_{\dagger,h}(s,\mathbf{a})=\overline{Q}_{\dagger,h}(s,\pi_{h}(s))\) and \(\underline{V}_{\dagger,h}(s,\mathbf{a})=\underline{Q}_{\dagger,h}(s,\pi_{h}(s))\).
11:endfor
12:endfor
13:if\(\mathbb{E}_{s\sim\tilde{P}_{0}(\cdot)}(\overline{V}_{\dagger,1}(s)-\underline {V}_{\dagger,1}(s))+H\sqrt{\frac{S\log(2\tau/\delta)}{2k}}\leq\Delta\)then
14:\(\Delta=\mathbb{E}_{s\sim\tilde{P}_{0}(\cdot)}(\overline{V}_{\dagger,1}(s)- \underline{V}_{\dagger,1}(s))+H\sqrt{\frac{S\log(2\tau/\delta)}{2k}}\) and \(\pi^{\dagger}=\pi\).
15:endif
16:for step \(h=1,\ldots,H\)do
17: Attacker overrides each agent's action by changing \(a_{i,h}\) to \(\widetilde{a}_{i,h}\), where \(\widetilde{\mathbf{a}}_{h}=\pi_{h}(s_{h})\).
18: The environment returns the reward \(r_{i,h}\) and the next state \(s_{h+1}\) according to action \(\widetilde{\mathbf{a}}_{h}\). The attacker receive its reward \(r_{\dagger,h}\).
19: Attacker overrides each agent's reward by changing \(r_{i,h}\) to \(\widetilde{r}_{i,h}=1\).
20: Add \(1\) to \(N_{h}(s_{h},\widetilde{\mathbf{a}}_{h})\) and \(N_{h}(s_{h},\widetilde{\mathbf{a}}_{h},s_{h+1})\). \(\hat{\mathbb{P}}_{h}(\cdot|s_{h},\widetilde{\mathbf{a}}_{h})=N_{h}(s_{h}, \widetilde{\mathbf{a}}_{h},\cdot)/N_{h}(s_{h},\widetilde{\mathbf{a}}_{h})\)
21: Update \(\hat{R}_{\dagger,h}(s_{h},\widetilde{\mathbf{a}}_{h})=\hat{R}_{\dagger,h}(s_{h}, \widetilde{\mathbf{a}}_{h})+(r_{\dagger,t}-\hat{R}_{\dagger,h}(s_{h},\widetilde{ \mathbf{a}}_{h})/N_{h}(s_{h},\widetilde{\mathbf{a}}_{h})\).
22:endfor
23: Update \(N_{0}(s_{1})=N_{0}(s_{1})+1\) and \(\hat{\mathbb{P}}_{0}(\cdot)=N_{0}(\cdot)/k\).
24:endfor
25:Return \(\pi^{\dagger}\).
```
**Algorithm 1**Exploration phase for Markov games
In the exploration phase, the approximate mixed attack strategy uses an optimal-policy identification algorithm, which is summarized in Algorithm 1. It will return an approximate optimal policy \(\pi^{\dagger}\). Note that \(\pi^{k}\) denotes the product policy deployed by the agents for each episode \(k\). \(\overline{V}\) is the upper bound of \(V^{\pi^{*}}\) and \(\underline{V}\) is the lower bound of \(V^{\pi^{k}}\). By minimizing \(\overline{V}-\underline{V}\), Algorithm 1 finds an approximate optimal policy \(\pi^{\dagger}\). Here, we assume that the reward on the approximate optimal policy \(\pi^{\dagger}\) is positive, i.e. \(R_{min}=\min_{h\in[H]}\min_{s\in\mathcal{S}}\min_{i\in[m]}R_{i,h}(s,\pi^{ \dagger}_{h}(s))>0\). In the exploration phase, the attacker will override both the agents' actions and rewards.
After the exploration phase, the approximate mixed attack strategy performs the attack phase. The attacker will override both the agents' actions and rewards in this phase. At the step \(h\) and state \(s\), if agent \(i\) picks the action \(\pi^{\dagger}_{i,h}(s)\), the attacker does not override actions and rewards, i.e. \(\widetilde{a}_{i,h}=a_{i,h}\) and \(\widetilde{r}_{i,h}=r_{i,h}\). If agent \(i\) picks action \(a_{i,h}\neq\pi^{\dagger}_{i,h}(s)\), the attacker overrides the action \(\widetilde{a}_{i,h}=a_{i,h}\) and then overrides the reward \(\widetilde{r}_{i,h}=0\). The attack strategy in the attack phase is same with the mixed attack strategy. From Theorem 7, in the attack phase, the best response of each agent \(i\) towards product policy \(\pi^{\dagger}_{-i}\) is \(\pi^{\dagger}_{i}\) and \(\pi^{\dagger}\) is the unique NE. Here, we discuss the high-level idea why the approximate mixed attack works. The attacker finds an approximate optimal policy \(\pi^{\dagger}\) by Algorithm 1. If \(\pi^{*}\) is close to \(\pi^{\dagger}\) and the exploration phase is sub-linear time dependent, the performance of the approximate mixed attack strategy will be close to the mixed attack strategy. We build a confidence bound to show the value function difference between \(\pi^{*}\) and \(\pi^{\dagger}\) in the following lemma.
**Lemma 1**: _If the attacker follows the Algorithm 1 on the agents, for any \(\delta\in(0,1)\), with probability at least \(1-5\delta\), the following bound holds:_
\[\mathbb{E}_{s_{1}\sim P_{0}(\cdot)}[V^{\pi^{*}}_{\uparrow,1}(s_{1})-V^{\pi^{ \dagger}}_{\uparrow,1}(s_{1})]\leq 2H^{2}S\sqrt{2A\log(2SAH\tau/\delta)/\tau}. \tag{7}\]
We now investigate the impact of the approximate mixed attack strategy attack on V-learning [15], a simple, efficient, decentralized algorithm for MARL. The reader's convienience, we list V-learning in Appendix G.2.
**Theorem 9**: _Suppose ADV_BANDT_UPDATE of V-learning follows Algorithm 3 in Appendix G.2 and it chooses hyper-parameter \(w_{t}=\alpha_{t}\left(\prod_{i=2}^{t}(1-\alpha_{i})\right)^{-1}\), \(\gamma_{t}=\sqrt{\frac{H\log B}{Bt}}\) and \(\alpha_{t}=\frac{H+1}{H+t}\). For given \(K\) and any \(\delta\in(0,1)\), let \(\iota=\log(mHSAK/\delta)\). The attack loss and the attack cost of the approximate mixed attack strategy during these \(K\) episodes are bounded by_
\[\begin{split}&\mathbb{E}\left[\text{Loss2}(K,H)\right]\leq H \tau+\frac{40}{R_{min}}m\sqrt{H^{9}ASK\iota}+2H^{2}SK\sqrt{2A\iota/\tau},\\ &\mathbb{E}\left[\text{Cost}(K,H)\right]\leq 2mH\tau+\frac{80}{R_{ min}}\sqrt{H^{5}ASK\iota}.\end{split} \tag{8}\]
_Let \(\hat{\pi}\) be the executing output policy of V-learning, the attack loss of the executing output policy \(\hat{\pi}\) is upper bounded by_
\[V^{\pi^{*}}_{\uparrow,1}(s_{1})-V^{\hat{\pi}}_{\uparrow,1}(s_{1})\leq\frac{20mS }{R_{min}}\sqrt{\frac{H^{7}A\iota}{K}}+\frac{2\tau mH^{2}S}{K}+2H^{2}S\sqrt{2A \iota/\tau}. \tag{9}\]
If we choose the stopping time of the exploration phase \(\tau=K^{2/3}\), the attack loss and the attack cost of the approximate mixed attack strategy during these \(K\) episodes are bounded by \(\mathcal{O}(K^{2/3})\) and \(V^{\pi^{*}}_{\uparrow,1}(s_{1})-V^{\hat{\pi}}_{\uparrow,1}(s_{1})\leq\mathcal{ O}(K^{-1/3})\).
## 6 Conclusion
In this paper, we have introduced an adversarial attack model on MARL. We have discussed the attack problem in three different settings: the white-box, the gray-box and the black-box settings. We have shown that the power of action poisoning only attacks and reward poisoning only attacks is limited. Even in the white-box setting, there exist some MGs, under which no action poisoning only attack strategy or reward poisoning only attack strategy can be efficient and successful. We have then characterized conditions when action poisoning only attacks or only reward poisoning only attacks can efficiently work. We have further introduced the mixed attack strategy in the gray-box setting that can efficiently attack any sub-linear-regret MARL agents. Finally, we have proposed the approximate mixed attack strategy in the black-box setting and shown its effectiveness on V-learning. This paper raises awareness of the trustworthiness of online multi-agent reinforcement learning. In the future, we will investigate the defense strategy to mitigate the effects of this attack. |
2303.04856 | AMSwarm: An Alternating Minimization Approach for Safe Motion Planning
of Quadrotor Swarms in Cluttered Environments | This paper presents a scalable online algorithm to generate safe and
kinematically feasible trajectories for quadrotor swarms. Existing approaches
rely on linearizing Euclidean distance-based collision constraints and on
axis-wise decoupling of kinematic constraints to reduce the trajectory
optimization problem for each quadrotor to a quadratic program (QP). This
conservative approximation often fails to find a solution in cluttered
environments. We present a novel alternative that handles collision constraints
without linearization and kinematic constraints in their quadratic form while
still retaining the QP form. We achieve this by reformulating the constraints
in a polar form and applying an Alternating Minimization algorithm to the
resulting problem. Through extensive simulation results, we demonstrate that,
as compared to Sequential Convex Programming (SCP) baselines, our approach
achieves on average a 72% improvement in success rate, a 36% reduction in
mission time, and a 42 times faster per-agent computation time. We also show
that collision constraints derived from discrete-time barrier functions (BF)
can be incorporated, leading to different safety behaviours without significant
computational overhead. Moreover, our optimizer outperforms the
state-of-the-art optimal control solver ACADO in handling BF constraints with a
31 times faster per-agent computation time and a 44% reduction in mission time
on average. We experimentally validated our approach on a Crazyflie quadrotor
swarm of up to 12 quadrotors. The code with supplementary material and video
are released for reference. | Vivek K. Adajania, Siqi Zhou, Arun Kumar Singh, Angela P. Schoellig | 2023-03-08T19:43:25Z | http://arxiv.org/abs/2303.04856v1 | AMSwarm: An Alternating Minimization Approach for Safe Motion Planning of Quadrotor Swarms in Cluttered Environments
###### Abstract
This paper presents a scalable online algorithm to generate safe and kinematically feasible trajectories for quadrotor swarms. Existing approaches rely on linearizing Euclidean distance-based collision constraints and on axis-wise decoupling of kinematic constraints to reduce the trajectory optimization problem for each quadrotor to a quadratic program (QP). This conservative approximation often fails to find a solution in cluttered environments. We present a novel alternative that handles collision constraints without linearization and kinematic constraints in their quadratic form while still retaining the QP form. We achieve this by reformulating the constraints in a polar form and applying an Alternating Minimization algorithm to the resulting problem. Through extensive simulation results, we demonstrate that, as compared to Sequential Convex Programming (SCP) baselines, our approach achieves on average a 72% improvement in success rate, a 36% reduction in mission time, and a 42 times faster per-agent computation time. We also show that collision constraints derived from discrete-time barrier functions (BF) can be incorporated, leading to different safety behaviours without significant computational overhead. Moreover, our optimizer outperforms the state-of-the-art optimal control solver ACADO in handling BF constraints with a 31 times faster per-agent computation time and a 44% reduction in mission time on average. We experimentally validated our approach on a Crazyflie quadrotor swarm of up to 12 quadrotors. The code with supplementary material and video are released for reference.
## I Introduction
Quadrotor swarms have a great potential in applications such as search and rescue [1], mapping and environmental monitoring [2], and payload transport [3]. As compared to single quadrotors, quadrotors swarms offer increased flexibility, efficiency, and robustness [4].
In this paper, we consider the problem of motion planning for quadrotor swarms in cluttered environments and treat it as a trajectory optimization problem to be solved. In this context, the most straightforward approach is to formulate one joint optimization problem that computes trajectories for all quadrotors. Existing works have used both global mixed-integer linear programming [5] and local optimization-based Sequential Convex Programming (SCP) [6] approaches for solving the joint trajectory optimization problem. The solution space of these approaches is large but they quickly become intractable as the number of quadrotors grows.
Distributed approaches provide a more scalable alternative, where each quadrotor solves an independent trajectory optimization problem taking into account the predicted trajectories of its neighbours [7, 8, 9, 10]. The predicted trajectories are often assumed to be shared by the neighbouring quadrotors. These approaches have successfully demonstrated swarm motion planning for tens of quadrotors. However, we will show that their performance in terms of scalability, mission time, and computation time remains poor in highly cluttered environments. While there exist works that further incorporate a high-level discrete planner to improve the swarm performance in cluttered environments [11, 12, 13], in this work, we focus on the low-level trajectory optimization problem and propose an algorithm that addresses the limitations of existing distributed trajectory optimization baselines.
Some of the limitations of [7, 8, 9] and other related works such as [6] and [14] can be attributed to the underlying trajectory optimizer that relies on axis-wise decoupling of kinematic constraints and linearization of collision avoidance constraints. These affine approximations are made to obtain a quadratic program (QP) that can be solved efficiently. However, the computational benefits come at the expense of a reduced solution space. Moreover, while replanning in a receding horizon setting, the collision constraints are not active until the planned trajectories intersect with the neighbouring quadrotors or obstacles [15]. This reduces the responsiveness of the collision avoidance behaviour. One way to mitigate this issue is to
Fig. 1: Experimental demonstration of our distributed alternating minimization based approach for quadrotor swarm motion planning in challenging scenes. Link to video: [http://tiny.cc/AWSwarmVideo](http://tiny.cc/AWSwarmVideo). Link to code and supplementary material: [https://github.com/utiasDSL/AWSwarm](https://github.com/utiasDSL/AWSwarm).
horizon; however, this increases the computation time.
Our proposed optimizer addresses both limitations discussed above: the conservativeness of existing approaches due to approximations and the late responsiveness to neighbouring quadrotors or obstacles. We show that by reformulating the quadratic kinematic constraints and collision constraints into polar form and applying an Alternating Minimization (AM) algorithm to the resulting optimization problem, we can retain a QP without requiring any linearization. As a result, we obtain more aggressive motions with improved metrics for swarm planning, such as mission time and success rate. Our formulation naturally extends to the case when collision avoidance is modelled by a discrete-time barrier function (BF) [15]. This dramatically improves safety metrics such as clearance to neighbouring quadrotors and obstacles while incurring no significant computational cost. To the best of our knowledge, only a few works, such as [16, 17], have incorporated BF constraints over the entire planning horizon; most works such as [18] and [19] consider a one-step reactive planning approach. Among the multi-step approaches, ours is the first to formulate trajectory planning with BF constraints as a QP (see Section II-C).
We compare our approach with the SCP baselines from [7, 8, 9] and show on average a \(72\%\) improvement in success rate, a \(36\%\) reduction in mission time, and a \(42\) times faster per-agent computation time. Additionally, we show that the proposed approach with BF constraints allows us to introduce different safety behaviours. We further show that our optimizer's handling of discrete-time BF constraints outperforms the state-of-the-art solver ACADO [20] with a \(31\) times faster per-agent computation time and a \(44\%\) reduction in mission time on average.
## II Distributed Motion Planning Problem
Our goal is to generate smooth, collision-free, and kinematically feasible trajectories that navigate \(N\) quadrotors from their initial positions \(\mathbf{p}_{i,o}\) to their desired goal positions \(\mathbf{p}_{i,g}\) in an obstacle-rich and possibly dynamic environment. The vector \(\mathbf{p}=[x,\,y,\,z]^{T}\) is the three-dimensional position of the quadrotor, the subscript \(i\) is the quadrotor index, and the subscripts \(o\) and \(g\) denote initial and goal variables.
Similar to [7, 8], we formulate the quadrotor swarm motion planning as a distributed trajectory optimization problem, where the computation for each quadrotor is parallelized. At each time step, the quadrotors exchange the planned trajectories from the previous step and re-optimize the trajectories towards their goal positions subject to constraints. The distributed optimization problem is solved in a receding horizon fashion until each quadrotor reaches its goal position.
We note that, in this work, the quadrotors' trajectories are optimized online to account for possible dynamic obstacles. We assume that the obstacles' current positions and velocities are available to each quadrotor.
### _Optimization Problem Formulation_
At each planning step, the optimization problem solved by quadrotor \(i\) is formulated as follows:
\[\min_{\mathbf{p}_{i}}\;w_{g}\sum_{k=K-\kappa}^{K-1}\left\|\mathbf{ p}_{i}[k]-\mathbf{p}_{i,g}\right\|^{2}+w_{s}\sum_{k=0}^{K-1}\left\|\mathbf{p}_{i}^ {(q)}[k]\right\|^{2}\] (1a) s.t. \[\mathbf{p}_{i}^{(q)}[0]=\mathbf{p}_{i,a}^{(q)},\;\forall q=\{0,1,2\} \tag{1b}\] \[\mathbf{\underline{p}}\preceq\mathbf{p}_{i}[k]\preceq\overline{ \mathbf{p}},\;\forall k\] (1c) \[\left\|\dot{\mathbf{p}}_{i}[k]\right\|^{2}\leq\overline{v}^{2},\;\forall k\] (1d) \[\underline{f}^{2}\leq\left\|\dot{\mathbf{p}}_{i}[k]+\mathbf{g} \right\|^{2}\leq\overline{f}^{2},\forall k\] (1e) \[h_{ij}[k]=\left\|\mathbf{\Theta}_{ij}^{-1}(\mathbf{p}_{i}[k]- \boldsymbol{\xi}_{j}[k])\right\|^{2}-1\geq 0,\;\forall k,j, \tag{1f}\]
where \(k\) is the discrete-time index, \(K\) is the planning horizon length, \(\left\|\cdot\right\|\) denotes the Euclidean norm, and the superscript \((q)\) denotes the \(q\)-th time derivative of a variable.
The cost function consists of two terms. The first term is the goal cost that penalizes the deviation of the position of the quadrotor from the specified goal position over the last \(\kappa<K\) steps in the prediction horizon; the second term is the smoothness cost that penalizes the \(q\)-th derivatives of the position trajectory. The constants \(w_{g}\) and \(w_{s}\) are weights trading off the respective cost terms.
The equality constraints (1b) set the initial position of the trajectory and the higher derivatives to be consistent with the current values of the quadrotor. The inequalities (1c)-(1e) enforce bounds on the position \((\mathbf{p},\overline{\mathbf{p}})\), bounds on the velocity \((-\overline{v},\overline{v})\), and bounds on the acceleration \((\underline{f},\overline{f})\). The inequalities (1f) enforce the collision avoidance requirement with either the \(j\)-th neighbouring quadrotor or obstacle with position \(\boldsymbol{\xi}_{j}[k]\). The matrix \(\boldsymbol{\Theta}_{ij}\) is a diagonal matrix with \((a,b,c)\) being its element. These scalars \((a,b,c)\) characterize the axis lengths of the ellipsoidal envelopes around the neighbouring quadrotors or the obstacles. The vector \(\mathbf{g}=[0,\,0,\,g]^{T}\) is the gravitational acceleration vector, where \(g\) is the acceleration due to gravity.
_Alternative Collision Avoidance Constraint:_ The condition in (1f) is commonly found in works on quadrotor swarms (e.g., [7, 8, 13, 21]). A fundamental problem with the standard collision avoidance constraint (1f) is that these inequalities do not get activated until the planned trajectory intersects with the neighbouring quadrotors or obstacles [15]. Due to the receding horizon nature of the planning, a quadrotor only tries to avoid collisions with its neighbours or obstacles when it is sufficiently close to them. Increasing the planning horizon can mitigate this issue but at the cost of increased computation time. An alternative approach is to use BF constraints to induce a desired collision avoidance behaviour [15]:
\[h_{ij}[k]-h_{ij}[k-1]\geq-\gamma\;h_{ij}[k-1],\;\forall k,j, \tag{2}\]
where \(\gamma\in[0,1]\) is a constant controlling how fast the quadrotor is allowed to approach the constraint boundary given by \(h_{ij}=0\). Smaller values of \(\gamma\) generally result
in more gradual and conservative collision avoidance behaviours. With \(\gamma=1\), we recover the original collision-avoidance constraint in (1f).
### _Trajectory Parameterization_
We parameterize the \(x\)-, \(y\)-, and \(z\)-position trajectories for each quadrotor as Bernstein polynomials of degree \(n\). For instance, the \(x\)-position trajectory for the \(i\)-th quadrotor is
\[\begin{bmatrix}x_{i}[0]&x_{i}[1]&\ldots&x_{i}[K-1]\end{bmatrix}^{T}=\mathbf{W} \mathbf{c}_{i,x}, \tag{3}\]
where \(\mathbf{W}\in\mathbb{R}^{K\times(n+1)}\) is the Bernstein basis matrix and \(\mathbf{c}_{i,x}\) are the coefficients associated with it. The \(k\)-th row and \(m\)-th column element of \(\mathbf{W}\) is \([\mathbf{W}]_{km}=\binom{n}{m}(1-t/(K-1)\delta t)^{n-m}(t/(K-1)\delta t)^{m}\), where \(\delta t\) is the discrete-time step size, and \(t=k\delta t\) is the continuous-time variable. The higher derivatives of the position trajectory have the general form \(\mathbf{W}^{(q)}\mathbf{c}_{i,x}\), where \(\mathbf{W}^{(q)}\) is the \(q\)-th derivative of the Bernstein basis matrix. The position trajectories for the \(y\)- and \(z\)-directions are defined in a similar way.
### _Challenges in Solving the Optimization Problem_
The optimization problem in (1a)-(1f) is a non-convex quadratically constrained quadratic program (QCQP). Existing works (e.g., [6, 7, 8]) achieve a more favourable QP form by deriving the convex approximation of (1d)-(1f): the velocity and acceleration bounds are split into axis-wise affine bounds, and the non-convex collision avoidance constraints are approximated as affine constraints through linearization along a trajectory. These approximations can lead to a substantial loss of the feasible space (see Fig. 3 in [22], Fig. 2 in [23]).
Achieving a QP structure with the BF constraints (2) is even more challenging. To see this, we rewrite (2) as
\[-h_{ij}[k]+(1-\gamma)h_{ij}[k-1]\leq 0,\;\forall k,j. \tag{4}\]
The constraint is non-convex, and the non-convexity comes from the first term \(-h_{ij}[k]\). If we linearize the first term, we get an exact but a more conservative convex substitute for the BF constraint. However, the resulting constraint still remains quadratic in the decision variables due to the presence of \(h_{ij}[k-1]\). Linearizing the complete right-hand side of (4) will lead to a QP form. However, satisfaction of a completely linearized version may not imply satisfaction of (4) [24].
## III The Alternating Minimization Algorithm
This section presents our main algorithmic results: an AM-based linearization-free trajectory optimizer for solving the motion planning problem introduced in Sec. II-A. We first present the solution for the case with standard collision constraints (1f). We then show how it naturally extends to the BF constraints (2).
### _Constraints Reformulation_
In our proposed approach, one key ingredient that enables us to bring the QCQP problem to a QP form is a polar reformulation of the quadratic constraints (1d)-(1f). Here we present a general form of the polar representation presented in [25] for quadratic constraints.
Consider an inequality of the form \(||\mathbf{M}(\mathbf{v}-\mathbf{v}_{0})||^{2}\mathbf{\leq}\eta^{2}\) (or \(||\mathbf{M}(\mathbf{v}-\mathbf{v}_{0})||^{2}\mathbf{\geq}\eta^{2}\)) with \(\mathbf{M}\) being a diagonal matrix with positive entries. The inequality constraint can be equivalently written in a polar form as follows: \(\mathbf{f}=\mathbf{M}(\mathbf{v}-\mathbf{v}_{0})-d\;\boldsymbol{\omega}( \alpha,\beta)=0\) with \(d\leq\eta\) (or \(d\geq\eta\)). Here, \(\boldsymbol{\omega}(\alpha,\beta)=[\cos\alpha\sin\beta,\;\sin\alpha\sin\beta,\; \cos\beta]^{T}\) is a unit direction vector pointing from \(\mathbf{v}_{0}\) to \(\mathbf{v}\) with \(\alpha\) being the azimuthal angle and \(\beta\) being the polar angle, and the scalar \(d\) is the magnitude of the vector \(\mathbf{M}(\mathbf{v}-\mathbf{v}_{0})\).
Using the polar reparametrization, we can write the quadratic constraints (1d), (1e), and (1f) as the following constraint sets:
\[\mathcal{C}_{i,v}[k] =\{\dot{\mathbf{p}}_{i}[k]\in\mathbb{R}^{3}\;|\;\mathbf{f}_{i,v}[k ]=0,d_{i,v}[k]\leq\overline{v}\},\;\forall k, \tag{5}\] \[\mathcal{C}_{i,a}[k] =\{\dot{\mathbf{p}}_{i}[k]\in\mathbb{R}^{3}\;|\;\mathbf{f}_{i,a}[k ]=0,\underline{f}\leq d_{i,a}[k]\leq\overline{f}\},\;\forall k,\] (6) \[\mathcal{C}_{ij,c}[k] =\{\mathbf{p}_{i}[k]\in\mathbb{R}^{3}\;|\;\mathbf{f}_{ij,c}[k]=0,d _{ij,c}[k]\geq 1\},\;\forall k,j, \tag{7}\]
where the functions \(\mathbf{f}_{ij,c}\), \(\mathbf{f}_{i,v}\), and \(\mathbf{f}_{i,a}\) are
\[\mathbf{f}_{i,v}[k] =\dot{\mathbf{p}}_{i}[k]-d_{i,v}[k]\;\boldsymbol{\omega}(\alpha_{ i,v}[k],\beta_{i,v}[k]),\] \[\mathbf{f}_{i,a}[k] =\ddot{\mathbf{p}}_{i}[k]+\mathbf{g}-d_{i,a}[k]\;\boldsymbol{ \omega}(\alpha_{i,a}[k],\beta_{i,a}[k]),\] \[\mathbf{f}_{ij,c}[k] =\boldsymbol{\Theta}_{ij}^{-1}(\mathbf{p}_{i}[k]-\boldsymbol{ \xi}_{j}[k])-d_{ij,c}[k]\boldsymbol{\omega}(\alpha_{ij,c}[k],\beta_{ij,c}[k]).\]
Note that \((\alpha_{\cdot,\cdot},\beta_{\cdot,\cdot},d_{\cdot,\cdot})\) are the parameters of the polar form representations of the constraints and will be computed by our optimizer together with the trajectory.
### _Reformulated Problem_
Before deriving the final form of our reformulated problem, we rewrite the polar form constraints derived in the previous subsection in a compact matrix form. Given the parametrization in (3), the equality part of constraints (5), (6), (7) can be represented as
\[\begin{split}\overbrace{\widetilde{\mathbf{A}}}^{\mathbf{A}}& \widetilde{\mathbf{A}}^{\mathbf{A}}\end{split}\overbrace{\widetilde{ \mathbf{A}}}^{\mathbf{A}_{i,v}}\begin{split}&\overbrace{ \begin{bmatrix}\mathbf{d}_{i,v}\cos\boldsymbol{\alpha}_{i,v}\sin\boldsymbol{ \beta}_{i,v}\\ \mathbf{d}_{i,a}\cos\boldsymbol{\alpha}_{i,a}\sin\boldsymbol{\beta}_{i,a}\\ \mathbf{\xi}_{x}+a\mathbf{d}_{i,c}\cos\boldsymbol{\alpha}_{i,c}\sin\boldsymbol{ \beta}_{i,c}\\ \hline\mathbf{d}_{i,v}\sin\boldsymbol{\alpha}_{i,v}\sin\boldsymbol{\beta}_{i,v} \\ \mathbf{d}_{i,a}\sin\boldsymbol{\alpha}_{i,a}\sin\boldsymbol{\beta}_{i,a}\\ \mathbf{\xi}_{y}+b\mathbf{d}_{i,c}\cos\boldsymbol{\alpha}_{i,c}\sin\boldsymbol{ \beta}_{i,c}\\ \hline\mathbf{d}_{i,v}\cos\boldsymbol{\beta}_{i,v}\\ -\mathbf{g}+\mathbf{d}_{i,a}\cos\boldsymbol{\beta}_{i,a}\\ \boldsymbol{\xi}_{z}+c\mathbf{d}_{i,c}\cos\boldsymbol{\beta}_{i,c}\end{bmatrix} }^{\mathbf{b}}\overbrace{\begin{bmatrix}\mathbf{d}_{i,v}\cos\boldsymbol{ \alpha}_{i,v}\sin\boldsymbol{\beta}_{i,v}\\ \mathbf{d}_{i,a}\cos\boldsymbol{\alpha}_{i,a}\sin\boldsymbol{\beta}_{i,a}\\ \mathbf{d}_{i,v}\cos\boldsymbol{\beta}_{i,v}\\ -\mathbf{g}+\mathbf{d}_{i,a}\cos\boldsymbol{\beta}_{i,a}\\ \boldsymbol{\xi}_{z}+c\mathbf{d}_{i,c}\cos\boldsymbol{\beta}_{i,c}\end{bmatrix} }^{\mathbf{c}_{i,1}}\overbrace{\begin{bmatrix}\mathbf{d}_{i,v}\cos\boldsymbol{ \alpha}_{i,v}\sin\boldsymbol{\beta}_{i,v}\\ \mathbf{d}_{i,a}\cos\boldsymbol{\alpha}_{i,a}\sin\boldsymbol{\beta}_{i,a}\\ \mathbf{d}_{i,v}\cos\boldsymbol{\alpha}_{i,c}\sin\boldsymbol{\beta}_{i,v}\\ -\mathbf{g}+\mathbf{d}_{i,a}\cos\boldsymbol{\beta}_{i,a}\\ \boldsymbol{\xi}_{z}+c\mathbf{d}_{i,c}\cos\boldsymbol{\beta}_{i,c}\end{bmatrix} }^{\mathbf{b}}\overbrace{\begin{bmatrix}\mathbf{d}_{i,v}\cos\boldsymbol{ \alpha}_{i,v}\sin\boldsymbol{\beta}_{i,v}\\ \mathbf{d}_{i,a}\cos\boldsymbol{\alpha}_{i,a}\sin\boldsymbol{\beta}_{i,a}\\ \mathbf{d}_{i,v}\cos\boldsymbol{\alpha}_{i,a}\sin\boldsymbol{\beta}_{i,a}\\ -\mathbf{g}+\mathbf{d}_{i,a}\cos\boldsymbol{\beta}_{i,a}\\ \boldsymbol{\xi}_{z}+c\mathbf{d}_{i,c}\cos\boldsymbol{\beta}_{i,c}\end{bmatrix}}^{ \mathbf{b}}\overbrace{\begin{bmatrix}\mathbf{d}_{i,v}\cos\boldsymbol{\alpha}_{i,v} \sin\boldsymbol{\beta}_{i,v}\\ \mathbf{d}_{i,v}\cos\boldsymbol{\alpha}_{i,v}\sin\boldsymbol{\beta}_{i,a}\\ \mathbf{d}_{i,v}\cos\boldsymbol{\alpha}_{i,v}\sin\boldsymbol{\beta}_{i,v}\\ -\mathbf{g}+\mathbf{d}_{i,a}\cos\boldsymbol{\alpha}_{i,a}\\ \boldsymbol{\xi}_{z}+c\mathbf{d}_{i,c}\cos\boldsymbol{\beta}_{i,c}\end{bmatrix}}^{ \mathbf{b}}\overbrace{\begin{bmatrix}\mathbf{d}_{i,v}\cos\boldsymbol
Here, \(\widetilde{\mathbf{A}}=\begin{bmatrix}\mathbf{\dot{W}}^{T}&\mathbf{\dot{W}}^{T}& \mathbf{F}_{c}^{T}\end{bmatrix}^{T}\), the matrix \(\mathbf{F}_{c}\) is constructed by vertically stacking the matrix \(\mathbf{W}\) as many times as the number of neighbouring quadrotors and obstacles present in the environment. The vectors \((\boldsymbol{\xi}_{x},\boldsymbol{\xi}_{y},\boldsymbol{\xi}_{z})\) are formed by vertically stacking the corresponding variables \((\xi_{j,x}[k],\xi_{j,y}[k],\xi_{j,z}[k])\) at different time steps of the prediction horizon and for all the neighbouring quadrotors and obstacles. In a similar fashion, \((\boldsymbol{\alpha}_{\cdot,\cdot},\boldsymbol{\beta}_{\cdot,\cdot},\mathbf{d} _{\cdot,\cdot})\) are formed by vertically stacking \((\alpha_{\cdot,\cdot}[k],\beta_{\cdot,\cdot}[k],d_{\cdot,[k]})\). The vector \(\widetilde{\mathbf{g}}\) is formed by vertically stacking \(g\) as many times as the length of prediction horizon.
Using the derivations above, we now write the reformulated optimization problem as
\[\min_{\boldsymbol{\zeta}_{i,1},\boldsymbol{\zeta}_{i,2}, \boldsymbol{\zeta}_{i,3}} \frac{1}{2}\boldsymbol{\zeta}_{i,1}^{T}\mathbf{Q}\boldsymbol{\zeta }_{i,1}+\mathbf{q}^{T}\boldsymbol{\zeta}_{i,1}\] (9a) s.t. \[\mathbf{A}\boldsymbol{\zeta}_{i,1}=\mathbf{b}(\boldsymbol{\zeta }_{i,2},\boldsymbol{\zeta}_{i,3}) \tag{9b}\] \[\mathbf{G}\boldsymbol{\zeta}_{i,1}\preceq\mathbf{h}\] (9c) \[\boldsymbol{\zeta}_{i,1}\in\mathcal{C}_{\zeta_{i,1}},\boldsymbol{ \zeta}_{i,3}\in\mathcal{C}_{\zeta_{i,3}}, \tag{9d}\]
where \(\boldsymbol{\zeta}_{i,1}=[\mathbf{c}_{i,x}^{T},\mathbf{c}_{i,y}^{T},\mathbf{c }_{i,2}^{T}]^{T}\), \(\boldsymbol{\zeta}_{i,2}=[\boldsymbol{\alpha}_{i,c}^{T},\boldsymbol{\alpha}_{i,c}^{T},\boldsymbol{\alpha}_{i,v}^{T},\boldsymbol{\beta}_{i,c}^{T},\boldsymbol {\beta}_{i,a}^{T},\boldsymbol{\beta}_{i,v}^{T}]^{T}\), and \(\boldsymbol{\zeta}_{i,3}=[\mathbf{d}_{i,c},\,\mathbf{d}_{i,a},\,\mathbf{d}_{i,v }]^{T}\) are the variables to be optimized. The matrix \(\mathbf{Q}\) and vector \(\mathbf{q}\) are formed from the objective function (1a). The matrix \(\mathbf{G}\) and vector \(\mathbf{h}\) in the inequality constraint (9c) stem from the positional bounds (1c).
The set \(\mathcal{C}_{\boldsymbol{\zeta}_{i,1}}\)\(=\)\(\{\boldsymbol{\zeta}_{i,1}\in\mathbb{R}^{3n}\,|\,\mathbf{C}\boldsymbol{\zeta}_{i,1}= \mathbf{e}\}\) encodes the initial conditions (1b). Here, the matrix \(\mathbf{C}=[\mathbf{W}_{0}^{T},\,\,\mathbf{\dot{W}}_{0}^{T},\,\,\mathbf{\ddot{ W}}_{0}^{T}]^{T}\), and the subscript \(0\) represents the 0-\(tn\) row of the respective matrices. The vector \(\mathbf{e}=[\mathbf{p}_{i,a}^{T},\,\,\mathbf{\dot{p}}_{i,a}^{T},\,\,\mathbf{ \ddot{p}}_{i,a}^{T}]^{T}\) contains the current position, velocity and acceleration values. The set \(\mathcal{C}_{\boldsymbol{\zeta}_{i,3}}\) consists of the conditions on each of the variables \((\mathbf{d}_{i,j,c},\mathbf{d}_{i,v},\mathbf{d}_{i,a})\) derived from the polar reformulation.
### _Relaxation and Solution by AM_
The optimization (9a)-(9d) has some hidden convex structures which makes it suitable for AM-based approaches. To exploit these structures, we first relax the non-convex equality (9b) and affine (9c) constraints as penalties in the following form:
\[\min_{\boldsymbol{\zeta}_{i,1}\in\mathcal{C}_{\zeta_{i,1}}, \boldsymbol{\zeta}_{i,3}\in\mathcal{C}_{\zeta_{i,3}}}\frac{1}{2}\boldsymbol{ \zeta}_{i,1}^{T}\mathbf{Q}\boldsymbol{\zeta}_{i,1}+\mathbf{q}^{T}\boldsymbol{ \zeta}_{i,1}-\langle\boldsymbol{\lambda}_{i},\boldsymbol{\zeta}_{i,1}\rangle\] \[+\frac{\rho}{2}\left\|\mathbf{A}\boldsymbol{\zeta}_{i,1}-\mathbf{b} (\boldsymbol{\zeta}_{i,2},\boldsymbol{\zeta}_{i,3})\right\|^{2}+\frac{\rho}{2} \left\|\mathbf{G}\boldsymbol{\zeta}_{i,1}-\mathbf{h}+\mathbf{s}_{i}\right\|^{2}. \tag{10}\]
The parameter \(\rho\) trades-off satisfaction of constraint residual with the minimization of primary cost function. The slack variable \(\mathbf{s}_{i}\geq\mathbf{0}\) is unknown, and we discuss shortly how these are obtained within an AM setup. The vector \(\boldsymbol{\lambda}_{i}\) is called the Lagrange multiplier and is crucial for driving the constraint residuals to zero [26].
Algorithm 1 summarizes the AM steps for minimizing (10), wherein the superscript \(l\) in \({}^{l}(.)\) represents the value of \((.)\) at \(l\)-th iteration of the algorithm. At each step of the AM, only one of \(\boldsymbol{\zeta}_{i,1},\boldsymbol{\zeta}_{i,2},\boldsymbol{\zeta}_{i,3}\) are optimized while the rest are held fixed at values obtained in the previous update. Each step in Algorithm 1 is either a convex QP or has a closed-form solution. We discuss these observations below in detail.
_Step_ (S1): We solve for \(\boldsymbol{\zeta}_{i,1}\) while keeping the other variables constant. We see that the problem is an equality-constrained convex QP whose solution boils down to solving a set of linear equations:
\[\begin{bmatrix}\dot{\mathbf{A}}&\mathbf{C}^{T}\\ \mathbf{C}&\mathbf{0}\end{bmatrix}\begin{bmatrix}l+1\boldsymbol{\zeta}_{i,1}\\ \boldsymbol{\mu}\end{bmatrix}=\begin{bmatrix}l\ddot{\mathbf{b}}\\ \mathbf{e}\end{bmatrix}, \tag{11}\]
where the matrix \(\dot{\mathbf{A}}\)\(=\)\(\mathbf{Q}+\rho\mathbf{G}^{T}\mathbf{G}+\rho\mathbf{A}^{T}\mathbf{A}\), the vector \(l\ddot{\mathbf{b}}\)\(=\)\(\mathbf{q}+\rho\mathbf{G}^{T}(\mathbf{h}-l\,\mathbf{s}_{i})+\rho \mathbf{A}^{T}\mathbf{b}-l\,\boldsymbol{\lambda}_{i}\), and \(\boldsymbol{\mu}\) are the dual variables associated with the equality constraints.
_Step_ (S2): We now solve for \(\boldsymbol{\zeta}_{i,2}\). As an example, the optimization problem for the variables \((\boldsymbol{\alpha}_{i,c},\boldsymbol{\beta}_{i,c})\) is
\[l+1\boldsymbol{\alpha}_{i,c},l+1\boldsymbol{\beta}_{i,c}= \arg\min_{\alpha_{i,c},\beta_{i,c}}\] \[\left\|\mathbf{F}_{c}\,^{l+1}\mathbf{c}_{i,x}-\boldsymbol{\xi}_{x }-a^{l}\mathbf{d}_{i,c}\cos\boldsymbol{\alpha}_{i,c}\sin\boldsymbol{\beta}_{i,c} \right\|^{2}\] \[+\left\|\mathbf{F}_{c}\,^{l+1}\mathbf{c}_{i,y}-\boldsymbol{\xi}_{y }-b^{l}\mathbf{d}_{i,c}\sin\boldsymbol{\alpha}_{i,c}\sin\boldsymbol{\beta}_{i,c} \right\|^{2}\] \[+\left\|\mathbf{F}_{c}\,^{l+1}\mathbf{c}_{i,z}-\boldsymbol{\xi}_{z }-c^{l}\mathbf{d}_{i,c}\cos\boldsymbol{\beta}_{i,c}\right\|^{2}. \tag{12}\]
The minimization (12) is simply a projection of \((l+1\mathbf{c}_{i,x},l+1\,\mathbf{c}_{i,y},l+1\,\mathbf{c}_{i,z})\) onto ellipsoids centered at \((\boldsymbol{\xi}_{x},\boldsymbol{\xi}_{y},\boldsymbol{\xi}_{z})\) and has a closed-form solution [25]. Similarly, we can obtain \((l+1\boldsymbol{\alpha}_{i,v},l+1\,\boldsymbol{\beta}_{i,v})\) and \((l+1\boldsymbol{\alpha}_{i,a},l+1\,\boldsymbol{\beta}_{i,a})\).
_Step_ (S3): The optimization over \(\mathbf{d}_{i,c}\) involves solving the following QP:
\[l+1\mathbf{d}_{i,c}=\arg\min_{\mathbf{d}_{i,c}\geq 1}\] \[\left\|\mathbf{F}_{c}\,^{l+1}\mathbf{c}_{i,x}-\boldsymbol{\xi}_{x }-a\mathbf{d}_{i,c}\cos^{l+1}\boldsymbol{\alpha}_{i,c}\sin^{l+1}\boldsymbol{ \beta}_{i,c}\right\|^{2}\] \[+\left\|\mathbf{F}_{c}\,^{l+1}\mathbf{c}_{i,y}-\boldsymbol{\xi}_{y }-b\mathbf{d}_{i,c}\sin^{l+1}\boldsymbol{\alpha}_{i,c}\sin^{l+1}\boldsymbol{ \beta}_{i,c}\right\|^{2}\] \[+\left\|\mathbf{F}_{c}\,^{l+1}\mathbf{c}_{i,z}-\boldsymbol{\xi}_{z }-c\mathbf{d}_{i,c}\cos^{l+1}\boldsymbol{\beta}_{i,c}\right\|^{2}. \tag{13}\]
Each element of \(\mathbf{d}_{i,c}\) is decoupled from each other. Thus (13) reduces to parallel single variable QPs, each of which can be solved in closed form. We clip the resulting solution to \((1,\infty)\) to satisfy the lower bound on \(\mathbf{d}_{i,c
By comparing (7) and (14), we can see that the constraints differ only in the feasible region definition of \(d_{ij,c}[k]\). When \(\gamma{=}1\), the constraints are equivalent.
We integrate (14) into Algorithm 1 through a minor modification in _Step_ (S3), specifically QP (13). Let \({}^{l}d_{ij,c}[k]\) be the value of \(d_{ij,c}[k]\) obtained at the \(l\)-th iteration of Algorithm 1. We can use this value to approximate the feasible region of \(d_{ij,c}[k]\) for BF constraints at the \((l+1)\)-th iteration as
\[d_{ij,c}[k]\geq 1+(1-\gamma)({}^{l}d_{ij,c}[k-1]-1). \tag{15}\]
The right-hand side of (15) is constant, and thus the feasible region of \(d_{ij,c}[k]\) for BF constraints is approximated through a simple lower bound. We can now just solve the QP (13) and clip the obtained value to this lower bound to solve for the optimal \(d_{ij,c}[k]\) at iteration \((l+1)\).
## IV Simulation Results
This section provides a simulation analysis and comparison of our approach against the state-of-the-art baselines [7, 9, 20]. We denote our approach as "Ours (Quadratic)", and the "Quadratic" in the parenthesis refers to quadratic kinematic constraints. The proposed approach and the baselines are implemented in C++. The codes are available here [28]. All the simulations were executed on a PC with Intel Xeon CPU with 8 cores and 16 GB of RAM, running at 3 GHz.
The prediction horizon length is set to \(K{=}30\) with a discretization of \(0.1s\). The Bernstein polynomials used to parameterize the position trajectories have a degree of \(n{=}10\). In the trajectory optimization problem (1a)-(1f), we set \(w_{g}{=}7000\), \(w_{s}{=}100\), \(\kappa{=}5\) and penalize the acceleration (\(q{=}2\)) trajectory in the cost. In the constraints, we set \(\overline{v}{=}1.73ms^{-1}\), \(f{=}0.3g\) and \(\overline{f}{=}1.5g\). For collision avoidance with neighbouring quadrotors, we set \(\boldsymbol{\Theta}_{ij}{=}\text{diag}(0.17m,0.17m,0.45m)\), but a collision is declared with \(\boldsymbol{\Theta}_{coll}{=}\text{diag}(0.13m,0.13m,0.40m)\). A quadrotor \(j\) is considered to be a potential conflict for quadrotor \(i\) if at any prediction step of the horizon, \(\left\|(\boldsymbol{\Theta}_{ij}+\boldsymbol{\Theta}_{p})^{-1}(\boldsymbol{ \mathbf{p}}_{i}[k]-\boldsymbol{\xi}_{j}[k])\right\|^{2}{\geq}1\) holds, where \(\boldsymbol{\Theta}_{p}{=}\text{diag}(0.2m,0.2m,0.2m)\). Similarly, we choose appropriate parameters for the collision constraints with obstacles. In Algorithm 1, we set maxiter\(=\)\(2000\), thresold\(=0.01\), and the penalty parameter \(\rho{=}min(1.3^{l},5e10^{5})\), where \(l\) is the iteration count of the algorithm.
### _Distributed Swarm Baselines_
We compare our proposed approach Ours (Quadratic) with \(\gamma{=}1\) with three different distributed swarm baselines:
1. SCP (On-demand) [8]: This approach relies on linearization of collision avoidance constraints and axiswise decoupling of the kinematic bounds. It uses a so-called On-demand strategy where it tries to resolve only the first predicted collision.
2. SCP (Continuous) [9]: This approach is similar to SCP (On-demand) but it adds collision constraints over the entire prediction horizon.
3. Ours (Axiswise): This approach is the same as Ours (Quadratic) in the sense that it does not rely on linearizing collision constraints but has axis-wise kinematic bounds. The maximum and minimum values for the axis-wise velocity bounds are \(\pm\overline{v}/\sqrt{3}\), and the acceleration limits can be computed such that it satisfies extreme cases (see (13) and (14) of [6]).
We consider a cluttered environment with \(16\) cylindrical static obstacles in a volume of \(4m\times 4m\times 2m\) and vary swarms from \(10\) to \(50\). We tested the approaches in \(100\) configurations for each swarm size with randomized start-goal and obstacle positions. A trial is successful if all the quadrotors reach their assigned goal positions without collisions and under a time limit of \(20s\).
### _Comparative Analysis_
_Success Rate:_ The first plot in Fig. 2 summarizes the improvement achieved by Ours (Quadratic) and Ours (Axiswise) over SCP (On-demand), and SCP (Continuous) in the success-rate metric. For swarm size up to 30, our approaches provide around \(11\%\)-\(62\%\) improvement over SCP (On-demand) and \(1\%\)-\(19\%\) over SCP (Continuous). As swarm size increase to 50, the performance gap between our approaches and the SCP (On-demand) and SCP (Continuous) swell to \(39\%\)-\(72\%\). The explanation is that the SCP baselines replace the non-convex collision constraints as hyperplane constraints, a conservative approximation of free space. Furthermore, SCP (On-demand) adds only a handful of collision constraints that result in unsafe separation and, ultimately, collisions in highly cluttered settings. For swarm size \(50\), we also see that Ours (Quadratic) has \(15\%\) more success rate than Ours (Axiswise) as the former has larger access to acceleration and velocity bounds.
_Computation Time per Agent_: The middle plot in Fig. 2 shows the average computation time per agent for the different swarm sizes. Both of our approaches have similar computation time per agent and are substantially faster than SCP (On-demand) and SCP (Continuous). For the swarm size of \(10\) case, we see that our approaches are \(1.7\) times faster than SCP (On-demand), and \(18.7\) times faster than SCP (Continuous). With a swarm size of \(50\), our approaches are still \(1.7\) times faster than SCP (On-demand) but are \(42\) times faster than SCP (Continuous). This excellent performance of our approaches can be attributed to the fact that AM optimizer of Algorithm 1 requires us to solve only one equality-constrained QP per iteration. On the contrary, the SCP (Continuous) solves constrained QP with a large number of inequality constraints. Moreover, it has to incorporate one slack variable per collision constraint to prevent potential in-feasibility in the optimization problem. The strategy of incorporating only the most imminent collision in SCP (On-demand) offers improvement in computation time per agent but at the expense of success rate as described previously.
_Mission Time:_ The rightmost plot in Fig. 2 presents the average time taken by the swarm to reach their desired goal positions. We see that SCP (On-demand) performs the worst as the on-demand strategy leads to agents being closer to each other and obstacles, thus, slowing down the progress toward their goals. SCP (Continuous) performs relatively better than SCP (On-demand), but our approaches have the lowest mission times. For swarm size \(40\), we see that Ours (Quadratic) on average shows \(36\%\) and \(15\%\) reduction in mission time than SCP (On-demand) and SCP (Continuous), respectively. Interestingly, the trends show that Ours (Axiswise), on average, completed the task \(0.5s\)-\(0.9s\) faster than Ours (Quadratic).
### _Trade-off Between Performance and Safety_
Figure 3 showcases the performance comparison of Ours (Quadratic) with three different values of \(\gamma\). We ran the same \(100\) configurations and recorded the smallest inter-agent distance and distance-to-obstacles observed at each time step. With \(\gamma{=}0.95\), we see \(3.98\%\)-\(8.69\%\) improvement in inter-agent distances over \(\gamma{=}1\). The improvements increase to \(7.11\%\)-\(12.35\%\) with a more conservative \(\gamma{=}0.9\). We see a similar trend in distance-to-obstacles. This improvement in clearance comes at the expense of increased mission time (see first plot in Fig. 3). We also observed that the computation time per agent increased with decreasing \(\gamma\). For swarm size \(50\), the average computation time per agent is \(1.61ms\), \(82\%\) increase over \(\gamma{=}1\). Nevertheless, our approach is still real-time.
### _Off-the-shelf Solver Baseline_
We now compare Ours (Quadratic) against the optimal control solver ACADO [20]. ACADO is provided with the original BF constraints (2), while Ours (Quadratic) uses the reformulation presented in (14). We ran an antipodal position exchange with a swarm size of \(2\), \(4\), \(6\), and \(8\) with no static obstacles. Figure 4 presents the observed metrics. We see that the computation time per agent scaling for Ours (Quadratic) is linear, while for ACADO, it increases quadratically with the number of agents. Furthermore, Ours (Quadratic) can complete the task faster than ACADO.
## V Experimental Evaluation
We tested our approach on our Crazyflie 2.0 swarm testbed. The quadrotors' trajectories are computed on a single computer, and we send position and velocity trajectories to the underlying lower controller based on [29]. More details about the testbed can be found here [28].
A video summarizing the experimental results can be found here: [http://tiny.cc/AMSwarmVideo](http://tiny.cc/AMSwarmVideo). The algorithm is tested on various challenging scenarios. First, a 12 quadrotor swarm performs a head-on transition where a single quadrotor is in conflict with other 11 quadrotors at
Fig. 4: Performance comparison of our approach and ACADO with BF constraints (\(\gamma=0.9\)) in antipodal position exchange scenario with no static obstacles. ACADO could accomodate only a maximum of 8 agents.
Fig. 3: Average performance comparison of our approach in point-to-point transition setting with different values of \(\gamma\) in the barrier function constraints. The environment is of \(32m^{3}\) in volume and has 16 static obstacles. For each swarm size, \(100\) configurations were executed.
Fig. 2: Average performance comparison of approaches in point-to-point transition setting with an increasing number of swarm sizes in a fixed volume of \(32m^{3}\) and with \(16\) static obstacles. 100 configurations were run for each swarm size.
a time in an obstacle-free environment. Second, we repeat the same transition but with 6 static obstacles in the environment. We see that our approach navigates the quadrotors to their desired goals in an agile and smooth manner. Third, we qualitatively show a 12 drone random transition in an obstacle-free setting with two values of \(\gamma\). We see quadrotors with conservative \(\gamma\) show a safe, evasive behaviour. Lastly, a formation of 8 quadrotors performs a transition in the presence of an unpredictable human. With the help of BF constraints, each quadrotor is able to navigate around the human safely.
## VI Conclusion
We presented a novel contribution toward making quadrotor swarm navigation more reliable and scalable. We showed how to formulate the original problem with quadratic collision and kinematic constraints as a QP without relying on conservative approximations. Furthermore, our approach can naturally handle more sophisticated collision avoidance constraints based on discrete-time BFs. In simulation, our optimizer significantly outperformed SCP-based approaches that have been common in recent works. Similarly, our approach also proved to be more computationally efficient than the state-of-the-art optimal control solver ACADO when considering BF constraints. In experiments, we showed the efficacy of the proposed algorithm in challenging scenarios in both obstacle-free and cluttered environments.
|
2304.08013 | CLIP-Lung: Textual Knowledge-Guided Lung Nodule Malignancy Prediction | Lung nodule malignancy prediction has been enhanced by advanced deep-learning
techniques and effective tricks. Nevertheless, current methods are mainly
trained with cross-entropy loss using one-hot categorical labels, which results
in difficulty in distinguishing those nodules with closer progression labels.
Interestingly, we observe that clinical text information annotated by
radiologists provides us with discriminative knowledge to identify challenging
samples. Drawing on the capability of the contrastive language-image
pre-training (CLIP) model to learn generalized visual representations from text
annotations, in this paper, we propose CLIP-Lung, a textual knowledge-guided
framework for lung nodule malignancy prediction. First, CLIP-Lung introduces
both class and attribute annotations into the training of the lung nodule
classifier without any additional overheads in inference. Second, we designed a
channel-wise conditional prompt (CCP) module to establish consistent
relationships between learnable context prompts and specific feature maps.
Third, we align image features with both class and attribute features via
contrastive learning, rectifying false positives and false negatives in latent
space. The experimental results on the benchmark LIDC-IDRI dataset have
demonstrated the superiority of CLIP-Lung, both in classification performance
and interpretability of attention maps. | Yiming Lei, Zilong Li, Yan Shen, Junping Zhang, Hongming Shan | 2023-04-17T06:29:14Z | http://arxiv.org/abs/2304.08013v1 | # CLIP-Lung: Textual Knowledge-Guided Lung
###### Abstract
Lung nodule malignancy prediction has been enhanced by advanced deep-learning techniques and effective tricks. Nevertheless, current methods are mainly trained with cross-entropy loss using one-hot categorical labels, which results in difficulty in distinguishing those nodules with closer progression labels. Interestingly, we observe that clinical text information annotated by radiologists provides us with discriminative knowledge to identify challenging samples. Drawing on the capability of the contrastive language-image pre-training (CLIP) model to learn generalized visual representations from text annotations, in this paper, we propose CLIP-Lung, a textual knowledge-guided framework for lung nodule malignancy prediction. First, CLIP-Lung introduces both class and attribute annotations into the training of the lung nodule classifier without any additional overheads in inference. Second, we designed a channel-wise conditional prompt (CCP) module to establish consistent relationships between learnable context prompts and specific feature maps. Third, we align image features with both class and attribute features via contrastive learning, rectifying false positives and false negatives in latent space. The experimental results on the benchmark LIDC-IDRI dataset have demonstrated the superiority of CLIP-Lung, both in classification performance and interpretability of attention maps.
Keywords:Lung nodule classification vision-language model prompt learning.
## 1 Introduction
Lung cancer is one of the most fatal diseases worldwide, and early diagnosis of the pulmonary nodule has been identified as an effective measure to prevent lung cancer. Deep learning-based methods for lung nodule classification have been widely studied in recent years [9, 10, 12]. Usually, the malignancy prediction is often formulated as benign-malignant binary classification [9, 19], and the higher classification performance and explainable attention maps are impressive. Most previous works employ a learning paradigm that utilizes cross-entropy loss between predicted probability distributions and ground-truth one-hot labels. Furthermore, inspired by ordered labels of nodule progression, researchers have turned their attention to ordinal regression methods to evaluate the benign-unsure-malignant
classification task [2, 11, 13, 18, 21], where the training set additionally includes nodules with uncertain labels. Indeed, the ordinal regression-based methods are able to learn ordered manifolds and to further enhance the prediction accuracy.
However, the aforementioned methods still face challenges in distinguishing visually similar samples with adjacent rank labels. For example in Fig. 1 (a), since we conduct unimodal contrastive learning and map the samples onto a spherical space, the false positive nodule with a malignancy score of 2.75 has a closer distance to that with a score of 4.75, and the false negative one should not be closer to that of score 2.5. To address this issue, we found that the text attributes, such as "subtlety", "sphericity", "margin", and "lobulation", annotated by radiologists can exhibit the differences between these hard samples. Therefore, we propose leveraging the text annotations to guide the learning of visual features. In reality, this also aligns with the fact that the annotated text information represents the direct justification for identifying lesion regions in the clinic.
To integrate text annotations into the image-domain learning process, an effective text encoder providing precise text features is required. Fortunately, recent advancements in vision-language models, such as contrastive language-image pre-training (CLIP) [16], provide us with a powerful pre-trained text encoder learned from text-based supervisions and have shown impressive results in downstream vision tasks. Nevertheless, it is ineffective to directly transfer CLIP to medical tasks due to the data covariate shift. Therefore, in this paper, we propose CLIP-Lung, a framework to classify lung nodules using image-text pairs. Specifically, CLIP-Lung constructs learnable text descriptions for each nodule, including class- and attribute-level. Inspired by CoCoOp [20], we proposed a channel-wise conditional prompt (CCP) module to allow nodule descriptions to guide the generation of informative feature maps. Different from CoCoOp, CCP constructs specific learnable context prompts conditioned on grouped feature maps and triggers more explainable attention maps such as Grad-CAM [17], whereas CoCoOp provides only the common condition for all the prompt tokens. Then, we design a textual knowledge-guided contrastive learning based on obtained image features and textual features involving classes and attributes. Experimental
Figure 1: Motivation of CLIP-Lung. (a) Unimodal contrastive learning. (b) Proposed textual knowledge-guided contrastive learning. Yellow values are annotated malignancy scores. Dashed boxes contain pairs of textual attributes and annotated values.
results on LIDC-IDRI [1] dataset demonstrate the effectiveness of learning with textual knowledge for improving lung nodule malignancy prediction.
The contributions of this paper are summarized as follows.
1. We proposed CLIP-Lung for lung nodule malignancy prediction which leverages clinical textual knowledge to enhance the image encoder and classifier.
2. We designed a channel-wise conditional prompt module to establish consistent relationships among the correlated text tokens and feature maps.
3. We align the image features with class and attribute features through contrastive learning while generating more explainable attention maps simultaneously.
## 2 Methodology
### Overview
Problem formulationIn this paper, we arrange the lung nodule classification dataset as \(\{\mathcal{I},\mathcal{Y},\mathcal{C},\mathcal{A}\}\), where \(\mathcal{I}=\{\mathbf{I}_{i}\}_{i=1}^{N}\) is an image set contained \(N\) lung nodule images. \(\mathcal{Y}=\{y_{i}\}_{i=1}^{N}\) is the corresponding class label set and \(y_{i}\in\{1,2,\ldots,K\}\), and \(K\) is the number of classes. \(\mathcal{C}=\{\mathbf{c}_{k}\}_{k=1}^{K}\) is a set of text embeddings of classes. Finally, \(\mathcal{A}=\{\mathbf{a}_{m}\}_{m=1}^{M}\) is the set of attribute embeddings, where each element \(\mathbf{a}_{m}\in\mathbb{R}^{d\times 1}\) is a vector representing the embedding of an attribute word such as "spiculation". Then, for a given sample \(\{\mathbf{I}_{i},y_{i}\}\), our aim is to learn a mapping \(f_{\mathbf{\theta}}:\mathbf{I}_{i}\mapsto y_{i}\), where \(f\) is a deep neural network parameterized by \(\mathbf{\theta}\).
CLIP-LungIn Fig. 2 (a), the training framework contains an image encoder \(f_{\mathbf{\theta}}\) and a text encoder \(g_{\mathbf{\phi}}\). First, the input image \(\mathbf{I}_{i}\) is fed into \(f_{\mathbf{\theta}}\) and then generates the feature maps. Then according to Fig. 2 (b), the feature maps are converted to
Figure 2: Illustration of the proposed CLIP-Lung.
channel-wise feature vectors \(f_{\mathbf{\theta}}(\mathbf{I}_{i})=\mathbf{F}_{t,:}\) and then to learnable tokens \(\mathbf{l}^{\prime}_{t}\). Second, we initialize the context tokens \(\mathbf{l}_{t}\) and add them with \(\mathbf{l}^{\prime}_{t}\) to construct the learnable prompts, where \(T\) is the number of context words. Next, the concatenation of the class token and \(\mathbf{l}_{t}+\mathbf{l}^{\prime}_{t}\) is used as input of text encoder yielding the class features \(g_{\mathbf{\phi}}(\mathbf{c}_{k})=\mathbf{C}_{k,:}\), note that \(\mathbf{C}_{k,:}\) is conditioned on channel-wise feature vectors \(\mathbf{F}_{t,:}\). Finally, the attribute tokens \(\mathbf{a}_{m}\) are also fed into the text encoder to yield corresponding attribute features \(g_{\mathbf{\phi}}(\mathbf{a}_{m})=\mathbf{A}_{m,:}\). Note that the vectors \(\mathbf{F}_{t,:}\), \(\mathbf{l}_{t,:}\), \(\mathbf{l}^{\prime}_{t,:}\), and \(\mathbf{C}_{k,:}\) are with the same dimension \(d=512\) in this paper. Consequently, we have image feature \(\mathbf{F}\in\mathbb{R}^{T\times d}\), class feature \(\mathbf{C}\in\mathbb{R}^{K\times d}\), and attribute feature \(\mathbf{A}\in\mathbb{R}^{M\times d}\) to conduct the textual knowledge-guided contrastive learning.
### Instance-Specific Attribute Weighting
For the attribute annotations, all the lung nodules in the LIDC-IDRI dataset are annotated with the _same_ eight attributes: "subtlety", "internal structure", "calcification", "sphericity", "margin", "lobulation", "spiculation", and "texture" [4, 8], and the annotated value for each attribute is ranged from 1 to 5 except for "calcification" that is ranged from 1 to 6. In this paper, we fix the parameters of a pre-trained text encoder so that the generated eight text feature vectors are the same for all the nodules. Therefore, we propose an instance-specific attribute weighting scheme to distinguish different nodules. For the \(i\)-th sample, the weight for each \(\mathbf{a}_{m}\) is calculated through normalizing the annotated values:
\[w_{m}=\frac{\exp(v_{m})}{\sum_{m=1}^{M}\exp(v_{m})}, \tag{1}\]
where \(v_{m}\) denotes the annotated value of \(\mathbf{a}_{m}\). Then the weight vector of the \(i\)-th sample is represented as \(\mathbf{w}_{i}=(w_{1},w_{2},\dots,w_{M})^{\top}\in\mathbb{R}^{M\times 1}\). Hence, the element-wise multiplication \(\mathbf{w}_{i}\cdot\mathbf{A}_{i}\) is unique to \(\mathbf{I}_{i}\).
### Channel-wise Conditional Prompt
CoCoOp [20] firstly proposed to learn context prompts for vision-language models conditioned on visual features. However, it is inferior to align context words with partial regions of the lesion. Therefore, we propose a channel-wise conditional prompt (CCP) module, in Fig. 2 (b), to split latent feature maps into \(T\) groups and then flatten them into vectors \(\mathbf{F}_{t,:}\). Next, we denote \(h(\cdot)\) as a context net that is composed of a multi-layer perceptron (MLP) with one hidden layer, and each learnable context token is now obtained by \(\mathbf{l}^{\prime}_{t}=h(\mathbf{F}_{t,:})\). Hence, the conditional prompt for the \(t\)-th token is \(\mathbf{l}_{t}+\mathbf{l}^{\prime}_{t}\). In addition, CCP also outputs the \(\mathbf{F}_{t,:}\) for image-class and image-attribute contrastive learning.
### Textual Knowledge-Guided Contrastive Learning
Contrastive learning can effectively shorten the distances between positive pairs and increase the distances between negative ones [3, 5, 7], and vision-language
models also applied contrastive learning using cross-modal image-text pairs and achieved generalized image and text encoders [16]. In CLIP-Lung, our aim is to align \(\mathbf{F}\in\mathbb{R}^{T\times d}\) with \(\mathbf{C}\in\mathbb{R}^{K\times d}\) and \(\mathbf{A}\in\mathbb{R}^{M\times d}\) as illustrated in Fig. 2, _i.e._, using class and attribute knowledge to regularize the feature maps.
Image-class alignmentFirst, the same to CLIP, we align the image and class information by minimizing the cross-entropy (CE) loss based on the prediction probability \(p_{\text{i}}\):
\[\mathcal{L}_{\text{IC}}=-\sum_{t=1}^{T}\sum_{k=1}^{K}y_{i}\text{log}(p_{\text {i}}),\quad p_{\text{i}}=\frac{\exp(\sigma(\mathbf{F}_{t,:},\mathbf{C}_{k,:})/\tau)}{ \sum_{k^{\prime}=1}^{K}\exp(\sigma(\mathbf{F}_{t,:},\mathbf{C}_{k^{\prime},:})/\tau)}, \tag{2}\]
where \(\mathbf{C}_{k,:}=g_{\mathbf{\phi}}(\mathbf{c}_{k}\bigoplus(\mathbf{l}_{1}+\mathbf{l}^{\prime}_{1}, \mathbf{l}_{2}+\mathbf{l}^{\prime}_{2},\dots,\mathbf{l}_{T}+\mathbf{l}^{\prime}_{T}))\in \mathbb{R}^{d\times 1}\) and "\(\bigoplus\)" denotes concatenation, _i.e._, \(\mathbf{C}_{k,:}\) is conditioned on learnable prompts \(\mathbf{l}_{t}+\mathbf{l}^{\prime}_{t}\). \(\sigma(\cdot,\cdot)\) calculates cosine similarity and \(\tau\) is temperature term. Therefore, \(\mathcal{L}_{\text{IC}}\) implements the contrastive learning between channel-wise features and corresponding class features, _i.e._, the ensemble of grouped image-class alignment results.
Image-attribute alignmentIn addition to image-class alignment, we further expect the image features to be correlated with specific attributes. So we conduct image-attribute alignment by minimizing the InfoNCE loss [5, 16]:
\[\mathcal{L}_{\text{IA}}=-\sum_{t=1}^{T}\sum_{m=1}^{M}\text{log}\frac{\exp( \sigma(\mathbf{F}_{t,:},\mathbf{w}_{m,:}\cdot\mathbf{A}_{m,:})/\tau)}{\sum_{m^{\prime}=1} ^{M}\exp(\sigma(\mathbf{F}_{t,:},\mathbf{w}_{m^{\prime},:}\cdot\mathbf{A}_{m^{\prime},:})/ \tau)}. \tag{3}\]
Due to each vector \(\mathbf{F}_{t,:}\) is mapped from the \(t\)-th group of feature maps through context net \(h(\cdot)\), then \(\mathcal{L}_{\text{IA}}\) indicates which attribute the \(\mathbf{F}_{t,:}\) is closest to. Therefore, certain feature maps can be guided by specific annotated attributes.
Class-attribute alignmentAlthough the image features have been aligned with classes and attributes, the class embeddings obtained by the pre-trained CLIP encoder may shift in latent space. This will result in inconsistent class space and attribute space, _i.e._, annotated attributes do not match the corresponding classes, which is contradictory to the actual clinical diagnosis. To avoid this weakness, we further align the class and attribute features:
\[\mathcal{L}_{\text{CA}}=-\sum_{k=1}^{K}\sum_{m=1}^{M}\text{log}\frac{\exp( \sigma(\mathbf{C}_{k,:},\mathbf{w}_{m,:}\cdot\mathbf{A}_{m,:})/\tau)}{\sum_{m^{\prime}=1} ^{M}\exp(\sigma(\mathbf{C}_{k,:},\mathbf{w}_{m^{\prime},:}\cdot\mathbf{A}_{m^{\prime},:})/ \tau)}, \tag{4}\]
and this loss implies semantic consistency between classes and attributes.
Finally, the total loss function is defined as follows:
\[\mathcal{L}=\mathbb{E}_{\mathbf{I}_{i}\in\mathcal{I}}\big{[}\mathcal{L}_{\text{CE }}+\mathcal{L}_{\text{IC}}+\alpha\cdot\mathcal{L}_{\text{IA}}+\beta\cdot \mathcal{L}_{\text{CA}}\big{]}, \tag{5}\]
where \(\alpha\) and \(\beta\) are hyperparameters for adjusting the losses and are set as \(1\) and \(0.5\), respectively. \(\mathcal{L}_{\text{CE}}\) denotes the cross-entropy loss between predicted probabilities obtained by the classifier and the ground-truth labels. Note that during the inference phase, test images are only fed into the trained image encoder and classifier, therefore, CLIP-Lung does not introduce any additional computational overhead in inference.
## 3 Experiments
### Dataset and Implementation Details
DatasetLIDC-IDRI 1 is a dataset for pulmonary nodule classification or detection based on low-dose CT, which involves 1,010 patients. All the nodules were labeled with scores from 1 to 5, indicating the malignancy progression. We cropped all the nodules with a square shape of a doubled equivalent diameter at the annotated center, then resized samples to the volume of \(32\times 32\times 32\). Following [9, 11], we modified the first layer of the image encoder to be with 32 channels. According to existing works [11, 18], we regard a nodule with an average score between 2.5 and 3.5 as unsure nodules, benign and malignant categories are those with scores lower than 2.5 and larger than 3.5, respectively. In this paper, we construct three sub-datasets: LIDC-A contains three classes of nodules both in training and test sets; according to [11], we construct the LIDC-B, which contains three classes of nodules _only_ in the training set, and the test set contains benign and malignant nodules; LIDC-C includes benign and malignant nodules both in training and test sets.
Footnote 1: [https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966254](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966254)
Experimental settingsIn this paper, we apply the CLIP pre-trained text encoder ViT-B/16 as the text encoder for CLIP-Lung, and the image encoder we used is ResNet-18 [6] due to the relatively smaller scale of training data. The image encoder is initialized randomly. Note that for the text branch, we froze the parameters of the text encoder, and update the learnable tokens \(\mathbf{l}\) and \(\mathbf{l^{\prime}}\) during training. The learning rate is 0.001 following the cosine decay, the optimizer is stochastic gradient descent with momentum 0.9 and weight decay 0.00005. The temperature \(\tau\) is initialized as 0.07 and updated during training. All of our experiments are implemented with PyTorch [15] and trained with NVIDIA A100 GPUs. The experimental results are reported with average values through five randomly independent split folds. For different classes, we report the recall and F1-score values, and "\(\pm\)" indicates standard deviation.
### Experimental Results and Analysis
Performance comparisonsIn Table 1, we compare the classification performances on the LIDC-A dataset, where we regard the benign-unsure-malignant as an ordinal relationship. Compared with ordinal classification methods such as Poisson, NSB, UDM, and CORF, CLIP-Lung achieves the highest accuracy and F1-scores for the three classes, which demonstrates the effectiveness of textual knowledge-guided learning. CLIP and CoCoOp also outperform ordinal classification methods and show the superiority of large-scale pre-trained text encoders. Furthermore, CLIP-Lung obtained higher recalls than CLIP and CoCoOp _w.r.t._ benign and malignant classes, however, the recall of unsure is lower than theirs, we argue that this is due to the indistinguishable textual annotations such as similar attributes of different nodules.
In Table 2, we compare the performances on LIDC-B and LIDC-C datasets. CLIP-Lung obtains higher evaluation values other than recalls of benign class. We conjecture the reason is that most of the benign nodules are with similar appearances and subtle differences in text attributes, therefore, aligning these two types of features is difficult and the text features will be biased to those of malignant nodules.
Visual features and attention mapsTo illustrate the influence of incorporating class and attribute knowledge, we provide the t-SNE [14] Grad-CAM [17] results obtained by CLIP, CoCoOp, and CLIP-Lung. In Fig. 3, we can see that CLIP yields a non-compact latent space for two kinds of nodules. CoCoOp and CLIP-Lung alleviate this phenomenon, which demonstrates that the learnable prompts guided by nodule classes are more effective than fixed prompt engineering. Further, compared with CLIP-Lung, CoCoOp could not consider the attribute information to learn the prompts, therefore, it results in more false negatives in latent space. From the attention maps we can observe that CLIP cannot precisely capture spiculation and lobulation regions that are highly correlated with malignancy. Simultaneously, CLIP-Lung performs better than CoCoOp, which demonstrates the guidance from textual descriptions such as "spiculation".
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Accuracy} & \multicolumn{2}{c}{Benign} & \multicolumn{2}{c}{Malignant} & \multicolumn{2}{c}{Unsure} \\ \cline{3-10} & & Recall & F1 & Recall & F1 & Recall & F1 \\ \hline CE Loss & 54.2\(\pm\)0.6 & 72.2 & 62.0 & 64.4 & 61.3 & 29.0 & 36.6 \\ Poisson [2] & 52.7\(\pm\)0.7 & 60.5 & 56.8 & 58.4 & 58.7 & 41.0 & 44.1 \\ NSB [13] & 53.4\(\pm\)0.7 & **80.7** & 63.0 & **67.3** & 63.8 & 16.0 & 24.2 \\ UDM [18] & 54.6\(\pm\)0.4 & 76.7 & 64.3 & 49.5 & 53.5 & 32.5 & 39.5 \\ CORF [21] & 56.8\(\pm\)0.4 & 71.3 & 63.3 & 61.3 & 62.3 & 38.5 & 44.3 \\ CLIP [16] & 56.6\(\pm\)0.3 & 59.5 & 59.2 & 55.2 & 60.0 & 53.9 & 52.2 \\ CoCoOp [20] & 56.8\(\pm\)0.6 & 59.0 & 59.2 & 55.2 & 60.0 & **55.1** & 52.8 \\
**CLIP-Lung** & **60.9\(\pm\)**0.4 & 67.5 & **64.4** & 60.9 & **66.3** & 53.4 & **54.1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification results on the test set of LIDC-A.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Accuracy} & \multicolumn{2}{c}{LIDC-B} & \multicolumn{4}{c}{LIDC-C} \\ \cline{3-10} & & \multicolumn{2}{c}{Benign} & \multicolumn{2}{c}{Malignant} & \multicolumn{2}{c}{Benign} & \multicolumn{2}{c}{Malignant} \\ \cline{3-10} & & Recall & F1 & Recall & F1 & Recall & F1 \\ \hline CE Loss & 83.3\(\pm\)0.6 & 92.4 & 88.4 & 63.4 & 70.3 & 85.5\(\pm\)0.5 & 91.5 & 89.7 & 72.3 & 75.6 \\ Poisson [2] & 81.8\(\pm\)0.4 & 94.2 & 87.7 & 54.5 & 65.1 & 84.0\(\pm\)0.3 & 87.9 & 88.3 & 75.2 & 74.5 \\ NSB [13] & 78.1\(\pm\)0.5 & 90.6 & 85.8 & 50.5 & 60.7 & 84.9\(\pm\)0.7 & 91.0 & 89.2 & 71.3 & 74.6 \\ UDM [18] & 79.3\(\pm\)0.4 & 87.0 & 86.2 & 62.4 & 67.7 & 84.6\(\pm\)0.5 & 88.8 & 88.8 & 75.2 & 75.2 \\ CORF [21] & 81.5\(\pm\)0.3 & **95.9** & 87.8 & 49.5 & 62.8 & 83.0\(\pm\)0.2 & 87.9 & 87.7 & 72.3 & 72.6 \\ CLIP [16] & 83.6\(\pm\)0.6 & 92.0 & 88.7 & 64.4 & 70.4 & 87.5\(\pm\)0.3 & 92.0 & 91.0 & 77.0 & 78.8 \\ CoCoOp [20] & 86.8\(\pm\)0.7 & 94.5 & 90.9 & 69.0 & 75.9 & 88.2\(\pm\)0.6 & **95.0** & 91.8 & 72.4 & 78.8 \\
**CLIP-Lung** & **87.5\(\pm\)**0.3 & 94.5 & **91.7** & **72.3** & **79.0** & **89.5\(\pm\)**0.4 & 94.0 & **92.8** & **80.5** & **82.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification results on test sets of LIDC-B and LIDC-C.
Ablation studiesIn Fig. 3, we verify the effectiveness of different loss components on the three constructed datasets. Based on \(\mathcal{L}_{\mathrm{IC}}\), \(\mathcal{L}_{\mathrm{IA}}\) and \(\mathcal{L}_{\mathrm{CA}}\) improve the performances on LIDC-A, indicating the effectiveness of capturing fine-grained features of ordinal ranks using class and attribute texts. However, they perform relatively worse on LIDC-B and LIDC-C, especially the \(\mathcal{L}_{\mathrm{IC}}+\mathcal{L}_{\mathrm{CA}}\). That is to say, \(\mathcal{L}_{\mathrm{IA}}\) is more important in latent space rectification, _i.e._, image-attribute consistency. In addition, we observe that \(\mathcal{L}_{\mathrm{IC}}+\mathcal{L}_{\mathrm{IA}}\) performs better than \(\mathcal{L}_{\mathrm{IA}}+\mathcal{L}_{\mathrm{CA}}\), which is attributed to that \(\mathcal{L}_{\mathrm{CA}}\) regularizes the image features indirectly.
## 4 Conclusion
In this paper, we proposed a textual knowledge-guided framework for pulmonary classification, named CLIP-Lung. We explored the utilization of clinical textual annotations based on large-scale pre-trained text encoders. CLIP-Lung aligned the different modalities of features generated from nodule classes, attributes, and images through contrastive learning. Most importantly, CLIP-Lung establishes correlations between learnable prompt tokens and feature maps using the proposed CCP module, and this guarantees explainable attention maps localizing fine-grained clinical features. Finally, CLIP-Lung outperforms compared methods, including CLIP on LIDC-IDRI benchmark. Future work can concentrate on extending CLIP-Lung with more diverse textual knowledge.
Figure 3: The t-SNE (**Left**) and Grad-CAM (**Right**) results.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \(\mathcal{L}_{\mathrm{IC}}\) & \(\mathcal{L}_{\mathrm{IA}}\) & \(\mathcal{L}_{\mathrm{CA}}\) & LIDC-A & LIDC-B & LIDC-C \\ \hline ✓ & & & 56.8\(\pm\)0.6 & 86.8\(\pm\)0.7 & 88.2\(\pm\)0.6 \\ ✓ & ✓ & & 59.4\(\pm\)0.4 & 86.8\(\pm\)0.6 & 86.7\(\pm\)0.4 \\ & ✓ & ✓ & 58.1\(\pm\)0.2 & 85.7\(\pm\)0.6 & 87.5\(\pm\)0.5 \\ ✓ & & ✓ & 56.9\(\pm\)0.3 & 84.7\(\pm\)0.4 & 84.0\(\pm\)0.7 \\ ✓ & ✓ & ✓ & 60.9\(\pm\)0.4 & 87.5\(\pm\)0.5 & 89.5\(\pm\)0.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on different losses. We report classification accuracies. |
2305.07009 | Fault-tolerant quantum algorithm for symmetry-adapted perturbation
theory | The efficient computation of observables beyond the total energy is a key
challenge and opportunity for fault-tolerant quantum computing approaches in
quantum chemistry. Here we consider the symmetry-adapted perturbation theory
(SAPT) components of the interaction energy as a prototypical example of such
an observable. We provide a guide for calculating this observable on a
fault-tolerant quantum computer while optimizing the required computational
resources. Specifically, we present a quantum algorithm that estimates
interaction energies at the first-order SAPT level with a Heisenberg-limited
scaling. To this end, we exploit a high-order tensor factorization and block
encoding technique that efficiently represents each SAPT observable. To
quantify the computational cost of our methodology, we provide resource
estimates in terms of the required number of logical qubits and Toffoli gates
to execute our algorithm for a range of benchmark molecules, also taking into
account the cost of the eigenstate preparation and the cost of block encoding
the SAPT observables. Finally, we perform the resource estimation for a heme
and artemisinin complex as a representative large-scale system encountered in
drug design, highlighting our algorithm's performance in this new benchmark
study and discussing possible bottlenecks that may be improved in future work. | Cristian L. Cortes, Matthias Loipersberger, Robert M. Parrish, Sam Morley-Short, William Pol, Sukin Sim, Mark Steudtner, Christofer S. Tautermann, Matthias Degroote, Nikolaj Moll, Raffaele Santagati, Michael Streif | 2023-05-11T17:52:44Z | http://arxiv.org/abs/2305.07009v2 | # Fault-tolerant quantum algorithm
###### Abstract
The efficient computation of observables beyond the total energy is a key challenge and opportunity for fault-tolerant quantum computing approaches in quantum chemistry. Here we consider the symmetry-adapted perturbation theory (SAPT) components of the interaction energy as a prototypical example of such an observable. We provide a guide for calculating this observable on a fault-tolerant quantum computer while optimizing the required computational resources. Specifically, we present a quantum algorithm that estimates interaction energies at the first-order SAPT level with a Heisenberg-limited scaling. To this end, we exploit a high-order tensor factorization and block encoding technique that efficiently represents each SAPT observable. To quantify the computational cost of our methodology, we provide resource estimates in terms of the required number of logical qubits and Toffoli gates to execute our algorithm for a range of benchmark molecules, also taking into account the cost of the eigenstate preparation and the cost of block encoding the SAPT observables. Finally, we perform the resource estimation for a heme and artemisinin complex as a representative large-scale system encountered in drug design, highlighting our algorithm's performance in this new benchmark study and discussing possible bottlenecks that may be improved in future work.
The computation of expectation values of observables other than the total molecular energy is a foundational task in quantum chemistry, for which fault-tolerant quantum computers (FTQC) are expected to provide speed-ups for those systems where classical computers cannot find an accurate solution [1; 2; 3; 4]. Some of the most important observable properties, such as the Born-Oppenheimer potential energy landscape, the adiabatic excitation energy landscape, the total intermolecular interaction energies, polarizabilities, and various spectroscopical properties, can all be written in terms of the total molecular energy or its derivatives [5; 6]. For these cases, many recent efforts have focused on optimizing the computational cost, bringing several orders of magnitude improvements [7; 8; 6; 9]. However, many other properties of molecular systems cannot be optimally defined as a linear combination of total energies and require a specific quantum algorithm to calculate their expectation value [10; 11]. For example, the one-particle density, total kinetic energy, and multipole moments of a molecular wavefunction do not allow an efficient expression in terms of the total energy [12]. The components of the symmetry-adapted perturbation theory (SAPT) represent an additional example of such kind of observables [13; 14]. This type of observables plays a pivotal role in the characterization and featurization of intermolecular interactions between two weakly interacting sub-systems, with practical applications in molecules and materials design for polymers, catalysts, batteries, and drugs [15; 16].
SAPT is a variant of Rayleigh-Schrodinger perturbation theory specifically designed to describe fermionic systems. It restores the Pauli exclusion principle for antisymmetric wavefunctions at every order of perturbation theory. In classical computing, SAPT is a state-of-the-art method that directly calculates the interaction energy, \(E_{\text{int}}=E_{AB}-E_{A}-E_{B}\), defined as the energy difference between the weakly interacting systems (monomers A and B), and the system in which the monomers interact, referred to as the dimer (AB), see Refs. [17; 18; 19]. While the so-called supermolecular approach [20; 21] calculates the interaction energy by combining the results of three separate energy calculations, SAPT computes \(E_{\text{int}}\) indirectly as an observable estimation task by decomposing the interaction energy in terms of physically interpretable quantities such as electrostatic, exchange, dispersion and induction energy contributions (see Fig. 1).
Previous studies have focused on noisy intermediate-scale quantum (NISQ) computations of these observables [22; 23], where the uncertainty scaling is far from the optimal Heisenberg-limited scaling that can be achieved by fault-tolerant quantum algorithms [11]. Furthermore, the scientific literature has been missing an in-depth study of the computational cost of calculating the SAPT components on a fault-tolerant quantum computer.
Here, we present a Heisenberg-limited methodology for calculating the first-order SAPT terms on a fault-tolerant
quantum computer. We take advantage of a recently proposed Heisenberg-limited method for expectation values estimation (called QSP-EVE) [24], exploiting the latest techniques in quantum simulation, such as block encodings and quantum signal processing. We tailor the QSP-EVE algorithm to calculate the expectation value \(\langle\Psi_{A}\Psi_{B}|\hat{F}|\Psi_{A}\Psi_{B}\rangle\) defined with respect to the two ground states of the two sub-systems \(|\Psi_{A}\rangle\) and \(|\Psi_{B}\rangle\), to compute first-order symmetry-adapted perturbation theory observables.
In this work, we introduce a framework for implementing SAPT on a fault-tolerant quantum computer with Heisenberg scaling of the uncertainty and we determine the corresponding algorithmic resource costs. We accomplish this goal through three major contributions we outline in the following sections: (i) derivation of first-order SAPT operators using a second quantization picture (Sec. I.1). The operators were derived in both the full and active space pictures and are useful in both the monomer and dimer centered bases, (ii) tailoring of the QSP-EVE algorithm for the estimation of expectation values \(\langle\Psi_{A}\Psi_{B}|\hat{F}|\Psi_{A}\Psi_{B}\rangle\) of observables \(\hat{F}\) (Sec. I.2) and (iii) the development of tensor factorization and block encoding arithmetic techniques specifically designed for SAPT operators that reduce the total cost of the PREPARE and SELECT operators associated with the block encodings, as well as the \(\ell_{1}\) norm of the block encoded observables (Sec. I.3). To evaluate the performance of our framework, we compare the cost of the proposed tensor-factorized SAPT-EVE algorithm with a set of benchmark molecules against an equivalent _sparse_ SAPT-EVE algorithm that encodes the SAPT operators through the conventional Jordan-Wigner mapping (Sec. II.1). Finally, we present a new benchmark study relevant to drug design, which may be of independent interest to the quantum computing community, involving the interaction of a heme with an artemisinin drug molecule (Sec. II.2). We finalize our work by discussing the complete resource cost of our algorithm and outlining possible avenues of improvement for future work (Sec. III).
## I Theoretical results
### Symmetry-adapted perturbation theory
Ranking ligands according to their binding strength with a substrate is a fundamental task in computational drug design [25]. SAPT aims to calculate the interaction energy \(E_{\text{int}}\) of the full system (the dimer) as a sum of physically interpretable contributions from the substrate and ligand (monomer A and monomer B), \(E_{\text{int}}=E_{\text{pol}}^{(1)}+E_{\text{exch}}^{(1)}+E_{\text{pol}}^{(2) }+E_{\text{exch}}^{(2)}\cdots\), through the use of a symmetry-adapted Rayleigh-Schrodinger perturbative expansion. In this manuscript, we derive the second quantized operators based on a methodology first presented in the paper by Moszynski _et al._[26], which presented the SAPT formulation using a density matrix formalism. The advantage of this approach is that it will work both on the dimer-centered basis as well as on the monomer-centered basis. In Appendix B, we provide a full derivation of the first-order SAPT operators highlighting the permutational symmetries inherent to all operators. Here, we summarize the results for the first-order interaction energy under the \(S^{2}\) approximation (see Appendix B), which may be decomposed in terms of electrostatic polarization energy \(E_{\text{pol}}^{(1)}\) and exchange energy \(E_{\text{exch}}^{(1)}\) contributions,
\[E_{\text{int}}^{(1)}=E_{\text{pol}}^{(1)}+E_{\text{exch}}^{(1)}=\langle\hat{V }\rangle+\left(\langle\widehat{VP}_{\text{s}}\rangle-\langle\hat{V}\rangle \,\langle\hat{P}\rangle\right), \tag{1}\]
and is fully captured by defining an electrostatic \(\hat{V}\), exchange \(\hat{P}\), and symmetric electrostatic-exchange \(\widehat{VP}_{\text{s}}\) operator with excitation operators, \(\hat{E}_{\text{pr}^{\prime}}=\hat{a}_{\hat{\sigma}}^{\dagger}\hat{a}_{\hat{ \sigma}^{\prime}}\) and \(\hat{E}_{\text{ox}^{\prime}}=\hat{b}_{\hat{\sigma}}^{\dagger}\hat{b}_{\hat{ \sigma}^{\prime}}\), defined for each monomer. In the SAPT framework, two independent sets of fermionic operators \(\hat{a}_{\hat{\sigma}}^{\dagger}/\hat{a}_{\hat{\tau}}\) and \(\hat{b}_{\hat{\sigma}}^{\dagger}/\hat{b}_{\hat{\sigma}}\) are used which obey the conventional
Figure 1: Overview of an interaction energy problem encountered in drug design. (a) Schematics of monomer A, monomer B, and dimer. (b) Supermolecular and (c) SAPT approaches to calculate the interaction energy. First-order SAPT energy terms are determined from expectation values of electrostatic \(\hat{V}\), exchange \(\hat{P}\) and electrostatic-exchange \(\widehat{VP}_{\text{s}}\) operators.
fermionic commutation relations but fully commute with one another, \([\hat{a}_{\text{r}}^{\dagger},\hat{b}_{\text{q}}]=[\hat{a}_{\text{r}},\hat{b}_{ \text{q}}^{\dagger}]=[\hat{a}_{\text{r}}^{\dagger},\hat{b}_{\text{q}}^{\dagger }]=[\hat{a}_{\text{r}},\hat{b}_{\text{q}}]=0\). Explicitly, the SAPPT operators are written as:
\[\hat{V} =\tfrac{1}{2}\sum_{\mathbf{p},\mathbf{q}}\left(v_{\text{q}_{1} \text{q}_{2}}^{{}^{\mathrm{P}_{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{ 1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2 }}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1} }{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{ }_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{2}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{}_{ \mathrm{1}}{}_{\mathrm{1}}{}_{\mathrm{1}}{
for the SAPT calculation. The QSP-EVE algorithm's correctness, probability of success, and other details can be found in Ref. [24].
The SAPT-EVE algorithm is comprised of three different quantum phase estimation procedures. Each phase estimation calculation features a special iterate \(\mathcal{U}_{F}\) for each observable \(\hat{F}=\{\hat{V},\hat{P},\widetilde{VP}_{\text{s}}\}\). The iterate \(\mathcal{U}_{F}\) is deliberately constructed such that for some eigenstates \(|F+\rangle\) and \(|F-\rangle\), we have
\[\mathcal{U}_{F}|F\pm\rangle=\exp\left(\pm i2\arccos\sqrt{\frac{1-\langle\hat{F }/\lambda_{F}\rangle}{8}}\right)|F\pm\rangle \tag{11}\]
where \(\langle\hat{F}/\lambda_{F}\rangle\) is the desired expectation value of the respective observable (relative to its \(\ell_{1}\) norm \(\lambda_{F}\)) on the monomer reference states. Phase estimation with the operator \(\mathcal{U}_{F}\) provides an estimate of the eigenphase of Eq. (11), allowing us to extract \(\langle\hat{F}\rangle\).
The circuit for \(\mathcal{U}_{F}=\mathcal{R}_{\tau}\mathcal{R}_{\pi}\) is visualized in Fig. 2. The first reflection circuit, \(\mathcal{R}_{\pi}\), is used to flag the monomer ground-states, \(|\Psi_{A},\Psi_{B}\rangle\), non-destructively allowing them to be used in the rest of the algorithm coherently, thereby preserving Heisenberg scaling. \(\mathcal{R}_{\pi}\) features inner phase estimation circuits \(\mathsf{IQPE_{A}}\) and \(\mathsf{IQPE_{B}}\) applied on the registers associated with monomers A and B. Within these circuits, we employ QSP techniques as in [24] to implement the rounding of the eigenphases. Quantum signal processing requires a block encoding of the Hamiltonian for each monomer, which will bring an \(\ell_{1}\) norm dependence of both Hamiltonians to the total complexity of the algorithm, as shown below. The necessary analysis of the expectation value error that sets the degree of the QSP polynomials is similar to Ref. [24] and is summarized for the SAPT-EVE algorithm in Appendix E. An open-controlled, coupled-reflection circuit using \(\mathsf{Ref_{A}}\) and \(\mathsf{Ref_{B}}\) is also used to reflect the binary eigenphases (known to \(p\) bits of precision) associated with the monomer ground states.
The second reflection, \(\mathcal{R}_{\tau}\), in the circuit of Fig. 2 is a controlled version of the observable block encoding, \(\mathcal{B}[\hat{F}/\lambda_{F}]\), discussed in more detail in Refs. [33, 34, 8, 35]. The asymptotic Toffoli gate cost, \(C_{\text{Toff}}\), of the SAPT-EVE algorithm is given by,
\[C_{\text{Toff}}=\mathcal{O}\left(\frac{\lambda_{F}}{\varepsilon_{F}}\left[ \left(\frac{\lambda_{A}}{\Delta_{A}}C_{A}+\frac{\lambda_{B}}{\Delta_{B}}C_{B} \right)\log\frac{\lambda_{F}}{\varepsilon_{F}}+C_{F}\right]\right)\,, \tag{12}\]
where \(\lambda_{A}\), \(\lambda_{B}\) and \(\lambda_{F}\) denote the \(\ell_{1}\) norms of \(\hat{H}_{A}\), \(\hat{H}_{B}\) and \(\hat{F}\), respectively. \(\Delta_{A}\) and \(\Delta_{B}\) are the respective spectral gaps between the ground state and first excited state of Hamiltonians \(\hat{H}_{A/B}\), \(\varepsilon_{F}\) is the specific precision allocated to the respective observable \(\hat{F}\), and \(C_{A}\), \(C_{B}\), and \(C_{F}\) correspond to the cost of the block encoded operators for the Hamiltonian operators of monomers \(A\) and \(B\) as well as the observable \(\hat{F}\).
## Appendix C SAPT operator encoding
In this section, we present the _sparse_ and _tensor factorization_ schemes, which can be used for block encoding all of the SAPT operators in the SAPT-EVE algorithm. To unify both approaches, we use the Majorana representation, which expresses Eqs. (2)-(4) in terms of Hermitian and self-inverse operators, \(\hat{\gamma}_{\text{p},0}/\hat{\gamma}_{\text{p},1}\) and \(\hat{\gamma}_{\text{q},0}/\hat{\gamma}_{\text{q},1}\), defined for each monomer separately (see Appendix F for details). The electrostatic and exchange operators are written as,
\[\hat{V} =\tfrac{1}{4}\sum_{\text{p}_{\text{Q}}}v_{\text{q}_{\text{Q}}}^{ \text{p}p}+\tfrac{i}{4}\sum_{\text{$\hat{r}_{1}$}\text{p}_{2}}f_{\text{$\hat{r }_{1}$}\text{$\hat{r}_{2}$}}^{(A)}\hat{\gamma}_{\text{$\hat{r}_{1}$},0}\hat{ \gamma}_{\text{$\hat{r}_{2}$},1} \tag{13}\] \[+\tfrac{i}{4}\sum_{\text{$\text{Q}_{1}$}\text{$\hat{\text{Q}_{2} }$}}f_{\text{$\hat{r}_{1}$}\text{$\hat{r}_{2}$}}^{(B)}\hat{\gamma}_{\text{$ \text{q}_{1}$},0}\hat{\gamma}_{\text{$\hat{\text{Q}_{2}}$},1}\] \[-\tfrac{1}{4}\sum_{\text{$\text{PQ}$}}\text{sym}(v_{\text{$\text {Q}_{1}$}\text{$\hat{\text{Q}_{2}}$}}^{\text{$\hat{r}_{1}$}\text{$\hat{r}_{2}$}}) \hat{\gamma}_{\text{$\hat{r}_{1}$},0}\hat{\gamma}_{\text{$\hat{r}_{2}$},1}\hat{ \gamma}_{\text{$\text{Q}_{1}$},0}\hat{\gamma}_{\text{$\hat{\text{Q}_{2}}$},1},\]
and
\[\hat{P} =-\tfrac{1}{4}\sum_{\text{$\text{PQ}$}}S_{\text{$\text{Q}$}}^{ \text{p}}S_{\text{$\text{Q}$}}^{\text{p}}-\tfrac{i}{4}\sum_{\text{$\text{P}_{ 1}$}\text{$\hat{\text{P}_{2}$}$}}S_{\text{$\text{Q}$}}^{\text{p}}S_{\text{$ \text{Q}$}}^{\text{p}}\hat{\gamma}_{\text{$\hat{r}_{1}$},0}\hat{\gamma}_{\text{$ \hat{r}_{2}$},1} \tag{14}\] \[-\tfrac{i}{4}\sum_{\text{$\text{Q}_{1}$}\text{$\hat{\text{Q}_{2} }$}}S_{\text{$\text{Q}_{2}$}}^{\text{$\hat{r}_{2}$}}S_{\text{$\text{Q}$}}^{ \text{p}}\hat{\gamma}_{\text{$\hat{\text{Q}_{1}$}$}}\hat{\gamma}_{\text{$\text{Q}_{ 2}$},1}\hat{\gamma}_{\text{$\text{Q}_{2}$},1}\] \[+\tfrac{1}{4}\sum_{\text{$\text{PQ}$}}\text{sym}(S_{\text{$\text {Q}_{2}$}}^{\text{$\hat{r}_{1}$}}S_{\text{$\text{Q}_{2}$}}^{\text{$\hat{r}_{2}$}}) \hat{\gamma}_{\text{$\hat{r}_{1}$},0}\hat{\gamma}_{\text{$\hat{r}_{2}$},1}\hat{ \gamma}_{\text{$\text{Q}_{1}$},0}\hat{\gamma}_{\text{$\text{Q}_{2}$},1},\]
Figure 2: Iterate \(\mathcal{U}_{F}\) used in SAPT-EVE algorithm to determine the expectation values of operators, \(\hat{F}=\{\hat{V},\hat{P},\widetilde{VP}_{\text{s}}\}\). The iterate consists of two reflection circles, \(\mathcal{R}_{\pi}\) and \(\mathcal{R}_{\tau}\), which act on 7 quantum registers in total. \(\mathsf{enc}[A]\) / \(\mathsf{enc}[B]\) are auxiliary registers used for the block encodings of the monomer Hamiltonians \(H_{A}\) and \(H_{B}\). \(\mathsf{PhaseA}\) / \(\mathsf{PhaseB}\) are phase registers that hold expressions of eigenphases for monomer Hamiltonians \(H_{A}\) and \(H_{B}\). \(\mathsf{simA}\) / \(\mathsf{simB}\) are quantum registers representing the wavefunctions of monomers A and B. \(\mathsf{enc}[F]\) is the auxiliary register for the block encoding of the observable. Additional details of the iterate may be found in the main text.
where we have defined \(f^{(A)}_{{\rm P}_{1}{\rm P}_{2}}=\sum_{\rm q}v^{{\rm P}_{1}{\rm P}_{2}}_{\rm q_{ \rm Q}}\) and \(f^{(B)}_{{\rm P}_{1}{\rm P}_{2}}=\sum_{\rm r}v^{{\rm P}_{\rm q}}_{\rm q_{\rm Q}}\). Here, the operator, \({\rm sym}(\cdot)\), symmetrizes all of the tensors with respect to the monomer indices \({\rm p}/{\rm q}\) independently (see Appendix B.4 for details). Furthermore, the electrostatic-exchange operator is found to be composed of seven terms (ignoring the additive constant),
\[\widehat{VP}_{\rm s}=\widehat{VP}_{A}+\widehat{VP}_{B}+\widehat{VP}_{\rm 1m}+ \widehat{VP}_{\rm 1\ell}+\widehat{VP}_{2}+\widehat{VP}_{3}+\widehat{VP}_{4} \tag{15}\]
where
\[\widehat{VP}_{A} =-\tfrac{i}{4}\sum_{\bf p}{\rm sym}(\kappa^{(A)}_{{\rm P}_{1}{ \rm P}_{2}})\hat{\gamma}_{{\rm P}_{1},0}\hat{\gamma}_{{\rm P}_{2},1} \tag{16}\] \[+\tfrac{1}{8}\sum_{\bf p}{\rm sym}(\Lambda^{{\rm P}_{1}{\rm P}_{ 2}}_{{\rm P}_{3}{\rm P}_{4}})\hat{\gamma}_{{\rm P}_{1},0}\hat{\gamma}_{{\rm P }_{2},1}\hat{\gamma}_{{\rm P}_{3},0}\hat{\gamma}_{{\rm P}_{4},1},\] \[\widehat{VP}_{B} =-\tfrac{i}{4}\sum_{\bf q}{\rm sym}(\kappa^{(B)}_{{\rm Q}_{1}{ \rm Q}_{2}})\hat{\gamma}_{{\rm Q}_{1},0}\hat{\gamma}_{{\rm Q}_{2},1}\] (17) \[+\tfrac{1}{8}\sum_{\bf q}{\rm sym}(\Lambda^{{\rm Q}_{1}{\rm Q}_{ 2}}_{{\rm Q}_{2}{\rm Q}_{4}})\hat{\gamma}_{{\rm Q}_{1},0}\hat{\gamma}_{{\rm Q }_{2},1}\hat{\gamma}_{{\rm Q}_{3},0}\hat{\gamma}_{{\rm Q}_{4},1},\] \[\widehat{VP}_{\rm 1m} =\tfrac{1}{8}\sum_{\bf p,Q}{\rm sym}(\Lambda^{{\rm P}_{1}{\rm P}_{ 2}}_{{\rm Q}_{1}{\rm Q}_{2}})\,\hat{\gamma}_{{\rm P}_{1},0}\hat{\gamma}_{{\rm P }_{2},1}\hat{\gamma}_{{\rm Q}_{1},0}\hat{\gamma}_{{\rm Q}_{2},1},\] (18) \[\widehat{VP}_{\rm 1\ell} =\tfrac{1}{4}\sum_{\bf p,Q}{\rm sym}(\Lambda^{{\rm P}_{1}{\rm Q}_ {2}{\rm P}_{2}})\hat{\gamma}_{{\rm P}_{1},0}\hat{\gamma}_{{\rm P}_{2},1}\hat {\gamma}_{{\rm Q}_{1},0}\hat{\gamma}_{{\rm Q}_{2},1},\] (19) \[\widehat{VP}_{2} =\tfrac{i}{8}\sum_{\bf p,Q}{\rm sym}(\Lambda^{{\rm P}_{1}{\rm P}_ {2}{\rm P}_{4}{\rm P}_{3}{\rm Q}_{2}})\Big{[}\hat{\gamma}_{{\rm P}_{1},0} \hat{\gamma}_{{\rm P}_{2},1}\hat{\gamma}_{{\rm P}_{3},0}\hat{\gamma}_{{\rm P} _{4},1}\] \[\hat{\gamma}_{{\rm Q}_{1},0}\hat{\gamma}_{{\rm Q}_{2},1}\Big{]},\] (20) \[\widehat{VP}_{3} =\tfrac{i}{8}\sum_{\bf p,Q}{\rm sym}(\Lambda^{{\rm P}_{1}{\rm Q}_ {4}}_{{\rm Q}_{1}{\rm Q}_{2}{\rm P}_{2}}S^{{\rm Q}_{3}}_{{\rm P}_{2}})\Big{[} \hat{\gamma}_{{\rm P}_{1},0}\hat{\gamma}_{{\rm P}_{2},1}\] \[\hat{\gamma}_{{\rm Q}_{1},0}\hat{\gamma}_{{\rm Q}_{2},1}\hat{ \gamma}_{{\rm Q}_{3},0}\hat{\gamma}_{{\rm Q}_{4},1}\Big{]},\] (21) \[\widehat{VP}_{4} =\tfrac{1}{2}(\hat{V}\hat{P}+\hat{P}\hat{V}). \tag{22}\]
In this representation, we find two independent monomer operators given by \(\widehat{VP}_{A}\) and \(\widehat{VP}_{B}\) consisting of both one-body and two-body terms, as well as five intermonomer operators: \(\widehat{VP}_{\rm 1m}\), \(\widehat{VP}_{\rm 1\ell}\), \(\widehat{VP}_{2}\), \(\widehat{VP}_{3}\), and \(\widehat{VP}_{4}\). Here, \(\widehat{VP}_{\rm 1m}\) and \(\widehat{VP}_{\rm 1\ell}\) correspond to spin-mixed and spin-locked contributions of the same order. Additionally, we have defined \(\kappa^{(A)}_{{\rm P}_{1}{\rm P}_{2}}/\kappa^{(B)}_{{\rm Q}_{1}{\rm Q}_{2}}\) as two-index tensors that appear within the one-body terms of monomers A/B as well as the general four-index tensor, \(\Lambda^{{\rm J}_{1}{\rm J}_{2}}_{{\rm J}_{2}{\rm J}_{2}}\), where \({\rm J}\in\{{\rm P},{\rm Q}\}\), which is assumed to be symmetrized in these equations and represents a dressed version of the intermolecular tensor appearing in the electrostatic-exchange operator, Eq. (4). Due to the verbosity of the equations, all of the tensor elements are defined explicitly in Appendix F. It is important to note that the form of the SAPT operators in Eqs. (13)-(15) is the same in both the full space and active space pictures apart from the definition of tensor elements themselves. This property emerges naturally in the Majorana representation, but it is not clearly evident otherwise.
To block encode the SAPT operators given by Eqs. (13)-(15) in the _sparse_ representation, we use the Jordan-Wigner transformation on each of the two monomers written as,
\[\hat{\gamma}_{{\rm P},0}=\hat{X}_{{\rm P},}\hat{Z}_{{\rm P}-1}, \cdots\hat{Z}_{0,}, \tag{23}\] \[\hat{\gamma}_{{\rm P},1}=\hat{Y}_{{\rm P},}\hat{Z}_{{\rm P}-1}, \cdots\hat{Z}_{0,}, \tag{24}\]
for monomer A and,
\[\hat{\gamma}_{{\rm Q},0}=\hat{X}_{{\rm Q},}\hat{Z}_{{\rm Q}-1}, \cdots\hat{Z}_{0,}, \tag{25}\] \[\hat{\gamma}_{{\rm Q},1}=\hat{Y}_{{\rm Q},}\hat{Z}_{{\rm Q}-1}, \cdots\hat{Z}_{0,}, \tag{26}\]
for monomer B. The SELECT circuits required to implement these Pauli strings are presented explicitly in Ref. [35]. The _sparse_ SAPT-EVE algorithm uses the Jordan-Wigner mapping of the second-quantized operators above and proceeds to load the non-zero tensor coefficients found in Eqs. (13)-(15), thereby applying the so-called sparse block encoding method introduced by Berry _et al._[34]. We use the sparse method to benchmark the tensor factorization procedure in the Numerical Results section, Sec. II. In Appendix F, we present the full expressions for the \(\ell_{1}\) norms in the _sparse_ representation accounting for additional permutational symmetries of certain tensors which help reduce the \(\ell_{1}\) norm further, as pointed out in Ref. [36].
We now consider the tensor factorization procedure required for decomposing the four-index tensors appearing in Eq. (13) as well as Eqs. (16)-(21). In the following, we will work in the spatial orbital basis with lower-case indices \(p/q\) denoting the spatial molecular orbitals of monomer A/B respectively. Our approach is analogous to the low-rank double factorization technique outlined in previous works [37, 15], but contains small differences to account for the unique permutational symmetries of each tensor. The tensor factorization procedure consists of a two-step process. In the first step, the general SAPT tensor, \(\Lambda^{{\rm J}_{1}j_{2}}_{j3j_{4}}\), is factorized as,
\[\Lambda^{{\rm J}_{1}j_{2}}_{j_{3}j_{4}}=\begin{cases}\sum_{\rm r}s^{(z)}_{i}[{\bf u} ^{(z)}_{i}]_{j_{1}j_{2}}[{\bf u}^{(z)}_{t}]_{j_{3}j_{4}}&\text{if }\Lambda^{{\rm J}_{1}j_{2}}_{j_{4}}=\Lambda^{{\rm J}_{3}j_{4}}_{j_{1}j_{2}},\\ \sum_{\rm r}s^{(z)}_{t}[{\bf u}^{(z)}_{t}]_{j_{1}j_{2}}[{\bf w}^{(z)}_{t}]_{j_{3}j _{4}}&\text{otherwise},\end{cases} \tag{27}\]
where \(s^{(z)}_{t}\) correspond to eigenvalues/singular values and \({\bf u}^{(z)}_{t}/{\bf w}^{(z)}_{t}\) correspond to eigenvectors or singular vectors depending on whether the symmetry condition is satisfied. Here, we use the label \(z\) to represent all of the unique tensor sub-blocks that appear in first-order SAPT theory found in Eqs. (13) and (16)-(21). The factorization procedure is based on grouping the \(j_{1}j_{2}/j_{3}j_{4}\) indices into two indices \(j/j^{\prime}\) where \(j\in\{p,q\}\). This implies that the four-index tensor \(\Lambda^{{\rm J}_{1}j_{2}}_{j_{3}j_{4}}\) is mapped to a matrix with indices \(j\) and \(j^{\prime}\). As a result, the condition, \(\Lambda^{{\rm J}_{1}j_{
seven tensors will have a different rank, which we label by \(N_{1}^{(z)}\).
In the second step, each column vector \(\mathbf{u}_{t}^{(z)}/\mathbf{w}_{t}^{(z)}\) is reshaped into a matrix and the following factorization procedure is performed:
\[[\mathbf{u}_{t}^{(z)}]_{j_{1}j_{2}}=\begin{cases}\sum\limits_{k}\alpha_{kt}^{(z) }U_{t,kj_{1}}^{(z)}U_{t,kj_{2}}^{(z)}&\text{if }[\mathbf{u}_{t}^{(z)}]_{j_{1}j_{2}}=[\mathbf{u}_{t}^{(z)}]_{j_{2}j_{1}},\\ \sum\limits_{k}\beta_{kt}^{(z)}U_{t,kj_{1}}^{(z)}V_{t,kj_{2}}^{(z)}&\text{ otherwise.}\end{cases} \tag{28}\]
The second factorization procedure is performed for each index \(t\) from the first factorization step and corresponds to an eigendecomposition/SVD with eigenvalues/singular values given by, \(\alpha_{kt}^{(z)}/\beta_{kt}^{(z)}\), labeled by the index, \(k\). For a particular \(t\), the rank of the second factorization will be given by \(N_{2}^{2}\). Here, \(U_{t,kj_{1}}^{(z)}\) and \(V_{t,kj_{2}}^{(z)}\) may be interpreted as three-index tensors where we use the convention that \(U\) corresponds to monomer A and \(V\) is associated with monomer B. For a particular \(t\), the two-index tensors \([U_{t}^{(z)}]_{k,j_{1}}\) and \([V_{t}^{(z)}]_{k,j_{2}}\) are equal to unitary matrices that rotate the molecular orbitals of each monomer independently. Eqs. (27)-(28) completely define the tensor factorization process that is required for all of the SAPT tensors. We emphasize that the tensor factorization procedure outlined above is not unique. However, we found that it provided the smallest \(\ell_{1}\) norms consistently for all benchmark cases.
In addition to the four-index tensor decomposition outlined above, we also performed a singular value decomposition of the rectangular overlap matrix \(S_{q}^{p}\),
\[S_{q}^{p}=\sum_{n}^{N_{S}}s_{n}U_{pn}^{(s)}V_{qn}^{(s)}\,, \tag{29}\]
where \(s_{n}\) denotes the singular values, while \(U_{pn}^{(s)}\) and \(V_{qn}^{(s)}\) denote the unitary orbital transformation matrices for monomer A and B, respectively. The rank \(N_{S}\) of this decomposition will be upper bounded by the number of spatial orbitals of the smaller monomer, \(N_{S}\leq\min(N_{A},N_{B})\). This dependence provides several advantages when one of the monomers is much smaller than the other.
To finalize the tensor factorization representation for all of the SAPT operators, a final decomposition is required for the tensors appearing in the one-body terms in Eqs. (13) and (16)-(17). In all cases, the two-index tensors \(f_{p1p_{2}}^{(A)}\), \(f_{q1q_{2}}^{(B)}\), \(\kappa_{p1p_{2}}^{(A)}\) and \(\kappa_{q1q_{2}}^{(B)}\), will be symmetric therefore a standard eigendecomposition can be performed,
\[f_{p1p_{2}}^{(A)} =\sum\limits_{k}s_{k}^{(A)}U_{p_{1}k}^{(f)}U_{p_{2}k}^{(f)}, \tag{30}\] \[f_{q1q_{2}}^{(B)} =\sum\limits_{l}s_{l}^{(B)}V_{q_{1}l}^{(f)}V_{q_{2}l}^{(f)},\] (31) \[\kappa_{p1p_{2}}^{(A)} =\sum\limits_{k}\tilde{s}_{k}^{(A)}U_{p_{1}k}^{(\kappa)}U_{p_{2} k}^{(\kappa)},\] (32) \[\kappa_{q1q_{2}}^{(B)} =\sum\limits_{l}\tilde{s}_{l}^{(B)}V_{q_{1}l}^{(\kappa)}V_{q_{2} l}^{(\kappa)}, \tag{33}\]
where \(s_{k}^{(A)}\), \(s_{l}^{(B)}\), \(\tilde{s}_{k}^{(A)}\), \(\tilde{s}_{l}^{(B)}\) denote the eigenvalues while \(U_{p_{1}k}^{(f)}U_{p_{2}k}^{(\kappa)}\) denote the eigenvectors for monomer A, and \(V_{q_{1}l}^{(f)}/V_{q_{1}l}^{(\kappa)}\) denote the eigenvectors of monomer B. In matrix form, the eigenvectors correspond to the unitary orbital transformation matrices required for monomer A and B respectively.
Based on the tensor factorization procedure outlined in Eqs. (27)-(33), the appropriate block encoded operators can be constructed by carefully loading the eigenvalues \(s_{k}^{(A)}\), \(s_{l}^{(B)}\), \(\tilde{s}_{k}^{(A)}\), \(\tilde{s}_{l}^{(B)}\) as well as the singular coefficients \(s_{t}^{(z)}\), \(\alpha_{kt}^{(z)}/\beta_{kt}^{(z)}\) and \(s_{n}\) using the PREPARE and SELECT circuits outlined in Appendix G. After some additional manipulations, we found the following expressions for the \(\ell_{1}\) norms of all of the SAPT operators in the tensor factorization (tf) representation,
\[\lambda_{V}^{(\text{tf})}= \sum\limits_{k}|s_{k}^{(A)}|+\sum\limits_{l}|s_{l}^{(B)}|+\sum \limits_{tkl}|s_{t}^{(v)}\alpha_{kt}^{(A_{v})}\alpha_{lt}^{(B_{v})}|, \tag{34}\] \[\lambda_{P}^{(\text{tf})} =\frac{1}{2}\lambda_{s}^{2}+\sum\limits_{n}|s_{n}|^{2},\] (35) \[\lambda_{\text{VP}_{s}}^{(\text{tf})} =\frac{1}{2}\sum\limits_{k}|\tilde{s}_{k}^{(A)}|+\frac{1}{2}\sum \limits_{l}|\tilde{s}_{l}^{(B)}|\] (36) \[+\frac{1}{4}\sum\limits_{tkl}(|s_{t}^{(A)}\alpha_{kt}^{(A_{v})} \alpha_{lt}^{(A_{v})}|+|s_{t}^{(B_{v})}\alpha_{kt}^{(B_{v})}\alpha_{lt}^{(B_{ v})}|)\] \[+\frac{1}{2}\sum\limits_{tkl}(|s_{t}^{(1m)}\alpha_{kt}^{(A_{in})} \alpha_{lt}^{(B_{in})}|+|s_{t}^{(1\ell)}\beta_{kt}^{(1\ell)}\beta_{lt}^{(1 \ell)}|)\] \[+\frac{\lambda_{s}}{2}\sum\limits_{tkl}(|s_{t}^{(2)}\alpha_{kt}^{( 2)}\beta_{lt}^{(2)}|+|s_{t}^{(3)}\alpha_{kt}^{(3)}\beta_{lt}^{(3)}|)\] \[+\lambda_{P}^{(\text{tf})}\sum\limits_{tkl}|s_{t}^{(v)}\alpha_{kt}^ {(A_{v})}\alpha_{lt}^{(B_{v})}|,\]
where we have defined \(\lambda_{s}=\sum_{n}|s_{n}|\) for notational clarity. We have also derived the corresponding \(\ell_{1}\) norms for the active space picture in Appendix F. The complete block encoding circuits for \(\hat{V}\) and \(\hat{P}\) can be found in Appendix G. For illustration purposes, we have also derived the block encoding circuits for the intermonomer operators \(\widetilde{VP}_{\text{1m}}\), \(\widetilde{VP}_{\text{1}\ell}\) and \(\widetilde{VP}_{\text{4}}\) of the electrostatic-exchange operator, \(\widetilde{VP}_{\text{5}}\), in Eq. (15). The block encoding circuits for \(\widetilde{VP}_{A}\) and \(\widetilde{VP}_{B}\) are equivalent to the conventional second-quantized, double-factorized Hamiltonian and may be found in Refs. [8; 15]. It is important to emphasize that \(\widetilde{VP}_{\text{4}}\) represents the dominant contribution to the total block encoding compilation cost of the electrostatic-exchange operator. In Appendix G.3, we exploit the fact that \(\widetilde{VP}_{\text{4}}\) may be written as the product of two operators, \(\hat{V}\) and \(\hat{P}\), in order to reduce the overall compilation cost. In this regard, while we found that all of the terms in the electrostatic-exchange operator have an upper bound asymptotic Toffoli gate scaling of \(\mathcal{O}(N_{A}^{3})\) with respect to the number of spatial orbitals \(N_{A}\) (assuming \(N_{A}\approx N_{B}\)), \(\widetilde{VP}_{\text{4}}\) is accompanied by an additive \(\mathcal{O}(N_{A}^{2})\) contribution that makes it quadratically
larger than any other term. Our resource estimates for the electrostatic-exchange operator in Section II take into account \(\widetilde{VP}_{A}\), \(\widetilde{VP}_{B}\), \(\widetilde{VP}_{1\text{m}}\), \(\widetilde{VP}_{1\ell}\) and \(\widetilde{VP}_{4}\) (additional details can be found in Appendix F) which allows for an adequate assessment of the total resource cost for the SAPT-EVE algorithm.
## II Numerical results
### Benchmark results for small molecular systems
We first investigate the advantages of the tensor factorization procedure over the standard sparse method. The overall performance of any scheme is primarily determined by the \(\ell_{1}\) norm \(\lambda_{F}\) of the observable \(F\). In Fig. 3, we compare the \(\ell_{1}\) norms of the sparse and tensor factorization schemes as a function of basis set size for three different molecular systems: an ammonia dimer, a water dimer, and a benzene-water complex (see Appendix H for details of the molecular systems).
We observe that the \(\ell_{1}\) norm of the exchange operator is often the smallest while \(\lambda_{\text{VP}}\) is often the largest in the limit of large basis sets, regardless of the encoding scheme. The difference between sparse encoding and double factorization \(\ell_{1}\) norm is small at smaller basis set sizes. However, for large basis sets, such as cc-pVDZ, the tensor factorization encoding scheme greatly reduces the \(\ell_{1}\) norm ranging from two to three orders of magnitude.
While the \(\ell_{1}\) norm provides a multiplicative factor that is fundamental to the total runtime, the total cost of the algorithm is also dependent on the eigenstate reflection circuit \(\mathcal{R}_{\pi}\) as well as the block encoding cost of each observable. The eigenstate reflection cost, in turn, depends on the \(\ell_{1}\) norm of each monomer Hamiltonian required for block encoding and is inversely proportional to the spectral gap between the ground and first excited state in each of the monomers, as detailed in Eq. (12). In Fig. 4, we present the overall logical qubit and Toffoli gate costs for the three molecular systems considered above. The left panel of Fig. 4 presents the total cost of the SAPT-EVE algorithm for the three molecular systems. The right panel of Fig. 4 presents a breakdown of the costs of the eigenstate preparation reflection oracle, \(\mathcal{R}_{\pi}\), as well as SAPT block encoding oracle, \(\mathcal{R}_{\tau}\) respectively.
In addition to using the tensor-factorization techniques outlined above, our resource estimates used various optimization schemes for reducing the Toffoli count using efficient quantum adders and data-loading QROM implementation techniques outlined in previous works [35; 38]. In particular, we optimize the total runtime of the algorithm in order to determine the total first-order interaction energy, Eq.(1), to chemical accuracy (\(\epsilon=0.0016\) Hartree). This optimization was based on \(\ell_{1}\) norm upper bounds for the expectation values resulting in conservative and pessimistic estimates, which could be improved with alternative approaches as discussed in Appendix E.
Based on the data from Fig. 4, we find that the resource cost of each dimer system is highly dependent on the atomic composition and corresponding basis set size, which, in turn, affects the total number of qubits and \(\ell_{1}\) norm of the Hamiltonian and observable. One of the main conclusions of this work is readily seen by comparing the Toffoli gate count for the total algorithm and the eigenstate reflection, \(\mathcal{R}_{\pi}\). We observe that the dominant cost of the total algorithm is due to \(\mathcal{R}_{\pi}\), highlighting one of the key bottlenecks for observable estimation and emphasizing the subroutine that requires further improvement in future work. The data from this figure also suggests that brute-force basis set extrapolation, which amounts to increasing the basis set size and number of spin-orbitals on the quantum computer, does not scale well even for small molecular benchmark systems. For instance, the benzene-water system in the cc-pVDZ basis will require over 6000 logical qubits and Toffoli gate counts on the order of \(10^{19}\). Both of these requirements are substantial and highlight the need for alternative strategies. For this reason, one of the major contributions of our work is the derivation of the first-order active space SAPT operators, which drastically reduces resource costs. In Appendix H, we present the active space
Figure 3: Basis set dependence of the \(\ell_{1}\) norm for the sparse and tensor factorized SAPT observables for three different molecular systems (top: water dimer, middle: ammonia dimer and bottom: benzene-water dimer).
resource costs for the small molecule benchmark set. The following section presents the active space resource cost analysis for a benchmark test system relevant to drug design.
## B Benchmark result for drug design: heme-artemisinin
We now consider a more interesting application for the quantum computer that approaches the limit of current classical computing consisting of a heme interacting with an artemisinin molecule at a separation distance of 2.11 A, as shown in Fig. 5. In the following, we discuss why this benchmark system is interesting for drug design.
Artemisinin is a popular plant-derived anti-malaria drug. While the exact mechanisms of action of the artemisinin drug are not completely understood [39, 40], the interaction with the iron center of heme is known to play an integral role [39]. An established mechanism of action suggests that artemisinin gets reduced by the Fe(II) center upon heme binding and is concerted with the cleavage of the peroxide bond resulting in an oxygen-centered radical. It is postulated that a rearrangement yields carbon-centered radicals (see [41] for a detailed mechanism), which have been observed experimentally by spin trapping [42]. To further optimize synthetic analogs to this drug, a better understanding of the mechanism of action would be key. To this end, the activation of artemisinin by O-O bond cleavage has been studied using density functional theory (DFT) [43, 44, 45]. However, none of these studies included a heme model complex explicitly. Several other studies have investigated the reactivity of heme complexes using DFT [46, 47, 48], as well as more advanced quantum computational methods [49, 50, 51, 52], including a resource estimation cost for fault-tolerant quantum computing [53]. Several studies [52, 53] have recommended using a large active space with more than 40 orbitals to include the key Fe-\(d\) and heme-\(\pi\) orbitals [53]. To study its interaction with artemisinin, the ideal next step would be to simulate the interaction of both systems together. Unfortunately, this requires even larger active spaces (see below), making it intractable for many classical methods. As a result, this system is an appealing target for fault-tolerant quantum computing in a pharmaceutical context. Here, we focus on the first step of the decomposition mechanism, the binding, which is accompanied by an electron transfer and the cleavage of the peroxo bond, see Fig. 13 in Appendix I.
In this work, we located the transition state for this reaction step and the corresponding molecular geometry is the core example in this study (see Fig. 5). The heme is treated as a neutral-charge open-shell system with two unpaired electrons. In contrast, the artemisinin system is treated as a closed-shell system. However, we emphasize that our SAPT derivation remains applicable in the most general case of two open-shell monomers. In Table 1, we estimate the FTQC resources for the three SAPT observ
Figure 4: Resource estimation for benchmark molecules (separated by the color of the symbols) in the all-electron (no active space) picture. Different basis sets ({STO-3G, 6-31G, cc-pVDZ }) are encoded by the hue of the symbols while the different operators (\(\{\hat{V},\hat{P},\widetilde{VP}_{s}\}\)) are encoded by the symbol shape. (a) Toffoli and logical qubit cost for total SAPT-EVE algorithm for our molecule test set. The block encoding cost included in the total cost of estimating the electrostatic-exchange operator, \(\widetilde{VP}_{s}\), include the \(\widetilde{VP}_{A}\), \(\widetilde{VP}_{B}\), \(\widetilde{VP}_{1\text{m}}\), \(\widetilde{VP}_{1\text{\ell}}\), and \(\widetilde{VP}_{4}\) contributions as discussed in the main text. (b) Toffoli and qubit cost comparison between the eigenstate reflection oracle, \(\mathcal{R}_{\pi}\), and the observable block encoding oracle, \(\mathcal{R}_{\pi}\). Even though the cost of both reflections scales is similar in terms of number of qubits, it is clear from the plot that the Toffoli gate cost of \(\mathcal{R}_{\pi}\) is orders of magnitude larger than \(\mathcal{R}_{\pi}\) and will dominate the total cost of the algorithm.
ables in the (42e, 43o) active space for heme and (48e, 40o) active space for artemisinin. The orbital-optimized active spaces were determined via DMRG calculations (additional details can be found in Appendix I). For comparison, the all-electron (full space) picture would require (222e, 529o) for heme and (144e, 366o) for artemisinin in the cc-pVDZ basis.
In Tab. 1, we observe larger Toffoli gate count estimates in the total SAPT-EVE algorithm for the heme-artemisinin benchmark compared to the small molecule benchmark set (\(10^{20}\) compared to \(10^{19}\)), which we found was largely due to the smaller spectral gaps observed in heme and artemisinin. As pointed out in Ref. [36], the \(\ell_{1}\) norm of the Hamiltonian operators (and SAPT observables) also seem to acquire a slightly larger power dependence with respect to the number of active space orbitals, which also contributes to the large resource count estimate presented in the table. This is also observed in Tab. 4 and Fig. 15 in Appendix K. While these estimates show that the resource cost is substantially reduced compared to the non-active space methodology, the eigenstate reflection remains a fundamental bottleneck for the total algorithm. This may become detrimental for systems with small energy gaps and further work is required to determine whether alternative approaches for the eigenstate reflection subroutine could be used. For more details on the algorithm's performance, we also present call graphs in Figs. 16-18 in Appendix K. Tab. 4 also provides a system parameter and resource cost breakdown of every dimer system considered in this work for completeness.
To finalize this section, we also discuss the cost of the supermolecular approach which is used to estimate the interaction energy, \(E_{\text{int}}\). For the quantum resource estimation, we used the double-factorized representation of the monomer and dimer Hamiltonians presented in Refs. [8; 15]. In the supermolecular approach, three separate quantum phase estimation runs are performed. The full resource cost is provided in Tab. 3 in Appendix J, where we found that the total number of Toffoli gates required for each run was typically on the order of \(10^{10}\) with a total qubit cost on the order of \(10^{3}\). While the supermolecular approach provides an accurate estimation of the interaction energy, it does not provide a decomposition with respect to the electrostatic or exchange energy contributions which can be used to improve the understanding of binding mechanisms in drug molecules. Nevertheless, this highlights the chasm between the two methodologies as well as the broader difference between eigenvalue/energy estimation and observable estimation in the context of fault-tolerant quantum computing.
## III Discussion and Conclusion
In this paper, we have investigated the calculation of binding energies through the first-order SAPT formalism on a fault-tolerant device. We see this as a paradigmatic task in drug discovery that is well understood classically but where no quantum primitive yet exists. Translating this subroutine from classical to quantum computation requires considerable work on the algorithm selection (see previous work in Ref. [24]) and adjusting all elements of this algorithm to the specific use case, as highlighted in this paper. Blind application of QSP-EVE without splitting into monomers, block encoding the SAPT operator without taking into account the product of operators structure, or input of an all-electron molecular Hamiltonian all would lead to much higher costs. This work has systematically compared several implementations of observable estimation and showed that observable-specific tensor factorizations and block encoding methodologies can dramatically reduce the algorithm's total runtime by many orders of magnitude. We presented an end-to-end resource estimation of the algorithm considering the ground-state preparation for each of the two monomers and the cost of block encodings tailored to the specific SAPT observables working both in the full space and active space pictures. While this work represents a solid first step in developing Heisenberg-limited fault-tolerant quantum algorithms beyond energy estimation, there remains much room for improvement in future work. For instance, developing the second-order SAPT energy contributions is still required, and explorations of alternative techniques that go beyond the \(S^{2}\) approximation should also be considered.
Ultimately, we have observed two major bottlenecks for this algorithm. The first bottleneck consists of the eigenstate reflection subroutine. Our work suggests that
Figure 5: Transition state structure of the binding of artemisinin to the Fe at the center of the heme complex; the process is concerted with an electron transfer and cleavage of the peroxo bond (using the \(\omega\)B97X-D functional see computational methods in Appendix I for details).
alternative approaches for this reflection are needed to make the cost of observable estimation more practical. The second bottleneck consists of the \(\ell_{1}\) norm of the observable, \(F\). In this regard, however, there are possible paths forward that we envision could help reduce the \(\ell_{1}\) norm. For instance, methods such as tensor hypercontraction [8] and regularized double factorization methods [54, 55] should help reduce the \(\ell_{1}\) norm of the SAPT observables and hence improve the performance of the current SAPT-EVE algorithm. In conjunction, quantile estimation methods have also been proposed to replace the \(\ell_{1}\) norm dependence with the standard deviation (\(\sigma_{F}\)) of the observable that could reduce the overall runtime [56]. While our work represents a key step in developing SAPT for fault-tolerant quantum computing, future methods that integrate all of the mentioned techniques will have the ability to make the SAPT-EVE algorithm much more competitive and a useful alternative to the supermolecular approach.
Looking at this work through a broader lens, we wonder if this amount of effort is roughly universal in mapping non-total-energy-observables to efficient Heisenberg-limited FTQC methodologies. In this work, we encountered two main barriers to progress: (1) verbosity of mapping and (2) extensive overhead over standard QPE-type methodology to obtain the desired observable estimation values, even after extensive optimization. Overall, the first point seems palatable, as this effort is roughly similar in scope to the efforts needed to write an efficient block-encoded variant of the original QPE-type total energy method and is done once during the algorithm design stage. The second point is more concerning - it still seems that eigenvalue-based observables are considerably more efficient than general observables, even after extensive optimization. This deserves more research, and the mapping of even more non-total-energy observable properties to the FTQC environment will likely help to resolve this story and the question at the top of this paragraph. In any case, the present work may be taken as just one of many forays into this difficult but rewarding effort to map general observables to FTQC.
**ACKNOWLEDGMENTS**
MS, WP, SS, and SMS thank all our colleagues at Psi Quantum for useful discussions. In particular, we thank Owen Williams for implementing the callgraph visualization engine and Sam Pallister for discussing some details of the factorized operator block encodings. The authors thank Clemens Utschig-Utschig for his comments on the manuscript and support during the project.
## References
* Cao _et al._ [2019]Y. Cao, J. Romero, J. P. Olson, M. Degroote, P. D. Johnson, M. Kieferova, I. D. Kivlichan, T. Menke, B. Peropadre, N. P. D. Sawaya, S. Sim, L. Veis, and A. Aspuru-Guzik, Chemical Reviews **119**, 10856 (2019), pMID: 31469277, [https://doi.org/10.1021/acs.chemrev.8b00803](https://doi.org/10.1021/acs.chemrev.8b00803).
* McArdle _et al._ [2020]S. McArdle, S. Endo, A. Aspuru-Guzik, S. C. Benjamin, and X. Yuan, Rev. Mod. Phys. **92**, 015003 (2020).
* Bauer _et al._ [2020]B. Bauer, S. Bravyi, M. Motta, and G. K.-L. Chan, Chemical Reviews **120**, 12685 (2020), pMID: 33090772, [https://doi.org/10.1021/acs.chemrev.9b00892](https://doi.org/10.1021/acs.chemrev.9b00892).
* Liu _et al._ [2022]H. Liu, G. H. Low, D. S. Steiger, T. Haner, M. Reiher, and M. Troyer, Materials Theory **6**, 11 (2022).
* Huggins _et al._ [2022]W. J. Huggins, K. Wan, J. McClean, T. E. O'Brien, N. Wiebe, and R. Babbush, Phys. Rev. Lett. **129**, 240501 (2022).
* O'Brien _et al._ [2022]T. E. O'Brien, M. Streif, N. C. Rubin, R. Santagati, Y. Su, W. J. Huggins, J. J. Goings, N. Moll, E. Kyoseva, M. Degroote, C. S. Tautermann, J. Lee, D. W. Berry, N. Wiebe, and R. Babbush, Phys. Rev. Res. **4**, 043210 (2022).
* Reiher _et al._ [2017]M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, Proceedings of the National Academy of Sciences **114**, 7555 (2017).
* Lee _et al._ [2021]J. Lee, D. W. Berry, C. Gidney, W. J. Huggins, J. R. McClean, N. Wiebe, and R. Babbush, PRX Quantum **2**, 030305 (2021).
* Poulin _et al._ [2021]D. Poulin, A. Kitaev, D. S. Steiger, M. B. Hastings, and
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{System Parameters} & \multicolumn{4}{c}{Observable Parameters} & \multicolumn{4}{c}{Gates} & \multicolumn{4}{c}{Qubits} \\ \(N_{A}\) & \(N_{B}\) & \(\lambda_{A}\) & \(\lambda_{B}\) & \(\Delta_{A}\) & \(\Delta_{B}\) & \(\hat{F}\) & \(\lambda_{F}\) & \(\varepsilon_{F}\) & \(\Lambda_{F}\) & Total & \(\mathcal{R}_{\pi}\) & \(\mathcal{R}_{\tau}\) & Total & \(\mathcal{R}_{\pi}\) & \(\mathcal{R}_{\tau}\) \\ \hline \multirow{3}{*}{43} & \multirow{3}{*}{40} & \multirow{3}{*}{232.2} & \multirow{3}{*}{361.8} & \multirow{3}{*}{0.0069} & \multirow{3}{*}{0.1212} & \(\hat{V}\) & 65.5 & \(7.29\times 10^{-5}\) & \(8.99\times 10^{5}\) & \(9.74\times 10^{19}\) & \(1.16\times 10^{13}\) & \(1.34\times 10^{4}\) & 3724 & 2615 & 3617 \\ & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ \end{tabular} \begin{tabular}{c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{System Parameters} & \multicolumn{4}{c}{Observable Parameters} & \multicolumn{4}{c}{Gates} & \multicolumn{4}{c}{Qubits} \\ \(N_{A}\) & \(N_{B}\) & \(\lambda_{A}\) & \(\lambda_{B}\) & \(\Delta_{A}\) & \(\Delta_{B}\) & \(\hat{F}\) & \(\lambda_{F}\) & \(\varepsilon_{F}\) & \(\Lambda_{F}\) & Total & \(\mathcal{R}_{\pi}\) & \(\mathcal{R}_{\tau}\) & Total & \(\mathcal{R}_{\pi}\) & \(\mathcal{R}_{\tau}\) \\ \hline \multirow{3}{*}{43} & \multirow{3}{*}{40} & \multirow{3}{*}{232.2} & \multirow{3}{*}{361.8} & \multirow{3}{*}{0.0069} & \multirow{3}{*}{0.1212} & \(\hat{V}\) & 65.5 & \(7.29\times 10^{-5}\) & \(8.99\times 10^{5}\) & \(9.74\times 10^{19}\) & \(1.16\times 10^{13}\) & \(1.34\times 10^{4}\) & 3724 & 2615 & 3617 \\ & & & & & & & & & & & & & \\ & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ \end{tabular}
\begin{tabular}{c c
M. Troyer, Phys. Rev. Lett. **121**, 010501 (2018).
* Brassard _et al._ [2002]G. Brassard, P. Hoyer, M. Mosca, and A. Tapp, Contemporary Mathematics **305**, 53 (2002).
* Knill _et al._ [2007]E. Knill, G. Ortiz, and R. D. Somma, Phys. Rev. A **75**, 012328 (2007).
* Jensen [2017]F. Jensen, _Introduction to computational chemistry_ (John wiley & sons, 2017).
* Bukowski _et al._ [2013]R. Bukowski, W. Cencek, P. Jankowski, M. Jeziorska, B. Jeziorski, S. Kucharski, V. Lotrich, A. Misquitta, R. Moszynski, K. Patkowski, _et al._, Sequential and parallel versions. User's Guide. Revision SAPT2012 **2** (2013).
* Smith _et al._ [2020]D. G. A. Smith, L. A. Burns, A. C. Simonnett, R. M. Parrish, M. C. Schieber, R. Galvelis, P. Kraus, H. Kruse, R. Di Remigio, A. Alenaizan, A. M. James, S. Lehtola, J. P. Misiewicz, M. Scheurer, R. A. Shaw, J. B. Schriber, Y. Xie, Z. L. Glick, D. A. Sirianni, J. S. O'Brien, J. M. Waldrop, A. Kumar, E. G. Hohenstein, B. P. Pritchard, B. R. Brooks, I. Schaefer, Henry F., A. Y. Sokolov, K. Patkowski, I. DePrince, A. Eugene, U. Bozkaya, R. A. King, F. A. Evangelista, J. M. Turney, T. D. Crawford, and C. D. Sherrill, The Journal of Chemical Physics **152**, 10.1063/5.0006002 (2020).
* von Burg _et al._ [2021]V. von Burg, G. H. Low, T. Haner, D. S. Steiger, M. Reiher, M. Roettcher, and M. Troyer, Physical Review Research **3**, 033055 (2021).
* Santagati _et al._ [2023]R. Santagati, A. Aspuru-Guzik, R. Babbush, M. Degroote, L. Gonzalez, E. Kyoseva, N. Moll, M. Oppel, R. M. Parrish, N. C. Rubin, M. Streif, C. S. Tautermann, H. Weiss, N. Wiebe, and C. Utschig-Utschig, arXiv e-prints, arXiv:2301.04114 (2023), arXiv:2301.04114 [quant-ph].
* Jeziorski _et al._ [1993]B. Jeziorski, R. Moszynski, A. Ratkiewicz, S. Rybak, K. Szalewicz, and H. L. Williams, Methods and Techniques in Computational Chemistry: METECC **94**, 79 (1993).
* Szalewicz [2012]K. Szalewicz, WIREs Computational Molecular Science **2**, 254 (2012).
* Hohenstein _et al._ [2011]E. G. Hohenstein, R. M. Parrish, C. D. Sherrill, J. M. Turney, and I. Schaefer, Henry F., The Journal of Chemical Physics **135**, 10.1063/1.3656681 (2011).
* Gutowski _et al._ [1986]M. Gutowski, J. Van Lenthe, J. Verbeek, F. Van Duijneveldt, and G. Chalasinski, Chemical Physics Letters **124**, 370 (1986).
* Gutowski _et al._ [1987]M. Gutowski, F. B. V. Duijneveldt, G. Chalasinski, and L. Piela, Molecular Physics **61**, 233 (1987).
* Malone _et al._ [2022]F. D. Malone, R. M. Parrish, A. R. Welden, T. Fox, M. Degroote, E. Kyoseva, N. Moll, R. Santagati, and M. Streif, Chem. Sci. **13**, 3094 (2022).
* Loipersberger _et al._ [2023]M. Loipersberger, F. Malone, R. M. Parrish, A. R. Welden, T. Fox, M. Degroote, E. Kyoseva, N. Moll, R. Santagati, and M. Streif, Chem. Sci., (2023).
* Steudtner _et al._ [2023]M. Steudtner, S. Morley-Short, W. Pol, S. Sim, C. L. Cortes, M. Loipersberger, R. M. Parrish, M. Degroote, N. Moll, R. Santagati, and M. Streif, arXiv e-prints, arXiv:2303.14118 [quant-ph].
* Cournia _et al._ [2017]Z. Cournia, B. Allen, and W. Sherman, Journal of Chemical Information and Modeling **57**, 2911 (2017), pMID: 29243483, [https://doi.org/10.1021/acs.jcim.7b00564](https://doi.org/10.1021/acs.jcim.7b00564).
* Moszynski _et al._ [1994]R. Moszynski, B. Jeziorski, S. Rybak, K. Szalewicz, and H. L. Williams, The Journal of Chemical Physics **100**, 5080 (1994).
* Cerezo _et al._ [2021]M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, Nature Reviews Physics **3**, 625 (2021).
* Bharti _et al._ [2022]K. Bharti, A. Cervera-Lierta, T. H. Kyaw, T. Haug, S. Alperin-Lea, A. Anand, M. Degroote, H. Heimonen, J. S. Kottmann, T. Menke, W.-K. Mok, S. Sim, L.-C. Kwek, and A. Aspuru-Guzik, Rev. Mod. Phys. **94**, 015004 (2022).
* Aaronson [2018]S. Aaronson, in _Proceedings of the 50th annual ACM SIGACT symposium on theory of computing_ (2018) pp. 325-338.
* Huang _et al._ [2020]H.-Y. Huang, R. Kueng, and J. Preskill, Nature Physics **16**, 1050 (2020).
* Knill _et al._ [2007]E. Knill, G. Ortiz, and R. D. Somma, Physical Review A **75**, 012328 (2007), quant-ph/0607019.
* Rall [2020]P. Rall, Phys. Rev. A **102**, 022408 (2020).
* Berry _et al._ [2018]D. W. Berry, M. Kieferova, A. Scherer, Y. R. Sanders, G. H. Low, N. Wiebe, C. Gidney, and R. Babbush, npj Quantum Information **4**, 22 (2018).
* Berry _et al._ [2019]D. W. Berry, C. Gidney, M. Motta, J. R. McClean, and R. Babbush, Quantum **3**, 208 (2019).
* Babbush _et al._ [2018]R. Babbush, C. Gidney, D. W. Berry, N. Wiebe, J. McClean, A. Paler, A. Fowler, and H. Neven, Phys. Rev. X **8**, 041015 (2018).
* Koridon _et al._ [2021]E. Koridon, S. Yalouz, B. Senjean, F. Buda, T. E. O'Brien, and L. Visscher, Phys. Rev. Res. **3**, 033127 (2021).
* Motta _et al._ [2021]M. Motta, E. Ye, J. R. McClean, Z. Li, A. J. Minnich, R. Babbush, and G. K.-L. Chan, npj Quantum Information **7**, 83 (2021).
* Hao Low _et al._ [2018]G. Hao Low, V. Kliuchnikov, and L. Schaeffer, arXiv e-prints, arXiv:1812.00954 (2018), arXiv:1812.00954 [quant-ph].
* Posner and O'Neill [2004]G. H. Posner and P. M. O'Neill, Accounts of Chemical Research **37**, 397 (2004), pMID: 15196049.
* O'Neill _et al._ [2010]P. M. O'Neill, V. E. Barton, and S. A. Ward, Molecules **15**, 1705 (2010).
* Mercer _et al._ [2011]A. E. Mercer, I. M. Copple, J. L. Maggs, P. M. O'Neill, and B. K. Park, Journal of Biological Chemistry **286**, 987 (2011).
* Wu _et al._ [1998]W.-M. Wu, Y. Wu, Y.-L. Wu, Z.-J. Yao, C.-M. Zhou, Y. Li, and F. Shan, Journal of the American Chemical Society **120**, 3316 (1998).
* Taranto _et al._ [2006]A. G. Taranto, J. W. de Mesquita Carneiro, and M. T. de Araujo, Bioorganic & Medicinal Chemistry **14**, 1546 (2006).
* Moles _et al._ [2006]P. Moles, M. Oliva, and V. S. Safont, The Journal of Physical Chemistry A **110**, 7144 (2006), pMID: 16737265.
* Moles _et al._ [2008]P. Moles, M. Oliva, and V. S. Safont, Tetrahedron **64**, 9448 (2008).
* Hirao _et al._ [2014]H. Hirao, N. Thellamurege, and X. Zhang, Frontiers in Chemistry **2**, 10.3389/fchem.2014.00014 (2014).
* Romelt _et al._ [2017]C. Romelt, J. Song, M. Tarrago, J. A. Rees, M. van Gastel, T. Weyhermuller, S. DeBeer, E. Bill, F. Neese, and S. Ye, Inorganic Chemistry **56**, 4745 (2017), pMID: 28379689.
* Derrick _et al._ [2022]J. S. Derrick, M. Loipersberger, S. K. Nistanaki, A. V. Rothweiler, M. Head-Gordon, E. M. Nichols, and C. J. Chang, Journal of the American Chemical Society **144**, 11656 (2022), pMID: 35749266.
* Altun _et al._ [2019]A. Altun, M. Saitow, F. Neese, and G. Bistoni, Journal of Chemical Theory and Computation **15**, 1616 (2019), pMID: 30702888.
* Lee _et al._ [2020]J. Lee, F. D. Malone, and M. A. Morales, Journal of Chemical Theory and Computation **16**, 3019 (2020),
pMID: 32283932.
* Tarrago _et al._ [2021]M. Tarrago, C. Romelt, J. Nehrkorn, A. Schnegg, F. Neese, E. Bill, and S. Ye, Inorganic Chemistry **60**, 4966 (2021), pMID: 33739093.
* Li Manni and Alavi [2018]G. Li Manni and A. Alavi, The Journal of Physical Chemistry A **122**, 4935 (2018), pMID: 29595978.
* Goings _et al._ [2022]J. J. Goings, A. White, J. Lee, C. S. Tautermann, M. Degroote, C. Gidney, T. Shiozaki, R. Babbush, and N. C. Rubin, Proceedings of the National Academy of Sciences **119**, e2203533119 (2022), [https://www.pnas.org/doi/pdf/10.1073/pnas.2203533119](https://www.pnas.org/doi/pdf/10.1073/pnas.2203533119).
* Rubin _et al._ [2022]N. C. Rubin, J. Lee, and R. Babbush, Journal of Chemical Theory and Computation **18**, 1480 (2022), pMID: 35166529.
* Oumarou _et al._ [2022]O. Oumarou, M. Scheurer, R. M. Parrish, E. G. Hohenstein, and C. Gogolin, arXiv e-prints, arXiv:2212.07957 (2022), arXiv:2212.07957 [quant-ph].
* Cornelissen _et al._ [2022]A. Cornelissen, Y. Hamoudi, and S. Jerbi, in _Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing_ (2022) pp. 33-43.
* Childs and Wiebe [2012]A. M. Childs and N. Wiebe, arXiv e-prints, arXiv:1202.5822 (2012), arXiv:1202.5822 [quant-ph].
* Childs _et al._ [2018]A. M. Childs, D. Maslov, Y. Nam, N. J. Ross, and Y. Su, Proceedings of the National Academy of Sciences **115**, 9456 (2018).
* Gidney [2018]C. Gidney, Quantum **2**, 74 (2018).
* Jurecka _et al._ [2006]P. Jurecka, J. Sponer, J. Cerny, and P. Hobza, Phys. Chem. Chem. Phys. **8**, 1985 (2006).
* ULC [2022]C. C. G. ULC, Molecular operating environment (moe), 2022.02 (2022).
* Hohenberg and Kohn [1964]P. Hohenberg and W. Kohn, Phys. Rev. **136**, B864 (1964).
* Kohn and Sham [1965]W. Kohn and L. J. Sham, Phys. Rev. **140**, A1133 (1965).
* Frisch _et al._ [2020]M. J. Frisch, G. W. Trucks, H. B. Schlegel, G. E. Scuseria, M. A. Robb, J. R. Cheeseman, G. Scalmani, V. Barone, G. A. Petersson, H. Nakatsuji, X. Li, M. Caricato, A. V. Marenich, J. Bloino, B. G. Janesko, R. Gomperts, B. Mennucci, H. P. Hratchian, J. V. Ortiz, A. F. Izmaylov, J. L. Sonnenberg, D. Williams-Young, F. Ding, F. Lipparini, F. Egidi, J. Goings, B. Peng, A. Petrone, T. Henderson, D. Ranasinghe, V. G. Zakrzewski, J. Gao, N. Rega, G. Zheng, W. Liang, M. Hada, M. Ehara, K. Toyota, R. Fukuda, J. Hasegawa, M. Ishida, T. Nakajima, Y. Honda, O. Kitao, H. Nakai, T. Vreven, K. Throssell, J. A. Montgomery, Jr., J. E. Peralta, F. Ogliaro, M. J. Bearpark, J. J. Heyd, E. N. Brothers, K. N. Kudin, V. N. Staroverov, T. A. Keith, R. Kobayashi, J. Normand, K. Raghavachari, A. P. Rendell, J. C. Burant, S. S. Iyengar, J. Tomasi, M. Cossi, J. M. Millam, M. Klene, C. Adamo, R. Cammi, J. W. Ochterski, R. L. Martin, K. Morokuma, O. Farkas, J. B. Foresman, and D. J. Fox, Gaussian\({}^{-}\)16 Revision C.01 (2016), gaussian Inc. Wallingford CT.
* Chai and Head-Gordon [2008]J.-D. Chai and M. Head-Gordon, Phys. Chem. Chem. Phys. **10**, 6615 (2008).
* Weigend and Ahlrichs [2005]F. Weigend and R. Ahlrichs, Phys. Chem. Chem. Phys. **7**, 3297 (2005).
* Dunning [1989]T. H. Dunning, J. Chem. Phys. **90**, 1007 (1989).
* Balabanov and Peterson [2005]N. B. Balabanov and K. A. Peterson, J. Chem. Phys. **123**, 064107 (2005).
* Balabanov and Peterson [2006]N. B. Balabanov and K. A. Peterson, J. Chem. Phys. **125**, 074110 (2006).
* Chan and Head-Gordon [2002]G. K.-L. Chan and M. Head-Gordon, The Journal of Chemical Physics **116**, 4462 (2002).
* Wouters and Van Neck [2014]S. Wouters and D. Van Neck, The European Physical Journal D **68**, 272 (2014).
* Sun _et al._ [2020]Q. Sun, X. Zhang, S. Banerjee, P. Bao, M. Barbry, N. S. Blunt, N. A. Bogdanov, G. H. Booth, J. Chen, Z.-H. Cui, J. J. Eriksen, Y. Gao, S. Guo, J. Hermann, M. R. Hermes, K. Koh, P. Koval, S. Lehtola, Z. Li, J. Liu, N. Mardirossian, J. D. McClain, M. Motta, B. Mussard, H. Q. Pham, A. Pulkin, W. Purwanto, P. J. Robinson, E. Ronca, E. R. Sayfutyarova, M. Scheurer, H. F. Schurkus, J. E. T. Smith, C. Sun, S.-N. Sun, S. Upadhyay, L. K. Wagner, X. Wang, A. White, J. D. Whitfield, M. J. Williamson, S. Wouters, J. Yang, J. M. Yu, T. Zhu, T. C. Berkelbach, S. Sharma, A. Y. Sokolov, and G. K.-L. Chan, The Journal of Chemical Physics **153**, 10.1063/5.0006074 (2020), 024109.
* Zhai and Chan [2021]H. Zhai and G. K.-L. Chan, J. Chem. Phys. **154**, 224116 (2021).
* Cortes _et al._ [2023]C. L. Cortes, M. Loipersberger, R. M. Parrish, S. Morley-Short, W. Pol, S. Sim, C. S. Tautermann, M. Degroote, N. Moll, R. Santagati, and M. Streif, 10.5281/zenodo.7899977 (2023).
* Sayfutyarova _et al._ [2017]E. R. Sayfutyarova, Q. Sun, G. K.-L. Chan, and G. Knizia, Journal of Chemical Theory and Computation **13**, 4063 (2017), pMID: 28731706.
Overview of Appendices
## 1 Organization
Appendix A presents the notation and relevant definitions that will be used throughout this document. Appendix B presents the derivation of the first order SAPT operators. Appendix C presents the SAPT operators in the complete basis set limit. Appendix D derives the active space formulation of SAPT. Appendix E presents relevant details to the SAPT-EVE algorithm. Appendix F presents the SAPT operator encoding in the Majorana, _sparse_, and _tensor factorization_ representations. Appendix G presents the SAPT block encoding compilation techniques. Appendix H presents details regarding the benchmark set for small molecules. Appendix I presents details regarding the heme and Artemisinin benchmark system.
## 2 Notation
Throughout all of the appendices, we will use the following notation:
* orthogonal molecular spin orbital basis indices.
* orthogonal molecular spatial orbital basis indices.
* orthogonal occupied spatial orbital basis indices.
* orthogonal active spatial orbital basis indices.
* orthogonal spin function indices.
Conventional SAPT theory considers two separate monomer calculations that are effectively independent from one another apart from the atomic orbital choice. Each monomer wavefunction will be solved so that is fully anti-symmetric with itself but not with respect to the total dimer system. Formally, this requires two sets of fermionic operators for each of the two monomers obeying the conventional fermionic commutation relations,
\[\{\hat{a}_{r_{1}},\hat{a}_{r_{2}}^{\dagger}\}=\delta_{r_{1}r_{2}}\;\;\;\text{ and}\;\;\;\{\hat{b}_{\text{q}_{1}},\hat{b}_{\text{q}_{2}}^{\dagger}\}=\delta_{ \text{q}_{1}\text{q}_{2}}.\] (A1)
The monomer's molecular orbitals are orthonormal to themselves but not each other. Furthermore, it is assumed that the monomer operators fully commute with one another:
\[[\hat{a}_{r},b_{\text{q}}]=[\hat{a}_{r},\hat{b}_{\text{q}}^{\dagger}]=[\hat{a }_{r}^{\dagger},\hat{b}_{\text{q}}]=[\hat{a}_{r}^{\dagger},\hat{b}_{\text{q }}^{\dagger}]=0.\] (A2)
Throughout all of the appendices, we use the single-excitation operator defined in the spin-orbital basis,
\[\hat{E}_{r_{1}p_{2}}=\hat{a}_{r_{1}}^{\dagger}\hat{a}_{r_{2}}\;\;\text{and}\; \;\hat{E}_{\text{q}_{1}\text{q}_{2}}=\hat{b}_{\text{q}_{1}}^{\dagger}\hat{b}_ {\text{q}_{2}}\] (A3)
as well as the double-excitation operator,
\[\hat{e}_{\text{p}} \equiv\hat{e}_{p_{1}p_{2}p_{3}p_{4}}=\hat{a}_{r_{1}}^{\dagger} \hat{a}_{r_{3}}^{\dagger}\hat{a}_{r_{4}}\hat{a}_{r_{2}}=\hat{E}_{p_{1}p_{2}} \hat{E}_{p_{3}p_{4}}-\delta_{p_{2}p_{3}}\hat{E}_{p_{1}p_{4}}\] (A4) \[\hat{e}_{\text{q}} \equiv\hat{e}_{\text{q}_{1}\text{q}_{2}\text{q}_{3}\text{q}_{4}} =\hat{a}_{q_{1}}^{\dagger}\hat{a}_{q_{3}}^{\dagger}\hat{a}_{\text{q}_{4}}\hat{a }_{\text{q}_{2}}=\hat{E}_{\text{q}_{1}\text{q}_{2}}\hat{E}_{\text{q}_{3}\text{ q}_{4}}-\delta_{\text{q}_{1}\text{q}_{4}}\hat{E}_{\text{q}_{1}\text{q}_{4}}.\] (A5)
We also define the corresponding operators in the spatial orbital basis,
\[\hat{E}_{p_{1}p_{2}}^{\sigma}=\hat{a}_{p_{1}\sigma}^{\dagger}\hat{a}_{p_{2} \sigma}\;\;\text{and}\;\;\hat{E}_{q_{1}q_{2}}^{\sigma}=\hat{b}_{q_{1}\sigma}^ {\dagger}\hat{b}_{q_{2}\sigma}\] (A6)
as well as the spin-summed excitation operators,
\[\hat{E}_{p_{1}p_{2}}^{+}=\hat{a}_{p_{1}\alpha}^{\dagger}\hat{a}_{p_{2}\alpha}+ \hat{a}_{p_{1}\beta}^{\dagger}\hat{a}_{p_{2}\beta}\;\;\text{and}\;\;\hat{E}_{q _{1}q_{2}}^{+}=\hat{b}_{q_{1}\alpha}^{\dagger}\hat{b}_{q_{2}\alpha}+\hat{b}_{q _{1},\beta}^{\dagger}\hat{b}_{q_{2}\beta}.\] (A7)
We will generally use the convention that \(p\) indices belong to monomer A and \(q\) indices belong to monomer B unless it is explicitly stated otherwise.
Symmetry-adapted perturbation theory
### Brief overview of SAPT
Symmetry-adapted perturbation theory is a state-of-the-art method used for the calculation of interaction energies \(E_{\text{int}}\) in large molecular dimer systems. Within the context of SAPT, the interaction energy \(E_{\text{int}}\) is computed directly as a sum of well-defined, physically interpretable polarization and exchange contributions,
\[E_{\text{int}}=E_{\text{pol}}^{(1)}+E_{\text{exch}}^{(1)}+E_{\text{pol}}^{(2)}+E _{\text{exch}}^{(2)}+\cdots \tag{10}\]
The interaction terms are calculated by applying a symmetry-adapted Rayleigh-Schrodinger (RS) perturbative expansion with respect to the interaction operator, \(V=H-H_{o}\), where \(H\) corresponds to the total dimer Hamiltonian and \(H_{o}\) corresponds to the unperturbed operator, \(H_{o}=H_{A}+H_{B}\), of the uncoupled monomers. The first-order polarization energy \(E_{\text{pol}}^{(1)}\) describes the effect of the classical electrostatic interaction of the unperturbed charge distribution in the monomers. In contrast, the second-order polarization energy is the sum of induction and dispersion energies. The exchange corrections, \(E_{\text{exch}}^{(n)}\), are specific to SAPT and provide the short-range repulsion which distinguishes this approach from conventional Rayleigh-Schrodinger perturbation theory, making it applicable to correlated systems. The first-order perturbative energy correction is written as:
\[E^{(1)}=E_{\text{pol}}^{(1)}+E_{\text{exch}}^{(1)}=\frac{\langle\Psi_{A}\Psi_{ B}|\mathcal{A}\hat{V}\mathcal{A}|\Psi_{A}\Psi_{B}\rangle}{\langle\Psi_{A}\Psi_{B}| \mathcal{A}|\Psi_{A}\Psi_{B}\rangle} \tag{11}\]
where \(\hat{V}\) is the intermolecular interaction operator and \(\mathcal{A}\) is a symmetry-projection idempotent operator satisfying the property (\(\mathcal{A}^{2}=\mathcal{A}\)), known as the anti-symmetrizer. In conventional SAPT theory, each of the monomers is treated independently of one another, resulting in a dimer wavefunction that is not fully anti-symmetric. A typical approximation that is made within SAPT theory is commonly referred to as the \(S^{2}\) approximation, where the anti-symmetrizer is approximated as, \(\mathcal{A}\approx 1+\hat{P}\). The operator \(\hat{P}\) is the exchange operator, \(\hat{P}=-\sum_{i\in A}\sum_{j\in B}\hat{P}_{i_{1}j_{1}}\), describing the interchange between the spin and spatial coordinates of the electrons between the two monomers. The polarization energy is then defined as:
\[E_{\text{pol}}^{(1)}=\langle\Psi_{A}\Psi_{B}|\hat{V}|\Psi_{A}\Psi_{B}\rangle \tag{12}\]
while the exchange energy is written as:
\[E_{\text{exch}}^{(1)}=E^{(1)}-E_{\text{pol}}^{(1)}=\langle\Psi_{A}\Psi_{B}| \widehat{VP}_{\text{s}}|\Psi_{A}\Psi_{B}\rangle-E_{\text{pol}}^{(1)}\langle \Psi_{A}\Psi_{B}|\hat{P}|\Psi_{A}\Psi_{B}\rangle\,, \tag{13}\]
where we have kept first order terms in \(\hat{P}\) after expanding the denominator in Eq. (11). Note that we have defined \(\widehat{VP}_{\text{s}}\) rather than \(\hat{V}\hat{P}\) due to a convention in the SAPT literature where all SAPT energy contributions are derived from a first-quantized formulation, which are then projected into a finite basis of orbitals. In order to validate our results with numerical results from classical SAPT calculations, we follow the same path. In order to estimate the total first-order SAPT energy on a quantum computer, we will require the estimation of the three observables: \(\hat{F}=\{\hat{V},\hat{P},\widehat{VP}_{\text{s}}\}\).
### Derivation of SAPT operators
#### a.1 Intermolecular operator
In first quantization, the intermolecular interaction operator between monomers A and B is defined as:
\[V_{c}=\sum_{i\in A}^{\eta_{A}}\sum_{j\in B}^{\eta_{B}}v_{ij} \tag{14}\]
where
\[v_{ij}=\frac{1}{r_{ij}}-\sum_{J}\frac{Z_{J}}{\eta_{B}}\frac{1}{r_{iJ}}-\sum_{ I}\frac{Z_{I}}{\eta_{A}}\frac{1}{r_{IJ}}+\sum_{IJ}\frac{Z_{I}Z_{J}}{\eta_{A} \eta_{B}}\frac{1}{r_{IJ}} \tag{15}\]
Monomer A/B consists of a number of \(\eta_{A}/\eta_{B}\) electrons respectively. The first term describes the electron-electron interaction between the electrons in monomer A and monomer B. The second term describes the attractive electron-nuclear interaction between the electrons in monomer A with the nuclear atoms in monomer B; vice versa for the third term. The fourth term describes the repulsion interaction between the nuclear atoms of the two monomers. Interestingly, we found that keeping all of the terms combined in a single four-index tensor helps reduce the \(\ell_{1}\) norm of the operator required for block encoding.
## Appendix B Single-exchange operator
In addition, conventional SAPT under the \(S^{2}\) approximation uses the first-quantized form of the single-exchange operator,
\[P_{c}=-\sum_{i\in A}^{\eta_{A}}\sum_{j\in B}^{\eta_{B}}P_{ij}, \tag{10}\]
where \(P_{ij}\) exchanges the spin and space coordinates of the \(i\)th and \(j\)th electron of monomer A and monomer B respectively.
### Reduced density matrices
In first quantization, the one and two-body density matrices are defined as:
\[\rho_{X}(\mathbf{x}_{1}|\mathbf{x}_{1}^{\prime}) =\eta_{X}\int\mathrm{d}\mathbf{x}_{2,\eta_{X}}\Psi_{X}^{*}( \mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{\eta_{X}})\Psi_{X}(\mathbf{ x}_{1}^{\prime},\mathbf{x}_{2},\cdots,\mathbf{x}_{\eta_{X}}), \tag{11}\] \[\Gamma_{X}(\mathbf{x}_{1},\mathbf{x}_{2}|\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime}) =\eta_{X}(\eta_{X}-1)\int\mathrm{d}\mathbf{x}_{3,\eta_{X}}\Psi_{X }^{*}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\cdots,\mathbf{x}_{\eta_{X} })\Psi_{X}(\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},\mathbf{x}_{3}, \cdots,\mathbf{x}_{\eta_{X}}), \tag{12}\]
where \(\mathrm{d}\mathbf{x}_{i,\eta_{X}}=\mathrm{d}\mathbf{x}_{i}\mathrm{d}\mathbf{x }_{i+1}\cdots\mathrm{d}\mathbf{x}_{\eta_{X}}\) and each monomer is assumed to have \(\eta_{X}\) total electrons. The integral over the variable \(\mathbf{x}_{i}\) is interpreted as an integral over both the spatial and spin components. Notably, these density matrices obey the permutational symmetries,
\[\rho_{X}(\mathbf{x}_{1}|\mathbf{x}_{1}^{\prime}) =\rho_{X}^{*}(\mathbf{x}_{1}^{\prime}|\mathbf{x}_{1}), \tag{13}\] \[\Gamma_{X}(\mathbf{x}_{1},\mathbf{x}_{2}|\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime}) =\Gamma_{X}^{*}(\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime}| \mathbf{x}_{1},\mathbf{x}_{2})=\Gamma_{X}(\mathbf{x}_{2},\mathbf{x}_{1}| \mathbf{x}_{2}^{\prime},\mathbf{x}_{1}^{\prime})=\Gamma_{X}^{*}(\mathbf{x}_{2} ^{\prime},\mathbf{x}_{1}^{\prime}|\mathbf{x}_{2},\mathbf{x}_{1}). \tag{14}\]
In second quantization, the density matrices are written in terms of molecular spin-orbitals \(\phi_{\mathrm{p}}(\mathbf{x})\):
\[\rho_{X}(\mathbf{x}_{1}|\mathbf{x}_{1}^{\prime}) =\sum_{\mathrm{p}_{\mathrm{Q}}}D_{\mathrm{p}_{\mathrm{Q}}}^{X} \phi_{\mathrm{p}}^{*}(\mathbf{x}_{1})\phi_{\mathrm{q}}(\mathbf{x}_{1}^{\prime}), \tag{15}\] \[\Gamma_{X}(\mathbf{x}_{1},\mathbf{x}_{2}|\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime}) =\sum_{\mathrm{p}_{\mathrm{Q}RS}}d_{\mathrm{p}_{\mathrm{Q}RS}}^{ \mathcal{X}}\phi_{\mathrm{p}}^{*}(\mathbf{x}_{1})\phi_{\mathrm{q}}(\mathbf{x}_ {1}^{\prime})\phi_{\mathrm{r}}^{*}(\mathbf{x}_{2})\phi_{\mathrm{s}}(\mathbf{x}_ {2}^{\prime}), \tag{16}\]
with the reduced density matrix (RDM) coefficients given by:
\[D_{\mathrm{p}_{\mathrm{Q}}}^{X} =\left\langle\Psi_{X}|\hat{c}_{\mathrm{r}}^{\dagger}\hat{c}_{ \mathrm{q}}|\Psi_{X}\right\rangle, \tag{17}\] \[d_{\mathrm{p}_{\mathrm{Q}RS}}^{X} =\left\langle\Psi_{X}|\hat{c}_{\mathrm{r}}^{\dagger}\hat{c}_{ \mathrm{q}}^{\dagger}\hat{c}_{\mathrm{s}}\hat{c}_{\mathrm{Q}}|\Psi_{X}\right\rangle, \tag{18}\]
where \(\mathrm{p},\mathrm{q},\mathrm{r},\mathrm{s}\) denote the indices of the molecular spin-orbitals for monomer \(X\), and \(\hat{c}^{\dagger}/\hat{c}\) denote the creation/annihilation operators for either monomer A or B. The permutational symmetries associated with the RDMs are:
\[D_{\mathrm{p}_{\mathrm{Q}}} =D_{\mathrm{q}\mathrm{p}}^{*}, \tag{19}\] \[d_{\mathrm{p}_{\mathrm{Q}RS}} =d_{\mathrm{q}_{\mathrm{Q}RS}}^{*}=d_{\mathrm{q}_{\mathrm{R} \mathrm{S}PQ}}=d_{\mathrm{q}_{\mathrm{R}\mathrm{S}PQ}}^{*}. \tag{20}\]
## Appendix d Electrostatic operator: \(\hat{V}\)
In first quantization, the expectation value with the intermolecular interaction operator is given by:
\[E^{(1)}_{\rm pol}\equiv\langle\Psi_{A}\Psi_{B}|V_{c}|\Psi_{A}\Psi_{B}\rangle= \int{\rm d}{\bf x}^{A}_{1,\eta_{A}}{\rm d}{\bf x}^{B}_{1,\eta_{B}}\Psi_{A}({\bf x }^{A}_{1,\eta_{A}})\Psi_{B}({\bf x}^{B}_{1,\eta_{B}})V_{c}\Psi^{*}_{A}({\bf x}^ {A}_{1,\eta_{A}})\Psi^{*}_{B}({\bf x}^{B}_{1,\eta_{B}}), \tag{101}\]
where \({\bf x}_{i,\eta_{X}}={\bf x}_{i}{\bf x}_{i+1}\cdots{\bf x}_{\eta_{X}}\). Using the density matrix formalism defined above, it is possible to derive the second quantized operator of interest. Using equations (100)-(101), we find:
\[E^{(1)}_{\rm pol} = \int{\rm d}{\bf x}_{i}{\rm d}{\bf x}_{j}\,\rho_{A}({\bf x}_{i}|{ \bf x}_{i})v_{ij}\rho_{B}({\bf x}_{j}|{\bf x}_{j}), \tag{102}\] \[= \sum_{p_{1}p_{2}}\sum_{\rm Q_{1}Q_{2}}D^{A}_{p_{1}p_{2}}D^{B}_{ \rm Q_{1}Q_{2}}\int{\rm d}{\bf x}_{i}{\rm d}{\bf x}_{j}v_{ij}\phi^{*}_{\rm p_{ 1}}({\bf x}_{i})\phi_{\rm p_{2}}({\bf x}_{i})\phi^{*}_{\rm q_{1}}({\bf x}_{j} )\phi_{\rm q_{2}}({\bf x}_{j})\]
where we have defined the intermolecular four-index tensor, \(v^{{\rm p}_{1}{\rm p}_{2}}_{\rm Q_{1}Q_{2}}\), as:
\[v^{{\rm p}_{1}{\rm p}_{2}}_{\rm Q_{1}Q_{2}}=\int{\rm d}{\bf x}_{i}{\rm d}{\bf x }_{j}\;\phi^{*}_{\rm p_{1}}({\bf x}_{i})\phi_{\rm p_{2}}({\bf x}_{i})v_{ij} \phi^{*}_{\rm q_{1}}({\bf x}_{j})\phi_{\rm q_{2}}({\bf x}_{j}). \tag{103}\]
The molecular overlap matrix is also defined with respect to molecular spin-orbitals \(\phi_{\rm p}({\bf x})\) and \(\phi_{\rm q}({\bf x})\) as,
\[S^{\rm p}_{\rm q}=\int\!{\rm d}{\bf x}\;\phi^{*}_{\rm p}({\bf x})\phi_{\rm q}( {\bf x}). \tag{104}\]
Since the operators from monomer A commute with those in monomer B, we find the final expression for the electrostatic operator \(\hat{V}\):
\[\hat{V}=\sum_{{\rm p}_{1}{\rm p}_{2}}\sum_{\rm Q_{1}Q_{2}}v^{{\rm p}_{1}{\rm p }_{2}}_{\rm Q_{1}Q_{2}}\hat{E}_{{\rm p}_{1}{\rm p}_{2}}\hat{E}_{{\rm q}_{1}{ \rm Q}_{2}}. \tag{105}\]
## e Exchange operator: \(\hat{P}\)
The exchange operator is found by considering the expectation value of \(\hat{P}\) with respect to the two-monomer wavefunction in first quantization:
\[\langle\hat{P}\rangle=\int{\rm d}{\bf x}^{A}_{1,\eta_{A}}{\rm d}{\bf x}^{B}_{ 1,\eta_{B}}\Psi_{A}({\bf x}^{A}_{1,\eta_{A}})\Psi_{B}({\bf x}^{B}_{1,\eta_{B}} )P_{c}\Psi^{*}_{A}({\bf x}^{A}_{1,\eta_{A}})\Psi^{*}_{B}({\bf x}^{B}_{1,\eta_ {B}}). \tag{106}\]
Performing this exchange and using the density matrix formalism as before, we find:
\[\langle\hat{P}\rangle = -\int{\rm d}{\bf x}_{i}{\rm d}{\bf x}_{j}\,\rho_{A}({\bf x}_{j}|{ \bf x}_{i})\rho_{B}({\bf x}_{i}|{\bf x}_{j}), \tag{107}\] \[= -\sum_{{\rm p}_{1}{\rm p}_{2}}\sum_{\rm Q_{1}Q_{2}}D^{A}_{{\rm p} _{1}{\rm p}_{2}}D^{B}_{{\rm Q}_{1}{\rm Q}_{2}}\int{\rm d}{\bf x}_{i}{\rm d}{ \bf x}_{j}\phi^{*}_{{\rm p}_{1}}({\bf x}_{j})\phi_{\rm p_{2}}({\bf x}_{i}) \phi^{*}_{\rm q_{1}}({\bf x}_{i})\phi_{\rm q_{2}}({\bf x}_{j}).\]
As a result, we find the final expression for the exchange operator:
\[\hat{P}=-\sum_{{\rm p}_{1}{\rm p}_{2}}\sum_{\rm Q_{1}Q_{2}}S^{{\rm p}_{1}{\rm Q }_{2}}_{{\rm p}_{2}}\hat{E}_{{\rm p}_{1}{\rm p}_{2}}\hat{E}_{{\rm q}_{1}{\rm Q }_{2}}. \tag{108}\]
Note that this operator can also be derived with the interaction density matrix approach that we discuss below.
## f Electrostatic-exchange operator: \(\widehat{VP}_{\rm s}\)
The electrostatic-exchange operator can be found in a similar fashion by using the following first quantized expectation value expression:
\[\langle\widehat{VP}_{\rm s}\rangle=\langle\Psi_{A}\Psi_{B}|\tfrac{1}{2}(V_{c}P_ {c}+P_{c}V_{c})|\Psi_{A}\Psi_{B}\rangle\,. \tag{109}\]
It is important to note that we have used the symmetric expression, \(\frac{1}{2}(V_{c}P_{c}+P_{c}V_{c})\), rather than \(V_{c}P_{c}\) which is normally the convention in the classical SAPT literature. This ensures that the electrostatic-exchange operator will remain strictly Hermitian as required for the SAPT-EVE algorithm. In the following, we will derive the resulting equations based on the first term, \(V_{c}P_{c}\), but will include both contributions in the final result. The expectation value of the first term is written as:
\[\langle V_{c}P_{c}\rangle=\int\mathrm{d}\mathbf{x}_{i}\mathrm{d}\mathbf{x}_{j} v_{ij}\rho_{\mathrm{int}}(\mathbf{x}_{i},\mathbf{x}_{j}), \tag{100}\]
where we have written this expression in terms of the interaction density matrix \(\rho_{\mathrm{int}}(\mathbf{x}_{i},\mathbf{x}_{j})\)[26],
\[\rho_{\mathrm{int}}(\mathbf{x}_{i},\mathbf{x}_{j})=\eta_{A}\eta_{B}\int \mathrm{d}\mathbf{x}_{2,\eta_{A}}^{A}\mathrm{d}\mathbf{x}_{2,\eta_{B}}^{B} \Psi_{A}^{*}(\mathbf{x}_{i}^{A},\mathbf{x}_{2,\eta_{A}}^{A})\Psi_{B}^{*}( \mathbf{x}_{j}^{B},\mathbf{x}_{2,\eta_{B}}^{B})P_{c}\Psi_{A}(\mathbf{x}_{i}^{ A},\mathbf{x}_{2,\eta_{A}}^{A})\Psi_{B}(\mathbf{x}_{j}^{B},\mathbf{x}_{2,\eta_{B}}^{B}). \tag{101}\]
If we apply (100) to the interaction density matrix (101), we obtain
\[\rho_{\mathrm{int}}(\mathbf{x}_{i},\mathbf{x}_{j})= -\rho_{A}(\mathbf{x}_{i}|\mathbf{x}_{j})\rho_{B}(\mathbf{x}_{j}| \mathbf{x}_{i})-\int\mathrm{d}\mathbf{x}_{k}\,\Gamma_{A}(\mathbf{x}_{i}, \mathbf{x}_{k}|\mathbf{x}_{i},\mathbf{x}_{j})\rho_{B}(\mathbf{x}_{j}|\mathbf{x }_{k})\] \[-\int\mathrm{d}\mathbf{x}_{l}\,\rho_{A}(\mathbf{x}_{i}|\mathbf{x} _{l})\Gamma_{B}(\mathbf{x}_{j},\mathbf{x}_{l}|\mathbf{x}_{j},\mathbf{x}_{i})- \int\!\mathrm{d}\mathbf{x}_{k}\mathrm{d}\mathbf{x}_{l}\,\Gamma_{A}(\mathbf{x} _{i},\mathbf{x}_{k}|\mathbf{x}_{i}\mathbf{x}_{l})\Gamma_{B}(\mathbf{x}_{j}, \mathbf{x}_{l}|\mathbf{x}_{j}\mathbf{x}_{k}). \tag{102}\]
The electrostatic-exchange observable can then be expressed as a sum of four terms:
\[\langle\widetilde{VP}\rangle=\int\mathrm{d}\mathbf{x}_{i}\mathrm{d}\mathbf{x} _{j}v_{ij}\rho_{\mathrm{int}}(\mathbf{x}_{i},\mathbf{x}_{j})=\langle\widetilde {VP}_{1}\rangle+\langle\widetilde{VP}_{2}\rangle+\langle\widetilde{VP}_{3} \rangle+\langle\widetilde{VP}_{4}\rangle\]
where
\[\langle\widetilde{VP}_{1}\rangle =-\int\mathrm{d}\mathbf{x}_{i}\mathrm{d}\mathbf{x}_{j}\,v_{ij} \rho_{A}(\mathbf{x}_{i}|\mathbf{x}_{j})\rho_{B}(\mathbf{x}_{j}|\mathbf{x}_{i}) \tag{103}\] \[\langle\widetilde{VP}_{2}\rangle =-\int\mathrm{d}\mathbf{x}_{k}\mathrm{d}\mathbf{x}_{i}\mathrm{d} \mathbf{x}_{j}\,v_{ij}\Gamma_{A}(\mathbf{x}_{i},\mathbf{x}_{k}|\mathbf{x}_{i},\mathbf{x}_{j})\rho_{B}(\mathbf{x}_{j}|\mathbf{x}_{k})\] (104) \[\langle\widetilde{VP}_{3}\rangle =-\int\mathrm{d}\mathbf{x}_{l}\mathrm{d}\mathbf{x}_{i}\mathrm{d} \mathbf{x}_{j}\,v_{ij}\rho_{A}(\mathbf{x}_{i}|\mathbf{x}_{l})\Gamma_{B}( \mathbf{x}_{j},\mathbf{x}_{l}|\mathbf{x}_{j},\mathbf{x}_{i})\] (105) \[\langle\widetilde{VP}_{4}\rangle =-\int\mathrm{d}\mathbf{x}_{k}\mathrm{d}\mathbf{x}_{l}\mathrm{d} \mathbf{x}_{j}\,v_{ij}\Gamma_{A}(\mathbf{x}_{i},\mathbf{x}_{k}|\mathbf{x}_{i} \mathbf{x}_{l})\Gamma_{B}(\mathbf{x}_{j},\mathbf{x}_{l}|\mathbf{x}_{j}\mathbf{x }_{k}). \tag{106}\]
Following the same steps as the electrostatics case, we now substitute the 1- and 2-RDMs to obtain the electrostatic-exchange operators. The first term is given by:
\[\langle\widetilde{VP}_{1}\rangle=-\sum_{\mathbf{p}\mathbf{q}}D^{A}_{\mathbf{p }_{1}\mathbf{q}_{2}}D^{B}_{\mathbf{q}_{1}\mathbf{q}_{2}}\int\mathrm{d} \mathbf{x}_{i}\mathrm{d}\mathbf{x}_{j}v_{ij}\phi^{*}_{\mathbf{p}_{1}}(\mathbf{ x}_{j})\phi_{\mathbf{p}_{2}}(\mathbf{x}_{i})\phi^{*}_{\mathbf{q}_{1}}(\mathbf{x}_{i} )\phi_{\mathbf{q}_{2}}(\mathbf{x}_{j}) \tag{107}\]
which leads to the operator,
\[\widehat{VP}_{1}=-\sum_{\mathbf{p}\mathbf{q}}v^{v_{1}\mathbf{q}_{2}}_{ \mathbf{q}_{1}\mathbf{p}_{2}}\hat{E}_{\mathbf{p}_{1}\mathbf{p}_{2}}\hat{E}_{ \mathbf{q}_{1}\mathbf{q}_{2}}. \tag{108}\]
The second term is then evaluated as:
\[\langle\widetilde{VP}_{2}\rangle =-\int\mathrm{d}\mathbf{x}_{k}\mathrm{d}\mathbf{x}_{i}\mathrm{d} \mathbf{x}_{j}\,v_{ij}\Gamma_{A}(\mathbf{x}_{i},\mathbf{x}_{k}|\mathbf{x}_{i},\mathbf{x}_{j})\rho_{B}(\mathbf{x}_{j}|\mathbf{x}_{k}) \tag{109}\] \[=-\sum_{\mathbf{p}\mathbf{q}}\!\int\!\mathrm{d}\mathbf{x}_{k} \mathrm{d}\mathbf{x}_{i}\mathrm{d}\mathbf{x}_{j}v_{ij}d^{A}_{\mathbf{p}_{1} \mathbf{p}_{2}\mathbf{p}_{3}\mathbf{p}_{4}}D^{B}_{\mathbf{q}_{1}\mathbf{q}_{2 }}\phi^{*}_{\mathbf{p}_{1}}(\mathbf{x}_{i})\phi_{\mathbf{p}_{2}}(\mathbf{x}_{i} )\phi^{*}_{\mathbf{p}_{3}}(\mathbf{x}_{k})\phi_{\mathbf{p}_{4}}(\mathbf{x}_{j} )\phi^{*}_{\mathbf{q}_{1}}(\mathbf{x}_{j})\phi_{\mathbf{q}_{2}}(\mathbf{x}_{k}) \tag{110}\]
The effective operator is then written as:
\[\widehat{VP}_{2} =-\sum_{\mathbf{p}\mathbf{q}}v^{v_{1}\mathbf{p}_{2}}_{\mathbf{q} _{1}\mathbf{p}_{4}}\phi^{\mathbf{p}_{3}}_{\mathbf{q}_{2}}\hat{a}^{\dagger}_{ \mathbf{p}_{1}}\hat{a}^{\dagger}_{\mathbf{p}_{3}}\hat{a}_{\mathbf{r}_{4}}\hat{a} _{\mathbf{r}_{2}}\hat{b}^{\dagger}_{\mathbf{q}_{1}}\hat{b}_{\mathbf{q}_{2}} \tag{111}\] \[=-\sum_{\mathbf{p}\mathbf{q}}v^{v_{1}\mathbf{p}_{2}}_{\mathbf{q} _{1}\mathbf{p}_{4}}\phi^{\mathbf{p}_{3}}_{\mathbf{q}_{2}}\hat{e}_{\mathbf{p}_{1} \mathbf{p}_{2}\mathbf{p}_{3}\mathbf{p}_{4}}\hat{E}_{\mathbf{q}_{1}\mathbf{q}_{2}}. \tag{112}\]
The evaluation of the third term follows in the same fashion,
\[\langle\widehat{VP}_{3}\rangle =-\int\mathrm{d}\mathbf{x}_{l}\mathrm{d}\mathbf{x}_{i}\mathrm{d} \mathbf{x}_{j}\,v_{ij}\rho_{A}(\mathbf{x}_{i}|\mathbf{x}_{l})\Gamma_{B}( \mathbf{x}_{j},\mathbf{x}_{l}|\mathbf{x}_{j},\mathbf{x}_{i})\] (100) \[=-\sum_{\mathbf{p}\mathbf{q}}\int\mathrm{d}\mathbf{x}_{l}\mathrm{ d}\mathbf{x}_{i}\mathrm{d}\mathbf{x}_{j}v_{ij}D^{A}_{{}_{\mathrm{P}_{1}}{}_{ \mathrm{P}_{1}}{}_{\mathrm{P}_{2}}{}_{\mathrm{Q}_{1}}{}_{\mathrm{Q}_{2}}{}_{ \mathrm{Q}_{3}}{}_{\mathrm{Q}_{4}}{}_{\mathrm{P}_{1}}{}_{\mathrm{P}_{2}}{}_{ \mathrm{Q}_{1}}{}_{\mathrm{Q}_{2}}{}_{\mathrm{Q}_{3}}{}_{\mathrm{Q}_{4}}{}_{ \mathrm{Q}_{4}}{}_{\mathrm{P}_{1}}{}_{\mathrm{Q}_{2}}{}_{\mathrm{Q}_{3}}{}_{ \mathrm{Q}_{4}}{}_{
Resulting in the total electrostatic-exchange operator given by:
\[\widehat{VP}_{\rm s} = \phantom{-}\tfrac{1}{2}(\widehat{VP}+\widehat{VP}) \tag{101}\] \[= -\tfrac{1}{2}\sum_{\mathbf{PQ}}\left(v_{{}_{\rm Q}{}_{1}{}_{\rm P} {}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{2}}\hat{E}_{{}_{\rm P}{}_{1}{}_{\rm P}{ }_{2}}\hat{E}_{{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{2}}+v_{{}_{\rm P}{}_{2}{}_{\rm Q}{ }_{1}{}_{\rm Q}{}_{2}}^{{}_{\rm Q}{}_{2}{}_{\rm P}{}_{1}{}_{\rm P}{}_{\rm Q}{}_ {2}{}_{\rm Q}{}_{1}}\right)\] (102) \[-\tfrac{1}{2}\sum_{\mathbf{PQ}}\left(v_{{}_{\rm Q}{}_{1}{}_{\rm P} {}_{\rm P}{}_{\rm Q}{}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{2}}\hat{E}_{{}_{ \rm P}{}_{1}{}_{\rm P}{}_{2}{}_{\rm P}{}_{3}{}_{\rm P}{}_{4}}\hat{E}_{{}_{\rm Q }{}_{1}{}_{\rm Q}{}_{2}}+v_{{}_{\rm P}{}_{4}{}_{\rm Q}{}_{1}{}_{\rm P}{}_{3}{} _{\rm Q}{}_{2}}^{{}_{\rm P}{}_{2}{}_{\rm P}{}_{1}{}_{\rm P}{}_{4}{}_{\rm P}{}_{ 3}}\hat{E}_{{}_{\rm Q}{}_{2}{}_{\rm Q}{}_{1}}\right)\] \[-\tfrac{1}{2}\sum_{\mathbf{PQ}}\left(v_{{}_{\rm Q}{}_{1}{}_{\rm Q} {}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{2}}\hat{E}_{{}_{\rm P}{}_{1}{}_{\rm P}{ }_{2}{}_{\rm Q}{}_{\rm Q}{}_{\rm Q}{}_{4}}+v_{{}_{\rm Q}{}_{2}{}_{\rm Q}{}_{1} {}_{\rm Q}{}_{3}{}_{\rm Q}{}_{2}}^{{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{3}}\hat{E}_{{} _{\rm P}{}_{2}{}_{\rm P}{}_{1}{}_{\rm P}{}_{4}{}_{\rm P}{}_{3}}\hat{E}_{{}_{ \rm Q}{}_{2}{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{4}{}_{\rm Q}{}_{3}}\right)\] \[-\tfrac{1}{2}\sum_{\mathbf{PQ}}\left(v_{{}_{\rm Q}{}_{1}{}_{\rm Q} {}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{2}}S_{{}_{\rm P}{}_{4}}^{{}_{\rm Q}{}_{ \rm Q}{}_{4}}\hat{E}_{{}_{\rm P}{}_{1}{}_{\rm P}{}_{2}{}_{\rm P}{}_{3}{}_{\rm P }{}_{4}}\hat{E}_{{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{2}{}_{\rm Q}{}_{3}{}_{\rm Q}{} _{4}}+v_{{}_{\rm Q}{}_{2}{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{3}}^{{}_{\rm P}{}_{4}{ }_{\rm Q}{}_{3}}\hat{E}_{{}_{\rm P}{}_{2}{}_{\rm P}{}_{1}{}_{\rm P}{}_{4}{}_{ \rm P}{}_{3}}\hat{E}_{{}_{\rm Q}{}_{2}{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{4}{}_{\rm Q }{}_{3}}\right).\]
This recovers the electrostatic-exchange operator defined in the main text. Note that compared to the electrostatic and exchange operators, \(\hat{V}\) and \(\hat{P}\), the electrostatic-exchange operator contains products of high-order tensors. It is possible to gain more insight by rewriting this operator in the so-called chemist notation where single-excitation and spin-summed operators are combined,
\[\widehat{VP}_{\rm s}= -\tfrac{1}{2}\sum_{\mathbf{PQ}}\left(\nu_{{}_{\rm Q}{}_{1}{}_{ \rm P}{}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{2}}\hat{E}_{{}_{\rm P}{}_{1}{}_{ \rm P}{}_{2}}\hat{E}_{{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{2}}+\nu_{{}_{\rm P}{}_{2}{ }_{\rm Q}{}_{1}{}_{\rm Q}{}_{2}}^{{}_{\rm Q}{}_{\rm P}{}_{1}}\hat{E}_{{}_{\rm P} {}_{2}{}_{\rm Q}{}_{1}}\hat{E}_{{}_{\rm Q}{}_{2}{}_{\rm Q}{}_{1}}\right)\] \[-\tfrac{1}{2}\sum_{\mathbf{PQ}}\left(\nu_{{}_{}_{\rm Q}{}_{1}{}_{ \rm P}{}_{4}{}_{\rm P}{}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{2}}\hat{E}_{{}_{ \rm P}{}_{1}{}_{\rm P}{}_{2}}\hat{E}_{{}_{\rm P}{}_{3}{}_{\rm P}{}_{4}}\hat{E}_{ {}_{\rm Q}{}_{1}{}_{\rm Q}{}_{2}}+\nu_{{}_{\rm P}{}_{4}{}_{\rm Q}{}_{1}{}_{\rm P }{}_{3}}^{{}_{\rm P}{}_{2}{}_{\rm P}{}_{1}}\hat{E}_{{}_{\rm P}{}_{4}{}_{\rm P}{ }_{3}}\hat{E}_{{}_{\rm Q}{}_{2}{}_{\rm Q}{}_{1}}\right)\] \[-\tfrac{1}{2}\sum_{\mathbf{PQ}}\left(\nu_{{}_{\rm Q}{}_{1}{}_{ \rm Q}{}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{2}}S_{{}_{\rm P}{}_{4}}^{{}_{\rm Q}{ }_{\rm P}{}_{3}}\hat{E}_{{}_{\rm P}{}_{1}{}_{\rm P}{}_{2}}\hat{E}_{{}_{\rm Q}{ }_{1}{}_{\rm Q}{}_{2}}\hat{E}_{{}_{\rm Q}{}_{3}{}_{\rm Q}{}_{4}}+v_{{}_{\rm Q}{} _{2}{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{3}}^{{}_{\rm P}{}_{4}{}_{\rm P }{}_{3}}\hat{E}_{{}_{\rm P}{}_{2}{}_{\rm P}{}_{1}}\hat{E}_{{}_{\rm Q}{}_{2}{}_{ \rm Q}{}_{1}}\hat{E}_{{}_{\rm Q}{}_{4}{}_{\rm Q}{}_{3}}\right), \tag{103}\]
where the renormalized tensor coefficients used in the chemist notation are defined as:
\[\nu_{{}_{\rm Q}{}_{1}{}_{\rm P}{}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{1}{}_{ \rm P}{}_{2}} =v_{{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{1}{}_{\rm P}{}_{2}}^{{}_{\rm Q}{}_{ 3}}-\sum_{\rm Q}{}_{\rm P}{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{4}-\sum_{\rm P}{}_{\rm Q }{}_{\rm Q}{}_{1}{}_{\rm P}{}_{2}{}_{\rm Q}{}_{3}+\sum_{\rm P}{}_{\rm P}{}_{ \rm Q}{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{4}{}_{\rm Q}{}_{\rm Q}{}_{2}{}^{{}_{\rm P}{ }_{3}}S_{{}_{\rm Q}{}_{3}}^{{}_{\rm P}{}_{2}},\] (104) \[\nu_{{}_{\rm Q}{}_{1}{}_{\rm P}{}_{4}}^{{}_{\rm P}{}_{1}{}_{\rm P}{}_{ 4}} =v_{{}_{\rm Q}{}_{1}{}_{\rm P}{}_{4}}^{{}_{\rm P}{}_{1}{}_{\rm P}{}_{ 4}}-\sum_{\rm Q}{}_{\rm Q}{}_{\rm Q}{}_{\rm Q}{}_{\rm Q}{}_{\rm Q}{}_{\rm Q}{}_{ \rm Q}{}_{\rm Q}{}_{\rm Q}{}_{\rm Q}{},\] (105) \[\nu_{{}_{\rm Q}{}_{1}{}_{\rm Q}{}_{2}}^{{}_{\rm P}{}_{1}{}_{\rm Q}{}_{ 2}} =v_{{}_{\rm Q}{}_{1}{}_{\rm Q}
Sapt operator symmetries
In the following, we discuss the permutational symmetries inherent to the SAPT operators defined above. For complex orbitals, the four-index intermolecular tensor obeys the Hermitian symmetry,
\[v^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{1}Q_{2}}}=[v^{{}_{\rm P_{2}P_{1}}}_{{}_{\rm Q _{2}Q_{1}}}]^{*}, \tag{111}\]
while the intermolecular overlap matrix obeys,
\[S^{\rm p}_{\rm Q}=[S^{\rm q}_{\rm F}]^{*}. \tag{112}\]
For real orbitals, the intermolecular tensor obeys the four-fold symmetry,
\[v^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{1}Q_{2}}}=v^{{}_{\rm P_{2}P_{1}}}_{{}_{\rm Q _{1}Q_{2}}}=v^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{2}Q_{1}}}=v^{{}_{\rm P_{2}P_{1}}} _{{}_{\rm Q_{2}Q_{1}}}, \tag{113}\]
while the intermolecular overlap matrix symmetry becomes,
\[S^{\rm p}_{\rm Q}=S^{\rm Q}_{\rm F}. \tag{114}\]
It is important to note that while the two-electron integral, \(({\rm P_{1}P_{2}}|{\rm Q_{1}Q_{2}})=\int{\rm d}{\bf r}_{i}{\rm d}{\bf r}_{j}\)\(\phi^{*}_{{}_{\rm P_{1}}}({\bf r}_{i})\phi_{{}_{\rm P_{2}}}({\bf r}_{i})r^{-1}_{ ij}\phi^{*}_{{}_{\rm Q_{1}}}({\bf r}_{j})\phi_{{}_{\rm Q_{2}}}({\bf r}_{j})\), obeys the symmetry, \(({\rm P_{1}P_{2}}|{\rm Q_{1}Q_{2}})=({\rm Q_{1}Q_{2}}|{\rm P_{1}P_{2}})\), the full intermolecular tensor in Eq. (113) does not obey this type of symmetry. The intermolecular tensor only has a four-fold symmetry as opposed to the typical eight-fold symmetry encountered in the conventional quantum chemistry Hamiltonian. These considerations will be important for the development of the SAPT-EVE algorithm and the block encoding methodologies encountered shortly. For the rest of the appendices as well as the main manuscript, we will assume real orbitals for all of the calculations and resource estimates. As a result, we will also assume that the prepared ground-state wavefunctions will be fully real resulting in the following permutational symmetries for the reduced density matrix elements of monomer A:
\[D^{A}_{{}_{\rm P_{1}P_{2}}} =D^{A}_{{}_{\rm P_{2}P_{1}}}, \tag{115}\] \[d^{A}_{{}_{\rm P_{1}P_{2}P_{3}P_{4}}} =d^{A}_{{}_{\rm P_{2}P_{1}P_{4}P_{3}}}=d^{A}_{{}_{\rm P_{3}P_{4}P_ {1}P_{2}}}=d^{A}_{{}_{\rm P_{4}P_{3}P_{2}P_{1}}}, \tag{116}\]
as well as monomer B:
\[D^{B}_{{}_{\rm Q_{1}Q_{2}}} =D^{B}_{{}_{\rm Q_{2}Q_{1}}}, \tag{117}\] \[d^{B}_{{}_{\rm Q_{1}Q_{2}Q_{3}Q_{4}}} =d^{B}_{{}_{\rm Q_{2}Q_{1}Q_{4}Q_{3}}}=d^{B}_{{}_{\rm Q_{3}Q_{4}Q_ {1}Q_{2}}}=d^{B}_{{}_{\rm Q_{4}Q_{3}Q_{2}Q_{1}}}. \tag{118}\]
Taking into account all of these symmetries, we re-write all of the SAPT operators as:
\[\hat{V} = \sum_{{}_{\bf p},{\bf q}}{\rm sym}(v^{{}_{\rm P_{1}P_{2}}}_{{}_{ \rm Q_{1}Q_{2}}})\hat{E}_{{}_{\rm P_{1}P_{2}}}\hat{E}_{{}_{\rm Q_{1}Q_{2}}}\,, \tag{119}\] \[\hat{P} = -\sum_{{}_{\bf p},{\bf q}}{\rm sym}(S^{{}_{\rm P_{1}}}_{{}_{\rm Q _{2}}}S^{{}_{\rm P_{2}}}_{{}_{\rm Q_{1}}})\hat{E}_{{}_{\rm P_{1}P_{2}}}\hat{E}_ {{}_{\rm Q_{1}Q_{2}}},\] (120) \[\widehat{VP}_{\rm s} = -\sum_{{}_{\bf p},{\bf q}}{\rm sym}(\bar{\rho}^{{}_{\rm P_{1}Q_{2 }}}_{{}_{\rm Q_{1}P_{2}}})\hat{E}_{{}_{\rm P_{1}P_{2}}}\hat{E}_{{}_{\rm Q_{1}Q_{ 2}}}\] (121) \[-\sum_{{}_{\bf p},{\bf q}}{\rm sym}(\nu^{{}_{\rm P_{1}P_{2}}}_{{}_{ \rm Q_{1}P_{4}}}S^{{}_{\rm P_{3}}}_{{}_{\rm Q_{2}}})\hat{E}_{{}_{\rm P_{1}P_{2}} }\hat{E}_{{}_{\rm P_{3}P_{4}}}\hat{E}_{{}_{\rm Q_{1}Q_{2}}}\] \[-\sum_{{}_{\bf p},{\bf q}}{\rm sym}(\nu^{{}_{\rm P_{1}Q_{2}}}_{{}_{ \rm Q_{2}}}S^{{}_{\rm P_{3}}}_{{}_{\rm Q_{3}}})\hat{E}_{{}_{\rm P_{1}P_{2}}} \hat{E}_{{}_{\rm Q_{1}Q_{2}}}\hat{E}_{{}_{\rm Q_{3}Q_{4}}}\] \[-\sum_{{}_{\bf p},{\bf q}}{\rm sym}(v^{{}_{\rm P_{1}P_{2}}}_{{}_{ \rm Q_{1}Q_{2}}}S^{{}_{\rm P_{3}}}_{{}_{\rm Q_{4}}})\hat{E}_{{}_{\rm P_{1}P_{2}} }\hat{E}_{{}_{\rm P_{3}P_{4}}}\hat{E}_{{}_{\rm Q_{1}Q_{2}}}\hat{E}_{{}_{\rm Q_{3 }Q_{4}}}\,,\]
where
\[\bar{\nu}^{{}_{\rm P_{1}Q_{2}}}_{{}_{\rm Q_{1}P_{2}}} =v^{{}_{\rm P_{1}Q_{2}}}_{{}_{\rm Q_{1}P_{2}}}+\tfrac{1}{2}\Big{[} \sum_{{}_{\rm P_{3}Q_{3}}}\!\!\left(v^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{1}Q_{3}}} S^{{}_{\rm P_{3}}}_{{}_{\rm Q_{2}}}S^{{}_{\rm P_{2}}}_{{}_{\rm Q_{3}}}+v^{{}_{\rm P _{1}P_{3}}}_{{}_{\rm Q_{1}Q_{3}}}S^{{}_{\rm P_{3}}}_{{}_{\rm Q_{3}}}S^{{}_{\rm P_{2}}} _{{}_{\rm Q_{2}}}\right)-\sum_{{}_{\rm Q_{3}}}\!\!\left(v^{{}_{\rm P_{1}Q_{2}}}_{{ \rm Q_{1}Q_{3}}}S^{{}_{\rm P_{2}}}_{{}_{\rm Q_{3}}}+v^{{}_{\rm P_{1}Q_{3}}}_{{}_{ \rm Q_{1}Q_{3}}}S^{{}_{\rm P_{2}}}_{{}_{\rm Q_{2}}}\right)-\sum_{{}_{\rm P_{3}}} \!\!\left(v^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{1}P_{2}}}S^{{}_{\rm P_{3}}}_{{}_{\rm Q _{2}}}+v^{{}_{\rm P_{1}P_{3}}}_{{}_{\rm Q_{1}P_{3}}}S^{{}_{\rm P_{2}}}_{{}_{\rm Q _{2}}}\right)\Big{]} \tag{122}\] \[\nu^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{1}P_{4}}} =v^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{1}P_{4}}}-\sum_{{}_{\rm Q_{3}}}v ^{{}_{\rm P_{1}P_{2}}}_{{}_{\rm Q_{1}Q_{3}}}S^{{}_{\rm P_{4}}}_{{}_{\rm Q_{3}}},\] (123) \[\nu^{{}_{\rm P_{1}Q_{4}}}_{{}_{\rm Q_{1}Q_{2}}} =v^{{}_{\rm P_{1}Q_{4}}}_{{}_{\rm Q_{1}Q_{2}}}-\sum_{{}_{\rm P_{3}}}v ^{{}_{\rm P_{1}P_{3}}}_{{}_{\rm Q_{1}Q_{2}}}S^{{}_{\rm P_{3}}}_{{}_{\rm Q_{4}}}, \tag{124}\]
where we emphasize that \(\bar{\nu}_{Q_{1}P_{2}}^{P_{1}Q_{2}}\) is different from \(\nu_{Q_{1}P_{2}}^{P_{1}Q_{2}}\) in the main text. The operator, \(\mathrm{sym}(\cdot)\), symmetrizes all of the tensors with respect to the monomer indices \(\mathrm{p}/\mathrm{q}\) independently using the permutational symmetries associated with the 1- and 2- RDMS in Eqs. (111)-(112). The permutational-invariant SAPPT tensors are written explicitly as:
\[\mathrm{sym}(S_{Q_{2}}^{P_{1}}S_{Q_{2}}^{\rho_{2}}) =\tfrac{1}{4}(S_{Q_{2}}^{P_{1}}S_{Q_{2}}^{P_{2}}+S_{Q_{2}}^{P_{2} }S_{Q_{1}}^{P_{1}}+S_{Q_{2}}^{P_{1}}S_{Q_{2}}^{P_{2}}+S_{Q_{1}}^{P_{2}}S_{Q_{2} }^{P_{1}})=\tfrac{1}{2}(S_{Q_{2}}^{P_{1}}S_{Q_{1}}^{P_{2}}+S_{Q_{2}}^{P_{2}}S_{ Q_{1}}^{P_{1}}),\] (113) \[\mathrm{sym}(\nu_{Q_{1}Q_{2}}^{P_{1}P_{2}}) =\tfrac{1}{4}(\bar{\nu}_{Q_{1}Q_{2}}^{P_{1}P_{2}}+\nu_{Q_{1}Q_{2}} ^{P_{1}P_{2}}+\nu_{Q_{2}Q_{1}}^{P_{1}P_{2}}+\nu_{Q_{2}Q_{1}}^{P_{1}}),\] (114) \[\mathrm{sym}(\bar{\nu}_{Q_{1}Q_{2}}^{P_{1}P_{2}}) =\tfrac{1}{4}(\bar{\nu}_{Q_{1}P_{2}}^{P_{1}Q_{2}}+\bar{\nu}_{Q_{1} P_{2}}^{P_{1}P_{2}}+\bar{\nu}_{Q_{1}P_{2}}^{P_{1}P_{2}}+\bar{\nu}_{Q_{1}P_{2}}^{P_{ 1}P_{2}}),\] (115) \[\mathrm{sym}(\nu_{Q_{1}P_{2}}^{P_{1}P_{2}}S_{Q_{2}}^{P_{3}}) =\tfrac{1}{8}(\nu_{Q_{1}P_{2}}^{P_{1}P_{2}}S_{Q_{2}}^{P_{3}}+\nu_{ Q_{1}P_{2}P_{1}}^{P_{2}P_{1}}S_{Q_{2}}^{P_{4}}+\nu_{Q_{1}P_{2}Q_{2}}^{P_{1}P_{2}}+ \nu_{Q_{1}P_{1}P_{2}}^{P_{4}P_{3}}S_{Q_{2}}^{P_{2}}\] (116) \[+\nu_{Q_{1}P_{2}P_{2}}^{P_{1}P_{3}}S_{Q_{1}}^{P_{1}}+\nu_{Q_{2}P_{ 1}P_{3}}^{P_{2}P_{1}}S_{Q_{1}}^{P_{4}}+\nu_{Q_{2}P_{2}P_{1}}^{P_{4}P_{3}}S_{Q_{ 1}}^{P_{1}}+\nu_{Q_{2}P_{1}P_{1}}^{P_{4}P_{3}}S_{Q_{1}}^{P_{2}}\] \[\mathrm{sym}(\nu_{Q_{1}Q_{2}}^{P_{1}Q_{2}}S_{Q_{3}}^{P_{2}}) =\tfrac{1}{8}(\nu_{Q_{1}Q_{2}}^{P_{1}Q_{2}}S_{Q_{3}}^{P_{3}}+\nu_{ Q_{1}Q_{1}Q_{2}}^{P_{3}}S_{Q_{4}}^{P_{4}}+\nu_{Q_{1}Q_{2}Q_{3}}^{P_{4}}S_{Q_{ 1}}^{P_{1}}+\nu_{Q_{1}Q_{3}Q_{2}}^{P_{4}P_{3}}S_{Q_{2}}^{P_{2}}\] (117) \[+\nu_{Q_{1}Q_{2}}^{P_{2}P_{3}}S_{Q_{3}}^{P_{1}}+\nu_{Q_{2}Q_{1}Q_{ 2}}^{P_{3}}S_{Q_{4}}^{P_{1}}+\nu_{Q_{3}Q_{4}}^{P_{2}Q_{3}}S_{Q_{1}}^{P_{1}}+ \nu_{Q_{4}Q_{3}}^{P_{2}Q_{3}}S_{Q_{2}}^{P_{1}}\] \[\mathrm{sym}(v_{Q_{1}Q_{2}}^{P_{1}P_{2}}S_{Q_{3}}^{P_{3}}S_{Q_{4}}^ {P_{4}}) =\tfrac{1}{16}\Big{[}v_{Q_{1}Q_{2}}^{P_{1}P_{2}}S_{Q_{3}}^{P_{3}}S_{ Q_{4}}^{P_{4}}+v_{Q_{1}Q_{2}P_{3}}^{P_{1}P_{2}}S_{Q_{4}}^{P_{3}}+\nu_{Q_{1}Q_{2}P_{3}}^{P_{ 1}P_{3}}S_{Q_{4}}^{P_{4}}S_{Q_{1}}^{P_{4}}+v_{Q_{4}Q_{3}}^{P_{1}P_{2}}S_{Q_{2}}^{ P_{3}}S_{Q_{1}}^{P_{4}}\] (118) \[+v_{Q_{1}Q_{2}}^{P_{1}P_{3}}S_{Q_{4}}^{P_{3}}S_{Q_{3}}^{P_{3}}+v_{ Q_{2}Q_{1}Q_{1}}^{P_{3}P_{3}}S_{Q_{4}}^{P_{3}}+v_{Q_{3}Q_{4}}^{P_{2}P_{1}}S_{Q_{1}}^{P_{4}}S_{Q_{ 2}}^{P_{3}}+v_{Q_{4}Q_{3}}^{P_{2}P_{1}}S_{Q_{2}}^{P_{3}}S_{Q_{1}}^{P_{3}}\] \[+v_{Q_{1}Q_{2}}^{P_{3}P_{4}}S_{Q_{3}}^{P_{3}}+v_{Q_{2}Q_{1}Q_{1}}^{P_ {3}P_{3}}S_{Q_{4}}^{P_{4}}+v_{Q_{3}Q_{4}}^{P_{3}P_{4}}S_{Q_{1}}^{P_{3}}S_{Q_{2}} ^{P_{2}}+v_{Q_{4}Q_{3}}^{P_{3}P_{4}}S_{Q_{2}}^{P_{1}P_{2}}\] \[+v_{Q_{1}Q_{2}}^{P_{3}P_{4}}S_{Q_{4}}^{P_{2}}S_{Q_{3}}^{P_{1}}+v_{ Q_{2}Q_{1}}^{P_{3}P_{3}}S_{Q_{4}}^{P_{1}}+v_{Q_{3}Q_{4}}^{P_{3}P_{4}}S_{Q_{1}}^{P_{2}}S_{Q_{ 2}}^{P_{1}}+v_{Q_{4}Q_{3}}^{P_{4}P_{3}}S_{Q_{2}}^{P_{1}}S_{Q_{1}}^{P_{2}}\] \[+v_{Q_{1}Q_{2}}^{P_{3}P_{4}}S_{Q_{4}}^{P_{3}}S_{Q_{3}}^{P_{1}}+v_{ Q_{2}Q_{1}}^{P_{3}P_{3}}S_{Q_{3}}^{P_{4}}+v_{Q_{3}Q_{4}}^{P_{3}P_{4}}S_{Q_{1}}^{P_{2}}S_{Q_{ 2}}^{P_{1}}+v_{Q_{4}Q_{3}}^{P_{4}P_{3}}S_{Q_{2}}^{P_{1}}S_{Q_{1}}^{P_{2}}\] \[+v_{Q_{1}Q_{2}}^{P_{1}P_{3}}S_{Q_{4}}^{P_{3}}S_{Q_{3}}^{P_{1}}+v_{ Q_{2}Q_{1}}^{P_{3}P_{3}}S_{Q_{4}}^{P_{4}}+v_{Q_{3}Q_{4}}^{P_{3}P_{4}}S_{Q_{1}}^{P_{2}}S_{Q_{ 2}}^{P_{1}}+v_{Q_{4}Q_{3}}^{P_{4}P_{3}}S_{Q_{2}}^{P_{1}}S_{Q_{1}}^{P_{2}}\] \[+v_{Q_{1}Q_{2}}^{P_{1}P_{3}}S_{Q_{4}}^{P_{2}P_{3}}S_{Q_{3}}^{P_{1}}+v_{ Q_{2}Q_{1}}^{P_{3}P_{3}}S_{Q_{4}}^{P_{4}}+v_{Q_{3}Q_{4}}^{P_{3}P_{4}}S_{Q_{1}}^{P_{1}}S_{Q_{2}}^{P_{ 1}}+v_{Q_{4}Q_{3}}^{P_{4}P_{3}}S_{Q_{2}}^{P_{1}}S_{Q_{1}}^{P_{2}}\] \[+v_{Q_{1}Q_{2}}^{P_{1}P_{3}}S_{Q_{4}}^{P_{3}}S_{Q_{3}}^{P_{1}}+v_{ Q_{2}Q_{1}}^
Complete basis set limit
In the following, we show that the equality, \(\widehat{VP}=\hat{V}\hat{P}\), holds in the complete basis limit for the electrostatic-exchange operator defined in the spatial orbital basis defined by Eq. (100). The proof is based on the identity,
\[\delta(\mathbf{r}-\mathbf{r}^{\prime})=\sum_{n}\phi_{n}^{*}(\mathbf{r})\phi_{n}( \mathbf{r}^{\prime}), \tag{101}\]
which only holds for complete basis sets. This identity leads to the following formula in the spatial orbital basis,
\[\sum_{p_{3}}S_{q_{4}}^{p_{3}}v_{q_{1}q_{2}}^{p_{1}p_{3}} =\sum_{p_{3}}\iint d\mathbf{r}d\mathbf{r}_{i}d\mathbf{r}_{j}\,\phi _{p_{3}}^{*}(\mathbf{r})\phi_{q_{4}}(\mathbf{r})\phi_{p_{1}}^{*}(\mathbf{r}_{i })\phi_{p_{3}}(\mathbf{r}_{i})v_{ij}\phi_{q_{1}}^{*}(\mathbf{r}_{j})\phi_{q_{2 }}(\mathbf{r}_{j}) \tag{102}\] \[=\iint d\mathbf{r}d\mathbf{r}_{i}d\mathbf{r}_{j}\,\delta(\mathbf{ r}-\mathbf{r}_{i})\phi_{q_{4}}(\mathbf{r})\phi_{p_{1}}^{*}(\mathbf{r}_{i})v_{ij} \phi_{q_{1}}^{*}(\mathbf{r}_{j})\phi_{q_{2}}(\mathbf{r}_{j})\] (103) \[=v_{q_{1}q_{2}}^{p_{1}q_{4}}, \tag{104}\]
which is used throughout the calculation. Starting with the third term of Eq. (100), we have:
\[\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}v_{q_{1}q_{2}}^{p_{1}q_{4}}S_{p_{2}}^{q_{3}}\hat{E}_{p_{1 }p_{2}}^{\sigma}\hat{E}_{q_{1}q_{2}}^{+}\hat{E}_{q_{0}q_{4}}^{\sigma} =\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\left(v_{q_{1}q_{2}}^{p_{1}q_{4}}S_{p_{2}}^{q_{3}}-\sum_{ p}v_{q_{1}q_{2}}^{p_{1}p}S_{q_{4}}^{p_{3}}S_{p_{2}}^{q_{3}}\right)E_{p_{1}p_{2}}^{ \sigma}\hat{E}_{q_{1}q_{2}}^{+}\hat{E}_{q_{3}q_{4}}^{\sigma} \tag{105}\] \[=\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\left(v_{q_{1}q_{2}}^{p_{1}q_{4}}S_{p_{2}}^{q_{3}}-v_{q_{ 1}q_{2}}^{p_{1}q_{4}}S_{p_{2}}^{q_{3}}\right)E_{p_{1}p_{2}}^{\sigma}\hat{E}_{q _{1}q_{2}}^{+}\hat{E}_{q_{3}q_{4}}^{\sigma}\] (106) \[=0. \tag{107}\]
The second term in Eq. (100) is evaluated as:
\[\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\nu_{q_{1}p_{4}}^{p_{1}p_{2}}S_{q_{2}}^{p_{3}}\hat{E}_{p_ {1}p_{2}}^{+}\hat{E}_{p_{3}p_{4}}^{\sigma}\hat{E}_{q_{1}q_{2}}^{\sigma} =\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\left(v_{q_{1}p_{4}}^{p_{1}p_{2}}S_{q_{2}}^{p_{3}}-\sum_{ q}v_{q_{1}q_{2}}^{p_{1}p_{2}}S_{q_{2}}^{p_{3}}S_{p_{4}}^{q}\right)E_{p_{1}p_{2}}^{+}E_{p_{3}p_{ 4}}^{\sigma}\hat{E}_{q_{1}q_{2}}^{\sigma} \tag{108}\] \[=\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\left(v_{q_{1}q_{2}}^{p_{1}p_{2}}S_{q_{2}}^{p_{3}}-v_{q_{ 1}p_{2}}^{p_{1}p_{2}}S_{q_{2}}^{p_{3}}\right)E_{p_{1}p_{2}}^{+}E_{p_{3}p_{4}} ^{\sigma}\hat{E}_{q_{1}q_{2}}^{\sigma}\] (109) \[=0, \tag{110}\]
while the first term is given by:
\[\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\nu_{q_{1}p_{2}}^{p_{1}q_{2}}\hat{E}_{p_{1}p_{2}}^{\sigma} \hat{E}_{q_{1}q_{2}}^{\sigma} =\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\left(v_{q_{1}p_{2}}^{p_{1}q_{2}}-\sum_{q}v_{q_{1}q_{2}}^{ p_{1}q_{2}}S_{p_{2}}^{q}-\sum_{p}v_{q_{1}p_{2}}^{p_{1}p}S_{q_{2}}^{p_{2}}+\sum_{pq}v_{q_{1}q _{2}}^{p_{1}p}S_{q_{2}}^{p_{3}}\right)E_{p_{1}p_{2}}^{\sigma}E_{q_{1}q_{2}}^{\sigma} \tag{111}\] \[=\sum_{\begin{subarray}{c}\mathbf{pq}\\ \sigma\end{subarray}}\left(v_{q_{1}p_{2}}^{p_{1}q_{2}}-v_{q_{1}p_{2}}^{p_{1}q_{ 2}}-v_{q_{1}p_{2}}^{p_{1}q_{2}}+v_{q_{1}p_{2}}^{p_{1}q_{2}}\right)E_{p_{1}p_{ 2}}^{\sigma}E_{q_{1}q_{2}}^{\sigma}\] (112) \[=0. \tag{113}\]
As a result, we are only left with the fourth term such that, \(\widehat{VP}=\hat{V}\hat{P}\). Ultimately, this explains the form of \(\widehat{VP}\) consisting of four terms compared the simpler \(\hat{V}\hat{P}\) term that one might have expected from the beginning. It is important to note that this equality does not hold exactly for the electrostatic-exchange operator defined by Eq. (108). Further studies of these properties and their relevance to the SAPT-EVE algorithm and the \(\ell_{1}\) norm is left for future work.
## Appendix D Active space formulation
In the following, we develop an active space formulation of SAPT where the orbitals are decomposed into three parts: (1) core orbitals which are assumed to be fully occupied, (2) active space orbitals where we expect the main physics/chemistry to occur, and (3) virtual orbitals which have higher energy than the active space orbitals but provide a negligible contribution to the ground-state energy. Active space methods are important for the description
of large-scale systems. In the following, we use a notation where \(i/t\) and \(j/u\) indices correspond to the core/active orbitals of monomers A and B respectively. The active space decomposition is illustrated for the electrostatic (Sec. D.1), exchange (Sec. D.2), and electrostatic-exchange (Sec. D.3) operators below. Throughout this appendix, we will use Einstein notation exclusively with the core orbital indices \((i/j)\) to reduce the verbosity of the equations. We will also ignore the symmetrization of the tensor elements but will discuss the symmetrized version of the active space expressions near the end. Real orbitals are also assumed throughout.
## 1 Electrostatics
We first consider the full-space electrostatic operator,
\[\hat{V}=\sum_{\mathbf{p}\mathbf{q}}v^{p_{1}p_{2}}_{\mathbf{q}_{1}\mathbf{q}_{2 }}\hat{E}_{p_{1}\mathbf{p}_{2}}\hat{E}_{q_{1}\mathbf{q}_{2}}=\sum_{\begin{subarray} {c}\mathbf{p}\mathbf{q}\\ \sigma\tau\end{subarray}}v^{p_{1}p_{2}}_{\mathbf{q}_{1}q_{2}}\hat{E}^{\sigma}_ {p_{1}p_{2}}\hat{E}^{\tau}_{q_{1}q_{2}} \tag{45}\]
The active space operator is derived by tracing out the inactive degrees of freedom with a Hartree-Fock-like wavefunction in the inactive space. As a result, we require decomposing the summation of each of the indices in terms of core \(i/j\) and active space \(t/u\) indices,
\[\sum_{p_{1}p_{2}\in A}\sum_{q_{1}q_{2}\in B}=\sum_{i_{1}q_{2}\in A}\sum_{j_{1 }j_{2}\in B}+\sum_{t_{1}t_{2}\in A}\sum_{j_{1}j_{2}\in B}+\sum_{i_{1}i_{2}\in A }\sum_{u_{1}u_{2}\in B}+\sum_{t_{1}t_{2}\in A}\sum_{u_{1}u_{2}\in B}. \tag{46}\]
The virtual orbitals are assumed to remain unoccupied therefore they will not contribute to the active space renormalization procedure. By tracing out the core orbitals, the active space electrostatic operator may be written as:
\[\hat{V}_{\text{active}}=v^{i_{1}i_{2}}_{j_{1}j_{2}}D^{A}_{i_{1}i_{2}}D^{B}_{j_ {1}j_{2}}+\sum_{t_{1}t_{2}\in A}\!\!v^{t_{1}t_{2}}_{j_{1}j_{2}}\hat{E}^{+}_{t_ {1}t_{2}}D^{B}_{j_{1}j_{2}}+\sum_{u_{1}u_{2}\in B}\!\!v^{i_{1}i_{2}}_{u_{2}}D^{ A}_{i_{1}i_{2}}\hat{E}^{+}_{u_{1}u_{2}}+\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
### \(\widehat{VP}_{1}\)
The first term of the full space electrostatic-exchange operator is given by Eq. (B51) which, following the same steps as above, results in the following expression for the active space operator:
\[\widehat{VP}_{1,\text{active}}=-2v^{ij}_{ji}-\sum_{\mathbf{u}}v^{iu_{2}}_{u_{1} i}\hat{E}^{+}_{u_{1}u_{2}}-\sum_{\mathbf{t}}v^{t_{1}j}_{jt_{2}}\hat{E}^{+}_{t_{ 1}t_{2}}-\sum_{\begin{subarray}{c}\mathbf{t}\mathbf{u}\\ \sigma\end{subarray}}v^{t_{1}u_{2}}_{u_{1}t_{2}}\hat{E}^{\sigma}_{t_{1}t_{2}} \hat{E}^{\sigma}_{u_{1}u_{2}}.\] (D8)
### \(\widehat{VP}_{2}\)
For the second term, Eq. (B39), we find the following active space expression:
\[\widehat{VP}_{2,\text{active}}= -(4v^{i_{1}i_{1}}_{j_{1}i_{2}}S^{i_{2}}_{j_{1}}-2v^{i_{2}i_{1}}_{ j_{1}i_{2}}S^{i_{1}}_{j_{1}})-\sum_{\mathbf{u}}(2v^{i_{1}i_{1}}_{u_{1}i_{2}}S^{i_{2}}_{ u_{2}}-v^{i_{2}i_{1}}_{u_{1}i_{2}}S^{i_{1}}_{u_{2}})\hat{E}^{+}_{u_{1}u_{2}}\] \[-\sum_{\mathbf{t}}(2v^{t_{1}t_{2}}_{j_{1}i_{1}}S^{i_{1}}_{j_{1}}+ 2v^{i_{1}i_{1}}_{j_{1}t_{2}}S^{t_{1}}_{j_{1}}-v^{t_{1}i_{1}}_{j_{2}i_{1}}S^{t_ {1}}_{j_{1}})\hat{E}^{+}_{t_{1}t_{2}}-\sum_{v^{t_{1}t_{2}}_{j_{1}t_{2}}S^{t_{ 1}}_{j_{1}}}\hat{E}^{+}_{t_{1}t_{2}}\] \[-\sum_{\mathbf{t}\mathbf{u}}v^{t_{1}t_{2}}_{u_{1}i_{1}}S^{i_{1}}_ {u_{2}}\hat{E}^{+}_{t_{1}t_{2}}\hat{E}^{+}_{u_{1}u_{2}}-\sum_{\begin{subarray} {c}\mathbf{t}\mathbf{u}\\ \sigma\end{subarray}}(2v^{i_{1}i_{1}}_{u_{1}t_{2}}S^{t_{1}}_{u_{2}}-v^{t_{1}i_{ 1}}_{u_{1}t_{2}}S^{i_{1}}_{u_{2}}-v^{i_{1}i_{1}}_{u_{1}i_{1}}S^{t_{1}}_{u_{2}} )\hat{E}^{\sigma}_{t_{1}t_{2}}\hat{E}^{\sigma}_{u_{1}u_{2}}\] \[-\sum_{\begin{subarray}{c}\mathbf{t}\mathbf{u}\\ \sigma_{1}\sigma_{2}\end{subarray}}v^{t_{1}t_{2}}_{u_{1}t_{2}}S^{t_{2}}_{u_{2} }\hat{E}^{\sigma_{1}\sigma_{2}}_{u_{1}u_{2}}.\] (D9)
### \(\widehat{VP}_{3}\)
Next, we find the following expression for the active space version of the third term, Eq. (B44):
\[\widehat{VP}_{3,\text{active}}= -(4v^{i_{1}j_{2}}_{j_{1}j_{2}}S^{i_{2}}_{i}-2v^{i_{1}j_{2}}_{j_{2 }j}S^{i_{1}}_{i_{1}})-\sum_{\mathbf{t}}(v^{t_{1}j_{2}}_{j_{1}j_{2}}S^{i_{2}}_ {t_{2}}-v^{t_{1}j_{2}}_{j_{2}j}S^{i_{1}}_{t_{2}})\hat{E}^{+}_{t_{1}t_{2}}\] \[-\sum_{\mathbf{u}}(2v^{i_{1}j_{1}}_{u_{1}u_{2}}S^{i_{1}}_{i_{1}}+ 2v^{i_{1}u_{2}}_{j_{1}j_{2}}S^{u_{1}}_{i_{1}}-v^{i_{1}u_{2}}_{u_{1}j_{1}}S^{i_ {1}}_{u_{1}})\hat{E}^{+}_{u_{1}u_{2}}-\sum_{\mathbf{u}}v^{i_{1}u_{2}}_{u_{1}u_{ 2}}S^{u_{3}}_{i_{1}}\hat{E}^{+}_{u_{1}u_{2}}\] \[-\sum_{\mathbf{t}\mathbf{u}}v^{t_{1}j_{1}}_{u_{1}u_{2}}S^{j_{1}}_ {t_{2}}\hat{E}^{+}_{t_{1}t_{2}}\hat{E}^{+}_{u_{1}u_{2}}-\sum_{\begin{subarray} {c}\mathbf{t}\\ \sigma\end{subarray}}(2v^{t_{1}u_{2}}_{j_{1}j_{2}}S^{u_{1}}_{t_{2}}-v^{t_{1}u_ {2}}_{u_{1}j_{1}}S^{i_{1}}_{t_{2}}-v^{t_{1}j_{1}}_{ju_{1}}S^{u_{2}}_{t_{2}}) \hat{E}^{\sigma}_{t_{1}t_{2}}\hat{E}^{\sigma}_{u_{1}u_{2}}\] \[-\sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma\end{subarray}}v^{t_{1}u_{1}u_{2}}_{u_{1}u_{2}}S^{u_{3}}_{t_{2}}\hat{E}^{ \sigma_{1}\sigma_{2}}_{t_{1}t_{2}}\hat{e}^{\sigma_{2}\sigma_{1}}_{\mathbf{u}}.\] (D10)
### \(\widehat{VP}_{4}\)
Finally, we find the following active space operator for the fourth term, Eq. (B50):
\[\widehat{VP}_{4,\text{active}}= -(8v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{2}}S_{j_{2}}^{j_{2}}-4v _{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{2}}^{i_{2}}S_{j_{1}}^{i_{1}}-4v_{j_{1}j_{2}}^{i _{1}i_{1}}S_{j_{1}}^{i_{2}}S_{j_{2}}^{i_{2}}+2v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j _{1}}^{i_{1}}S_{j_{2}}^{i_{1}})\] \[-\sum_{\textbf{u}}(4v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{j_{1}}^{i_{2}}S_ {j_{1}}^{i_{2}}-2v_{u_{1}u_{2}}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}}+4v_{j_{1}j_{2}}^{ i_{1}i_{1}}S_{j_{2}}^{i_{2}}S_{u_{1}}^{i_{2}}-2v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{ 2}}^{i_{2}}S_{u_{1}}^{i_{1}}\] \[-2v_{u_{1}j_{1}}^{i_{1}i_{2}}S_{u_{2}}^{i_{2}}S_{j_{1}}^{i_{1}}+v _{u_{1}j_{1}}^{i_{1}i_{2}}S_{u_{2}}^{i_{1}}-2v_{j_{1}j_{1}}^{i_{1}i_{1}}S_{j_{ 1}}^{i_{2}}S_{u_{2}}^{i_{2}}+v_{j_{1}i_{1}}^{i_{1}i_{2}}S_{j_{1}}^{i_{2}}S_{u_{ 2}}^{i_{1}})\widehat{E}_{u_{1}u_{2}}^{+}\] \[-\sum_{\textbf{u}}(2v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{u_{2}}^{i_{2}}S_ {u_{3}}^{i_{2}}-v_{u_{1}u_{2}}^{i_{1}i_{2}}S_{u_{4}}^{i_{1}})\widehat{E}_{u_{1} u_{2}}^{+}\] \[-\sum_{\textbf{u}}(4v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{2}}^{i_{1}}S_ {j_{2}}^{i_{1}}-2v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}}S_{j_{2}}^{i_{1}}+ 4v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}S_{j_{2}}^{i_{2}}-2v_{j_{1}j_{2}}^ {i_{1}i_{1}}S_{j_{1}}^{i_{1}}S_{j_{2}}^{i_{2}}\] \[-2v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}S_{j_{2}}^{i_{2}}+v _{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}S_{j_{2}}^{i_{2}}-2v_{j_{1}j_{2}}^{ i_{1}i_{1}}S_{j_{2}}^{i_{2}}S_{j_{2}}^{i_{1}}+v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{ 2}}^{i_{2}}S_{j_{1}}^{i_{1}}\widehat{E}_{t_{1}t_{2}}^{+}\] \[-\sum_{\textbf{t}}(2v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{2}}S_{t_{4}j_ {2}}-v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{t_{3}}S_{t_{4}j_{2}})\widehat{E}_{t_{1}t_{2} }^{+}\] \[-\sum_{\textbf{u}}(2v_{u_{1}u_{2}}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}}S_ {j_{1}}^{i_{1}}+2v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{u_{1}}^{i_{1}}S_{u_{2}}^{i_{1}}- v_{u_{1}i_{1}j_{1}}^{i_{1}i_{2}}S_{u_{2}}^{i_{1}}S_{j_{1}}^{i_{1}}-v_{j_{1}i_{1}}^{i_{1}i_{ 2}}S_{j_{1}}^{i_{1}}S_{u_{2}}^{i_{1}}+2v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{j_{1}}^{i_{ 1}}S_{j_{1}}^{i_{2}}\] \[-v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{j_{1}}^{i_{1}}S_{j_{2}}^{i_{1}}- v_{u_{1}u_{2}}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}}S_{j_{1}}^{i_{1}}\hat{E}_{t_{1}t_{2}}^{+}\hat{E}_{u_ {1}u_{2}}^{+}\] \[-\sum_{\textbf{u}}(4v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}S_ {u_{2}}^{i_{1}}-2v_{u_{1}j_{1}}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_{j}^{i_{2}}-2v_{ j_{2}}^{i_{1}i_{1}}S_{j}^{i_{1}}S_{u_{1}}^{i_{2}}-2v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{u_{2}}^{i_{ 1}}S_{u_{2}}^{i_{1}}-2v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_{u_{1}}^{i _{2}}-2v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_{u_{1}}^{i_{2}}-2v_{j_{1}j_{2 }}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_{u_{2}}^{i_{1}}\] \[+v_{u_{1}j_{1}}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_{j_{2}}^{i_{1}}+v _{j_{1}i_{1}}^{i_{1}i_{1}}S_{j_{1}}^{i_{1}}S_{u_{2}}^{i_{2}}+v_{u_{1}j_{1}}^{i_{ 1}i_{2}}S_{u_{2}}^{i_{1}}S_{j_{1}}^{i_{1}}+v_{j_{1}j_{2}}^{i_{2}}S_{j_{1}}^{i_{1}}S _{u_{1}}^{i_{1}}\hat{E}_{\textbf{t}}^{\sigma}\widehat{E}_{\textbf{u}}^{\sigma}\] \[-\sum_{\textbf{u}}(2v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_ {u_{3}}^{i_{1}}-v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_{u_{3}}^{i_{1}}-v_{ u_{1}u_{2}}^{i_{1}i_{2}}S_{u_{3}}^{i_{1}}-v_{u_{1}u_{2}}^{i_{1}i_{2}}S_{u_{4}}^{i_{1}}S_{u_{3}}^{i_{ 1}})\widehat{E}_{\textbf{t}}^{\sigma}\widehat{E}_{\textbf{u}}^{\sigma}\] \[-\sum_{\textbf{u}}(2v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{u_{4}}^{i_{1}}S_{u_{3}}^ {i_{2}}-v_{u_{1}i_{1}}^{i_{1}i_{1}}S_{u_{2}}^{i_{1}}S_{u_{3}}^{i_{1}}-v_{u_{1}u_{2}}^{i_{ 1}i_{2}}S_{u_{4}}^{i_{1}}S_{u_{3}}^{i_{1}})\widehat{E}_{\textbf{t}}^{\sigma} \widehat{E}_{\textbf{u}}^{\sigma}\widehat{E}_{\textbf{u}}^{\sigma}\] \[-\sum_{\textbf{u}}(2v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{u_{4}}^{i_{1}}S_{u_{3}}^ {i_{2}}-v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{
where
\[\widetilde{VP}_{A,\text{active}}= \sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma\end{subarray}}\tilde{I}_{t_{1}t_{2}}^{(A)}\hat{E}_{t_{1}t_{2}}^{ \sigma}+\sum_{\begin{subarray}{c}\mathbf{t}\\ \mathbf{t}\\ \sigma\end{subarray}}\tilde{v}_{t_{3}t_{4}}^{t_{1}t_{2}}\hat{e}_{\mathbf{t} }^{\sigma\tau},\] (D13) \[\widetilde{VP}_{B,\text{active}}= \sum_{\begin{subarray}{c}\mathbf{u}\\ \sigma\end{subarray}}\tilde{I}_{u_{1}u_{2}}^{(B)}\hat{E}_{u_{1}u_{2}}^{ \sigma}+\sum_{\begin{subarray}{c}\mathbf{u}\\ \sigma\end{subarray}}\tilde{v}_{u_{3}u_{4}}^{u_{1}u_{2}}\hat{e}_{\mathbf{u} }^{\sigma\tau},\] (D14) \[\widetilde{VP}_{1m,\text{active}}= \sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma\end{subarray}}\tilde{v}_{u_{1}u_{2}}^{t_{1}t_{2}}\hat{E}_{t_{1}t_{2}}^{ \downarrow}\hat{E}_{u_{1}u_{2}}^{\downarrow},\] (D15) \[\widetilde{VP}_{1\ell,\text{active}}= \sum_{\begin{subarray}{c}\mathbf{t}\\ \mathbf{u}\\ \sigma\end{subarray}}\tilde{v}_{u_{1}t_{2}}^{t_{1}u_{2}}\hat{E}_{t_{1}t_{2}}^{ \sigma}\hat{E}_{u_{1}u_{2}}^{\sigma},\] (D16) \[\widetilde{VP}_{2,\text{active}}= \sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma\end{subarray}}\tilde{v}_{u_{1}t_{4}}^{t_{1}t_{2}}\hat{v}_{u_{2}}^{t_{3}} \hat{e}_{\mathbf{t}}^{\sigma_{1}\sigma_{2}}\hat{E}_{\mathbf{u}}^{\sigma_{2}},\] (D17) \[\widetilde{VP}_{3,\text{active}}= \sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma_{1}\sigma_{2}\end{subarray}}\tilde{v}_{u_{1}u_{2}}^{t_{1}u_{4}}\hat{S}_{ u_{3}}^{t_{2}}\hat{E}_{\mathbf{t}}^{\sigma_{2}}\hat{E}_{\mathbf{u}}^{\sigma_{1} \sigma_{2}},\] (D18) \[\widetilde{VP}_{4,\text{active}}= \sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma_{1}\sigma_{2}\sigma_{3}\end{subarray}}\tilde{v}_{u_{1}u_{2}}^{t_{1}t_{2 }}\hat{p}_{u_{3}u_{4}}^{t_{4}t_{4}}\;\hat{e}_{\mathbf{t}}^{\sigma_{1}\sigma_ {2}}\hat{e}_{\mathbf{u}}^{\sigma_{3}\sigma_{2}},\] (D19)
with renormalized active space tensor coefficients defined as:
\[VP_{0,\text{active}}= -2v_{j_{1}i_{1}}^{i_{1}i_{1}}-(4v_{j_{1}i_{2}}^{i_{1}i_{1}}S_{j_{ 1}}^{i_{2}}-2v_{j_{1}i_{2}}^{i_{2}i_{1}}S_{j_{1}}^{i_{1}})-(4v_{j_{1}j_{2}}^{i_ {1}i_{2}}S_{j_{2}}^{i_{1}}-2v_{j_{2}j}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}})\] (D20) \[-(8v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{2}}S_{j_{2}}^{i_{2}}-4 v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{2}}^{i_{1}}S_{j_{2}}^{i_{1}}-4v_{j_{1}j_{1}}^{i_{1}i_{ 2}}S_{j_{2}}^{i_{2}}S_{j_{2}}^{i_{2}}+2v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{2}}^{i_{ 1}}S_{j_{2}}^{i_{1}})\] (D21) \[\tilde{I}_{t_{1}t_{2}}^{(A)}= -v_{t_{1}j_{1}}^{i_{1}i_{1}}-(2v_{j_{1}i_{1}}^{j_{1}i_{1}}S_{j_{ 1}}^{i_{1}}+2v_{j_{1}i_{2}}^{i_{1}i_{1}}S_{j_{1}}^{i_{1}}-v_{j_{1}i_{2}}^{i_{1}i _{1}}S_{j_{1}}^{i_{1}}-v_{j_{1}i_{2}}^{i_{1}i_{1}}S_{j_{1}}^{i_{1}})-(v_{j_{1}j_{ 2}}^{i_{1}i_{2}}S_{j_{2}}^{i_{2}}-v_{j_{2}j}^{i_{1}i_{2}}S_{j_{2}}^{i_{2}})\] (D22) \[-(4v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{2}}^{i_{1}}S_{j_{1}}^{i_{1}}-2 v_{j_{1}i_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}-2v_{j_{1}i_{2}}^{i_{1}i_{1}}S_{j_{1}}^{i_{ 2}}-2v_{j_{1}i_{1}i_{1}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}-2v_{j_{1}j_{2}}^{i_{1}i _{1}}S_{j_{2}}^{i_{1}}-2v_{j_{1}j_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}+v_{j_{1}j _{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}S_{j_{2}}^{i_{1}}-2v_{j_{1}j_{2}}^{i_{1}i_{ 1}}S_{j_{2}}^{i_{1}}S_{j_{2}}^{i_{1}}+v_{j_{1}j_{2}}^{i_{1}i_{2}}S_{j_{1}}^{i_{ 1}})\] \[\tilde{I}_{t_{1}u_{2}}^{(B)}= -v_{u_{1}i_{1}}^{i_{1}u_{2}}-(2v_{j_{1}i_{1}}^{i_{1}i_{1}}S_{j_{ 1}}^{i_{2}}-v_{u_{1}i_{2}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}-2v_{j_{1}i_{2}}^{i_{1}i _{1}}S_{j_{1}}^{i_{1}}+2v_{j_{1}i_{2}}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}}-v_{j_{1}i_{ 1}}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}}-v_{j_{1}i_{1}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}})\] (D23) \[-(4v_{u_{1}u_{2}}^{i_{1}i_{1}}S_{j_{1}}^{i_{2}}S_{j_{2}}^{i_{1}}-2 v_{u_{1}i_{2}}^{i_{1}i_{2}}S_{j_{1}}^{i_{1}}S_{j_{1}}^{i_{1}}-2v_{j_{1}i_{2}}^{i_{1}i _{2}}S_{j_{2}}^{i_{2}}S_{j_{1}}^{i_{1}}-2v_{u_{1}i_{1}}^{i_{1}i_{1}}S_{j_{2}}^{i_{ 2}}S_{j_{1}}^{i_{1}}+v_{u_{1}j_{1}}^{i_{1}i_{2}}S_{j_{1}}^{i_{2}}-2v_{j_{1}i_{1 }}^{i_{1}i_{1}}S_{j_{1}}^{i_{2}}S_{j_{2}}^{i_{1}}-2v_{j_{1}i_{1}i_{1}}^{i_{1}i_{1}}S_{j_{ 1}}^{i_{2}}S_{j_{1}}^{i_{1}}-2v_{j_{1}i_{1}}^{i_{1}i_{1}}S_{j_{1}}^{i_{2}}S_{j_{ 1}}^{i_{1}})\] (D24) \[\tilde{v}_{u_{1}u_{2}}^{t_{1}t_{2}}= -v_{u_{1}i_{1}}^{t_{1}t_{2}}S_{j_{1}}^{i_{1}}+v_{j_{1}i_{2}}^{i_{1}i_{2}}S_{j_{ 1}}^{i_{1}}S_{j_{2}}^{i_{1}}-2v_{j_{1}i_{1}}^{i_{1}i_{1}}S_{j_{2}}^{i_{1}}S_{j_{1}}^{i_{ 1}}-2v_{j_
Converting to chemist notation, we obtain:
\[\widehat{VP}_{A,\text{active}} =\sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma\end{subarray}}(\hat{I}^{(A)}_{t_{1}t_{2}}-\sum_{t}\tilde{v}^{t_{1}t_{ 2}}_{t_{2}})\hat{E}^{\sigma}_{t_{1}t_{2}}+\sum_{\begin{subarray}{c}\mathbf{t} \\ \sigma\tau\end{subarray}}\tilde{\nu}^{t_{1}t_{2}}_{t_{3}t_{4}}\hat{E}^{\sigma}_{ t_{1}t_{2}}\hat{E}^{\tau}_{t_{3}t_{4}}, \tag{113}\] \[\widehat{VP}_{B,\text{active}} =\sum_{\begin{subarray}{c}\mathbf{u}\\ \sigma\end{subarray}}(\tilde{I}^{(B)}_{u_{1}u_{2}}-\sum_{u}\tilde{v}^{u_{1}u_{ 2}}_{uu_{2}})\hat{E}^{\sigma}_{u_{1}u_{2}}+\sum_{\begin{subarray}{c}\mathbf{u} \\ \sigma\tau\end{subarray}}\tilde{\nu}^{u_{1}u_{2}}_{u_{3}u_{4}}\hat{E}^{\sigma}_{ u_{1}u_{2}}\hat{E}^{\tau}_{u_{3}u_{4}},\] (114) \[\widehat{VP}_{1m,\text{active}} =\sum_{\begin{subarray}{c}\mathbf{t}\\ \mathbf{u}\end{subarray}}\tilde{\nu}^{t_{1}t_{2}}_{u_{1}u_{2}}\hat{E}^{\tau}_{ t_{1}t_{2}}\hat{E}^{+}_{u_{1}u_{2}},\] (115) \[\widehat{VP}_{1\ell,\text{active}} =\sum_{\begin{subarray}{c}\mathbf{t}\\ \mathbf{u}\end{subarray}}\tilde{\nu}^{t_{1}u_{2}}_{u_{1}t_{2}}\hat{E}^{\sigma}_ {t_{1}t_{2}}\hat{E}^{\sigma}_{u_{1}u_{2}},\] (116) \[\widehat{VP}_{2,\text{active}} =\sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma_{1}\sigma_{2}\end{subarray}}\tilde{\nu}^{t_{1}t_{2}}_{u_{1}t_{2}}\hat{ T}^{\sigma_{1}}_{t_{1}t_{2}}\hat{E}^{\sigma_{2}}_{t_{3}t_{4}}\hat{E}^{\sigma_{2}}_{ u_{1}u_{2}},\] (117) \[\widehat{VP}_{3,\text{active}} =\sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma_{1}\sigma_{2}\end{subarray}}\tilde{\nu}^{t_{1}u_{4}}_{u_{1}u_{2}}\hat{ S}^{t_{3}}_{u_{3}}\hat{E}^{\sigma_{2}}_{t_{1}t_{2}}\hat{E}^{\sigma_{1}}_{u_{1}u_{2}} \hat{E}^{\sigma_{2}}_{u_{3}u_{4}},\] (118) \[\widehat{VP}_{4,\text{active}} =\sum_{\begin{subarray}{c}\mathbf{t}\\ \sigma_{1}\sigma_{2}\end{subarray}}\tilde{\nu}^{t_{1}t_{2}}_{u_{1}u_{2}}\tilde {\nu}^{t_{3}t_{4}}_{u_{3}u_{4}}\;\bar{E}^{\sigma_{1}}_{t_{1}t_{2}}\hat{E}^{ \sigma_{2}}_{t_{3}t_{4}}\hat{E}^{\sigma_{3}}_{u_{1}u_{2}}\hat{E}^{\sigma_{2}} _{u_{3}u_{4}}, \tag{119}\]
where the active space tensor coefficients in the chemist notation are given by,
\[\tilde{\nu}^{t_{1}t_{2}}_{u_{1}u_{2}} =\tilde{v}^{t_{1}t_{2}}_{u_{1}u_{2}}-\sum_{t_{3}}\tilde{v}^{t_{1} t_{3}}_{u_{1}u_{2}}S^{t_{3}}_{j_{1}}S^{t_{2}}_{j_{1}}-\sum_{u_{3}}\tilde{v}^{t_{1} t_{2}}_{u_{1}u_{3}}S^{i_{1}}_{u_{3}}S^{i_{1}}_{u_{2}}, \tag{120}\] \[\tilde{\nu}^{t_{1}u_{2}}_{u_{1}t_{2}} =\tilde{v}^{t_{1}u_{2}}_{u_{1}t_{2}}+\sum_{t_{3}u_{3}}\tilde{v}^{t _{1}t_{3}}_{u_{1}u_{3}}S^{t_{3}}_{u_{2}}S^{t_{2}}_{u_{2}}-\sum_{u_{3}}\tilde{v}^ {t_{1}u_{2}}_{u_{1}u_{3}}S^{t_{2}}_{u_{3}}-\sum_{t_{3}}\tilde{v}^{t_{1}t_{3}}_{ u_{1}t_{2}}S^{t_{3}}_{u_{2}},\] (121) \[\tilde{\nu}^{t_{1}t_{2}}_{u_{1}t_{4}} =\tilde{v}^{t_{1}t_{2}}_{u_{1}t_{4}}-\sum_{u_{3}}\tilde{v}^{t_{1} t_{2}}_{u_{1}u_{3}}S^{t_{4}}_{u_{3}}\,,\] (122) \[\tilde{\nu}^{t_{1}u_{4}}_{u_{1}u_{2}} =\tilde{v}^{t_{1}u_{4}}_{u_{1}u_{2}}-\sum_{t_{3}}\tilde{v}^{t_{1} t_{3}}_{u_{1}u_{2}}S^{t_{3}}_{u_{4}}\,. \tag{123}\]
The intra-monomer tensors under the chemist notation remain equivalent, i.e \(\tilde{\nu}^{t_{1}t_{2}}_{t_{3}t_{4}}=\tilde{v}^{t_{1}t_{2}}_{t_{3}t_{4}}\) and \(\tilde{\nu}^{u_{1}u_{2}}_{u_{3}u_{4}}=\tilde{v}^{u_{1}u_{2}}_{u_{3}u_{4}}\).
## Appendix E SAPT-EVE algorithm
To ensure a first order interaction energy estimate to chemical accuracy, the target accuracy of the three observables \(\hat{F}=\{\hat{V},\hat{P},\widehat{VP}\}\) must be optimally chosen to simultaneously satisfy the desired accuracy and minimize computational costs. That is, we must find a target precision \(\varepsilon_{F}\) for every observable \(\hat{F}\), such that they satisfy one over all precision \(\varepsilon_{\text{targ}}\) while minimizing the cost of the three prime phase estimation algorithms. The resource cost minimization can be formulated as the following optimization problem,
\[\min_{\varepsilon_{V},\varepsilon_{VP},\varepsilon_{P}}\left(\frac {\lambda_{V}}{\varepsilon_{V}}+\frac{\lambda_{VP}}{\varepsilon_{VP}}+\frac{ \lambda_{P}}{\varepsilon_{P}}\right) \tag{124}\] \[\text{s.t.}\quad(1+\lambda_{P})\varepsilon_{V}+\varepsilon_{VP}+ \lambda_{V}\varepsilon_{P}=\varepsilon_{\text{targ}}\,. \tag{125}\]
which can be solved with the Lagrange method yielding
\[\left\{\varepsilon_{V},\,\varepsilon_{VP},\,\varepsilon_{P}\right\}=\frac{ \varepsilon_{\text{targ}}}{\sqrt{(1+\lambda_{P})\lambda_{V}}+\sqrt{\lambda_{VP}}+ \sqrt{\lambda_{V}\lambda_{P}}}\left\{\sqrt{\frac{\lambda_{V}}{1+\lambda_{P}}},\, \sqrt{\lambda_{VP}},\,\sqrt{\frac{\lambda_{P}}{\lambda_{V}}}\right\}\,. \tag{126}\]
This solution is only optimal if given no further information about the expected values of \(\langle\hat{V}\rangle\), \(\langle\hat{P}\rangle\) and \(\langle\widehat{VP}\rangle\). We have also upper bounded expectation values \(\langle F\rangle\) with their respective \(\ell_{1}\) norms \(\lambda_{F}\) in Eq. (125), which is a very loose bound. In practice, one will certainly be able to relax the target accuracies by bootstrapping Eq. (125) with low accuracy estimates \(\langle F\rangle_{\text{low}}\), replacing \(\lambda_{F}\mapsto\langle F\rangle_{\text{low}}\).
For every observable \(\hat{F}\) we have to adjust the polynomial degrees of SAPT-EVE's inner phase estimation routines \(\mathsf{iQPE_{A}}\) and \(\mathsf{iQPE_{B}}\) according to the target errors \(\varepsilon_{F}\). While we have demonstrated this procedure in Ref. [24] for QSP-EVE, it is not immediately clear how the discretization errors of two inner phase estimation routines would be dealt with in SAPT-EVE. Let us quickly summarize the estimation procedure behind QSP-EVE and SAPT-EVE. The iterate of Figure 2 is the product of two reflections \(\mathcal{R}_{\pi}\) and \(\mathcal{R}_{\tau}\), each of which has a structure \(\mathcal{R}_{\mathfrak{X}}=1-2\hat{\mathfrak{X}}\) with some arbitrary-rank projectors \(\hat{\mathfrak{X}}=\hat{\pi}\), \(\hat{\tau}\). The product \(\mathcal{R}_{\tau}\mathcal{R}_{\pi}\) now has the eigenvalues \(\exp(\pm i2\arccos w_{k})\), where \(w_{k}\) are the singular values of the product of projectors \(\hat{\pi}\cdot\hat{\tau}\). The idea behind SAPT-EVE is now that one of the singular values is a function of \(\langle\hat{F}\rangle\). In a world without discretization errors, the circuits we employ would fix the reference states exactly:
\[\mathsf{iQPE_{q}^{\dagger}\,Ref_{q}\,iQPE_{q}}\approx 1-2|Q_{q}\rangle\! \langle Q_{q}|_{\mathsf{sim}\,q,\mathsf{enc}[H_{q}]}\otimes|\mathbf{0}\rangle\! \langle\mathbf{0}|_{\mathsf{phase}\,q}\,-\ldots \tag{101}\]
for \(q=\mathsf{A},\mathsf{B}\) labeling routines associated with the two subsystems, and \(|Q_{\mathsf{A}}\rangle\), \(|Q_{\mathsf{B}}\rangle\) are their respective subitization ground states. Following the state on the right-hand side of Eq. (101) are other terms orthogonal to the all zero state \(|\mathbf{0}\rangle\) on the phase register of the corresponding monomer \(q=\mathsf{A},\mathsf{B}\). Considering the leading order contribution of the discretization errors, we introduce the subitization states \(|\mathcal{E}_{q}\rangle\) associated with the first excited states in the respective subsystem \(q\). Note that \(|Q_{\mathsf{A}}\rangle\) and \(|\mathcal{E}_{\mathsf{A}}\rangle\) are associated with Hamiltonian \(H_{A}\), they are generally formed with respect to different eigenenergies than \(|Q_{\mathsf{B}}\rangle\) and \(|\mathcal{E}_{\mathsf{B}}\rangle\), which are associated with \(H_{B}\). For both subsystems we introduce the projectors \(\varrho^{\mathsf{A}}(E)_{\mathsf{phase}\,\mathsf{A}}\) and \(\varrho^{\mathsf{B}}(E)_{\mathsf{phase}\,\mathsf{B}}\) associated with either excited state \(E=\mathcal{E}\) or ground state \(E=Q\). We find
\[\hat{\pi} =\sum_{E=Q,\mathcal{E}}\left(\bigotimes_{q=\mathsf{A},\mathsf{B} }|E_{q}\rangle\!\langle E_{q}|_{\mathsf{sim}\,q,\mathsf{enc}[H_{q}]}\otimes \varrho^{\mathsf{q}}(E)_{\mathsf{phase}\,q}\right)\otimes|\mathbf{0}\rangle\! \langle\mathbf{0}|_{\mathsf{enc}[F]}\,, \tag{102}\] \[\hat{\tau} =\frac{1}{2}\left(1-\mathcal{B}[\hat{F}]\right)_{\mathsf{sim}\, \mathsf{sim}\,\mathsf{sim}\,\mathsf{sim}\,\mathsf{B},\,\mathsf{enc}[F]}\otimes \bigotimes_{q=\mathsf{A},\mathsf{B}}|\mathbf{0}\rangle\!\langle\mathbf{0}|_{\mathsf{ enc}[H_{q}]}\,. \tag{103}\]
To investigate the product \(\hat{\pi}\cdot\hat{\tau}\), we consider the singular value decomposition
\[\varrho^{\mathsf{q}}(Q)\cdot\varrho^{\mathsf{q}}(\mathcal{E})=\sum_{j=1}^{r} \Omega^{(q)}|\xi_{Q,q,j}\rangle\!\langle\xi_{\mathcal{E},q,j}| \tag{104}\]
where \(\Omega^{(q)}\) are singular values, \(|\xi_{Q,q,j}\rangle\) and \(|\xi_{\mathcal{E},q,j}\rangle\) are singular vectors and where the sum over \(j\) is in the range from \(1\) to \(r\), the rank of \(\varrho^{\mathsf{q}}(Q)\cdot\varrho^{\mathsf{q}}(\mathcal{E})\). We will now learn the singular values \(w_{k}\) of \(\hat{\pi}\cdot\hat{\tau}\) by computing the eigenvalues \((w_{k})^{2}\) of \(\hat{\pi}\cdot\hat{\tau}\cdot\hat{\pi}\), which has a block-diagonal form due to the singular value decomposition in Eq. (104):
\[\hat{\pi}\cdot\hat{\tau}\cdot\hat{\pi}=\bigoplus_{i,j}\frac{1}{8}\left[\begin{array} []{cccc}1-F_{00}&-F_{01}\Omega_{j}^{(\mathsf{B})}&-F_{02}\Omega_{i}^{(\mathsf{ A})}&-F_{03}\Omega_{i}^{(\mathsf{A})}\Omega_{j}^{(\mathsf{B})}\\ -F_{10}\Omega_{j}^{(\mathsf{B})}&1-F_{11}&-F_{12}\Omega_{i}^{(\mathsf{A})} \Omega_{j}^{(\mathsf{B})}&-F_{13}\Omega_{i}^{(\mathsf{A})}\\ -F_{20}\Omega_{i}^{(\mathsf{A})}&-F_{21}\Omega_{i}^{(\mathsf{A})}\Omega_{j}^{( \mathsf{B})}&1-F_{22}&-F_{23}\Omega_{j}^{(\mathsf{B})}\\ -F_{30}\Omega_{i}^{(\mathsf{A})}\Omega_{j}^{(\mathsf{B})}&-F_{31}\Omega_{i}^{( \mathsf{A})}&-F_{32}\Omega_{j}^{(\mathsf{B})}&1-F_{33}\end{array}\right]_{i_{ 1}j_{1}}\,, \tag{105}\]
where \(F_{lm}\) are some matrix elements of the observable \(\hat{F}\), and the block matrices \([\cdots]_{i_{1}j_{1}}\) denote operators with respect to the bases states
\[|Q_{\mathsf{A}}\rangle_{\mathsf{sim}\,\mathsf{A,\,enc}[H_{A}]} \otimes|\xi_{Q,\mathsf{A},i}\rangle_{\mathsf{phase}\,\mathsf{A}}\otimes|Q_{ \mathsf{B}}\rangle_{\mathsf{sim}\,\mathsf{B,\,enc}[H_{B}]}\otimes|\xi_{Q,\mathsf{ B},j}\rangle_{\mathsf{phase}\,\mathsf{B}}\otimes|\mathbf{0}\rangle_{\mathsf{ enc}[F]}, \tag{106}\] \[|Q_{\mathsf{A}}\rangle_{\mathsf{sim}\,\mathsf{A,\,enc}[H_{A}]} \otimes|\xi_{Q,\mathsf{A},i}\rangle_{\mathsf{phase}\,\mathsf{A}}\otimes|\xi_{ \mathsf{B}}\rangle_{\mathsf{sim}\,\mathsf{B,\,enc}[H_{B}]}\otimes|\xi_{ \mathcal{E},\mathsf{B},j}\rangle_{\mathsf{phase}\,\mathsf{B}}\otimes|\mathbf{0} \rangle_{\mathsf{enc}[F]},\] (107) \[|\mathcal{E}_{\mathsf{A}}\rangle_{\mathsf{sim}\,\mathsf{A,\,enc}[H_{A}]} \otimes|\xi_{\mathcal{E},\mathsf{A},i}\rangle_{\mathsf{phase}\,\mathsf{A}} \otimes|\mathcal{E}_{\mathsf{B}}\rangle_{\mathsf{sim}\,\mathsf{B,\,enc}[H_{B}]} \otimes|\xi_{\mathcal{E},\mathsf{B},j}\rangle_{\mathsf{phase}\,\mathsf{B}} \otimes|\mathbf{0}\rangle_{\mathsf{enc}[F]}, \tag{108}\]
For vanishing singular values, \(\Omega_{j}^{(q)}=0\), we find that there is one solution of Eq. (105) with eigenvalue \(w_{k}^{2}=(1-F_{00})/8\). Since \(F_{00}\) is the matrix element associated with the ground states of both subsystems, \(w_{k}^{2}\) is the solution with the eigenphase \(\pm 2\arccos\sqrt{(1-\langle F\rangle)/8}\). For at least one \(\Omega_{j}^{(q)}\neq 0\), the block matrices \([\cdots]_{i_{1}j_{1}}\) must be diagonalized. The good solution is the one whose eigenvalue \(w_{k}^{2}\) is closest to \((1-F_{00})/8\). Assuming that the contamination of this solution with contributions of excited states will increase, we replace individual \(\Omega_{j}^{(\mathsf{A})}\) and \(\Omega_{j}^{(\mathsf{B})}\) in Eq. (105) with one
\[\Omega=\max_{j}\max_{g=\mathsf{A},\mathsf{B}}\Omega_{j}^{(q)}\,. \tag{109}\]
Perturbation theory in \(\Omega\) informs us that the deviation of the solution from \((1-F_{00})/8\) is \(O(\Omega)\), only if the diagonal elements of \(F_{j_{1}j_{2}}\) are all equal. Assuming that the structure of \(F_{i_{1}j_{1}}\) is as malicious as possible, we set all \(F_{j_{1}j_{2}}\) to be equal. Neglecting quadratic contributions \(\Omega^{2}\) due to their size, we get a deviation of \(w_{\tilde{k}}^{2}\) from \((1-F_{00})/8\) that is entirely linear in \(\Omega\). Note that for \(\Omega=1\), we would bound \(0<w_{\tilde{k}}^{2}<1/4\) due to \(||\tilde{F}||\leq 1\). This means that we can bound the error of estimated observable \(F_{\rm est}\) and actual observable \(\langle\hat{F}\rangle\) by
\[\left|F_{\rm est}-\langle\hat{F}\rangle\right|\leq 2\Omega\,. \tag{101}\]
We therefore need to fortify both \({\sf iQPE}_{q}\) routines equally well against errors using quantum signal processing (QSP). Given the QSP routines, the values of \(\Omega_{i}^{(\mathsf{A})}\) and \(\Omega_{j}^{(\mathsf{B})}\) can be obtained with a numerical procedure outlined in [24]. The success probability of this routine, if one prepares an initial state of
\[\left(\bigotimes_{q=\mathsf{A},\mathsf{B}}|Q_{q}\rangle_{\mathsf{sim}\,\,{ \sf enc}[H_{q}]}\otimes|\mathbf{0}\rangle_{\mathsf{phase}\,q}\right)\otimes| \mathbf{0}\rangle_{\mathsf{enc}[F]} \tag{102}\]
is between \(\langle\mathbf{0}|\varrho^{\mathsf{A}}(Q)\otimes\varrho^{\mathsf{B}}(Q)| \mathbf{0}\rangle/4\) and \(\langle\mathbf{0}|\varrho^{\mathsf{A}}(Q)\otimes\varrho^{\mathsf{B}}(Q)| \mathbf{0}\rangle\).
## Appendix F Sapt Operator Encoding
In the following, we present the _sparse_ and _tensor factorization_ encoding methods that allow us to design a full algorithm for SAPT observable estimation. To this end, we first re-write all of the SAPT operators in terms of Majorana operators as they provide a clear and direct decomposition in terms of self-inverse operators required for block encoding.
## Appendix A Majorana Representation
In the Majorana representation, the fermionic operators \(\hat{\gamma}_{r,0}\) and \(\hat{\gamma}_{p,1}\) for monomer A are defined as:
\[\hat{\gamma}_{r,0}=\hat{a}_{r}+\hat{a}_{p}^{\dagger}\ \,\ \ \hat{\gamma}_{p,1}=-i(\hat{a}_{r}-\hat{a}_{p}^{ \dagger}), \tag{103}\]
satisfying the properties,
\[\{\hat{\gamma}_{r,i}\,\hat{\gamma}_{r^{\prime},j}\}=2\delta_{rp^{\prime}} \delta_{ij}\mathbb{1}\ \,\ \ \hat{\gamma}_{r,i}^{\dagger}=\hat{\gamma}_{r,i}\ \,\ \ \hat{\gamma}_{r,i}^{2}=\mathbb{1}. \tag{104}\]
For monomer B, the Majorana operators \(\hat{\gamma}_{\rm q,0}\) and \(\hat{\gamma}_{\rm q,1}\) are defined as:
\[\hat{\gamma}_{\rm q,0}=\hat{b}_{\rm q}+\hat{b}_{\rm q}^{\dagger}\ \,\ \ \hat{\gamma}_{\rm q,1}=-i(\hat{b}_{\rm q}-\hat{b}_{\rm q}^{\dagger}), \tag{105}\]
while also satisfying the properties,
\[\{\hat{\gamma}_{\rm q,i}\,\hat{\gamma}_{\rm q^{\prime},j}\}=2\delta_{\rm qq ^{\prime}}\delta_{ij}\mathbb{1}\ \,\ \ \hat{\gamma}_{\rm q,i}^{\dagger}=\hat{\gamma}_{\rm q,i}\ \,\ \ \hat{\gamma}_{\rm q,i}^{2}= \mathbb{1}. \tag{106}\]
To find the representation of the SAPT operators in terms of the Majorana operators defined above, one may simply use the inverse relations, \(\hat{a}_{r}^{\dagger}=\frac{1}{2}(\hat{\gamma}_{r,0}-i\hat{\gamma}_{r,1})\), \(\hat{a}_{p}=\frac{1}{2}(\hat{\gamma}_{r,0}+i\hat{\gamma}_{r,1})\) and \(\hat{b}_{\rm q}^{\dagger}=\frac{1}{2}(\hat{\gamma}_{\rm q,0}-i\hat{\gamma}_{ \rm q,1})\), \(\hat{b}_{\rm q}=\frac{1}{2}(\hat{\gamma}_{\rm q,0}+i\hat{\gamma}_{\rm q,1})\). In the following, we provide the final result in both the full space and active space pictures after simplification. Throughout this section, we will use the symmetrized SAPT operators defined in Eqs. (103)-(104).
### Full space picture
The all-electron (full space) electrostatic and exchange operators in the Majorana representation are defined as:
\[\hat{V} =\tfrac{1}{4}\sum_{\text{r}_{\text{Q}}}v^{\text{pr}}_{\text{Q}_{ \text{Q}}}+\tfrac{i}{4}\sum_{\text{r}_{1}\text{r}_{2}}f^{(A)}_{\text{r}_{1} \text{r}_{2}}\hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1}+\tfrac{i }{4}\sum_{\text{q}_{1}\text{q}_{2}}f^{(B)}_{\text{r}_{1}\text{r}_{2}}\hat{ \gamma}_{\text{q}_{2},1} \tag{101}\] \[-\tfrac{1}{4}\sum_{\text{r}_{\text{Q}}}\text{sym}(v^{\text{p}_{1 }\text{p}_{2}}_{\text{q}_{1}\text{q}_{2}})\hat{\gamma}_{\text{p}_{1},0}\hat{ \gamma}_{\text{r}_{2},1}\hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_{ 2},1},\] \[\hat{P} =-\tfrac{1}{4}\sum_{\text{r}_{\text{Q}}}S^{\text{p}}_{\text{Q}}S ^{-}_{\text{q}}-\tfrac{i}{4}\sum_{\text{r}_{\text{r}_{\text{Q}}}^{2}}p^{(A)}_ {\text{r}_{1}\text{r}_{2}}\hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r} _{2},1}-\tfrac{i}{4}\sum_{\text{q}_{1}\atop\text{r}_{\text{Q}_{2}}}p^{(B)}_{ \text{q}_{1}\text{q}_{2}}\hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_{ 2},1}\] (102) \[+\tfrac{1}{4}\sum_{\text{pq}}\text{sym}(S^{\text{p}_{1}}_{\text{ Q}_{2}}S^{\text{p}_{2}}_{\text{q}_{1}})\hat{\gamma}_{\text{r}_{1},0}\hat{ \gamma}_{\text{r}_{2},1}\hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_{ 2},1},\]
where we have defined \(f^{(A)}_{\text{r}_{1}\text{r}_{2}}=\sum_{\text{q}}v^{\text{pr}_{1}\text{r}_{2}}_ {\text{q}_{2}}\), \(f^{(B)}_{\text{r}_{1}\text{r}_{2}}=\sum_{\text{pr}}v^{\text{pr}}_{\text{q}_{1 }\text{q}_{2}}\) and \(p^{(A)}_{\text{r}_{1}\text{r}_{2}}=\sum_{\text{q}}S^{\text{p}_{1}}_{\text{q}_ {2}}S^{\text{p}_{2}}_{\text{q}_{1}}\), \(p^{(B)}_{\text{q}_{1}\text{q}_{2}}=\sum_{\text{pr}}S^{\text{p}}_{\text{q}_{2}} S^{\text{p}_{1}}_{\text{q}_{1}}\), as the single-body coefficients of the electrostatic and exchange operators in the Majorana representation. The electrostatic-exchange operator is written as:
\[\widetilde{VP}^{(m)}_{\text{s}}=\widetilde{VP}^{(m)}_{A}+\widetilde{VP}^{(m) }_{B}+\widetilde{VP}^{(m)}_{\text{1}\text{m}}+\widetilde{VP}^{(m)}_{\text{1} \ell}+\widetilde{VP}^{(m)}_{\text{2}}+\widetilde{VP}^{(m)}_{\text{3}}+ \widetilde{VP}^{(m)}_{\text{4}}, \tag{103}\]
where
\[\widetilde{VP}^{(m)}_{A} =-\tfrac{i}{4}\sum_{\text{p}}\text{sym}(\kappa^{(A)}_{\text{r}_ {1}\text{r}_{2}})\hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1}+ \tfrac{1}{8}\sum_{\text{p}}\text{sym}(\Lambda^{\text{p}_{1}\text{r}_{2}}_{ \text{r}_{3}\text{r}_{4}})\hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_ {2},1}\hat{\gamma}_{\text{r}_{3},0}\hat{\gamma}_{\text{r}_{4},1}, \tag{104}\] \[\widetilde{VP}^{(m)}_{B} =-\tfrac{i}{4}\sum_{\text{q}}\text{sym}(\kappa^{(B)}_{\text{q}_{1 }\text{q}_{2}})\hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_{2},1}+ \tfrac{1}{8}\sum_{\text{q}}\text{sym}(\Lambda^{\text{q}_{1}\text{q}_{2}}_{ \text{q}_{3}\text{q}_{4}})\hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_ {2},1}\hat{\gamma}_{\text{q}_{3},0}\hat{\gamma}_{\text{q}_{4},1},\] (105) \[\widetilde{VP}^{(m)}_{\text{1}\text{m}} =\tfrac{1}{8}\sum_{\text{p},\text{q}}\text{sym}(\Lambda^{\text{p} _{1}\text{p}_{2}}_{\text{q}_{1}\text{q}_{2}})\ \hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1}\hat{\gamma}_{\text{ q}_{1},0}\hat{\gamma}_{\text{q}_{2},1},\] (106) \[\widetilde{VP}^{(m)}_{\text{1}\ell} =\tfrac{1}{4}\sum_{\text{p},\text{q}}\text{sym}(\Lambda^{\text{p} _{1}\text{q}_{2}}_{\text{q}_{1}\text{r}_{2}})\hat{\gamma}_{\text{r}_{1},0}\hat{ \gamma}_{\text{r}_{2},1}\hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_{2},1},\] (107) \[\widetilde{VP}^{(m)}_{\text{2}} =\tfrac{i}{8}\sum_{\text{p},\text{q}}\text{sym}(\Lambda^{\text{p} _{1}\text{r}_{2}}_{\text{q}_{1}\text{r}_{4}}S^{\text{p}_{2}}_{\text{q}_{2}}) \hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1}\hat{\gamma}_{\text{r}_ {3},0}\hat{\gamma}_{\text{r}_{4},1}\hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{ \text{q}_{2},1},\] (108) \[\widetilde{VP}^{(m)}_{\text{3}} =\tfrac{i}{8}\sum_{\text{p},\text{q}}\text{sym}(\Lambda^{\text{p} _{1}\text{q}_{2}}_{\text{q}_{1}\text{q}_{2}}S^{\text{q}_{3}}_{\text{r}_{2}}) \hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1}\hat{\gamma}_{\text{ q}_{1},0}\hat{\gamma}_{\text{q}_{2},1}\hat{\gamma}_{\text{q}_{3},0}\hat{\gamma}_{ \text{q}_{4},1},\] (109) \[\widetilde{VP}^{(m)}_{\text{4}} =\tfrac{1}{2}(\hat{V}^{\prime}\hat{P}^{\prime}+\hat{P}^{\prime} \hat{V}^{\prime}). \tag{110}\]
Here, we have defined the modified electrostatic and exchange operators \(\hat{V}^{\prime}\) and \(\hat{P}^{\prime}\),
\[\hat{V}^{\prime} =-\tfrac{1}{4}\sum_{\text{PQ}}v^{\text{pr}_{1}\text{r}_{2}}_{ \text{q}_{1}\text{q}_{2}}\hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1} \hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_{2},1},\] (111) \[\hat{P}^{\prime} =-\tfrac{i}{4}\sum_{\text{r}_{1}\text{r}_{2}}p^{(A)}_{\text{r}_{1} \text{r}_{2}}\hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1}- \tfrac{i}{4}\sum_{\text{q}_{1}\text{q}_{2}}p^{(B)}_{\text{q}_{1}\text{q}_{2}} \hat{\gamma}_{\text{q}_{1},0}\hat{\gamma}_{\text{q}_{2},1}+\tfrac{1}{4}\sum_{ \text{PQ}}\text{sym}(S^{\text{p}_{1}}_{\text{q}_{2}}S^{\text{p}_{2}}_{\text{q}_{1 }})\hat{\gamma}_{\text{r}_{1},0}\hat{\gamma}_{\text{r}_{2},1},\hat{\gamma}_{\text{ q}_{1},0}\hat{\gamma}_{
in order to reduce the \(\ell_{1}\) norm contribution of the total electrostatic-exchange operator. The renormalized tensor coefficients in the Majorana representation are given by,
\[VP_{0}^{(m)} =-\tfrac{1}{4}\sum_{\text{Pq}}\text{sym}(\bar{\nu}_{\text{Q}\text{ P}}^{\text{Pq}})-\tfrac{1}{8}\sum_{\text{Pq}\text{Q^{\prime}}}\text{sym}(\nu_{ \text{Qq}}^{\text{Pq^{\prime}}}S_{\text{Q^{\prime}}}^{\text{P}})-\tfrac{1}{8} \sum_{\text{Pp^{\prime}}_{\text{Q}}}\text{sym}(\nu_{\text{Qq}^{\prime}}^{ \text{Pp}}S_{\text{Q}}^{\text{P}^{\prime}})-\tfrac{1}{16}\sum_{\text{Pp^{ \prime}}_{\text{Qq}}}\text{sym}(\nu_{\text{Qq}}^{\text{Pp^{\prime}}}S_{\text{ Q^{\prime}}}^{\text{P}^{\prime}}), \tag{111}\] \[\kappa_{\text{P}_{1}\text{P}_{2}}^{(A)} =\sum_{\text{Q}}\bar{\nu}_{\text{Q}\text{P}_{2}}^{\text{P}_{1} \text{Q}}+\tfrac{1}{2}\sum_{\text{Qq^{\prime}}}\nu_{\text{Qq}}^{\text{P}_{1} \text{Q}^{\prime}}S_{\text{Q}}^{\text{P}_{2}}+\tfrac{1}{2}\sum_{\text{Pq}}\nu _{\text{Q}\text{P}_{2}}^{\text{Pp}}S_{\text{Q}}^{\text{P}_{1}}+\tfrac{1}{2} \sum_{\text{Pq}}\nu_{\text{Qp^{\prime}}}^{\text{P}_{1}\text{P}_{2}}S_{\text{ Q}}^{\text{P}}+\tfrac{1}{4}\sum_{\text{Pq}\text{Qq^{\prime}}}\nu_{\text{Qq}}^{ \text{P}_{1}\text{P}_{2}}S_{\text{Q^{\prime}}}^{\text{P}^{\prime}}S_{\text{ Q^{\prime}}}^{\text{P}^{\prime}}+\tfrac{1}{4}\sum_{\text{Pqq^{\prime}}}\nu_{ \text{Qq}}^{\text{P}_{1}\text{P}_{2}}S_{\text{Q^{\prime}}}^{\text{P}^{\prime}} S_{\text{Q^{\prime}}}^{\text{P}_{2}},\] (112) \[\kappa_{\text{Q}_{1}\text{Q}_{2}}^{(B)} =\sum_{\text{P}}\bar{\nu}_{\text{Q}_{1}\text{P}}^{\text{Pq}_{2}}+ \tfrac{1}{2}\sum_{\text{Pp^{\prime}}}\nu_{\text{q}_{1}\text{P}}^{\text{Pp}}S_{ \text{Q}_{2}}^{\text{P}^{\prime}}+\tfrac{1}{2}\sum_{\text{Pq}}\nu_{\text{Qq }}^{\text{P}_{2}}S_{\text{Q}_{1}}^{\text{P}_{1}}+\tfrac{1}{2}\sum_{\text{Pq}} \nu_{\text{Q}_{1}\text{Q}_{2}}^{\text{P}_{0}}S_{\text{Q}}^{\text{P}}+\tfrac{1 }{4}\sum_{\text{Pp^{\prime}}\text{Q}}\nu_{\text{Qq}_{2}}^{\text{P}_{0}}S_{ \text{Q}}^{\text{P}^{\prime}}+\tfrac{1}{4}\sum_{\text{Pp^{\prime}}\text{Q}} \nu_{\text{Qq}}^{\text{P}_{0}\text{P}_{1}}S_{\text{Q}_{2}}^{\text{P}^{\prime}},\] (113) \[\Lambda_{\text{P}_{3}\text{P}_{4}}^{\text{P}_{1}\text{P}_{2}} =\sum_{\text{Q}}\nu_{\text{Q}_{1}\text{P}_{2}}^{\text{P}_{1}\text{Q}_{2}}S _{\text{Q}}^{\text{P}_{1}}+\tfrac{1}{2}\sum_{\text{Qq^{\prime}}}\nu_{\text{ Qq}}^{\text{P}_{1}\text{P}_{2}}S_{\text{Q^{\prime}}}^{\text{P}_{3}}S_{\text{Q^{ \prime}}}^{\text{P}^{\prime}},\] (114) \[\Lambda_{\text{Q}_{3}\text{Q}_{4}}^{\text{P}_{1}\text{P}_{2}} =\sum_{\text{P}}\nu_{\text{Q}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{Q}_{2}}S _{\text{Q}_{3}}^{\text{P}_{1}}+\tfrac{1}{2}\sum_{\text{pp^{\prime}}}\nu_{ \text{Q}_{1}\text{Q}_{2}}^{\text{P}_{0}}S_{\text{Q}_{3}}^{\text{P}^{\prime}}S_ {\text{Q^{\prime}}}^{\text{P}^{\prime}},\] (115) \[\Lambda_{\text{Q}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{P}_{2}} =\tfrac{1}{2}\sum_{\text{Pq}}(S_{\text{Q}}^{\text{P}}S_{\text{Q}}^{ \text{P}}\nu_{\text{Qq}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{P}_{2}}+v_{\text{ Qq}}^{\text{P}_{1}\text{P}_{2}}S_{\text{Q1}}^{\text{P}}S_{\text{Q}_{2}}^{ \text{P}}+v_{\text{Qq}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{P}_{2}}S_{\text{Q}}^ {\text{P}_{1}}S_{\text{Q}_{2}}^{\text{P}_{2}})+\sum_{\text{P}}\nu_{\text{Q}_{1} \text{Q}_{2}}^{\text{P}_{1}\text{P}_{2}}S_{\text{Q}_{2}}^{\text{P}}+\sum_{\text{ Q}}\nu_{\text{Q}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{Q}_{2}}S_{\text{Q}}^{ \text{P}_{2}},\] (116) \[\Lambda_{\text{Q}_{1}\text{P}_{2}}^{\text{P}_{1}\text{Q}_{2}} =\bar{\nu}_{\text{Q}_{1}\text{P}_{2}}^{\text{P}_{1}\text{Q}_{2}}+ \tfrac{1}{2}\sum_{\text{P}}\nu_{\text{Q}_{1}\text{P}_{2}}^{\text{P}_{1}\text{P}_{ 2}}S_{\text{Q}_{2}}^{\text{P}_{1}}+\tfrac{1}{2}\sum_{\text{Q}}\nu_{\text{Qq}}^{ \text{P}_{1}\text{Q}_{2}}S_{\text{Q}_{1}}^{\text{P}_{2}}+\tfrac{1}{4}\sum_{ \text{Pq}}v_{\text{Qq}}^{\text{P}_{1}\text{P}_{2}}S_{\text{Q}_{1}}^{\text{P}_{2}},\] (117) \[\Lambda_{\text{Q}_{1}\text{P}_{2}}^{\text{P}_{1}\text{P}_{2}} =\nu_{\text{Q}_{1}\text{P}_{4}}^{\text{P}_{1}\text{P}_{2}}+ \tfrac{1}{2}\sum_{\text{Q}}v_{\text{Qq}}^{\text{P}_{1}\text{P}_{2}}S_{\text{Q}_{ 1}}^{\text{P}_{4}},\] (118) \[\Lambda_{\text{Q}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{Q}_{2}} =\nu_{\text{Q}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{Q}_{2}}+ \tfrac{1}{2}\sum_{\text{P}}v_{\text{Qq}_{1}\text{Q}_{2}}^{\text{P}_{1}\text{Q}_{2}}S _{\text{Q}_{4}}^{\text{P}_{1}}. \tag{119}\]
## Appendix B Active space picture
The active space SAPT operators in the Majorana representation are exactly equivalent apart from the definitions of the two-index and four-index tensors, which are mapped as: \(v\to\tilde{v}\), \(\Lambda\to\bar{\Lambda}\), and so forth. In the spatial orbital basis, the renormalized active space tensor coefficients for the electrostatic-exchange operator are given by:
\[VP_{0,\text{active}}^{(m)} =VP_{0,\text{active}}-\sum_{t}\tilde{f}_{tt}^{(A)}+\sum_{tt^{ \prime}}(\tilde{v}_{tt^{\prime}}^{tt^{\prime}}-\tilde{v}_{t^{\prime}t^{ \prime}}^{tt})-\sum_{u}\tilde{f}_{uu}^{(A)}+\sum_{uu^{\prime}}(\tilde{v}_{u^{ \prime}u}^{uu^{\prime}}-\tilde{v}_{u^{\prime}u^{\prime}}^{uu})-\sum_{tu}\tilde{ \nu}_{uu}^{tt}\] (120) \[-\tfrac{1}{2}\sum_{tuu^{\prime}}\tilde{\nu}_{uu^{\prime}}^{tt^{ \prime}}S_{u^{\prime}}^{t}-\tfrac{1}{2}\sum_{tt^{\prime}}\tilde{\nu}_{ut^{ \prime}}^{tt^{\prime}}S_{u}^{t^{\prime}}-\tfrac{1}{2}\
\[\tilde{\Lambda}^{t_{1}t_{2}}_{u_{1}t_{4}} =\tilde{p}^{t_{1}t_{2}}_{u_{1}t_{4}}+\sum_{u}\tilde{\nu}^{t_{1}t_{2 }}_{uu}S^{t_{4}}_{u_{1}},\] (F33) \[\tilde{\Lambda}^{t_{1}u_{4}}_{u_{1}u_{2}} =\tilde{\nu}^{t_{1}u_{4}}_{u_{1}u_{2}}+\sum_{t}\tilde{\nu}^{t_{1}t _{2}}_{u_{1}u_{2}}S^{t_{1}}_{u_{4}}.\] (F34)
Eqs. (F15) and (F16) in the active space picture are given by,
\[\hat{V}^{\prime}_{\rm active} =-\tfrac{1}{4}\sum_{\begin{subarray}{c}{\bf t}u\\ \sigma\tau\end{subarray}}\tilde{v}^{t_{1}t_{2}}_{u_{1}u_{2}}\hat{\gamma}_{t_{1 }\sigma,0}\hat{\gamma}_{t_{2}\sigma,1}\hat{\gamma}_{u_{1}\tau,0}\hat{\gamma}_{ u_{2}\tau,1},\] (F35) \[\hat{P}^{\prime}_{\rm active} =-\tfrac{i}{4}\sum_{\begin{subarray}{c}t_{1}t_{2}\\ \sigma\end{subarray}}\tilde{p}^{(A)}_{t_{1}t_{2}}\hat{\gamma}_{t_{1}\sigma,0} \hat{\gamma}_{t_{2}\sigma,1}-\tfrac{i}{4}\sum_{\begin{subarray}{c}u_{1}u_{2} \\ \sigma\end{subarray}}\tilde{p}^{(B)}_{u_{1}u_{2}}\hat{\gamma}_{u_{1}\sigma,0} \hat{\gamma}_{u_{2}\sigma,1}+\tfrac{1}{4}\sum_{\begin{subarray}{c}{\bf t}u\\ \sigma\end{subarray}}\operatorname{sym}(S^{t_{1}}_{u_{2}}S^{t_{2}}_{u_{1}}) \hat{\gamma}_{t_{1}\sigma,0}\hat{\gamma}_{t_{2}\sigma,1}\hat{\gamma}_{u_{1} \sigma,0}\hat{\gamma}_{u_{2}\sigma,1}.\] (F36)
with
\[\tilde{p}^{(A)}_{t_{1}t_{2}} =\sum_{u}S^{t_{1}}_{u}S^{t_{2}}_{u}+2\sum_{j}S^{t_{1}}_{j}S^{t_{2 }}_{j},\] (F37) \[\tilde{p}^{(B)}_{u_{1}u_{2}} =\sum_{t}S^{t}_{u_{2}}S^{t}_{u_{1}}+2\sum_{i}S^{i}_{u_{1}}S^{i}_{ u_{2}}.\] (F38)
In the following two sections, we present the _sparse_ and _tensor factorization_ encoding schemes which can apply to both the all-electron (full-space) and active space pictures.
## Appendix B Sparse Representation
In the _sparse_ encoding scheme [33] scheme, a data-loading oracle loads the non-zero entries of the tensor coefficients reducing the overall cost compared to an equivalent _dense_ scheme. The Jordan-Wigner mapping from the Majorana operators for monomer A is given by,
\[\hat{\gamma}_{p\sigma,0}=\hat{X}_{p,\sigma}\hat{Z}_{p-1,\sigma} \cdots\hat{Z}_{0,\sigma},\] (F39) \[\hat{\gamma}_{p\sigma,1}=\hat{Y}_{p,\sigma}\hat{Z}_{p-1,\sigma} \cdots\hat{Z}_{0,\sigma}.\] (F40)
Similarly, for monomer B we have:
\[\hat{\gamma}_{q\tau,0}=\hat{X}_{q,\tau}\hat{Z}_{q-1,\tau}\cdots \hat{Z}_{0,\tau},\] (F41) \[\hat{\gamma}_{q\tau,1}=\hat{Y}_{q,\tau}\hat{Z}_{q-1,\tau}\cdots \hat{Z}_{0,\tau}.\] (F42)
The PREPARE and SELECT operators for these operators are given explicitly in [35]. Here, we summarize the total \(\ell_{1}\) norm for all of the SAPT operators:
\[\lambda^{(s)}_{V} =\sum_{q_{1}q_{2}}\Big{|}\sum_{p}v^{pp}_{q_{1}q_{2}}\Big{|}+\sum_{ p_{1}p_{2}}\Big{|}\sum_{q}v^{p_{1}p_{2}}_{qq}\Big{|}+\sum_{p_{1}p_{2}}|v^{p_{1}p_{2} }_{q_{1}q_{2}}|,\] (F43) \[\lambda^{(s)}_{P} =\tfrac{1}{2}\sum_{q_{1}q_{2}}\Big{|}\sum_{p}S^{p}_{q_{2}}S^{p}_{ q_{1}}\Big{|}+\tfrac{1}{2}\sum_{p_{1}p_{2}}\Big{|}\sum_{q}S^{p_{1}}_{q}S^{p_{2}}_{q} \Big{|}+\tfrac{1}{2}\Big{[}\sum_{pq}|S^{p}_{q}\Big{]}\Big{|}^{2},\] (F44) \[\lambda^{(s)}_{VP} =\lambda^{(s)}_{VP_{A}}+\lambda^{(s)}_{VP_{B}}+\lambda^{(s)}_{VP _{\rm Im}}+\lambda^{(s)}_{VP_{\rm Im}}+\lambda^{(s)}_{VP_{\rm I}}+\lambda^{(s)} _{VP_{3}}+\lambda^{(s)}_{P}\sum_{\begin{subarray}{c}p_{1}p_{2}\\ q_{1}q_{2}\end{subarray}}|v^{p_{1}p_{2}}_{q_{1}q_{2}}|,\] (F45)
where
\[\lambda^{(s)}_{VP_{A}} =\tfrac{1}{2}\sum_{p_{1}p_{2}}|\kappa^{(A)}_{p_{1}p_{2}}|+\tfrac{1}{2 }\sum_{\begin{subarray}{c}p_{1}>p_{2}\\ p_{3}>p_{4}\end{subarray}}|\Lambda^{p_{1}p_{2}}_{p_{3}p_{4}}-\Lambda^{p_{1}p_{ 4}}_{p_{3}p_{2}}|+\tfrac{1}{4}\sum_{\begin{subarray}{c}p_{1}p_{2}\\ p_{3}p_{4}\end{subarray}}|\Lambda^{p_{1}p_{2}}_{p_{3}p_{4}}|, \tag{111}\] \[\lambda^{(s)}_{VP_{B}} =\tfrac{1}{2}\sum_{q_{1}q_{2}}|\kappa^{B}_{q_{1}q_{2}}|+\tfrac{1}{ 2}\sum_{\begin{subarray}{c}q_{1}>q_{2}\\ q_{3}>q_{4}\end{subarray}}|\Lambda^{q_{1}q_{2}}_{q_{3}q_{4}}-\Lambda^{q_{1}q_{ 4}}_{q_{3}q_{2}}|+\tfrac{1}{4}\sum_{\begin{subarray}{c}q_{1}q_{2}\\ q_{3}q_{4}\end{subarray}}|\Lambda^{q_{1}q_{2}}_{q_{3}q_{4}}|,\] (112) \[\lambda^{(s)}_{VP_{\mathrm{im}}} =\tfrac{1}{2}\sum_{\begin{subarray}{c}p_{1}p_{2}\\ q_{1}q_{2}\end{subarray}}|\Lambda^{p_{1}p_{2}}_{q_{1}q_{2}}|,\] (113) \[\lambda^{(s)}_{VP_{\mathrm{it}}} =\tfrac{1}{2}\sum_{\begin{subarray}{c}p_{1}p_{2}\\ q_{1}q_{2}\end{subarray}}|\Lambda^{p_{1}q_{2}}_{q_{1}p_{2}}|,\] (114) \[\lambda^{(s)}_{VP_{\mathrm{2}}} =\tfrac{1}{2}\sum_{\begin{subarray}{c}p_{1}p_{2}\\ p_{1}p_{2}p_{3}p_{4}\end{subarray}}|\Lambda^{p_{1}p_{2}}_{q_{1}p_{4}}S^{p_{3} }_{q_{2}}|,\] (115) \[\lambda^{(s)}_{VP_{\mathrm{3}}} =\tfrac{1}{2}\sum_{\begin{subarray}{c}p_{1}p_{2}\\ q_{1}q_{2}q_{3}q_{4}\end{subarray}}|\Lambda^{p_{1}q_{4}}_{q_{1}q_{2}}S^{p_{2} }_{q_{3}}| \tag{116}\]
Here, the superscript \((s)\) is used to denote the sparse representation. The total number \(L_{F}\) of operator terms for each of the SAPT operators \(\hat{F}\) will scale as \(L^{(s)}_{V}=\mathcal{O}(N_{A}^{2}N_{B}^{2}),L^{(s)}_{P}=\mathcal{O}(N_{A}^{2} N_{B}^{2})\), and \(L^{(s)}_{VP}=\mathcal{O}(N_{A}^{4}N_{B}^{4})\) using a naive block encoding. As we show below, however, it is possible to reduce this scaling with respect to the total number of terms \(L_{F}\) by using factorization circuits especially designed to describe the product of two operators, e.g. \(\hat{V}\) and \(\hat{P}\). This well help reduce the scaling of certain terms, such as the second, third, and fourth terms: \(\widehat{VP}_{2}\), \(\widehat{VP}_{3}\) and \(\widehat{VP}_{4}\). For instance, the compilation scaling of the fourth term in the electrostatic exchange operator, Eq. (110), will reduce to \(L^{(s)}_{VP}=\mathcal{O}(N_{A}^{4}+N_{B}^{4})\). It is also worth pointing out that we used the anti-symmetry property of the fermionic operators to reduce the \(\ell_{1}\) norm of the intra-monomer contributions as pointed out in Ref. [36].
## 3 Factorized Tensor Representation
While the sparse representation serves to provide a first-pass implementation of symmetry-adapted perturbation theory, it does not scale well with the number of terms \(L\), nor does it scale well with the \(\ell_{1}\) norm of the operator. In the following, we present a tensor factorization scheme that is analogous to the low-rank double factorization scheme [15] used in Hamiltonian simulation. The advantage afforded by this encoding scheme is two-fold. First, it will be shown to drastically reduce the \(\ell_{1}\) norm of each of the three operators. Fundamentally, this is explained by the fact that the Schatten 1-norm is strictly less than or equal to the entry-wise 1-norm of tensors. This property was pointed out by Von Burg et al. and becomes apparent in the numerical benchmark data sets we present in the results section. Second, it reduces the total number of terms \(L\) required for the data-loading oracle. It is important to emphasize that without an appropriate factorization scheme, the implementation of SAPT on the quantum computer would be much less scalable. The complete tensor factorization procedure is presented in the main text. Using this procedure,
all of the unique four-index tensors in first order SAPT theory may be expanded as:
\[v^{p_{1}p_{2}}_{q_{1}q_{2}} =\sum_{tkl}s^{(v)}_{t}\alpha^{(A_{v})}_{tk}\alpha^{(B_{v})}_{tl}U^{( v)}_{tkp_{1}}U^{(v)}_{tkp_{2}}V^{(v)}_{tlu_{1}q_{2}}V^{(v)}_{tlu_{2}}, \tag{111}\] \[\text{sym}(\Lambda^{p_{1}p_{2}}_{p_{3}p_{4}}) =\sum_{tkl}s^{(A_{2})}_{t}\alpha^{(A_{2})}_{tk}\alpha^{(A_{2})}_{ tl}U^{(A_{2})}_{tkp_{1}}U^{(A_{2})}_{tkp_{2}}U^{(A_{2})}_{tly_{1}q_{1}}U^{(A_{2})}_{ tlp_{4}},\] (112) \[\text{sym}(\Lambda^{q_{1}q_{2}}_{q_{3}q_{4}}) =\sum_{tkl}s^{(B_{2})}_{t}\alpha^{(B_{2})}_{tk}\alpha^{(B_{2})}_{ tl}V^{(B_{2})}_{tkp_{1}}V^{(B_{2})}_{tly_{2}q_{2}}V^{(B_{2})}_{tly_{3}}V^{(B_{2})}_{ tla_{4}},\] (113) \[\text{sym}(\Lambda^{p_{1}q_{2}}_{q_{1}p_{2}}) =\sum_{tkl}s^{(1)}_{t}\partial^{(1L)}_{tk}\partial^{(1L)}_{tl}U^{ (1L)}_{tkp_{1}}V^{(1L)}_{tkq_{2}}U^{(1L)}_{tlp_{2}}V^{(1L)}_{tla_{1}},\] (114) \[\text{sym}(\Lambda^{p_{1}p_{2}}_{q_{1}q_{2}}) =\sum_{tkl}s^{(1)}_{t}\alpha^{(A_{1m})}_{tk}\alpha^{(B_{1m})}_{tl} U^{(1m)}_{tkp_{1}}U^{(1m)}_{tkp_{2}}V^{(1m)}_{tla_{1}}V^{(1m)}_{tla_{2}},\] (115) \[\Lambda^{p_{1}p_{2}}_{q_{1}p_{4}} =\sum_{tkl}s^{(2)}_{t}\alpha^{(2)}_{tk}\beta^{(2)}_{tl}U^{(2)}_{ tkp_{1}}U^{(2)}_{tkp_{2}}\tilde{U}^{(2)}_{tlp_{4}}V^{(2)}_{tli_{1}q_{1}},\] (116) \[\Lambda^{p_{1}q_{4}}_{q_{1}q_{2}} =\sum_{tkl}s^{(3)}_{t}\alpha^{(3)}_{tk}\beta^{(3)}_{tl}U^{(3)}_{ tlp_{1}}\tilde{V}^{(3)}_{tkbq_{3}}V^{(3)}_{tlq_{1}}V^{(3)}_{tla_{2}}, \tag{117}\]
which is exact when none of the terms are truncated. In the following subsections, we use this expansion to summarize the tensor-factorized SAPT operators applicable in both the all-electron (full space) and active space pictures. Throughout the following sections, we make use of the orbital-transformed Majorana operators,
\[\tilde{\gamma}_{k\sigma,i} =\sum_{p}U_{pk}\tilde{\gamma}_{p,i}=\hat{G}^{(A)\dagger}_{\sigma} \hat{\gamma}_{k\sigma,i}\hat{G}^{(A)}_{\sigma}, \tag{118}\] \[\tilde{\gamma}_{l\tau,i} =\sum_{q}V_{q}\hat{\gamma}_{l\tau,i}=\hat{G}^{(B)\dagger}_{\tau} \hat{\gamma}_{l\tau,i}\hat{G}^{(B)}_{\tau}, \tag{119}\]
where \(i=\{0,1\}\) and the second equality in both expressions is defined with respect to the Givens operator, \(\hat{G}^{(A)}_{\sigma}=\exp(\sum_{pk}[\log\mathbf{U}]_{pk}\hat{a}^{\dagger}_{p} \hat{a}_{k\sigma})\) and \(\hat{G}^{(B)}_{\tau}=\exp(\sum_{ql}[\log\mathbf{V}]_{ql}\hat{b}^{\dagger}_{q \tau}\hat{b}_{l\tau})\) respectively.
### Factorization of electrostatic operator: \(\hat{V}\)
Following tensor factorization procedure from the main text, we find the final form of the electrostatic operator,
\[\hat{V} =\sum_{pq}v^{pp}_{qq}+\tfrac{i}{2}\sum_{k,\sigma}s^{(A)}_{k} \tilde{\gamma}_{k\sigma,0}\tilde{\gamma}_{k\sigma,1}+\tfrac{i}{2}\sum_{l, \sigma}s^{(B)}_{l}\tilde{\gamma}_{l\sigma,0}\tilde{\gamma}_{l\sigma,1}- \tfrac{1}{4}\sum_{tkl}s^{(v)}_{t}\alpha^{(A_{v})}_{tk}\alpha^{(B_{v})}_{tl} \tilde{\gamma}_{k\sigma,0}\tilde{\gamma}_{k\sigma,1}\tilde{\gamma}_{l\tau,0} \tilde{\gamma}_{l\tau,1}, \tag{120}\] \[=\sum_{pq}v^{pp}_{qq}-\tfrac{1}{2}\sum_{k,\sigma}s^{(A)}_{k}\hat{ G}^{(A)\dagger}_{\varnothing}\hat{Z}_{k\sigma}\hat{G}^{(A)}_{\varnothing}-\tfrac{1}{2} \sum_{l,\sigma}s^{(B)}_{l}\hat{G}^{(B)\dagger}_{\varnothing}\hat{Z}_{l\sigma} \hat{G}^{(B)}_{\varnothing}\] (121) \[+\tfrac{1}{4}\sum_{\begin{subarray}{c}tkl\\ \sigma\tau\end{subarray}}[s^{(v)}_{t}\alpha^{(A_{v})}_{tk}\alpha^{(B_{v})}_{tl} ]\Big{(}\hat{G}^{(A)\dagger}_{t}\hat{Z}_{k\sigma}\hat{G}^{(A)}_{t}\Big{)} \otimes\Big{(}\hat{G}^{(B)\dagger}_{t}\hat{Z}_{l\tau}\hat{G}^{(B)}_{t}\Big{)}.\]
In the second line, we used the following Jordan-Wigner identity for each monomer,
\[\hat{\gamma}_{k\sigma,0}\hat{\gamma}_{k\sigma,1} =i\hat{Z}_{k\sigma}, \tag{122}\] \[\hat{\gamma}_{l\tau,0}\hat{\gamma}_{l\tau,1} =i\hat{Z}_{l\tau}, \tag{123}\]
and we have also defined the Givens operators, \(\hat{G}^{(X)}_{\varnothing}=\hat{G}^{(X)}_{\varnothing,\alpha}\otimes\hat{G}^{(X)}_ {\varnothing,\beta}\) and \(\hat{G}^{(X)}_{t}=\hat{G}^{(X)}_{t,\alpha}\otimes\hat{G}^{(X)}_{t,\beta}\) for \(X\in\{A,B\}\), where each spin-block is defined as:
\[\hat{G}^{(A)}_{\varnothing,\sigma} =\exp\left(\sum_{p>k}[\log\mathbf{U}^{(f)}]_{pk}(\hat{E}^{\sigma}_{ pk}-\hat{E}^{\sigma}_{kp})\right), \tag{124}\] \[\hat{G}^{(B)}_{\varnothing,\tau} =\exp\left(\sum_{q>l}[\log\mathbf{V}^{(f)}_{t}]_{ql}(\hat{E}^{\tau} _{ql}-\hat{E}^{\tau}_{lq})\right), \tag{125}\]
and
\[\hat{G}^{(A)}_{t,\sigma} =\exp\left(\sum_{p>k}[\log{\bf U}^{(v)}_{t}]_{pk}(\hat{E}^{\sigma}_ {pk}-\hat{E}^{\sigma}_{kp})\right), \tag{111}\] \[\hat{G}^{(B)}_{t,\tau} =\exp\left(\sum_{q>l}[\log{\bf V}^{(v)}_{t}]_{ql}(\hat{E}^{\tau}_ {ql}-\hat{E}^{\tau}_{lq})\right). \tag{112}\]
The corresponding \(\ell_{1}\) norm for the tensor-factorized electrostatic operator is given by,
\[\lambda^{(\rm{tI})}_{V}=\sum_{k}|s^{(A)}_{k}|+\sum_{l}|s^{(B)}_{l}|+\sum_{tkl} |s^{(v)}_{t}\alpha^{(A_{v})}_{tk}\alpha^{(B_{v})}_{tl}|. \tag{113}\]
## Appendix IV Factorization of Exchange Operator: \(\hat{P}\)
We consider the tensor factorization procedure for the active space exchange operator since it is more general than the all-electron (full space) version. Substituting the factorization of the intermolecular overlap matrix, we obtain:
\[\hat{P}_{\rm{active}} =-\tfrac{1}{2}\sum_{pq}S^{p}_{q}S^{p}_{q}-\tfrac{i}{4}\sum_{k, \sigma}s^{(P_{A_{1}})}_{k}\tilde{\gamma}_{k\sigma,0}\tilde{\gamma}_{k\sigma, 1}-\tfrac{i}{4}\sum_{l,\sigma}s^{(P_{B_{1}})}_{l}\tilde{\gamma}_{l\sigma,0} \tilde{\gamma}_{l\sigma,1} \tag{114}\] \[+\tfrac{1}{16}\sum_{\begin{subarray}{c}kl\\ \sigma\end{subarray}}s_{k}s_{l}(\tilde{\gamma}_{l\sigma,0}\tilde{\gamma}_{k \sigma,1}+\tilde{\gamma}_{k\sigma,0}\tilde{\gamma}_{l\sigma,1})\otimes(\tilde{ \gamma}_{l\sigma,0}\tilde{\gamma}_{k\sigma,1}+\tilde{\gamma}_{k\sigma,0} \tilde{\gamma}_{l\sigma,1})\] \[=-\tfrac{1}{2}\sum_{pq}S^{p}_{q}S^{p}_{q}+\tfrac{i}{4}\sum_{k, \sigma}s^{(P_{A_{1}})}_{k}\hat{G}^{(P_{A_{1}})\dagger}_{\mathcal{G}}\hat{Z}_{ k\sigma}\hat{G}^{(P_{A_{1}})}_{\mathcal{G}}+\tfrac{1}{4}\sum_{l,\sigma}s^{(P_{B_{ 1}})}_{l}\hat{G}^{(P_{B_{1}})\dagger}_{\mathcal{G}}\hat{Z}_{l\sigma}\hat{G}^{ (P_{B_{1}})}_{\mathcal{G}}\] (115) \[-\tfrac{1}{4}\sum_{k,\sigma}s^{2}_{k}\Big{(}\hat{G}^{(A_{S}) \dagger}\hat{Z}_{k\sigma}\hat{G}^{(A_{S})}\Big{)}\otimes\Big{(}\hat{G}^{(B_{S })\dagger}\hat{Z}_{k\sigma}\hat{G}^{(B_{S})\dagger}\Big{)}\] \[+\tfrac{1}{16}\sum_{\begin{subarray}{c}k\neq l\\ \sigma\end{subarray}}s_{k}s_{l}\Big{(}\hat{G}^{(A_{S})\dagger}[\hat{X}_{k \sigma}\vec{Z}^{\sigma}_{kl}\hat{X}_{l\sigma}+\hat{Y}_{k\sigma}\vec{Z}^{ \sigma}_{kl}\hat{Y}_{l\sigma}]\hat{G}^{(A_{S})}\Big{)}\otimes\Big{(}\hat{G}^{ (B_{S})\dagger}[\hat{X}_{k\sigma}\vec{Z}^{\sigma}_{kl}\hat{X}_{l\sigma}+\hat{ Y}_{k\sigma}\vec{Z}^{\sigma}_{kl}\hat{Y}_{l\sigma}]\hat{G}^{(B_{S})}\Big{)},\]
where
\[\vec{Z}^{\sigma}_{kl}=\begin{cases}\hat{Z}_{k-1,\sigma}-\hat{Z}_{l+1,\sigma}& \text{if }\,k>l\\ \hat{Z}_{l-1,\sigma}-\hat{Z}_{k+1,\sigma}&\text{if }\,l>k.\end{cases} \tag{116}\]
We have also defined the Givens operators, \(\hat{G}^{(P_{X_{1}})}_{\mathcal{G}}=\hat{G}^{(P_{X_{1}})}_{\mathcal{G},\alpha }\otimes\hat{G}^{(P_{X_{1}})}_{\mathcal{G},\beta}\) and \(\hat{G}^{(X_{S})}=\hat{G}^{(X_{S})}_{\alpha}\otimes\hat{G}^{(X_{S})}_{\beta}\) for \(X\in\{A,B\}\), where each spin-block is defined as:
\[\hat{G}^{(P_{A_{1}})}_{\mathcal{G},\sigma} =\exp\left(\sum_{p>k}[\log{\bf U}^{(P_{A_{1}})}]_{pk}(\hat{E}^{ \sigma}_{pk}-\hat{E}^{\sigma}_{kp})\right), \tag{117}\] \[\hat{G}^{(P_{B_{1}})}_{\mathcal{G},\tau} =\exp\left(\sum_{q>l}[\log{\bf V}^{(P_{B_{1}})}_{t}]_{ql}(\hat{E }^{\tau}_{ql}-\hat{E}^{\tau}_{lq})\right), \tag{118}\]
and
\[\hat{G}^{(A_{S})}_{\sigma} =\exp\left(\sum_{p>k}[\log{\bf U}^{(s)}]_{pk}(\hat{E}^{\sigma}_{pk} -\hat{E}^{\sigma}_{kp})\right), \tag{119}\] \[\hat{G}^{(B_{S})}_{\tau} =\exp\left(\sum_{q>l}[\log{\bf V}^{(s)}]_{ql}(\hat{E}^{\tau}_{ql}- \hat{E}^{\tau}_{lq})\right). \tag{120}\]
The corresponding \(\ell_{1}\) norm is given by,
\[\lambda_{P}^{(\rm tr)}=\tfrac{1}{2}\sum_{k}|s_{k}^{(P_{A_{1}})}|+\tfrac{1}{2} \sum_{l}|s_{l}^{(P_{B_{1}})}|+\tfrac{1}{2}\left(\sum_{n}s_{n}\right)^{2}. \tag{101}\]
## V Factorization of electrostatic-exchange operator: \(\widetilde{VP}\)
The electrostatic-exchange factorization is decomposed with respect to six different four-index contributions, which we outline below. The first two correspond to monomer-only contributions, \(\widetilde{VP}_{A}\) and \(\widetilde{VP}_{B}\) respectively. In the following, we provide the factorization procedure which arrives at the same result as the standard double factorization procedure derived by Von Burg et al.
\[\widetilde{VP}_{A}/\widetilde{VP}_{B}\boldsymbol{:}\boldsymbol{:}\boldsymbol{ :}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol {:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{} \boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:}\boldsymbol{:} \boldsymbol
The first order spin-locked contribution is similar to the exchange operator but requires a four-index factorization procedure outlined in previous terms. The tensor factorized operator is written as:
\[\widehat{VP}_{1\ell} =\tfrac{1}{4}\sum_{\sigma}s_{t}^{(1\ell)}\beta_{tk}^{(1\ell)}\beta_ {tl}^{(1\ell)}\tilde{\beta}_{lk\sigma,0}^{(1\ell)}\tilde{\gamma}_{l\sigma,1}^{A _{1\ell}}\tilde{\gamma}_{l\sigma,1}^{B_{1\ell}}\tilde{\gamma}_{k\sigma,0}^{B_ {1\ell}}\tilde{\gamma}_{l\sigma,1}^{B_{1\ell}}, \tag{111}\] \[=-\tfrac{1}{4}\sum_{\begin{subarray}{c}\sigma\\ \sigma\end{subarray}}s_{t}^{(1\ell)}\beta_{tk}^{(1\ell)}\beta_{tk}^{(1\ell)} \Big{(}\hat{G}_{t}^{(A_{1\ell})\dagger}\tilde{Z}_{k\sigma}\hat{G}_{t}^{(A_{1 \ell})}\Big{)}\otimes\Big{(}\hat{G}_{t}^{(B_{1\ell})\dagger}\hat{Z}_{k\sigma} \hat{G}_{t}^{(B_{1m})}\Big{)}\] (112) \[+\tfrac{1}{16}\sum_{\begin{subarray}{c}\epsilon\sigma\\ k\neq l\end{subarray}}s_{t}^{(1\ell)}\beta_{tk}^{(1\ell)}\beta_{tl}^{(1\ell)} \Big{(}\hat{G}_{t}^{(A_{1\ell})\dagger}[\hat{X}_{k\sigma}\vec{Z}_{kl}^{\sigma }\hat{X}_{l\sigma}+\hat{Y}_{k\sigma}\vec{Z}_{kl}^{\sigma}\hat{Y}_{l\sigma}] \hat{G}_{t}^{(A_{1\ell})}\Big{)}\otimes\Big{(}\hat{G}_{t}^{(B_{1\ell})\dagger }[\hat{X}_{k\sigma}\vec{Z}_{kl}^{\sigma}\hat{X}_{l\sigma}+\hat{Y}_{k\sigma} \vec{Z}_{kl}^{\sigma}\hat{Y}_{l\sigma}]\hat{G}_{t}^{(B_{1\ell})}\Big{)}.\]
where we used the permutation symmetries of the four-index tensor to go from the first line to the second line and we have defined the Givens operators for each spin-block as:
\[\hat{G}_{t,\sigma}^{(A_{1\ell})} =\exp\left(\sum_{p>k}[\log\mathbf{U}_{t}^{(1\ell)}]_{pk}(\hat{E}_ {pk}^{\sigma}-\hat{E}_{kp}^{\sigma})\right), \tag{113}\] \[\hat{G}_{t,\tau}^{(B_{1\ell})} =\exp\left(\sum_{q>l}[\log\mathbf{V}_{t}^{(1\ell)}]_{ql}(\hat{E}_ {ql}^{\tau}-\hat{E}_{lq}^{\tau})\right). \tag{114}\]
The corresponding \(\ell_{1}\) norm given by:
\[\lambda_{VP_{1\ell}}=\tfrac{1}{2}\sum_{tkl}|s_{t}^{(1\ell)}\beta_{tk}^{(1\ell) }\beta_{tl}^{(1\ell)}|. \tag{115}\]
\[\widehat{VP}_{2}/\widehat{VP}_{3}\]
The second and third terms have similar structure. Here, we provide the explicit form for the second term taking into account all of the permutations involved for \(\text{sym}(\Lambda_{q_{1}p_{2}}^{p_{1}p_{2}}\mathcal{S}_{q_{2}}^{p_{3}})\):
\[\widehat{VP}_{2} =-\tfrac{i}{8}\sum_{tkl}[s_{n}s_{t}^{(2)}\alpha_{tk}^{(2)}\beta_ {tl}^{(2)}]\tfrac{1}{8}\Big{(}\tilde{\gamma}_{k\sigma,0}^{(2\sigma)}\tilde{ \gamma}_{l\sigma,1}^{(2\sigma)}\Big{[}\tilde{\gamma}_{n\tau,0}^{(A_{S})} \tilde{\gamma}_{l\tau,1}^{(A_{2\beta})}\otimes\tilde{\gamma}_{l\tau,0}^{(B_{ \beta})}\tilde{\gamma}_{n\tau,1}^{(B_{S})}+\tilde{\gamma}_{n\tau,0}^{(A_{S})} \tilde{\gamma}_{l\tau,1}^{(A_{2\beta})}\otimes\tilde{\gamma}_{n\tau,0}^{(B_{ S})}\tilde{\gamma}_{l\tau,1}^{(B_{\beta})} \tag{116}\] \[+\tilde{\gamma}_{l\tau,0}^{(A_{2\beta})}\tilde{\gamma}_{n\tau,1}^ {(A_{S})}\otimes\tilde{\gamma}_{l\tau,0}^{(B_{2\beta})}\tilde{\gamma}_{n\tau,1}^{(B_{\beta})}+\tilde{\gamma}_{l\tau,0}^{(A_{2\beta})}\tilde{\gamma}_{n \tau,1}^{(A_{S})}\otimes\tilde{\gamma}_{n\tau,0}^{(B_{\beta})}\tilde{\gamma}_{ l\tau,1}^{(B_{\beta})}\Big{]}\] \[+\Big{[}\tilde{\gamma}_{n\tau,0}^{(A_{S})}\tilde{\gamma}_{l\tau,1}^{(A_{2\beta})}\otimes\tilde{\gamma}_{l\tau,0}^{(B_{2\beta})}\tilde{\gamma}_{ n\tau,1}^{(B_{S})}+\tilde{\gamma}_{n\tau,0}^{(A_{S})}\tilde{\gamma}_{l\tau,1}^{(A_{2 \beta})}\otimes\tilde{\gamma}_{n\tau,0}^{(A_{B})}\tilde{\gamma}_{l\tau,1}^{(B_ {\beta})}\] \[+\tilde{\gamma}_{l\tau,0}^{(A_{2\beta})}\tilde{\gamma}_{n\tau,1}^ {(A_{S})}\otimes\tilde{\gamma}_{l\tau,0}^{(B_{2\beta})}\tilde{\gamma}_{n\tau,1}^ {(B_{\beta})}+\tilde{\gamma}_{l\tau,0}^{(A_{2\beta})}\tilde{\gamma}_{n\tau,1}^ {(A_{S})}\otimes\tilde{\gamma}_{n\tau,0}^{(B_{S})}\tilde{\gamma}_{l\tau,1}^{(B_ {\beta})}\tilde{\gamma}_{k\sigma,0}^{(2\sigma)}\tilde{\gamma}_{k\sigma,1}^{(2 _{\alpha})}\Big{)},\]
where all of the orbital-transformed Majorana operators are explicitly defined as:
\[\tilde{\gamma}_{k\sigma,i}^{(2_{\alpha})} =\sum_{p}U_{kp}^{(2)}\hat{\gamma}_{p\sigma,i}, \tag{117}\] \[\tilde{\gamma}_{l\sigma,i}^{(A_{2\beta})} =\sum_{p}\tilde{U}_{tlp}^{(2)}\hat{\gamma}_{p\sigma,i},\] (118) \[\tilde{\gamma}_{l\sigma,i}^{(B_{2\beta})} =\sum_{q}V_{tlq}^{(2)}\hat{\gamma}_{q\sigma,i},\] (119) \[\tilde{\gamma}_{n\sigma,i}^{(A_{S})} =\sum_{p}U_{np}^{(s)}\hat{\gamma}_{p\sigma,i},\] (120) \[\tilde{\gamma}_{n\sigma,i}^{(B_{S})} =\sum_{q}V_{nq}^{(s)}\hat{\gamma}_{q\sigma,i}. \tag{121}\]
The corresponding \(\ell_{1}\) norm is given by:
\[\lambda_{VP_{2}}=\tfrac{1}{2}\lambda_{s}\sum_{tkl}|s_{t}^{(2)}\alpha_{tk}^{(2)} \beta_{tl}^{(2)}|. \tag{111}\]
The \(\ell_{1}\) norm of the third term is also found to be given by,
\[\lambda_{VP_{3}}=\tfrac{1}{2}\lambda_{s}\sum_{tkl}|s_{t}^{(3)}\alpha_{tk}^{(3) }\beta_{tl}^{(3)}|. \tag{112}\]
\[\widehat{VP}_{4}\]
**:**
The fourth term consists of the symmetric product of two operators, \(\hat{V}^{\prime}\) and \(\hat{P}^{\prime}\) defined above. The corresponding \(\ell_{1}\) norm is simply given by,
\[\lambda_{VP_{4}}=\lambda_{P}^{(\text{tf})}\sum_{tkl}|s_{t}^{(v)}\alpha_{tk}^{( A_{u})}\alpha_{tl}^{(B_{v})}|. \tag{113}\]
To implement this operator on the quantum computer, we have two options which we discuss in more detail in Appendix G.3. We can block encode the entire operator \(\widehat{VP}_{4}\) which will consist of \(L_{V}L_{P}\) total terms without truncation. Using the tensor factorization procedures from previous sections, we have found empirically that the number of terms of the electrostatic operator, \(L_{V}\), will scale between \(\mathcal{O}(N^{2})-\mathcal{O}(N^{3})\) depending on whether we increase the number of basis orbitals in the continuum limit (increasing the basis set size) or increase the number of active space orbitals with fixed filling fraction. Since \(L_{p}\) scales as \(\mathcal{O}(N^{2})\) then the quantum circuit for \(\widehat{VP}_{4}\) will have an overall complexity of \(\mathcal{O}(N^{5})\) using a naive implementation. For large system sizes, this implies that the compilation overhead for this operator will be substantial. While we have shown in the main manuscript that the eigenstate reflection circuit is the primary bottleneck in the SAPT-EVE algorithm, it is still worth understanding how we can improve this scaling to ensure that the block encoding of \(\widehat{VP}_{4}\) is not the rate-limiting step once improved eigenstate preparation techniques are developed in the future. As a result, we propose the self-inverse product of block encoded operators circuit outlined in more detail in Appendix G.3. While the total normalization constant (which affects the total run-time of the expectation value estimation algorithm) will remain equal to the product, \(\lambda_{a}\lambda_{b}\), the compilation cost is additive with respect to the two operators, \(H_{a}\) and \(H_{b}\), scaling asymptotically as \(\mathcal{O}(L_{a}+L_{b})\) rather than \(\mathcal{O}(L_{a}L_{b})\).
## Appendix G SAPT block encoding
block encodings provide a powerful framework for performing non-unitary operations on a quantum computer. In the following, we describe the block encoding of the electrostatic, exchange and dominant terms of the electrostatic-exchange operator. We provide the block encoding of all of the SAPT operators in the active space picture because they contain more terms than the full space picture. The full space operator block encoding represents a special case of the active space operator with some terms excluded. This Appendix is divided into four sections: Sec. G.1 presents an overview of the block encoding methodology, Sec. G.2 presents the data-loading oracle, Sec. G.3 presents the product of block encoded operator circuit proposed in this work, and Sec. G.4 outlines the full circuits for different SAPT operators.
## Appendix A Overview of block encoding framework
We consider the general block encoding of an operator \(\hat{F}\) expressed as a linear combination of unitaries (LCU),
\[\hat{F}=\sum_{n}\alpha_{n}H_{n} \tag{114}\]
where \(\alpha_{n}\) are real coefficients and \(H_{n}\) are unitary and Hermitian operators that are assumed to be self-inverse, \(H_{n}^{2}=\mathbb{1}\). This operator can be prepared on a quantum computer using standard block encoding techniques where
the Hermitian operator \(\hat{F}\) is embedded within a larger unitary operator, \(\mathcal{B}[\hat{F}]\). To perform the appropriate block encoding, we require:
\[\mathsf{PREPARE}_{F}\ket{0}\ket{\psi} =\sum_{n}\sqrt{\frac{\alpha_{n}}{\lambda}}\ket{n}\ket{\psi} \tag{122}\] \[\mathsf{SELECT}_{F} =\sum_{n}\ket{n}\!\bra{n}\otimes H_{n} \tag{123}\]
where \(\mathsf{PREPARE}_{F}\) is a unitary circuit that prepares the coefficients in the LCU representation of the operator \(\hat{F}\) in Eq. (121), and \(\mathsf{SELECT}_{F}\) is a reflection which coherently loads each unitary in the LCU representation of \(\hat{F}\)[57]. These subroutines satisfy the block encoding equation,
\[\bra{0}\mathsf{PREPARE}_{F}^{\dagger}\mathsf{SELECT}_{F}\,\mathsf{PREPARE}_{F }\ket{0}=\hat{F}/\lambda \tag{124}\]
Here, \(\lambda=\sum_{n}|\alpha_{n}|\) denotes the \(\ell_{1}\) norm for the vector of coefficients \(\{\alpha_{n}\}\) and is needed to ensure that the operator remains unitary. It is well established in subitization as well as other energy estimation algorithms with block encodings that the \(\ell_{1}\) norm significantly affects the resource cost of the algorithm. This will also be true for observable estimation algorithms such as the SAPT-EVE algorithm.
## 2 Data-loading Oracle
The efficient implementation of the PREPARE and SELECT oracles has been the subject of ongoing work over the past few years. In the following, we briefly review current state-of-the-art implementations which make use of the data-lookup oracle \(\mathsf{QROM}\) (quantum read-only memory) that performs the task [58, 35],
\[\mathsf{QROM}\ket{\mathbf{x}}\ket{0}=\ket{\mathbf{x}}\ket{a_{x}}. \tag{125}\]
As pointed out in [38], by utilizing additional qubits (beyond the standard \(\log L\) qubits), it is possible to reduce the overall Toffoli cost. The asymptotic Toffoli count for this type of \(\mathsf{QROM}\) is given by,
\[C_{T}=\mathcal{O}\Big{(}\Big{[}\frac{L}{k}\Big{]}+b(k-1)\Big{)}, \tag{126}\]
where \(k\) represents a tunable power-of-two number copies of the first register. Minimizing this quantity with respect to \(k\), we obtain:
\[k=\sqrt{\frac{L}{b}} \tag{127}\]
which requires \(\mathcal{O}(\sqrt{L})\) additional qubits. This optimization will be exploited throughout in order to minimize the overall runtime of the algorithm. We also take advantage of the efficient quantum adders, coherent alias sampling, and unprepare subroutines discussed in Babbush _et al._[35].
## 3 Product of Block Encoded Operators
Before proceeding with the exposition of the block encoding of the SAPT observables, we first discuss the block encoding of products of operators which is required for the electrostatic-exchange \(\overline{VP}\) operator. For illustration purposes, we consider two independent operators \(\hat{F}_{1}\) and \(\hat{F}_{2}\) given by,
\[\hat{F}_{1} =\sum_{n}^{L_{1}}\alpha_{n}^{(1)}H_{n}^{(1)}\;\;\text{and}\;\; \lambda_{F_{1}}=\sum_{n}|\alpha_{n}^{(1)}|, \tag{128}\] \[\hat{F}_{2} =\sum_{n}^{L_{2}}\alpha_{n}^{(2)}H_{n}^{(2)}\;\;\text{and}\;\; \lambda_{F_{2}}=\sum_{n}|\alpha_{n}^{(2)}|. \tag{129}\]
The block encoding of the product of two operators, \(\hat{F}_{2}\hat{F}_{1}\), can proceed in different ways. For instance, consider the following product of block encoding representation,
\[\texttt{PREPARE}^{\dagger}_{F_{2}F_{1}}\texttt{SELECT}_{F_{2}F_{1}} \texttt{PREPARE}_{F_{2}F_{1}}. \tag{111}\]
This block encoding uses two sets of non-contiguous auxiliary qubit registers with sizes \(\log L_{1}\) and \(\log L_{2}\) respectively. For instance, the select oracle is explicitly written as:
\[\texttt{SELECT}_{F_{2}F_{1}} = \sum_{\begin{subarray}{c}n=0\\ m=0\end{subarray}}^{n=L_{1}-1}|m\rangle\langle m|\otimes|n\rangle\langle n| \otimes H_{m}^{(2)}H_{n}^{(1)}. \tag{112}\]
On the other hand, the prepare oracle may be expressed as the Kronecker product of each individual prepare oracle, \(\texttt{PREPARE}_{F_{2}F_{1}}=\texttt{PREPARE}_{F_{2}}\otimes\texttt{PREPARE} _{F_{1}}\). Based on these definitions, one can verify that the projection with respect to the zero-state of both auxiliary registers recovers the appropriate product of operators, \(\hat{F}_{2}\hat{F}_{1}\),
\[\left\langle 0\right|_{2}\left\langle 0\right|_{1}\texttt{PREPARE}^{ \dagger}_{F_{2}}\texttt{PREPARE}^{\dagger}_{F_{1}}\texttt{SELECT}_{F_{2}F_{1 }}\texttt{PREPARE}_{F_{1}}\texttt{PREPARE}_{F_{2}}\left|0\right\rangle_{1} \left|0\right\rangle_{2}=\hat{F}_{2}\hat{F}_{1}/(\lambda_{F_{1}}\lambda_{F_{2}}). \tag{113}\]
This implementation, however, will have a QROM cost scaling as \(\mathcal{O}(L_{1}L_{2})\) which becomes prohibitive for large values of \(L_{1}\) and \(L_{2}\). Instead, we consider the product of block encoded operator (PBEO) circuit in Fig. 6.
This circuit is a simplification of the one first presented in the work by Von Burg _et al._[15]. To ensure that this operator remains self-inverse, we make a modification by adding a single auxiliary qubit initialized in the Hadamard basis as shown in Fig. 7, which is manifestly self-inverse.
The advantage of this approach comes from the asymptotic cost of block encoding which now scales as \(\mathcal{O}(L_{1}+L_{2})\), providing a substantial reduction Toffoli cost in the large \(L\) limit.
## IV SAPT operators
Due to the factorized representations of the various SAPT operators detailed in Appendix F.3, the circuit implementations of PREPARE and SELECT must be greatly modified from their most naive, canonical forms. Below we present the block encoding circuits for each SAPT term with qualitative descriptions of the reproduced circuit diagrams to provide intuition for the constructions. We highlight the following notational conventions used throughout the circuit diagrams:
Figure 6: Not self-inverse block encoding of a product of operators. Product of normalized hermitian operators \(\hat{F}_{2}\) and \(\hat{F}_{1}\) (with \(\ell_{1}\) norms \(\lambda_{F_{2}}\), \(\lambda_{F_{1}}\), respectively), that each act on a register sim. The block encoding of the product features the block encoding of each operator, such that a register enc of auxiliary qubits is shared. When the number of auxiliary qubits to encode \(\hat{F}_{j}\) is \(\log L_{j}\), then enc only needs to hold enough qubits to encode the operators separately, i.e. a total of \(\max\{\log L_{1},\log L_{2}\}\) qubits, rather than their combined cost \(\log L_{1}+\log L_{2}\). This comes at the cost of only one additional qubit labeled aux, as well as a Toffoli gate over the entire enc register.
Figure 7: Self-inverse circuit for block encoding of an operator product. Using the non-self-inverse block encodings of a product of normalized hermitian operators \(\hat{F}_{2}\) and \(\hat{F}_{1}\) (with \(\ell_{1}\) norms \(\lambda_{F_{2}}\), \(\lambda_{F_{1}}\), respectively), we can turn said block encoding into a self-inverse encoding of \((\hat{F}_{1}\hat{F}_{2}+\hat{F}_{2}\hat{F}_{1})/2\) using one query to the controlled versions of each – the original block encoding and its inverse. The cost for this is an additional qubit, which has to be initialized on the Hadamard basis.
* We use hexagonal controls to indicate "multiplexing", or "uniformly-controlling" a routine [59].
* We distinguish temporary registers and persistent qubit registers whose state must remain coherent throughout the entirety of an algorithm by where the drawn wires begin and end; persistent registers (such as index registers and system qubits) extend the entire length of a diagram, while temporary qubits begin at particular subroutines that allocate them and terminate at subroutines that deallocate them.
* For example, data-loaders allocate a number of so-called clean auxiliary qubits, and the uncomputation of data-loaders deallocate (measure out) those same qubits; in these diagrams, the target registers where data is loaded will never flow into a compute portion of a data-loader or flow out of the uncompute portion of a data-loader, but rather, will emerge and terminate (respectively) from these routines. As such, these are "temporary" qubit registers.
* As an extension of the Gidney elbow notation introduced in Ref. [59] for a Toffoli gate targeting a newly-allocated clean ancilla qubit, we also depict certain subroutines with emerging elbows to denote when a routine ANDs the result of a computation (via a Toffoli gate) onto a newly-allocated, single clean ancillary qubit.
* For example, a comparator circuit flips the state of a zero qubit depending on the result of a particular inequality check. This "result" qubit can be allocated by this comparator, and so we depict this resulting qubit and the Toffoli that targets it as an elbow emerging from a comparator box.
## VI Block Encoding Circuit for the Factorized Electrostatics Operator \(\hat{V}\)
Just as the factorized form of the electrostatics operator (after a Jordan-Wigner transformation) shown in Eq. (111) is analogous to the double factorization procedure used for electronic structure Hamiltonians, the block encoding circuit in Figure 8 is analogous to the double factorization circuit introduced in von Burg _et al._[15] and modified in Lee _et al._[8]. In fact, the circuit largely resembles Fig. 16 in Appendix C of Lee _et al._[8].
Only a handful of the qubit registers in the circuit are persistent qubit registers; the rest are temporary auxiliary qubits used for loading information, and conditioning subsequent operations on those registers. The persistent registers are: a register that indexes over the first rank (logarithmic in \(N_{1}\leq\max(N_{A}^{2},N_{B}^{2})\)), a register that indexes over the second internal rank for monomer A (logarithmic in \(N_{2}^{(A)}\leq N_{A}\)), a register that indexes over the second internal rank for monomer B (logarithmic in \(N_{2}^{B}\leq N_{B}\)), a single ancilla used to index over spin for each monomer, and a pair of registers representing alpha (up) and beta (down) spin-orbitals of monomers A and B.
In spirit, the circuit proceeds just as the canonical double factorization circuit [8]: the PREPARE portion begins by preparing a superposition over the coefficients indexed by the outer rank on one register, then uses this register to load data necessary for state preparation on a separate register for the inner rank of each leaf in the double factorization, and performs this multiplexed state preparation over the inner rank. The SELECT portion proceeds by coherently loading angles for basis-transforming Givens rotations indexed by the inner, second rank, and then applies these Givens rotations onto the system qubits before applying single-qubit Pauli Zs.
The apparent differences between the original double factorization circuit [8] and the one depicted in this work arise because we are dealing with two monomers rather than a single set of system qubits. For this reason, we must coherently load two sets of data for the inner rank state preparations over each monomer individually, and for rotations that target each monomer's system qubits register. Additionally, rather than only applying single-qubit Pauli Z gates on the system qubits after the Givens rotations, we must select between \(Z_{A}\otimes Z_{B}\), \(Z_{A}\otimes\mathbb{1}\), and \(\mathbb{1}\otimes Z_{B}\). To do this, we introduce two additional auxiliary qubits to act as one-body, single monomer flags to either turn on or turn off the application of a Pauli Z on the opposite monomer (top wires of the data\({}_{A}\) and data\({}_{B}\) subroutines).
The bulk of the cost for this block encoding is in loading the Givens rotation angles for monomers A and B, and then executing these rotations. For more details on subroutine-wise costs, see Ref. [8].
## VII Block Encoding Circuit for the Factorized Exchange Operator \(\hat{P}\)
The factorization circuit used for the exchange circuit in Fig. 9 is similar in spirit to the single factorization circuit introduced in Berry _et al._[34] and modified in Lee _et al._[8]. Just as in that case, the PREPARE circuit prepares a superposition over the coefficients resulting from the factorization (in this case, the two indices obtained from the single-rank decomposition of both monomers), and then the SELECT portion must choose between a number of different Pauli strings. The differences here again owe to the fact that we are dealing with two monomers rather than a single system register.
It is worth noting that in the full space formulation, the Givens operators for the inter-monomer and intra-monomer terms are equivalent, i.e. \(G_{2,X}=G_{1,X}\) where \(X\in\{A,B\}\) (see Eq. (111)), while in the active space case, they are different, i.e. \(G_{2,X}\neq G_{1,X}\).
Furthermore, the Pauli strings are implemented using a similar control logic to that detailed in Ref. [34]. We have the operations \(\vec{Z}Y_{k}\) and \(\vec{Z}X_{l}\) targeting the same sets of qubits (depicted in Fig. 9 of Ref. [35]), which implement \(X\vec{Z}X\), \(Y\vec{Z}Y\), or \(Z\) (up to signs and phases) depending on whether \(k<l\), \(k>l\), or \(k=l\). The signs and phases are cleaned up by the Pauli \(Z\) and \(S\) gates in the diagram. To distinguish the one-body single-monomer cases (\(Z_{k}\otimes 1\) and \(\mathbb{1}\otimes Z_{l}\)), we also introduce additional one-body single-monomer temporary qubits emerging (or terminating) at the joint state preparation (or its conjugate). These auxiliary qubits conditionally control the application of a \(k\)-indexed \(Z_{k}\) operation or an \(l\)-indexed \(Z_{l}\) operation, also controlled on \(k\neq l\) (zero-controls at the top of the diagram).
Here the only persistent qubit registers are the two index registers which sum over the first rank indices for both monomers, a single ancilla in the plus state to symmetrize over both index registers after joint state preparation, a single ancilla plus state to index over spin, another single ancilla on which we apply a Pauli Z to correct a phase on the resulting Pauli strings, and the up-and-down spin-orbital registers for both monomers.
Figure 8: block encoding of the electrostatic operator. Here, the gates labeled ‘\(|\vec{\lambda}\rangle\) ’, ‘\(|f_{k}\rangle\)’ and ‘\(|f_{l}\rangle\)’ implement linear superpositions, while hexagonal gates are data loaders multiplexing over indices such as \(t\), \(k\) and \(l\). Controlled gates labeled ‘\(+\)’ (‘\(-\)’) denote in place additions (subtractions) of the control number to the target number. The labels \(\Theta_{A}\) and \(\Theta_{B}\) are sets of angles for the Givens rotations \(G_{A}\) and \(G_{B}\). The \(S\) gate acts on the computational basis as \(S=\operatorname{diag}(1,i)\). We also highlight the PREPARE and SELECT sections of this circuit near the top of the figure. H is the Hadamard gate.
Appendix B Block encoding circuit for the factorized electrostatic-exchange operator \(\widetilde{VP}\)
As detailed in Appendix F.5, the factorization for the complete electrostatic-exchange operator is quite verbose. For illustration purposes, here we present the exact block encoding of only the \(\widetilde{VP}_{1\ell}\) and \(\widetilde{VP}_{4}\) terms. The latter is the asymptotically most expensive part of the block encoding. We then describe how to combine all the resulting terms to realize a block encoding of the entire electrostatic-exchange operator \(\widetilde{VP}\).
### \(\widehat{VP_{1\ell}}\)
The complete block encoding circuit for the electrostatic-exchange operator \(\widetilde{VP}_{1\ell}\) from Eq. (F88) is shown in Fig. 10. This operator is conceptually a hybrid between the electrostatic and exchange operators discussed in the previous sections. Compared to the previous operators, \(\widetilde{VP}_{1\ell}\) does not contain any one-body terms. However, unlike the exchange operator, it requires a double factorization procedure where the first factorization is denoted by the index \(t\) and the second factorization(s) are denoted by \(k\) and \(l\) respectively, where \(k\) and \(l\) are analogous to the \(k\) and \(l\) indices in the exchange operator.
The only persistent registers are the ones holding the indices over the various factorization indices (\(t\), \(k\), and \(l\)), a plus state to symmetrize over the \(k\) and \(l\) registers, qubits labeled \(q\) and \(\theta\) to help implement different Pauli strings in the case distinction (analogous to the case distinction discussed for the exchange operator), a plus state to symmetrize over spin, and the pairs of up and down spin-orbital system qubits for each monomer.
Just as in the exchange operator block encoding, we need control logic to distinguish various Pauli string cases. Here the case distinction is exactly like the one described in Ref. [34], except we have two monomers. For each monomer, we must apply \(X\vec{Z}X+Y\vec{Z}Y\) and \(\mathbb{1}-Z\), which we can do via inequality checks for \(k\) and \(l\) and using the \(q\) wire in a plus state to choose between \(Z\) and \(\mathbb{1}\) for the case of \(k=l\). The controlled Pauli Z and controlled S are used to correct for the sign and imaginary unit that arise from the product of the appropriate Pauli operators.
Figure 9: block encoding of exchange operator. Here, the gates labeled ‘\(|s_{k}s_{l}\rangle\)’ implement linear superpositions of data entries \(k\) and \(l\), while hexagonal gates are data loaders multiplexing over said indices. The gates labeled ‘\(k\stackrel{{?}}{{=}}l\)’ compare the numbers in the \(k\) and \(l\) registers with an adder. Wires born from and ending with rectangular gates denote Gidney elbows, see Ref. [35] Figure 4. The gates labeled ‘\(G_{i,A}\)’ and ‘\(G_{j,B}\)’ are Givens rotations of monomers A and B, where \(i,j\in\{1,2\}\) label whether it corresponds to the intra-monomer or inter-monomer Givens rotations respectively. The \(S\) gate acts on the computational basis as \(S=\operatorname{diag}(1,i)\). We have also highlighted the PREPARE and SELECT sections of this circuit near the top of the figure.
### \(\widehat{VP_{4}}\)
Finally, we consider the implementation of, \(\widehat{VP_{4}}=\frac{1}{2}(\hat{V}\hat{P}+\hat{P}\hat{V})\), written as the symmetric product of two operators. Using the established block encoding circuit primitives for \(\mathcal{B}[\hat{V}]\) and \(\mathcal{B}[\hat{P}]\) found above, it is then possible to use the circuits in Figs. 6 and 7 to implement the self-inverse product of block encoding of \(\widehat{VP_{4}}\). Explicitly, the complete circuit for the \(\widehat{VP_{4}}\) term is given in Fig. 11.
Figure 11: block encoding of the electrostatic-exchange operator, \(\widehat{VP}_{4}\). The symmetric and self-inverse product of \(\hat{V}\) and \(\hat{P}\) is block encoded on the simulator registers of monomers A and B, with a number \(\log L_{\max}+2\) of auxiliary qubits, where \(L_{\max}=\max\{L_{V},L_{P}\}\), meaning that the two types of block encodings reuse each other’s auxiliary qubits. The price of that is just one extra qubit, plus the qubit to implement the superposition. The circuit calls three instances of the block encoding circuits, where we can make the choice for the more expensive one, here: \(\mathcal{B}[\hat{V}/\lambda_{V}]\), to be called only once.
Combining all terms
In order to implement the complete electrostatic-exchange operator, we recall that the following terms are required:
\[\widehat{VP}_{\text{s}}=\widehat{VP}_{A}+\widehat{VP}_{B}+\widehat{VP}_{\text{1 m}}+\widehat{VP}_{\text{1\ell}}+\widehat{VP}_{2}+\widehat{VP}_{3}+\widehat{VP}_{4}. \tag{111}\]
The first two terms correspond to monomer-only terms that are exactly analogous to the standard monomer Hamiltonians which use the standard double factorization procedure for its implementation. The \(\widehat{VP}_{\text{1m}}\) term corresponds to a unique term that only arises in the active space picture and is exactly equal to the electrostatic operator without any 1-body contributions. We have also accounted for \(\widehat{VP}_{\text{1\ell}}\) and \(\widehat{VP}_{4}\) in the previous paragraphs. As discussed in Appendix F.5, in the limit of a large number of orbitals, this operator will always correspond to the most dominant in terms of Toffoli cost. To implement the complete electrostatic-exchange operator, we use the following quantum circuit primitive from Ref. [15]:
Which only requires three additional ancillae in order to implement all seven terms appropriately.
## Appendix H Benchmark set for small molecules
The benchmark molecules have been chosen according to previous benchmark studies found in the classical SAPT literature [60]. The XYZ files for all of the small benchmark cases may be found in in the benchmark energy and geometry database for noncovalent complexes: [http://www.begdb.org](http://www.begdb.org). Resource estimates for the SAPT-EVE algorithm for the small molecule benchmark system as well as the heme-artemisinin system are also presented here for completeness.
## Appendix I Benchmark for drug design: heme-artemisinin
### Computational details
The initial pre-complex structure of heme and artemisinin was prepared using the MOE suit employing the Amber10:EHT force field [61]. Unrestricted density functional theory calculations (DFT) [62; 63] were performed to obtain the transition state (TS) structure using Gaussian 16, Revision C.01 (G16) [64]. The \(\omega\)B97X-D functional [65] and a mixed basis set (Fe: def2-TZVP [66], N: def2-SVPD and def2-SVP for C, O, and H) were used with standard G16 settings for SCF convergence, exchange-correlation grid and geometry optimization convergence parameters. The level shift was enabled (\(vshift=1000\)) to accelerate the SCF convergence. Relaxed potential-energy surface scans along the Fe-O bond prepared the initial structure of the transition state. The transition state was subsequently located using the G16 standard TS search algorithm. We used ROHF in combination with the cc-pVDZ [67; 68; 69] basis to generate orbitals and integrals for the active space selection and subsequent DMRG [70; 71] calculations using the Pyscf [72] and block2 [73] packages. All important data describing the molecular systems used in this paper can be found on the open access repository [74].
## Appendix II Active space selection
The active space was generated for each monomer separately. For heme, we followed the approach from Ref. [53] and included Fe 3d and 4d orbitals, \(\pi\) and \(\pi^{*}\) orbitals of nitrogen and carbon in the heme ring, and bonding and
Figure 12: Linear combination of block encodings. The block encoding of a hermitian operator \(\sum_{j}F_{j}/||\vec{a}||_{1}\) (where \(\alpha_{j}\) is the one-norm of \(F_{j}\), and \(F_{j}\) is hermitian) acting on a register \(\mathsf{sim}\), is block encoded by preparing the state \(\sum_{j}a_{j}|j\rangle\) in one auxiliary qubit register \(\mathsf{aux}\,\mathsf{1}\), such that \(a_{j}=|\alpha_{j}|/||\vec{a}||_{1}\), and then multiplexing the block encoding of the \(F_{j}/\alpha_{j}\) over the computational basis state \(|j\rangle\) in \(\mathsf{aux}\,\mathsf{1}\) using the auxiliary register \(\mathsf{aux}\,\mathsf{2}\) to encode the operator.
antibonding Fe-N orbitals. This resulted in a (42o,42e) active space. We picked the orbitals individually from the Pipek-Mezej localized orbitals based on a high-spin ROHF reference state. The natural orbital occupation from DMRG calculation are plotted in Fig. 14. For the dimer-centered basis, we started with the converged monomer-centered ROHF density matrix to converge the ROHF calculation for which the same procedure was repeated.
For artemisinin, we have used the atomic valence active space (AVAS) method [75]. We have also included all \(2s\) and \(2p\) orbitals of the peroxo moiety (C\(-\)O\(-\)O\(-\)C) in the selection prompt. In addition, we included four more atoms close to the moiety, which play an important role during the decomposition (see Fig. 13, colored in red). This yields an active of (40o, 48e) for the monomer-centered basis, the natural orbital occupation numbers are plotted in Fig. 14. The same procedure was repeated for the dimer-centered basis.
The active space for the dimer system has been generated by first selecting the artemisinin orbitals through AVAS from a high-spin ROHF reference. The remaining orbitals were localized through the Pipek-Mezej routine and the
Figure 14: Natural occupation numbers vs. the index of the active space orbitals from DMRG calculations; left: the heme monomer (bond dimension \(M=2000\)); right: artemisinin monomer (\(M=1000\)). Deviation from integer values \((0,1,2)\) indicates a strongly correlated nature that is not well described by single-reference wavefunctions. Both the heme and the artemisinin molecule have orbitals around the Fermi-level that substantially deviate from integer occupation, while the heme natural orbital occupations also show long tails of smaller corrections away from the Fermi-level that would require large active space calculations to describe correctly.
Figure 13: (Top) Proposed mechanism of the decomposition of artemisinin with heme. The key initial reaction step involves the coordination of artemisinin to the Fe center and a homolytic cleavage of the O\(-\)O bond. (Bottom) Summary of the key orbitals for both artemisinin and heme in order to describe the complete decomposition mechanism, which must be included in the active space (see main text for a detailed justification).
same set of orbitals from the heme monomer were then selected by hand. This yields a total active space of (82o, 90e) for the supersystem, which is intractable for most quantum chemistry methods. The active space choice is not unique, and FTQC simulations would be required actually to confirm this choice. However, we believe by carefully analyzing each monomer DMRG result, using chemical intuition and established mechanistic insights, that this is a representable active space for studying the initial reaction step for the artemisinin decomposition.
For the resource estimates, we also require the energy gap between the ground state of the Hamiltonian and the first excited state and the overlap with the initial state. We do not have access to the full configuration interaction energies or states but we use accurate DMRG calculations as a proxy. In Table 2, we list the results of our active space calculations for the heme and artemisinin monomers in the monomer-centered and dimer-centered bases. We chose bond dimension \(M=2000\) for the heme and \(M=1000\) for artemisinin. The ground states and excited states are found in the same spin sector: \(S=1\) for the heme and \(S=0\) for artemisinin.
## Appendix J Supermolecular resource data
Supermolecular resource data is provided in Tab. 3 for the small molecule benchmark system in the all-electron (full space) picture as well as the active-space-optimized heme-artemisinin dimer system. In all cases, the supermolecular (SM) resource estimation consists of three standard quantum phase estimation runs with the block-encoded double-factorized Hamiltonians for each system (for more details, see Refs. [8; 15]). Note that we have performed an appropriate error budgeting procedure to reduce the overall Toffoli gate cost of the total supermolecular calculation.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline System & Basis & \(E_{0}\)[Ha] & \(E_{1}\)[Ha] & \(\Delta\)[Ha] & \(\left|\langle\Psi_{\mathrm{ROHF}}|\Psi_{\mathrm{DMRG}}\rangle\right|^{2}\) \\ \hline Heme & MCB & -2469.858136 & -2469.851239 & 0.006897 & 0.068174 \\ & DCB & -2469.858690 & -2469.851043 & 0.007646 & 0.067537 \\ \hline Artemisinin & MCB & -916.399835 & -916.278659 & 0.121176 & 0.800254 \\ & DCB & -916.405542 & -916.284444 & 0.121098 & 0.800654 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Molecular data from DMRG calculations for benchmark heme artemisinin system. \(E_{0}/E_{1}\) are ground and first-excited state energies measured in Hartrees (Ha). \(\Delta=E_{1}-E_{0}\) is the energy gap. The last column represents the overlap between the restricted open-shell Hartree Fock wavefunction and the DMRG wavefunction. Data is provided on both monomer-centered basis (MCB) and dimer-centered basis (DCB).
\begin{table}
\begin{tabular}{l l
## Appendix K Comprehensive Resource Data
Figure 15: Quantum resource estimation in the active space picture. (a) Toffoli gate cost for target heme-artemisinin system as a function of active space spatial orbitals. The active space orbitals are not optimized and are symmetrically chosen around the Fermi level. (b) Toffoli and logical qubit dependence of the SAPT-EVE algorithm for the three SAPT observables, \(\hat{F}=\{\hat{V},\hat{P},\widetilde{VP}\}\), for the non-optimal active space system on the left. The size of the data points corresponds to the size of the \(\ell_{1}\) norm for the corresponding observable. The numbers on the right figure correspond to the number of active space orbitals.
Figure 16: SAPT-EVE algorithm call graph for the electrostatic operator, \(\hat{V}\). The total call graph depicts the estimated resources required for computing all of the SAPT operators for heme-artemisinin benchmark system using QSP-EVE. The graph displays the distribution of the costs among various subroutines, and where B[H] is the block encoding of the Hamiltonian and \(\mathcal{R}_{\tau}\) is the controlled block encoding of the observable. All costs are given in terms of Toffoli gate counts, with each subroutine node depicting the per-call cost, the total number of calls and cost when taken over the full algorithm (i.e. overall parent calls). Note that some routines are deliberately omitted in the count, as they contribute very little. Edge numbers define the number of calls of the target routine within a single call of its parent routine. Darker shading indicates a greater total gate cost for the subroutine. The ASP comprises several low-precision phase estimation routines in the repeat-until-success circuit.
Figure 17: SAPT-EVE algorithm call graph for the exchange operator, \(\hat{P}\). The total call graph depicts the estimated resources required for computing all of the SAPT operators for heme-artemisinin benchmark system using QSP-EVE. The graph displays the distribution of the costs among various subroutines, and where B[H] is the block encoding of the Hamiltonian and \(\mathcal{R}_{\tau}\) is the controlled block encoding of the observable. All costs are given in terms of Toffoli gate counts, with each subroutine node depicting the per-call cost, the total number of calls and cost when taken over the full algorithm (i.e. over all parent calls). Note that some routines are deliberately omitted in the count, as they contribute very little. Edge numbers define the number of calls of the target routine within a single call of its parent routine. Darker shading indicates a greater total gate cost for the subroutine. The ASP comprises several low-precision phase estimation routines in the repeat-until-success circuit.
## References |
2308.01484 | Revival of antibiskyrmionic magnetic phases in bilayer NiI$_2$ | Magnetic skyrmions are topologically protected spin textures with potential
applications in memory and logic devices. Skyrmions have been commonly observed
in systems with Dzyaloshinskii-Moriya interaction due to broken inversion
symmetry. Yet, recent studies suggest that skyrmions can also be stabilized in
systems with inversion symmetry such as Ni-based dihalides due to magnetic
frustration. In this article, we employ atomistic simulations to investigate
chiral magnetic phases in bilayers of NiI$_2$ and NiBr$_2$. We show that the
antiferromagnetic interlayer coupling introduces an additional magnetic
frustration and gives rise to a variety of novel spin textures with different
topological charges. Specifically for NiI$_2$, we observe that the skyrmions
with the in-plane component of spins wrapping around twice (biskyrmions) have
an enhanced stability compared to the monolayer case. We also study the
polarization induced by the non-colinear magnetic order in NiI$_2$ bilayers and
show that the polarization of the topologically nontrivial phases is negligible
compared to the spiral phases. Thus, we conclude that polarization measurements
can be an indirect route for detecting skyrmions in upcoming experiments. | Jyotirish Das, Muhammad Akram, Onur Erten | 2023-08-03T00:45:55Z | http://arxiv.org/abs/2308.01484v1 | # Revival of antibiskyrmionic magnetic phases in bilayer NiI\({}_{2}\)
###### Abstract
Magnetic skyrmions are topologically protected spin textures with potential applications in memory and logic devices. Skyrmions have been commonly observed in systems with Dzyaloshinskii-Moriya interaction due to broken inversion symmetry. Yet, recent studies suggest that skyrmions can also be stabilized in systems with inversion symmetry such as Ni-based dihalides due to magnetic frustration. In this article, we employ atomistic simulations to investigate chiral magnetic phases in bilayers of NiI\({}_{2}\) and NiBr\({}_{2}\). We show that the antiferromagnetic interlayer coupling introduces an additional magnetic frustration and gives rise to a variety of novel spin textures with different topological charges. Specifically for NiI\({}_{2}\), we observe that the skyrmions with the in-plane component of spins wrapping around twice (biskyrions) have an enhanced stability compared to the monolayer case. We also study the polarization induced by the non-colinear magnetic order in NiI\({}_{2}\) bilayers and show that the polarization of the topologically nontrivial phases is negligible compared to the spiral phases. Thus, we conclude that polarization measurements can be an indirect route for detecting skyrmions in upcoming experiments.
## I Introduction
Nickel dihalides, NiX\({}_{2}\) (X = I, Cl, Br), belong to a class of insulating van der Waals (vdW) magnets with transition temperatures ranging from \(T_{c}=52\)K in NiCl\({}_{2}\) and NiBr\({}_{2}\) up to \(T_{c}=76\)K in NiI\({}_{2}\) in the bulk [1; 2; 3]. They exhibit a variety of magnetic phases including ferromagnetic (FM), antiferromagnetic (AFM) and spiral (Sp) ground states [3]. The Sp phases in NiI\({}_{2}\) and NiBr\({}_{2}\) are of particular interest as they break inversion symmetry and thus exhibit finite polarization, leading to multiferroic properties [3; 4]. Monolayers of NiX\({}_{2}\) can be obtained by suitable exfoliation methods and both magnetism and multiferroic properties survive down to single layer [5; 6]. Recent theoretical studies [7] predict that NiX\({}_{2}\) monolayers may also host chiral magnetic phases such as skyrmions (SkX) and antibiskyrmions (A2Sk) even in the absence of Dzyaloshinskii-Moriya interaction (DMI). Skyrmions are topologically protected vortex-like magnetic textures with potential applications in logic and memory devices [8; 9; 10; 11]. In nickel dihalides, these phases are predicted to be stabilized by a combination of magnetic frustration, anisotropic exchange [7] and external magnetic field (B). However, atomistic simulations for NiI\({}_{2}\) monolayer indicate that the A2Sk and Sp phases are quite close in energy [12]. Indeed, circular dichroic Raman measurements show that multiferroic order persists at monolayers, consistent with a Sp ground state [5]. Biskyrmions have twice the topological charge of skyrmions. Experimentally, they have only been observed in a handful of compounds including La\({}_{2-2x}\)Sr\({}_{1+2x}\)Mn\({}_{2}\)O\({}_{7}\)[13], MnNiGa [14], Cr\({}_{11}\)Ge\({}_{19}\)[15], MnPdGa [16] and Nd\({}_{2}\)Co\({}_{17}\)[17]. However some of these observations can be misleading since the topologically trivial magnetic bubbles can show similar images under Lorentz microscopy [18; 19; 20]. In light of these observations, it is important to find new platforms which can stabilize biskyrmions.
Due to weak interlayer bonding, vdW materials can be arranged in different stacking patterns and further manipulated through twisting to create moire superlattices [21; 22]. In both cases, completely new phenomena that is not possible to obtain in monolayers can be achieved [23]. A range of new non-coplanar phases are predicted [24; 25; 26; 27; 28; 29] for moire magnets and some of these phases have been observed experimentally [30; 31; 32]. For NiX\({}_{2}\) bilayers, interlayer interactions are antiferromagnetic and therefore compete with the external magnetic field and introduce additional magnetic frustration. Motivated by this observation, we study the phase diagram of NiI\({}_{2}\) and NiBr\({}_{2}\) bilayers in rhombohedral and AA
Figure 1: (a) Front view and (b) side view of NiX\({}_{2}\) (X = I, Br) monolayer. NiX\({}_{2}\) bilayer with rhombohedral stacking, (c) side view with X atoms and (d) front view showing only the halide atoms.
stacking patterns via atomistic Landau-Lifshitz-Gilbert (LLG) simulations as a function of interlayer exchange and external magnetic field. Our main results show that: (i) for NiI\({}_{2}\), the interlayer coupling narrows the region of the SkX phase and promotes antibskyrmionic phases with different topological charges for both AA and rhombohedral stacking orders. (ii) As a result, for the range of _ab initio_ estimates for interlayer exchange [5; 33], it is feasible to stabilize A2Sk phases in NiI\({}_{2}\) bilayers compared to monolayer systems. (iii) Larger values of interlayer exchange leads to the complete suppression of A2Sk/SkX phases and gives rise to the formation of Sp phases instead. (iv) Due to negligible anisotropic exchange, skyrmions are completely suppressed in NiBr\({}_{2}\) for realistic interlayer exchange parameters. (v) In NiI\({}_{2}\), the electric polarization induced by the non-colinear magnetic order increases with the magnetic field in the Sp phase and is negligible in the A2Sk/SkX phases. This prediction can be applied to deduce the skyrmionic phases indirectly.
The rest of the article is organized as follows. First, we introduce the effective spin Hamiltonian and overview the results of atomistic simulations for the NiI\({}_{2}\) monolayer. Next, we discuss the phase diagram of NiI\({}_{2}\) and NiBr\({}_{2}\) bilayers in rhombohedral and AA stacking patterns. We conclude with a discussion on the induced polarization due to magnetic order and experimental signatures of the phase diagram.
## II Microscopic magnetic model
NiX\({}_{2}\) belongs to \(R\bar{3}m\) space group which is a centro-symmetric rhombohedral structure. In a monolayer, Ni\({}^{2+}\) ions (3d\({}^{8}\), S=1) form a triangular lattice with X\({}^{-}\) (X=I, Cl, Br) ions are arranged above and below the plane as shown in Fig. 1. The magnetic interactions between localized Ni spins (\(\mathbf{s}_{i}^{\dagger}\)) bilayers can be modeled by the following spin Hamiltonian [7]:
\[H = \frac{1}{2}\sum_{i\neq j,l}\mathbf{s}_{i}^{l}\cdot\mathbf{J}_{ij} \cdot\mathbf{s}_{j}^{l}+J^{\perp}\sum_{\langle ij\rangle}\mathbf{s}_{i}^{1} \cdot\mathbf{s}_{j}^{2} \tag{1}\] \[+ \sum_{i,l}\mathbf{s}_{i}^{l}\cdot\mathbf{A}_{i}\cdot\mathbf{s}_{ i}^{l}-\sum_{i,l}\mathbf{B}\cdot\mathbf{s}_{i}^{l}\,.\]
Here, \(l=1,2\) denotes the layer index. \(\mathbf{J}_{ij}\) is the tensor for the exchange coupling interactions which can be decomposed into an isotropic coupling term and an anisotropic one, the latter also referred to as the two-site anisotropy. \(J^{\perp}\) is the antiferromagnetic interlayer exchange, \(\mathbf{A}_{i}\) is the single-ion anisotropy and \(\mathbf{B}\) is the external magnetic field. For the intralayer exchange parameters, we use the values obtained from first principle calculations in Ref. [7] (See Appendix A for more details).
The magnetic frustration in nickel dihalides originate from a strong ferromagnetic nearest-neighbor exchange interaction (J\({}^{1\rm{iso}}\)) with a comparable antiferromagnetic third nearest-neighbor exchange (J\({}^{3\rm{iso}}\)). Theoretical studies show that the interplay of the magnetic frustration and exchange anisotropy is the key ingredient for the stabilization of topologically protected spin textures in monolayer NiX\({}_{2}\)[7; 12].
## III Results and discussion
### Overview of the phase diagram of NiI\({}_{2}\) monolayer
The phase diagram of monolayer NiI\({}_{2}\) (eq. 1) has been studied via Monte Carlo simulations [7]. In a previous work [12], we have extended these results to the Janus counterparts via atomistic LLG simulations. The magnetic unit cell (\(L\times L\)) for these simulations is estimated by the Luttinger-Tisza (LT) method [34] and we verify these values by performing system-size dependent calculations (see Appendix C for more details). Our atomistic simulations show that magnetic field up to 29.3T give rise to a Sp phase, which is energetically very close to an -A2Sk phase, (\(\Delta E<0.05\) meV). For intermediate magnetic fields, \(29.3T<B<69.9T\), we obtain a SkX phase. A further increase in the magnetic field results in a Sp phase which adiabatically connects to a ferromagnet.
Figure 2: Skyrmion and antibiskyrmion phases in NiI\({}_{2}\) monolayer: (a)-A2Sk at B\({}_{z}\) = 36.2 T and (b) SkX at B\({}_{z}\) = 18.1 T. The area marked by red denotes the magnetic unit cell. The arrows denote the in-plane component while the colormap denotes the out-of-plane component of the spin.
The different magnetic phases are distinguished by the topological charge Q measured over an \(8\times 8\) magnetic unit cell. A2Sk, SkX and Sp phases have \(Q=6\), 3, 0 respectively. Representative SkX and A2Sk spin textures are shown in Fig. 2. The spin structure factor \(S(\mathbf{q})\) is also a useful indicator to distinguish different phases.
### Magnetic phase diagram of NiI\({}_{2}\) bilayer
Even though NiI\({}_{2}\) has rhombohedral stacking in the bulk, _ab initio_ calculations show that the energy differences between the rhombohedral and AA stacking patterns for bilayer systems are relatively small [35], \(\Delta E\simeq 0.3\) meV. Therefore, we study the magnetic phase diagrams of both stacking orders by using atomistic LLG simulations, following a method similar to the monolayer case. The LT method shows that the antiferromagnetic interlayer exchange does not affect the size of the magnetic unit cell. The phase diagram of NiI\({}_{2}\) for rhombohedral and AA stacking patterns are presented in Fig. 3(a) and (b) respectively. The dashed red lines in Fig. 3(a) indicate the two values for interlayer coupling obtained from _ab initio_ calculations in Refs. [33] and [5]. To the best of our knowledge, there is no _ab initio_ estimate for the interlayer exchange in AA stacking. The magnetic ground states and the corresponding topological charge on each layer are indicated. The phase -A2Sk is an antibiskyrmion with a topological charge \(Q=-6\) per unit cell. A2Sk\({}^{\star}\), A2Sk\({}^{\star\star}\) and A2Sk\({}^{\star\star\star}\) are antibiskyrmionic intermediate phases with varying topological charge \(4,~{}5\) and \(5.5\). We call these phases antibiskyrmionic since the spin texture closely resembles the A2Sk phase. For both stacking patterns, the phase diagrams show that topologically non-trivial phases such as SkX and A2Sk are sandwiched between the Sp phases. There are two distinguishing features of this phase diagram. Firstly, the antiferromagnetic interlayer coupling enables the revival of the A2Sk phase which was suppressed by the Sp phase in the monolayer case. This comes at the cost of inhibiting the SkX phase at an interlayer coupling strength of \(J^{\perp}/J^{\rm{iso}}\sim 0.2\) and \(0.4\) for the rhombohedral and AA stackings, respectively. Secondly, at larger values, the interlayer coupling completely suppresses the antibiskyrmionic phases in favor of the Sp phase. Next, we elucidate these findings and other features of the phase diagram.
#### iii.2.1 Revival of the antibiskyrmionic phases
A key difference between the monolayer and bilayer phase diagrams of NiI\({}_{2}\) is the restoration of the antibiskyrmionic phases at intermediate \(B\). Fig. 3(a) and (b) show that this effect can be observed in both stacking orders and it is primarily due to the suppression of the (SkX, SkX) phase with interlayer coupling. In order to illustrate the competition between (SkX, SkX) and (-A2Sk, SkX) phases and the effects of interlayer exchange and B in detail, we tabulate the total energy of the bilayer and its various contributions from the spin Hamiltonian for both the competing phases for B\({}_{z}=46.6\) T and J\({}^{\perp}/J^{\rm{iso}}=0.1\) in Table 1. For these parameters,
Figure 3: The phase diagram of NiI\({}_{2}\) bilayer for (a) rhombohedral and (b) AA stacking. The dashed red lines indicate the _ab initio_ values for the interlayer exchange obtained from Refs. [33] and [5]. The black dashed lines imply the regions of quasi-degenerate solutions with energy difference less than 0.01 meV. The SkX and A2Sk phases are sandwiched between the Sp phases. Interlayer exchange suppresses the SkX phase leading to a revival of antibiskyrmionic phases.
\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c||} \hline & \(E\) & \(E^{1}\) & \(E^{3}\) & \(E^{2an}\) & \(E^{b}\) & \(E^{I}_{\parallel}\) & \(E^{I}_{\perp}\) \\ \hline \hline (-A2Sk, SkX) & -16.92 & -10.35 & -3.36 & -1.22 & -1.63 & -0.48 & -0.09 \\ \hline (SkX, SkX) & -16.89 & -10.57 & -3.04 & -1.21 & -1.8 & -0.37 & -0.09 \\ \hline \(\Delta E\) & -0.03 & 0.22 & -0.32 & -0.01 & 0.17 & -0.11 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 1: Energy contributions from different terms of the spin Hamiltonian for rhombohedral stacking at 46.6 T and J\({}^{\perp}/J^{\rm{iso}}=0.1\) in units of meV. \(E\) is the total energy per site. \(E^{1}\) (\(E^{3}\)) is the energy due to (anti-) ferromagnetic exchange, \(E^{2an}\) represents the energy due to the two-site anisotropy, \(E^{I}_{\parallel}\) (\(E^{I}_{\perp}\)) is the energy due to the parallel (perpendicular) component of the interlayer exchange, \(E^{b}\) is the energy contribution due to an external magnetic field. Energies due to single-site anisotropy and due to interaction with the second nearest neighbour have been ignored. \(\Delta E\) is defined as \(\Delta E=E_{(-A2Sk,SkX)}-E_{(SkX,SkX)}\).
the ground state is (-A2Sk, SkX), and we recover (SkX, SkX) as a local minimum in LLG simulations. We find that the intralayer exchange interaction \(E^{1}\) (\(E^{3}\)) stabilizes the SkX (-A2Sk) phase as it has effectively a greater number of nearest neighbor spins which are aligned (anti-)parallel to each other. Moreover the contribution to the energy due to the external magnetic field (\(E^{b}\)) stabilizes the (SkX, SkX) phase as it has a larger magnetization along the \(\hat{z}\) direction compared to (-A2Sk, SkX). Yet the in-plane component of interlayer interaction (\(E^{I}_{\parallel}\)) stabilizes the (-A2Sk, SkX) phase. As a result, for large interlayer exchange antibiskyrmionic phases takes over the (SkX, SkX) phase.
\(E^{I}_{\parallel}\) stabilizes the (-A2Sk, SkX) phase due to the relative arrangement of the skyrmions and antibiskyrmions within each layer. This can be understood more intuitively with the help of the spin textures for the two layers in both the competing phases as shown in Fig. 4. Spin textures consist of a periodic pattern of vortices which have skyrmionic/antibiskyrmionic structures but with fractional topological charges [7]. We call the vortices which wind in the plane once (hence with a skyrmionic structure) V\({}_{1}\) and the ones which wind in the opposite direction twice with an antibiskyrmionic structure V\({}_{2}\) for convenience. Nine such vortices form the smallest repetitive spin texture within the magnetic unit cell (see Appendix B). In Fig. 4, we compare the six vortices in the (-A2Sk, SkX) phase ((a), (b)) with those in the (SkX, SkX) phase ((c),(d)) at a magnetic field of 46.6 T and \(J^{\perp}/J^{\rm{Iiso}}=0.1\). In the (-A2Sk, SkX) phase ((a), (b)) vortices of the same type are arranged on top of each other while this is not the case for the (SkX, SkX) phase ((c), (d)). This is because in the (SkX, SkX) phase, vortices of type V\({}_{2}\) have the out-of plane component of the magnetization pointing in the same direction i.e both of them have the same polarity. Therefore, in this phase the antiferromagnetic interlayer interaction couples the vortex V\({}_{2}\) to a vortex V\({}_{1}\) with opposite polarity. However, the coupling of similar vortices in the (-A2Sk, SkX) phase result in a better alignment of the in-plane spin components thereby lowering their in-plane interlayer coupling energy (\(E^{I}_{\parallel}\)) compared to the (SkX, SkX) phase.
#### iii.2.2 Suppression of the antibiskyrmionic phase
In order to illustrate the mechanism behind the suppression of antibiskyrmionic phases for favor of Sp phase at large \(J^{\perp}\), we present the energy difference between these phases for various terms of the spin Hamiltonian for rhombohedral stacking in Fig. 5. We observe that keeping the magnetic field constant at B\({}_{z}=46.6\) T, an increase in interlayer coupling from J\({}^{\perp}\)/J\({}^{\rm{Iiso}}=0.1\) (Fig. 5(a)(i)) to J\({}^{\perp}\)/J\({}^{\rm{Iiso}}=0.7\) (Fig. 5(a)(iii)) changes the ground state from skyrmions (blue background) to spirals (green background). The dominating contributions result from the energy difference due to two-site anisotropy (\(E^{2an}\)) and the parallel (\(E^{I}_{\parallel}\)) and perpendicular (\(E^{I}_{\perp}\)) components of interlayer coupling energy. We observe that \(\Delta E^{I}_{\parallel}\) is slightly greater in magnitude than \(\Delta E^{I}_{\perp}\) and both of them increases almost linearly with interlayer coupling strength. \(\Delta E^{2an}\) by contrast tends to saturate and cannot compensate for \(\Delta E^{I}_{\parallel}\) which destabilizes the chiral phases at high values of interlayer coupling. We also observe that the energy due to antiferromagnetic exchange (\(E^{3}\)) stabilizes skyrmions for lower values of interlayer coupling but stabilizes spirals at higher values. We therefore conclude that the parallel component of interlayer coupling energy and the energy due to antiferromagnetic exchange are the dominant factors which tends to destabilize chiral phases and form spirals at high values of interlayer coupling.
#### iii.2.3 Other features of the phase diagram
Fig.5(b)(i)-(iii) shows the energy decomposition for rhombohedral stacking for J\({}^{\perp}\)/J\({}^{\rm{Iiso}}=0.1\), 0.4 and 0.7 respectively at B\({}_{z}=13.8\) T. We observe that the different energy contributions show negligible variation with varying interlayer coupling at relatively small magnetic fields. This even includes the energy contributions due to the interlayer coupling. Therefore, the phase boundary between the spiral phases and the antibiskyrmionic phases is nearly vertical for both rhombohedral and AA stacking at \(B\sim 22\) T. On the contrary, larger values of interlayer exchange forces the spins on each layer to be oriented anti-parallel. This increases the polarity and thereby increasing the topological charge. Therefore, we observe
Figure 4: Spin textures of bilayer NiI\({}_{2}\) in rhombohedral stacking for \(B_{z}=46.6\) T and \(J^{\perp}=0.1J^{1iso}\). (a) and (b) denotes the (-A2Sk, SkX) phase whereas (c) and (d) denotes the (SkX, SkX) phase respectively.
that the (-A2Sk, SkX) phase undergoes a transition into the (-A2Sk, A2Sk) phase for rhombohedral stacking and into the (-A2Sk, SkX*) and (-A2Sk, SkX**) phases successively for AA stacking at \(B\sim 34.5\) T. The opposite phenomenon is observed when the magnetic field is increased keeping the interlayer coupling constant. A strong magnetic field acts against the interlayer coupling and forces the spins of both the layers to be oriented along magnetic field which facilitates successive transitions into phases with lower topological charges.
It is important to note that for rhombohedral stacking, we obtain a phase (-A2Sk, A2Sk\({}^{***}\)) which has a fractional topological charge, \(Q=5.5\), on one layer (A2Sk\({}^{***}\)). This is due to the fact that the topological charge is calculated for each layer separately. Therefore it does not consider the bilayer system as a whole and ignores the chirality arising from the interlayer coupling of the spins. When we consider the bilayer system as effectively one layer with two antiferromagnetically coupled sublattices, we get some additional triangular plaquettes. If we reverse the spins of one layer to take the antiferromagnetic coupling into account and sum over these additional triangular plaquettes (see Fig. 10), a net topological charge \(Q=6\) for a magnetic unit cell of the entire bilayer system is obtained. It is worth mentioning that merons and anti-merons which combine in antiferromagnetic sublattices can also give rise to spin textures with fractional topological charge as predicted recently [36]. For the AA stacking pattern, there are no additional triangular plaquettes for the entire bilayer system since the two layers are exactly on top of each other. Therefore, the topological charge always remains an integer.
### Phase diagram of NiBr\({}_{2}\) bilayer
Fig. 6(a) shows the phase diagram of NiBr\({}_{2}\) in rhombohedral stacking. A small region of (SkX, SkX) phase coexists with (SkX, -SkX) phase with \(\Delta E\sim 0.02\) meV. The skyrmion phase space is significantly smaller compared to NiI\({}_{2}\) due to the lack of anisotropic exchange in NiBr\({}_{2}\). [7]. NiBr\({}_{2}\) does not exhibit antibiskymmic phases for the same reason. Similar to NiI\({}_{2}\), the antiferromagnetic coupling suppresses the SkX phase. The \(J^{\perp}\) obtained from DFT [37] implies that it may not be possible to observe skyrmions in bilayer NiBr\({}_{2}\) even in the presence of a magnetic field. The spin textures of the different phases are shown in the Appendix D. Fig. 6 (b) shows the phase diagram of NiBr\({}_{2}\) in AA stacking. The (SkX, SkX) phase coexists with (SkX,-SkX) phase with \(\Delta E\sim 0.02\) meV.
### Multiferrocity and Polarization
NiI\({}_{2}\) undergoes a magnetic transition at \(T_{c1}\sim 76\) K to an AFM state with FM planes. At \(T_{c2}\sim 59.5\) K, a second magnetic transition to a helimagnetic (or Sp) phase takes place at which NiI\({}_{2}\) starts to exhibit a finite electric polarization [38; 39]. In the monolayer limit, the Sp phase and the polarization survives but appears at a lower temperature \(T_{c2}\sim 21\) K. The origin of the polarization can be traced back to the non-colinear spin texture of the Sp phase [40] and demonstrated via a Ginzburg-Landau approach. Under a time reversal symmetry operation, \(t\rightarrow-t\), the polarization is unchanged, \(\mathbf{P}\rightarrow\mathbf{P}\). Yet the magnetization flips, \(\mathbf{M}\rightarrow\mathbf{-M}\). This requires the lowest order coupling between \(\mathbf{P}\) and \(\mathbf{M}\) to be quadratic in \(\mathbf{M}\). The symmetry with respect to the parity, \(\mathbf{r}\rightarrow\mathbf{-r}\), transforms \(\mathbf{P}\rightarrow\mathbf{-P}\) and \(\mathbf{M}\rightarrow\mathbf{M}\) implies that a linear coupling in \(\mathbf{P}\) needs to contain one gradient of \(\mathbf{M}\). Therefore the lowest order coupling term between \(\mathbf{P}\) and \(\mathbf{M}\) has the form [41]
\[\Phi_{em}(\mathbf{P},\mathbf{M})=\gamma\mathbf{P}\cdot[\mathbf{M}(\nabla\cdot \mathbf{M})-(\mathbf{M}\cdot\nabla)\mathbf{M}+...] \tag{2}\]
Figure 5: represents the energy difference \(\Delta E=E_{(-A2Sk,\mathrm{X})}-E_{(Sp,Sp)}\) between different skyrmionic phases (-A2Sk, X) and the spiral phase for rhombohedral stacked NiI\({}_{2}\). The different parameters are for (a) B\({}_{z}\) = 46.6 T, (i) X= SkX, J\({}^{\perp}\)/\(J^{\mathrm{1iso}}\) = 0.1, (ii) X= A2Sk***, J\({}^{\perp}\)/\(J^{\mathrm{1iso}}\) = 0.4, (iii) X= A2Sk, J\({}^{\perp}\)/\(J^{\mathrm{1iso}}\) = 0.7 and for (b) B\({}_{z}\) = 13.8 T and X= A2Sk at J\({}^{\perp}\)/\(J^{\mathrm{1iso}}\) = 0.1, 0.4 and 0.7 for (i), (ii) and (iii) respectively. Blue background indicates (-A2Sk, X) ground state whereas green background indicates (Sp, Sp) ground state. \(E^{1}\) (\(E^{3}\)) is the energy due to (anti-) ferromagnetic exchange, \(E^{2an}\) represents the energy due to the two-site anisotropy, \(E^{1}_{\parallel}\) (\(E^{1}_{\perp}\)) is the energy due to the parallel (perpendicular) component of the interlayer exchange, \(E^{b}\) is the energy contribution due to an external magnetic field and \(E^{a}\) is the energy contribution due to single-site anisotropy.
The quadratic term in the electric part of the thermodynamic potential is \(\Phi_{e}(\mathbf{P})=P^{2}/2\chi_{e}\) where \(\chi_{e}\) is the dielectric susceptibility. In order to determine \(\mathbf{P}\), we take the variation of \(\Phi_{e}+\Phi_{em}\) with respect to \(\mathbf{P}\) which leads to
\[\mathbf{P}=\gamma\chi_{e}[\mathbf{M}(\nabla\cdot\mathbf{M})-(\mathbf{M}\cdot \nabla)\mathbf{M}] \tag{3}\]
We use eq. 3 to calculate \(\mathbf{R}=\mathbf{P}/\gamma\chi_{e}\) for the spin textures obtained from LLG simulations. Since \(\gamma\) and \(\chi_{e}\) depend on material specific quantities, \(\mathbf{R}\) is not a good indicator for the magnitude of the induced polarization. However, it is effective to capture the trends as a function of tuning parameters such as B. For a coplanar spiral, it is straightforward to show that \(\mathbf{P}\propto(\hat{z}\times\hat{\mathbf{q}})\)[40; 41] where \(\mathbf{q}\) is the ordering wave vector of the spiral. Therefore, coplanar spirals can only exhibit polarization that lies in the plane. In Fig. 7, we present the \(\mathbf{R}\) as a function of B for \(J^{\perp}/J^{\mathrm{iso}}=0.3\) in rhombohedral stacking. We find that in the Sp phase, both in-plane and \(z\) component of \(\mathbf{R}\) is finite. This is due to the non-coplanar nature of the Sp phase. As a function of B, \(\mathbf{R}\) increases in the Sp phase. This effect has also been observed in experiments [42]. On the contrary, the induced polarization in topologically non-trivial phases including SkX and A2Sk is negligible compared to the Sp phase. As direct observation of skyrmions in 2D systems is a challenging task, polarization can be used as an indirect probe to detect topologically non-trivial phases.
## IV Conclusions
We studied the magnetic phases of bilayer NiI\({}_{2}\) and NiBr\({}_{2}\) in both rhombohedral and AA stacking via atomistic simulations. For both materials, we find that interlayer exchange strongly suppresses the SkX phase. In NiI\({}_{2}\), the depleted region is occupied by antibiskyrmionic phases with varying topological charges. We provide a detailed analysis for the competition between these phases. Due to weak exchange anisotropy in NiBr\({}_{2}\), interlayer exchange quickly destroys the topologically non-trivial phases and leads to the Sp phase. We conclude with an analysis on the induced polarization due to non-coplanar magnetic textures and show that the topological phases exhibit negligible polarization. Interesting future directions include moire superlattices of helimagnets and skyrmions in magnetically frustrated systems.
## V Acknowledgements
This work is supported by NSF Award No. DMR 2206987. MA acknowledges support from Fulbright Scholarship.
\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c|c|c|c||} \hline & J\({}^{\mathrm{iso}}\) & J\({}^{\mathrm{2iso}}\) & J\({}^{\mathrm{3iso}}\) & J\({}_{xx}\) & J\({}_{yy}\) & J\({}_{zz}\) & J\({}_{yz}\) & J\({}_{xz}\) & J\({}_{xy}\) \\ \hline \hline NiI\({}_{2}\) & -7.0 & -0.3 & 5.8 & -1.0 & 1.4 & -0.3 & -1.4 & 0 & 0 \\ \hline NiBr\({}_{2}\) & -5.9 & -0.1 & 2.9 & -0.1 & 0.1 & 0 & -0.1 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 2: Intralayer exchange parameters for NiI\({}_{2}\) and NiBr\({}_{2}\)[7].
Figure 6: The phase diagram of NiBr\({}_{2}\) bilayer in (a) rhombohedral (b) AA stacking obtained from atomistic simulations. The dotted red line shows the DFT value. The area occupied by the SkX phase is very small compared to NiI\({}_{2}\) because of the absence of magnetic anisotropy.[7]
Figure 7: Induced polarization, R\({}_{y/z}\) = P\({}_{y/z}\)/\(\gamma\chi_{e}\) as a function of B for NiI\({}_{2}\) bilayer (\(J^{\perp}/J^{\mathrm{iso}}=0.3\)). The polarization increases as with B in the spiral phase whereas it is negligible in the topologically nontrivial phases.
## Appendix A Magnetic exchange parameters
For the intra-layer spin Hamiltonian, we use the magnetic exchange parameters that are obtained from first principle calculations by Ref. [7] shown in Table 2. These couplings are for the Ni\({}_{0}\)-Ni\({}_{1}\) pair (see Fig. 8) whose bonding vector is chosen parallel to the Cartesian \(x\) axis and given by:
\[\mathbf{J}^{(0^{\circ})}=\begin{pmatrix}\mathrm{J_{xx}}&0&0\\ 0&\mathrm{J_{yy}}&\mathrm{J_{yz}}\\ 0&\mathrm{J_{yz}}&\mathrm{J_{zx}}\end{pmatrix}\,, \tag{10}\]
where, by symmetry, \(\mathrm{J_{zy}}=\mathrm{J_{yz}}\) and the other off-diagonal terms are nominally zero. The corresponding tensor for the symmetry-equivalent pairs Ni\({}_{0}\)-Ni\({}_{3}\) and Ni\({}_{0}\)-Ni\({}_{5}\) rotated by \(\pm 120^{\circ}\) can be deduced by exploiting the three-fold rotational symmetry, leading to:
\[\mathbf{J}^{(\frac{2\pi}{Y})}=\begin{pmatrix}\frac{1}{4}(\mathrm{J_{xx}}+3 \mathrm{J_{yy}})&-\frac{\sqrt{3}}{4}(\mathrm{J_{xx}}-\mathrm{J_{yy}})&-\frac{ \sqrt{3}}{2}\mathrm{J_{yz}}\\ -\frac{\sqrt{3}}{4}(\mathrm{J_{xx}}-\mathrm{J_{yy}})&\frac{1}{4}(3J_{xx}+ \mathrm{J_{yy}})&-\frac{1}{2}\mathrm{J_{yz}}\\ -\frac{\sqrt{3}}{2}\mathrm{J_{yz}}&-\frac{1}{2}\mathrm{J_{yz}}&\mathrm{J_{zx} }\end{pmatrix}\,. \tag{11}\]
## Appendix B Skyrmions and Topological charge
We use the following definition [43] for the topological charge of a continuous field \(\mathbf{s}(x,y)\)
\[Q=\frac{1}{4\pi}\int d^{2}\mathbf{r}\;\mathbf{s}\cdot\left(\frac{\partial \mathbf{s}}{\partial x}\times\frac{\partial\mathbf{s}}{\partial y}\right) \tag{12}\]
Physically, this signifies the number of times the spins wrap around a unit sphere. Substituting \(\mathbf{s}=(\cos\Phi(\phi)\,\sin\Theta(r),\sin\Phi(\phi)\sin\Theta(r),\cos \Theta(r))\) and \(\mathbf{r}=(r\cos\phi,r\sin\phi)\) in eq. 12 we get \(Q=-\frac{1}{4\pi}[\cos\Theta(r)]\Big{|}_{\theta(r=0)}^{\theta(r=R)}\,[\Phi] \Big{|}_{\phi=0}^{\phi=2\pi}\). Therefore \(Q\) is the product of polarity which is the first part and vorticity given by \(\omega=\left[\Phi\right]\big{|}_{\phi=0}^{\phi=2\pi}/2\pi\)[44] which is the second part. \(R\) is defined as the radius of a skyrmion. The in-plane component of the spins wrap around twice in biskyrmions leading to twice the vorticity (the \(\phi\) term) compared to skyrmions. The topological charge is evaluated using the definition [45] of a discrete lattice model where we sum over \(\Omega\) defined as
\[\tan\left(\frac{\Omega}{2}\right)=\frac{\mathbf{s}_{1}\cdot\mathbf{s}_{2} \times\mathbf{s}_{3}}{1+\mathbf{s}_{1}\cdot\mathbf{s}_{2}+\mathbf{s}_{2}\cdot \mathbf{s}_{3}+\mathbf{s}_{3}\cdot\mathbf{s}_{1}} \tag{13}\]
over a magnetic unit cell. Here \(\Omega\) is calculated over a triangular plaquette with spins \(\mathbf{s}_{1}\), \(\mathbf{s}_{2}\) and \(\mathbf{s}_{3}\).
A representative spin texture of -A2Sk phase is shown in Fig. 9. The antibiskyrmionic vortex of type V\({}_{2}\) as defined in the main text (see subsection III.2.1) is shown by a white ellipse in Fig. 9 whereas the two skyrmionic vortices of type V\({}_{1}\) are denoted by a black and an orange ellipse. These form a unit which give an integer topological charge \(Q=-2\) when evaluated using the above formula for \(Q\). Three of these units form a magnetic unit cell (area under the red parallelogram) with a total topological charge \(Q=-6\)[7].
We also use the spin structure factor to determine the magnetic phases in addition to the topological charge. It is defined as
\[S(\mathbf{q})=\frac{1}{N}\sum_{\alpha=x,y,z}\left\langle\left|\sum_{i}s_{i, \alpha}e^{-i\mathbf{q}\cdot\mathbf{r}_{i}}\right|^{2}\right\rangle, \tag{14}\]
Figure 8: The nearest neighbours of monolayer NiI\({}_{2}\) are numbered in black and enclosed within the blue hexagon. The blue green and red arrows denote the 1st nearest, 2nd nearest and 3rd nearest neighbours respectively.
Figure 9: The -A2Sk phase of NiI\({}_{2}\) bilayer in rhombohedral stacking at B\({}_{z}\) = 31.1 T and J\({}^{\perp}\) = 0.1 J\({}^{1iso}\). The area marked by red denotes the magnetic unit cell. The white ellipse denotes the antibiskyrmion vortex of type V\({}_{2}\) while the orange and black ellipses denote the anticlockwise skyrmion and clockwise skyrmion vortices of type V\({}_{1}\) respectively.
where \(N=L^{2}\) is the total number of spins and where the position of spin \(s_{\rm i}\) is denoted by \({\bf r}_{\rm i}\).
## Appendix C LLG simulations and the Luttinger-Tisza method
The ground state of Hamiltonian (eq. 1) is determined by solving the Landau-Lifshitz-Gilbert (LLG) equation:[46]
\[\frac{d{\bf s}}{dt}=-\gamma{\bf s}\times{\bf B}^{\rm eff}+\alpha{\bf s}\times \frac{d{\bf s}}{dt}\,, \tag{10}\]
where \({\bf B}^{\rm eff}=-\delta H/\delta{\bf s}\), \(\gamma\) is the gyromagnetic ratio and \(\alpha\) is Gilbert damping coefficient. We have solved the LLG equations self-consistently by keeping \(|{\bf s}|=1\) and imposing periodic boundary conditions. A semi-implicit midpoint algorithm[47] was used in order to implement these equations because of its relative simplicity and the fact that the spins \({\bf s}\) do not have to be normalized after each step. For a particular magnetic field and interlayer coupling the lowest energy spin configuration was selected after converging around 200 simulations with random initial spin configurations.
The Luttinger-Tisza method[34] was used in order to get an estimate of the size of the magnetic unit cell (\(L\times L\)). This method replaces the hard spin constraint \(|{\bf s}_{i}|=1\) with a soft spin constraint \(\sum_{i}|{\bf s}_{i}|^{2}=N\) and enables us to determine the lowest energy, coplanar spiral configurations with a wave vector \({\bf q}\). This sets a natural length scale in the problem \(L\sim 2\pi/q\) which is also important for determining the SkX and A2Sk phases as they are a superposition of three spirals with the same \(q\) but rotated by 120 degrees with respect to each other.
For an isotropic model with only the first and third nearest neighbour interactions J\({}^{1\rm iso}\) and J\({}^{3\rm iso}\), an analytical expression for the wave vector can be obtained: \(q=2\cos^{-1}[(1+\sqrt{1-2{\rm J}^{\rm iso}/{\rm J}^{\rm iso}})/4]\)[48; 49]. In LLG simulations, we consider a multiple of integer \(L\). We benchmarked the validity of this method via considering different system sizes in the simulations. We deduce that the Luttinger-Tisza method provides the correct \(L\) in all cases.
## Appendix D The semi-implicit midpoint method
The Landau-Lifshitz-Gilbert equation 10 can be simplified to the Landau-Lifshitz form which is written as
\[\frac{d{\bf s}}{dt}=\gamma_{L}\;{\bf B}^{\rm eff}\times{\bf s}+\gamma_{L} \alpha\;({\bf s}\times{\bf B}^{\rm eff})\times{\bf s} \tag{11}\]
Here \(\gamma_{L}=\frac{\gamma}{1+\alpha^{2}}\) is the renormalized gyromagnetic ratio. We can further simplify it to the form
\[\frac{\partial{\bf s}_{i}}{\partial t}={\bf a}_{i}(t,\{{\bf s}_{j}(t)\}) \times{\bf s}_{i}(t) \tag{12}\]
Here \({\bf a}_{i}\) contains \({\bf B}^{\rm eff}_{i}\) and other constants which can be taken to be 1. And \(i\) denotes each site. This equation is solved using a predictor-corrector step. The predictor step is given as
\[{\bf s}^{p}_{i}(t+\delta t)={\bf s}_{i}(t)+{\bf a}_{i}(t,\{{\bf s}_{j}(t)\}) \times\delta t\;\frac{{\bf s}_{i}(t)+{\bf s}^{p}_{i}(t+\delta t)}{2} \tag{13}\]
Rearranging this gives
\[{\bf s}^{p}_{i}(t+\delta t)-\frac{\delta t}{2}{\bf a}_{i}\times{\bf s}^{p}_{i }(t+\delta t)={\bf s}_{i}(t)+\frac{\delta t}{2}\,{\bf a}_{i}\times{\bf s}_{i}(t) \tag{14}\]
For each site \(i\) the above equation can be written in matrix form as
\[{\bf A}\;{\bf S}={\bf B} \tag{15}\]
where \({\bf A}={\bf I}-\begin{pmatrix}0&-a_{z}&a_{y}\\ a_{z}&0&-a_{x}\\ -a_{y}&a_{x}&0\end{pmatrix}\), \({\bf S}=\begin{pmatrix}s^{p}_{x}\\ s^{p}_{y}\\ s^{p}_{z}\end{pmatrix}\) and \({\bf B}={\bf s}_{i}(t)+\frac{\delta t}{2}{\bf a}_{i}\times{\bf s}_{i}(t)\). Since this is a linear equation, it can be easily solved using 'linsolve' command in MATLAB. The 'parafor' command can be used to parallelly compute the spins at different sites and speed up the convergence. The calculated spins \({\bf s}^{p}_{i}\) are now used to generate new values of \({\bf a}_{i}\). This is used in the corrector step which is given by
\[{\bf s}_{i}(t+\delta t)= {\bf s}_{i}(t)+{\bf a}_{i}\Big{(}t+\frac{\delta t}{2},\Big{\{} \frac{{\bf s}_{j}(t)+{\bf s}^{p}_{j}(t+\delta t)}{2}\Big{\}}\Big{)}\] \[\times\delta t\;\frac{{\bf s}_{i}(t)+{\bf s}^{p}_{i}(t+\delta t)} {2} \tag{16}\]
The corrector step is also a linear equation and is solved the same way as the predictor step. The code converges when the difference between the spins at consecutive time steps is \(\leq 10^{-5}\).
Figure 10: Two ways of calculating the topological charge. (a) Triangular plaquettes for individual layers. (b) Triangular plaquettes for the whole bilayer system
## Appendix E Spin textures of NiBr\({}_{2}\) bilayer
In Fig. 11, we present the different spin textures of bilayer NiBr\({}_{2}\) obtained from the LLG simulations.
|
2301.02486 | Longterm Stability of Planetary Systems formed from a Transitional Disk | Transitional disks are protoplanetary disks with large and deep central holes
in the gas, possibly carved by young planets. Dong, R., & Dawson, R. 2016, ApJ,
825, 7 simulated systems with multiple giant planets that were capable of
carving and maintaining such gaps during the disk stage. Here we continue their
simulations by evolving the systems for 10 Gyr after disk dissipation and
compare the resulting system architecture to observed giant planet properties,
such as their orbital eccentricities and resonances. We find that the simulated
systems contain a disproportionately large number of circular orbits compared
to observed giant exoplanets. Large eccentricities are generated in simulated
systems that go unstable, but too few of our systems go unstable, likely due to
our demand that they remain stable during the gas disk stage to maintain
cavities. We also explore whether transitional disk inspired initial conditions
can account for the observed younger ages of 2:1 resonant systems orbiting
mature host stars. Many simulated planet pairs lock into a 2:1 resonance during
the gas disk stage, but those that are disrupted tend to be disrupted early,
within the first 10 Myr. Our results suggest that systems of giant planets
capable of carving and maintaining transitional disks are not the direct
predecessors of observed giant planets, either because the transitional disk
cavities have a different origin or another process is involved, such as
convergent migration that pack planets close together at the end of the
transitional disk stage. | Rory Bowens, Andrew Shannon, Rebekah Dawson, Jiayin Dong | 2023-01-06T12:46:15Z | http://arxiv.org/abs/2301.02486v1 | # Longterm stability of planetary systems formed from a transitional disk
###### Abstract
Transitional disks are protoplanetary disks with large and deep central holes in the gas, possibly carved by young planets. Dong, R., & Dawson, R. 2016, ApJ, 825, 7 simulated systems with multiple giant planets that were capable of carving and maintaining such gaps during the disk stage. Here we continue their simulations by evolving the systems for 10 Gyr after disk dissipation and compare the resulting system architecture to observed giant planet properties, such as their orbital eccentricities and resonances. We find that the simulated systems contain a disproportionately large number of circular orbits compared to observed giant exoplanets. Large eccentricities are generated in simulated systems that go unstable, but too few of our systems go unstable, likely due to our demand that they remain stable during the gas disk stage to maintain cavities. We also explore whether transitional disk inspired initial conditions can account for the observed younger ages of 2:1 resonant systems orbiting mature host stars. Many simulated planet pairs lock into a 2:1 resonance during the gas disk stage, but those that are disrupted tend to be disrupted early, within the first 10 Myr. Our results suggest that systems of giant planets capable of carving and maintaining transitional disks are not the direct predecessors of observed giant planets, either because the transitional disk cavities have a different origin or another process is involved, such as convergent migration that pack planets close together at the end of the transitional disk stage.
Transitional Disks -- Protoplanets -- Resonance +
Footnote †: journal: ApJ
## 1 Introduction
Planets form in disks of gas and dust that surround young stars, known as protoplanetary disks. Protoplanetary disks are sometimes observed with a deep and wide gap in the dust distribution (e.g., Strom et al., 1989). These disks are called transitional disks (Pietu et al., 2005). About 10% of protoplanetary disks are transitional disks (Luhman et al., 2010), but it remains unclear whether this fraction indicates that \(\sim 10\%\) of protoplanetary disks spend most of their lives as transitional disks or that most protoplanetary disks spend \(\sim 10\%\) of their lives as transitional disks (Owen, 2016). Numerous theories have been proposed to explain the origins of the gaps (e.g., Dullemond and Dominik, 2005; Chiang and Murray-Clay, 2007; Krauss et al., 2007; Suzuki and Inutsuka, 2009; Vorobyov et al., 2015); the leading theories are photoevaporation (Clarke et al., 2001; Alexander et al., 2006, 2006; Owen et al., 2011, 2012) and planetary sculpting (Calvet et al., 2005; Dodson-Robinson and Salyk, 2011; Zhu et al., 2011, 2012; Dong et al., 2015). Photoevaporative winds can deplete the inner disk when the photoevaporative mass loss rate exceeds the accretion rate. Although early photoevaporation models (e.g., Owen et al., 2011, 2012) produced photoevaporative mass loss rates too low to be consistent with the high observed accretion rates in transitional disks (Ercolano and Pascucci, 2017), recent models (e.g., Ercolano et al., 2021; Picogna et al., 2021) that more accurately model the temperature structure of the disk can meet or exceed half of observed accretion rates. Transitional disks with higher accretion rates may be the consequence of stronger photoevaporative winds in carbon-depleted disks (Ercolano et al., 2018; Wolfer et al., 2019). Alternatively, they may be the result of magnetically driven supersonic accretion flows (Wang and Goodman, 2017; but see Ercolano et al., 2018 for observational arguments against this hypothesis). More in-depth reviews of the observational properties of transitional disks and theory of their origins can be found in Espaillat et al. (2014); Owen (2016); Ercolano and Pascucci (2017); van der Marel (2017).
In this work, we focus on the question of whether gaps in transitional disks form by planetary sculpting (Papaloizou and Lin, 1984; Marsh and Mahoney, 1993; Paardekooper and Mellema, 2004). Planetary sculpting is a process in which planets or brown dwarfs formed in the protoplanetary disk clear a gap (Dodson-Robinson and Salyk, 2011; Zhu et al., 2011, Rosenthal et al., 2020; see Paardekooper et al., 2022 for a recent review). This theory is supported by the discovery of protoplanets forming in the gap of the transitional disk PDS 70 (Keppler et al., 2018; Haffert et al., 2019; Isella et al., 2019; Mulley et al., 2019), possibly LkCa 15 (Kraus and Ireland, 2012; Sallum et al., 2015; but see also Currie et al., 2019), and a few other candidates (HD 100546, HD 142527, HD 169142; Quanz et al., 2013; Biller et al., 2012; Reggiani et al., 2014). Further supporting this hypothesis, sub-structures in protoplanetary disks have also been attributed to planetary sculpting (e.g., Huang et al., 2018; Long et al., 2018; Zhang et al., 2018; Choksi and Chiang, 2021). Although the sculpting planets creating the deep and wide gaps in transitional disks are typically assumed to be Jovian - and we will make that assumption here - Fung and Chiang (2017) showed that compact configurations of super-Earths can clear inner cavities in disks with very low viscosity. See Ginzburg and Sari (2018) and Garrido-Deutelmoser et al. (2022) for more on the properties of such gaps.
To account for the gaps seen in transitional disks, planetary systems must remain stable over the observed disk timescale (Tamayo et al., 2015). To assess which planetary system architectures are capable of producing transitional disks, Dong and Dawson (2016) (hereafter DD16) performed \(N\)-body simulations of such planetary systems, using three to six giant planets spaced from 3 to 30 AU. The depletion of the gaps the planets produced was based on different assumptions for the disk viscosity and scale height, and these depletions dictated the spacings of the planets necessary to produce a continuous gap. DD16 included an eccentricity damping force from the gas in the gap. They found that a subset of planetary system configurations remained stable over a typical one million year disk lifetime. Thus they concluded it was plausible that the subset of planetary systems that contain Jovians planets in the 3 to 30 AU range all appear as transitional disks during the protoplanetary stage. Under this hypothesis, the phenomenon of transitional disks is not pervasive, affecting all protoplanetary disks for \(\sim\)10% of their lifetime, but instead is restricted to the \(\sim\)10% of disks that happen to harbor giant planets.
Because DD16 conducted simulations only during the gas disk stage, we cannot directly compare those systems to mature planetary systems observed around main sequence field stars. After the protoplanetary disk dissipates in \(\lesssim 10^{7}\) years (Haisch et al., 2001; Pfalzner et al., 2014), the systems may go unstable, as systems of massive planets in such compact configurations often do (Chambers et al., 1996; Smith and Lissauer, 2009; Morrison and Kratter, 2016). However, the damping during the protoplanetary disk state may allow these systems to find long-term stable compact dynamical states (e.g., Melita and Woolfson, 1996; Lee and Peale, 2002; Dawson et al., 2016; Morrison et al., 2020). The four tightly packed jovian planets around HR 8799 (Marois et al., 2008, 2010) may be in such a dynamical state (Fabrycky and Murray-Clay, 2010; Gozdziewski and Migaszewski, 2014; Wang et al., 2018), and thus perhaps an example of the post-gas evolution of such systems.
Here, we seek to understand the longer-term dynamical evolution of the giant planet planetary systems capable of sculpting deep and wide gaps in transitional disks. Simbulan et al. (2017) performed a case study of HL Tau along
these lines, finding ejection of one or more planets to be the most common outcome. How this result can be generalized remains an open question. Previous works simulating the longterm evolution of planetary systems compared simulated systems' eccentricity and semimajor axis distributions (e.g., Chatterjee et al., 2008; Juric and Tremaine, 2008; Malmberg and Davies, 2009; Petrovich et al., 2014; Carrera et al., 2019) to those of observed planets (e.g., Mayor et al., 2011; Dawson and Johnson, 2012; Winn and Fabrycky, 2015; Xie et al., 2016). However, those works typically began with ad hoc tightly packed initial configurations of orbits to quickly induce instabilities. Here, we use initial conditions physically motivated from DD16, as our interest is in how well those hypothesized transitional-disk sculpting systems match observed systems. We also explore the prevalence and evolution of mean motion resonances in these simulated systems, which were common during the stage simulated by DD16. Motivated by Koriski and Zucker (2011)'s finding that 2:1 mean motion resonance (MMR) systems are younger on average, we will assess whether the systems that begin in MMR will break within observable timescales.
We describe our simulations in Section 2. We identify the presence and behavior of orbital resonances Section 3. We assess longterm stability and which characteristics affect it in Section 4. We investigate the planets' eccentricities and compare to those of observed exoplanets in Section 5. We present our conclusions in Section 6.
## 2 Simulations
We simulate the long-term evolution of the multi-planet systems that are capable of opening the gaps observed in transitional disks. Most of our simulations start where DD16's left off, at the end of transitional disk stage, with gas damping forces turned off, which is an approximation that the gas disk is immediately removed. Their final planet masses, positions, and velocities from the gas stage are our initial post-gas conditions. However, since we do run some additional gas stage simulations, we describe the gas stage simulations in detail in Appendix A.
For our post-gas simulations use the mercury6 Bulirsch-Stoer integrator with a 1000 AU ejection distance, a solar mass and solar radius central body, an accuracy parameter of 10\({}^{-12}\), and medium precision outputs every million years. Our approximation of the instantaneous removal of the gas disk does not induce problems for several reasons: 1) as we will show the majority of systems remain stable after the removal, 2) the gas surface density was low to begin with due to depletion in the gas, and 3) simulations show that instantaneously removing a depleted gas disk does not significantly alter the resonant dynamics (Morrison et al., 2020).
The configurations are summarized in Table 1. Configuration names are defined as planet number followed by planet mass in Jupiters (with letters used for configurations with the same planet numbers and masses). Being continuations of DD16, the systems had between 3 and 6 equal mass planets, where the number of planets required was dictated by the gap widths. We did not include configurations that DD16 found to be unsuitable for creating proto-planetary disk cavities because the amount of gas expected to be present in the cavity would drive the planets too far apart via resonant repulsion to create overlapping gaps (their 4-10act, 5-5ac) or would be insufficient to stabilize the configuration during the gas disk stage (their 6-2). For each configuration, DD16 used various assumed gas surface densities since ALMA gas observations and chemical disk modeling are unable to constrain gas depletion in the inner few AU of a disk. For each configuration, we use the simulations with the gas surface density consistent with the expected depletion in the cavity (reported in Dong and Dawson, 2016 based on Fung et al., 2014). For their gas stage simulations, DD16 ran 10 random realizations for each configuration. We supplement with an additional 10 gas stage realizations for each configuration, that we then continue in the post-gas stage. See Appendix A for more details about the gas stage simulations. In summary, we ran eighteen realizations of twenty variations of the disk initial conditions for 10 Gyr, for a total of 360 simulations. We found that 14 simulations had gone unstable during the gas disk stage (i.e., lost a planet via ejections or collisions); those simulations are considered to have instability events at 0 Gyr in all future analysis. We also found that two simulations ended the 10 Gyr simulations with zero planets due to central body collisions.
To better compare giant planets discovered by the radial velocity method - which are commonly observed at \(\sim\) 1-3 AU - we ran an additional set of simulations that added an interior planet. We refer to the original set as 3-30 AU simulations and additional set as 1-30 AU simulations. The additional planet is placed interior to the others by the average Hill spacing of original configuration. First the average Hill spacing of the original configuration is determined by averaging every pair's separation in all \(\sim\)20 simulations for the configuration, excluding those that went unstable during the gas disk stage. Then the position for the innermost planet is determined relative to the initial location of the 3 AU planet according to the average Hill spacing. The newly created 1 AU planet is appended to the corresponding original 3-30 AU simulation during the gas stage, in order to make the final results more comparable. Similar to the 3-30 AU gas stage simulations (Appendix A), these systems are simulated for 1 Myr with gas damping present using the hybrid integrator (timestep of 3 days, accuracy parameter of 10\({}^{-12}\)). However, they are then simulated for only 1 Gyr with gas damping shut off using the Bulirsch-Stoer integrator. This reduced simu
lation time is necessary for the efficient completion of the simulations.
## 3 Resonant Behavior
Here we identify systems containing planets in two-body and three-body resonances. We assess resonance only for adjacent planet pairs (or triplets in the case of three body resonances) and by looking for librating resonant angles. For example, for the 2:1 resonance, the resonant angles are \(\phi_{\rm in}\) and \(\phi_{\rm out}\):
\[\phi_{in}=2\lambda_{in}-\lambda_{out}-\varpi_{in} \tag{1}\]
and
\[\phi_{out}=2\lambda_{in}-\lambda_{out}-\varpi_{out} \tag{2}\]
where \(\lambda_{\rm in}\) and \(\lambda_{\rm out}\) are the mean longitude of the inner and outer planet, respectively, and \(\varpi_{\rm in}\) and \(\varpi_{\rm out}\) are the longitude of pericenter of the inner and outer planets, respectively. To ensure the resonant angles are well sampled, we run simulations with more frequent output (every 10 years) for \(10^{5}\) years, using the same starting point as the standard simulations (i.e., immediately after gas disk stage). The 3-30 AU systems are run with a Bulirsch-Stoer integrator (\(10^{-12}\) accuracy parameter) while the 1-30 AU are run with a hybrid integrator (3 day timestep, \(10^{-12}\) accuracy parameter).
The libration centers for the massive planets in our simulations are 0 or \(180^{\circ}\)(Dong & Dawson, 2016). We consider a planet pair resonant with libration about 0 if the resonant angle does not come within \(\pm 0.5^{\circ}\) of \(180^{\circ}\) during the simulation (Fig. 1). Likewise, we consider a planet pair resonant with libration about \(180^{\circ}\) if the resonant angle does not come within \(\pm 0.5^{\circ}\) of 0 during the simulation.
Furthermore, we label systems with at least one planet pair whose resonant angle remained within \(\pm 170^{\circ}\) of 0 or 180 for 97.5% of the time as near resonance. These angles linger near a specific value as they circulate (Fig. 2). If the resonant angles circulate without lingering for all the planet pairs in a system, we classify the system as non-resonant. We inspect a subset of 54 of the resonant angle plots by eye to ensure the classification was reliable and that \(0.5^{\circ}\) worked well as a cut-off with our sampling frequency. We also considered a stricter resonant angle range, requiring it to stay \(\pm 50^{\circ}\) of \(0^{\circ}\) or \(180~{}^{\circ}\) during the simulation. This did reduce the number of resonant systems by 12% but had no significant impacts on other results.
Our resonance identification approach can fail for systems that go unstable quickly. To solve this problem, we truncate the resonance data at 75% of the instability timescale, defined as the timescale when a collision or ejection occurs (e.g., for a system that went unstable after four hundred thousand years, we use the resonant angle during the first three hundred thousand years). We label all planets that collided or were ejected before 60,000 years as having undergone "rapid instability." Less then ten systems had rapid instability so this classification is only a minor part of the results. Some systems that went unstable during the gas disk stage only had a single planet remaining at the start of the gas-free stage. These systems are likewise given a unique label apart from resonance or no resonance.
We find that it is more likely for the inner planets to be in 2:1 MMR than the outer planets. Approximately 50% of the innermost planet pairs are in resonance in all planet systems. For outermost pairs, 14% of those in the five planet systems are in 2:1 MMR, 3% of the outer pairs in the six planet systems, and none of those in the three or four planet systems. The higher fraction of true resonance among inner pairs vs. outer may be because the higher surface gas surface density and shorter orbital timescales in the inner disk facilitate resonance capture during the gas disk stage.
In our four, five, and six planet systems, outer pairs are near 2:1 MMR ("lingering" systems with at least one planet pair whose resonant angle remained within \(\pm 170^{\circ}\) of 0 or 180 for
\begin{table}
\begin{tabular}{l|l|l|l|l|l} \hline Name & \(\Sigma_{30}\) & \(\Delta_{0}\) & GDI & 2:1 MMR \% & 10 Gyr \\ & g cm\({}^{-2}\) & (\(R_{H}\)) & Sims. & (Near \%) & Stability \% \\ \hline
[MISSING_PAGE_POST]
& 1 & 4.4 & 2 & 45, (25) & 5 \\ \hline \end{tabular} Note. – The parameter \(\Sigma_{30}\) indicates normalization the gas surface density inside the depleted gap for the gas stage simulation (Appendix A), where \(\Sigma_{\rm gas}=\Sigma_{30}\left(\frac{a}{30\rm AU}\right)^{-3/2}\), and a normalization of \(\Sigma_{30}=\)10 g cm\({}^{-2}\) corresponds to the minimum mass solar nebula. GDI Sims. gives the number of simulations (out of the twenty in a set) that experienced instabilities during the gas phase integration. These simulations were still run for the following 10 Gyr integration with their surviving planets.
\end{table}
Table 1: Summary of 3 – 30 AU Systems
97.5% of the time), in approximately 70% pairs while inner pairs experience closer to 30% with near 2:1 MMR. Intermediate planet pairs show intermediate rates of near 2:1 MMR. However, in the three planet systems, 30% of the inner pairs are near resonance but none of the outer pairs are near resonance.
We also examine the period ratios of 2:1 MMR systems (Fig. 3). Consistent with DD16's findings, planets with period ratios far outside of their nominal resonance can have librating resonant angles. There are some systems that are librating in the 2:1 that have a period ratio greater than 2.5, even up to 6 (i.e., their 2:1 resonant angle is librating). Libration of the 2:1 resonant angle at such large period ratios is possible when the longitude of periapse precesses quickly and is caused in our simulations by the eccentricity damping during the gas disk stage. This phenomenon, which is more generally established by a dissipative change to the eccentricity (which may or may not be accompanied by a change in semi-major axis) is known as resonant repulsion (e.g., Lithwick & Wu, 2012). We find 47% of systems that lie within 10% of a period ratio of 2 are in 2:1 MMR.
In later sections, we will focus on the 2:1 MMR because other two body resonant angles rarely librate in our simulations (e.g., 3:2, 3:1, 4:3). Among the 360 simulations, we find four systems contain one or more pairs that librate in the 3:2 resonance (wenty-five near resonance), one system in the 3:1 resonance (one-hundred sixty-nine near-resonance), and zero systems in the 4:3 resonance. We also examined various three body resonances. First, we define the three body librating angle equation:
\[\phi_{3b/p,q}(1,2,3)=p\lambda_{1}-(p+q)\lambda_{2}+q\lambda_{3} \tag{3}\]
In the above, 1, 2, and 3 refer to the outer, middle, and inner planet, respectively. We examined multiple potential reso
Figure 1: The resonance angle for planet pairs in a simulation from Config 3-10b, starting in the post-gas stage. Planets 2 and 3 (the inner two of the three; panel 3) happened to begin the gas disk stage in resonance and maintained this configuration in the post-gas stage. In other cases, our simulated planets get captured into resonance during the gas disk stage.
Figure 2: The resonance angle among planet pairs in a simulation from Config 3-10a, starting in the post-gas stage. Each resonance angle circulates but we consider planets 2 and 3 (the inner two of the three; panel 3) to be “near resonance” because the angle lingers within \(\pm\)90\({}^{\circ}\) of 0.The pair reached this configuration part way through the earlier gas stage.
nances and found that only 9:6:4, 15:12:8, 4:2:1, and particularly 3:2:1 had a significant number of three body resonance cases. 22% of systems displayed at least one type of three body resonance (1% for three planet systems, 23% for four planet systems, 45% for five planet systems, and 35% for six planet systems). In all cases, there were less than 5 near resonance simulations.
## 4 Stability
In approximately 31% of the simulations, an ejection or collision occurs over 10 Gyr. We define a system as stable if none of the member planets were ejected or collided during either the gas or post-gas stage.
### Impact of Planet Number and 2:1 MMR
Our data consists of 360 3-30 AU systems. Each set of 20 systems is defined by a planet number and initial gas surface density profile (Table 1). In total there are 6 three planet sets, 6 four planet sets, 4 five planet sets, and 2 six planet sets.
We find that systems with more planets experience more collisions and/or ejections. Examining systems that remained stable during the gas disk stage, approximately 13% of the outermost planets are lost (ejected or collided) during the 10 Gyr gas-free phase in contrast to 20% of inner planets lost. The difference is mostly due to more collisions among inner planets. In six planet systems, approximately 33% of planets were ejected and 15% collided. These values are significantly higher than the averages for the four planet systems (9% ejected and 3% collided). We conclude that additional planets reduce a system's stability but that origin of the ejected planets is fairly uniform across the semimajor axis range.
We plot the initial fraction of 3-30 AU systems in 2:1 MMR against the stability fraction, color coded by planet count (Fig. 4). Recall that for a given planet number, different sets have different planet masses and spacings. The initial fraction of systems in resonance alone does not appear to impact the stability but the initial planet count does. Moreover, the fraction of systems in resonance is not strongly correlated with the initial planet count, so it appears that planet count is independently driving the stability. The 3 planet systems remain stable -90% of the time. The four and five planet systems display a range of stability from low (~20% of a set stable) to high (~95% of a set stable) values. Both sets of six planet systems have a low stability fraction (5%).
We define the stability timescale as the time until any ejection or collision occurs within the system. We find some systematic differences in stability timescale for systems with 2:1 MMR and without (see Figure 5). The timescales at which systems went unstable differs slightly between the three MMR categories (resonance, near resonance, and no resonance). However, the overall rate of instabilities by 10 Gyrs between the three categories was similar: 73% stable, 68% stable, and 67% stable for resonance, near resonance, and no resonance; respectively. 10 systems could not have their resonance status determined due to rapid instability and thus were not used in the above assessments. From this data, we conclude that 2:1 MMR can improve the stability of a system in the short term (Myr timescale) but that this advantage is nullified with time. Based on only small differences in the
Figure 4: The fraction of the 3–30 AU simulations with at least one 2:1 MMR in a configuration versus the overall stability of the configuration. Each configuration contains 20 simulations. For readability, the three pairs of identical sets (3-10b and 3-10d, 4-2 and 4-5a, 4-5b and 4-5c) have had their 2:1 MMR % (75%, 85%, and 20%, respectively) split slightly.
Figure 3: The ratio of the periods of adjacent planets for all 3–30 AU simulated systems (blue line) and systems in 2:1 MMR (red-dotted/dashed), as defined by libration of the resonant angle. Systems in 2:1 MMR do not all have period ratios near 2. Many have ratios much larger than 2, even up to 6. Similar trends were observed for the 1–30 AU simulations.
final stability rates, we conclude the presence of one or more 2:1 MMR pairs within a system did not significantly impact the final number of planets that collided or were ejected of 10 Gyr.
The above analysis of how resonances affect system stability looks at a system level: if at least one pair is in resonance, we consider it a resonant system. However, some trends may only be observable at the individual pair level. In Figure 6, we report stability timescales in a fashion identical to Figure 5 but for adjacent pairs rather than systems. A pair is considered stable until one member experiences an instability event. The pairs follow a similar trend to the systems, though we find more significant evidence (compared to the system-level analysis) that near 2:1 MMR pairs are less stable over the long term. The higher instability of near systems may be the result of close spacing without the stabilizing influence of resonance or chaotic behavior at the resonance separatrix.
We also plot a calculation of resonance pairs based on period ratios rather than angle libration (Figure 7). Identifying systems near integer period ratios allows us to more directly compare to observational data where the orbital parameters are typically not well-constrained enough to determine if the resonance angle is librating. We determine if a period ratio is resonant according to the criteria in Koriski & Zucker (2011):
\[\delta=2\frac{|r-r_{c}|}{r+r_{c}}\leq 0.1 \tag{4}\]
where \(r\) is the measured period ratio and \(r_{c}\) the resonance period ratio (in this case 2). The output timesteps are every Myr and once a system leaves resonance we do not consider it capable of reentering. About 13% of our pairs begin in resonance post-gas stage according to this criterion, compared to 24% using the angle libration criterion. Most of these pairs are in five or six planet configurations, with some sets having up to 40% of pairs near the 2:1. Although the fraction of pairs near the 2:1 period ratio declines over time, most disruptions occur in the first 50 Myr. Therefore, this behavior is not a good explanation for the trend of younger ages for resonant pairs found by Koriski & Zucker (2011), which requires a typical disruption timescale \(\sim\)1-10 Gy and near 100% initial fraction (Dong & Dawson, 2016).
### Impact of Three-Body MMR
We consider the impact of three-body MMR on the stability of systems. These classifications are once again carried out at the system level. The most common type of three-body MMR in our systems is 3:2:1, occurring in 11% of systems. We found that ~51% systems with at least one of the three-body resonances present had a final stability rate, similar to the rate of 55% of all systems with four or more planets (our three planet systems have a higher stability rate but no three body resonances). The other three body MMRs have small
Figure 5: A CDF comparing instability times based on whether the system contains resonant, near-resonant, or no resonant planets. There are 185 resonant systems (of which 135 or 73% were stable), 111 near resonant systems (of which 75 or 68% were stable), and 54 no resonant systems (of which 36 or 67% were stable). Non-resonant systems start below 100% because some simulations went unstable during the gas disk stage and are marked as unstable at 0 Gyr. Notably, there is little discrepancy between the final stability rates, suggesting the presence of 2:1 MMR does not significantly impact overall stability.
Figure 6: A CDF comparing instability times for the three categories of resonance, now plotting per pair rather than per system. There are 262 resonant pairs (of which 67% were stable), 400 near resonant pairs (of which 57% were stable), and 411 non- resonant pairs (of which 73% were stable).
occurrence rates and thus limited statistics, precluding any conclusions on their significance, if any.
### Impact of Mutual Hill radii
We assess the impact of the initial post-gas spacing in mutual Hill radii and its relationship to stability of the 10 Gyr simulations. We present the starting (i.e., right after the gas disk phase) and final (i.e., after 10 Gyr) mutual Hill radii separations for adjacent planet pairs in Figure 8. At the start of the lifetimes all the planets have roughly equal separations of mutual Hill radii. Some sets display average mutual Hill radii values that differ from the majority of sets (see the blue diamond clump at 3,2 and the purple square clump at 2,1). These pairs tend to remain stable at these wider separations.
By comparing the two plots, we find that the planets furthest from the star (i.e., planets on the right) experience the greatest change in mutual Hill radii separation (especially amongst systems with four or more planets). Those pairs with Hill radii above 10 (right-side of Figure 8) are exclusively survivors of instability events (or pairs that began the post-gas stage with such wide separations, as mentioned in the previous paragraph).
We find that if a system had an individual pair with a initial mutual Hill radii values smaller than 3.5, it had a greater likelihood of instability. However, these represented a small fraction of the total pairs. We conclude that initial mutual Hill radius- within the narrow range encompassed in our starting conditions - was not a primary driver of the stability rate for a system, although its final value can be used to assess which systems experienced instabilities.
### 1 AU Planets
The simulations by DD16 only included planets in a range from 3 to 30 AU (as based upon the protoplanetary disk observations, it is unclear if the deep and wide gaps extend to within 3 AU). For simulation set, we run 10 new simulations with an additional planet near 1 AU (see Section 2 for details). The 1 AU planets still fulfill the transitional disk criteria presented by DD16: i.e., a planet massive enough to carve out a deep gap; packed closely enough with other planets to create a continuous gap; and close enough to the star to clear the disk at 1 AU. We run the simulations for 1 Myr the same gas damping parameters as the corresponding 3-30 AU systems. Then we run the simulations for 1 Gyr post-gas. The simulation timescale is shorter than for the 3-30 AU systems to keep the run time feasible. We run 200 simulations in total, each with 4 to 7 planets.
The increased number of planets reduces the stability of the system. Several 1 - 30 AU sets had stability rates similar to their 3-30 AU counterparts but a majority of the 1-30 AU sets saw substantial decreases in stability compared to their 3-30 AU counterparts. The reduction in stability is expected from past studies. For example, Chambers et al. (1996b) found that adding planets to a system led to shorter instability timescales. However, the impact was more modest for systems with higher multiplicity: for example, they found that a four planet system would go unstable significantly faster than a three planet system, but a seven planet system would only go unstable a little faster than a six planet system. We find that the 1-30 AU systems have shorter instability timescales, particularly for sets that originally contained 3 or 4 planets. The 5 and 6 planet sets were already frequently unstable by 1 Gyr: thus the additional instability from the extra planet has less impact on the final stability fraction.
We present a CDF for pair-by-pair resonance status based on period ratio for the 1 - 30 AU systems (Figure 9; in a fashion identical to the 3-30 AU systems in Figure 7). The observed trends are mostly identical between the two types of systems, barring a slightly lower initial resonance fraction for the 1-30 AU systems (approximately 10% compared to the prior 13%). The fraction of 2:1 period pairs at 9 Gyrs is likewise approximately 2% lower for the 1-30 AU systems. Adding a planet at 1 AU is unable to replicate the trend of younger ages for resonant pairs found by Koriski & Zucker (2011), which requires a typical disruption timescale \(\sim 1-\) -10 Gyr and near 100% initial fraction (Dong & Dawson, 2016). Instead, our systems go unstable usually within \(\sim 10--\)100 Myr.
## 5 Eccentricity
The eccentricity distributions of our final systems is a relic of how the planets evolved with time. In Figure 10, we plot the 1 Gyr eccentricity versus semimajor axis distribution for
Figure 7: A CDF showing the fraction of pairs with 2:1 MMR based on the period ratio. About 13% of pairs begin in resonance according to this metric.
the 3-30 AU systems (left) and 1-30 AU systems (center). Furthermore, in Figure 11, we plot a CDF for the 1 Gyr eccentricity of several categories. Of particular importance are the "unstable" categories for both the 3-30 AU and 1-30 AU systems: the unstable category shows the CDF of 1 Gyr eccentricities for surviving planets in systems that experienced instability events. Since extending the 3-30 AU systems to 10 Gyr only only decreased the fraction of circular orbits (<0.05 eccentricity) from 80% to 70%, we show the results at 1 Gyr to compare to the 1-30 AU systems, which were only simulated for 1 Gyr (Section 4.4).
We studied the influence of ejections and collisions on the eccentricity of surviving planets. In four and five planet systems (which had 80% and 50% stability, respectively), the mean eccentricity of all survivors was 0.16 \(\pm\) 0.25 while the eccentricity of survivors from unstable systems was 0.47 \(\pm\) 0.23. As anticipated instability events increased the eccentricity of the survivors. Further trends were found in the type of instability events. Those systems which only experienced collisions had a final mean eccentricity of 0.33 \(\pm\) 0.35 while those survivors that only experienced ejections had a final mean eccentricity of 0.50 \(\pm\) 0.20. Although both types of instability event correlated with higher eccentricity, systems that experienced ejections saw a larger increase in the eccentricity of survivors.
The 3-30 AU systems and the 1-30 AU systems exhibit similar trends: nearly all planets with an eccentricity greater than 0.2 are in systems that experienced some instability event (primarily ejections, which occurred about 2.5 times more frequently than collisions) during the simulation. Both sets of simulations display a high concentration of low eccentricity planets: trending below 0.1 within 30 AU and below 0.2 beyond 30 AU. Including the additional planet near \(\sim\) 1au slightly increased the typical eccentricities of surviving planets, though the additional planet itself typically remains at very low eccentricity if it survives.
Figure 8: The mutual Hill radii separation of adjacent planet pairs for the 3–30 AU simulations after the gas disk simulations are complete (left) and after 10 Gyr (right). X position within a column is slightly randomized for readability. Planet 1 is the farthest from the star. Planets are numbered upwards so a system with four planets has planets 1, 2, 3, and 4. After the 10 Gyr integration, many systems have far higher hill separations then they initially had, owing to the tendency of instabilities to greatly alter a system. If a planet is ejected, the number corresponding to a planet is updated an new comparisons are made. That is, if planet 1 is ejected, planet 2 is renamed to planet 1 and a pair is then determined between the new planet 1 and planet 0.
Figure 9: A CDF showing the fraction of pairs with 2:1 MMR based on the period ratio for 1–30 AU systems. About 10% of pairs begin in resonance according to this metric, a slight decrease from the same analysis for 3–30 AU systems (Figure 7). The two types of systems otherwise should identical trends.
We compare the eccentricities between our simulations and observed exoplanets in both Figure 10 (right) and Figure 11. We use known Jovian mass planets (0.3 to 10 Jupiter M*sin(\(i\))) taken from the Exoplanet Archive on Aug. 4th, 2022. For our comparison sample, we select from the database those planets with eccentricities \(>0\). Planets discovered by various methods including radial velocity surveys and direct imaging are included in the comparison sample. Although there can be some biases in the measured values (Pan et al., 2010), observers can often measure eccentricity for giant planets detected by radial velocity, which is how the vast majority of planets suitable for comparison (i.e., a similar mass and semimajor axis range) to our sample are detected.
The observed planets show eccentricity values with a concentration towards low eccentricity (\(e<0.15\)). However, the simulations show a much more extreme concentration towards very low eccentricity (\(e<0.05\)). Unstable configurations can reach high eccentricities (Figure 11), but our configurations (even if we limit to certain sets) apparently do not produce the correct mix of stable and unstable systems. Based on the large discrepancy between the simulations (even when considering only unstable simulations or only 5 and 6 planet systems) and the observed exoplanet eccentricities, we conclude that the systems created in DD16, while capable of creating transitional disks, can not reproduce evolved systems.
In contrast, other studies of planet-planet scattering that did not include a gas disk stage and/or require stability during the gas disk stage (e.g., Chatterjee et al., 2008; Juric & Tremaine, 2008) produced eccentricity distributions that matched the observations well. Our use of equal-mass planets likely does not account for the difference, as such configurations are just as likely to lead to elliptical orbits, albeit with a narrow distribution (Ford et al., 2001). Instead, our requirement that configurations remain stable during the gas disk stage to maintain a cavity (Dong & Dawson, 2016) apparently ensures too much stability to excite eccentricities. Our configurations do not achieve the "dynamically active" state identified by Juric & Tremaine (2008) that erases the memory of initial conditions and are reminiscent of the "dynamically cold" configurations explored by Dawson et al. (2016) that remain stable after the gas disk stage.
## 6 Conclusions
We simulated the long-term evolution of planetary systems capable of carving out and maintaining a transitional disk during the gas stage, using initial conditions from DD16. We subjected the planets to eccentricity damping during the disk stage and simulated the systems another 10 Gyr post-disk.
Our main finding (Section 5) was that our systems tend to remain on stable, circular orbits. The typically very low eccentricities are at odds with those observed in real systems of giant exoplanets discovered via the radial velocity method. For example, fraction of planets with eccentricities between 0 and 0.2 was approximately 25% higher in our simulated systems compared to real systems. Among our simulated systems that experience an instability, the eccentricities can reach the observed high values, but too few of our systems go unstable. The stability of systems showed a dependence on multiplicity: three and four planet systems were more stable (i.e., experienced no collisions or ejections) compared to five and six planet systems by a significant margin (86% stable versus 33% stable, respectively). This trend is consistent with prior studies showing that higher planet multiplicity decreases stability (Chambers et al., 1996). However, even when considering only high multiplicity systems and adding planets interior to those needed to carve observed gaps, our systems were too stable and circular.
We also found that the presence of a 2:1 MMR in a system - which were commonly established in our simulations during the gas disk stage - did not significantly impact the overall likelihood of going unstable over 10 Gyr. Koriski & Zucker (2011) found that the presence of a 2:1 period ratio for a planetary pair indicates a system is younger on average. However, in contrast to Koriski & Zucker (2011) who found that the typical lifetime of a 2:1 MMR is near 4 Gyr, we found that half of our pairs broke their 2:1 period ratio by 10 Myr. We noted a slightly higher instability rate for the near 2:1 MMR systems (those systems where the resonance angle still circled through all degrees but with a preference for a certain value) when considering resonance on a pair-by-pair basis (57% stable compared to approximately 70% stable for the resonance or no resonance systems).
In future work, we could explore configurations of unequal mass planets, though past work has shown this is unlikely to significantly boost the resulting eccentricities (Ford et al., 2001). It may be possible to further fine-tune our gas disk stage initial conditions (i.e., gas disk properties and planet spacing) to produce a higher fraction of post-gas unstable systems. However, more likely stability during the transitional disk stage precludes wide-spread instabilities after the gas disk disappears. If so, transitional disks may be typically caused by other processes besides giant planets, such as photoevaporation (e.g., Picogna et al., 2021) or compact configurations of super-Earths and mini-Neptunes in low viscosity disk (Fung & Chiang, 2017). Another possibility is that giant planets undergo convergent widescale migration at the very end of the transitional disk stage that packs them much closer together. As proposed by van der Marel & Mulders (2021), this explanation would also help account for the large size of transitional disks cavities, compared to the peak in giant planet occurrence at smaller semi-major axes. However, it
would need to be explored what could trigger this just in-time migration.
Computations for this research were performed on the Pennsylvania State University's Institute for Computational & Data Sciences Advanced CyberInfrastructure (ICS-ACI). This content is solely the responsibility of the authors and does not necessarily represent the views of the Institute for CyberScience. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University and the Eberly College of Science. This project was supported in part by NASA XRP NNX16AB50G and NASA XRP 80NSSC18K0355, the National Science Foundation under Grant No. NSF PHY-1748958, and the Alfred P. Sloan Foundation's Sloan Research Fellowship. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Exoplanet Archive: Numpy
|
2308.11697 | A family of repulsive neutral conductor geometries via abstract vector
spaces | Recently it was shown that it is possible for a neutral, isolated conductor
to repel a point charge (or, a point dipole). Here we prove this fact using
general properties of vectors and operators in an inner-product space. We find
that a family of neutral, isolated conducting surface geometries, whose shape
lies somewhere between a hemispherical bowl and an ovoid, will repel a point
charge. In addition, we find another family of surfaces (with a different
shape) that will repel a point dipole. The latter geometry can lead to Casimir
repulsion. | Julian J. Dukes, Brian Shotwell | 2023-08-22T18:00:00Z | http://arxiv.org/abs/2308.11697v1 | # A family of repulsive neutral conductor geometries via abstract vector spaces
# A family of repulsive neutral conductor geometries via abstract vector spaces
Julian J. Dukes
Brian Shotwell
Department of Physics, University of California San Diego, La Jolla, CA 92093, USA
###### Abstract
Recently it was shown that it is possible for a neutral, isolated conductor to repel a point charge (or, a point dipole). Here we prove this fact using general properties of vectors and operators in an inner-product space. We find that a family of neutral, isolated conducting surface geometries, whose shape lies somewhere between a hemispherical bowl and an ovoid, will repel a point charge. In addition, we find another family of surfaces (with a different shape) that will repel a point dipole. The latter geometry can lead to Casimir repulsion.
## I Introduction
Electromagnetism courses and textbooks usually begin with an overview of electrostatics. At the upper-division or graduate level (following, for example, Griffiths Griffiths (1958) or Jackson Jackson (1958), respectively), a typical unit on electrostatics will explore methods for solving Laplace's/Poisson's Equation in some region and associated uniqueness theorems. Such a discussion includes the method of images. With this method, students are expected to find, for example, the force on a point charge placed near a grounded, conducting plane, or the force on a point charge placed near a neutral, isolated sphere. For all such cases presented in these texts, the force between the point charge and a neutral conductor is attractive.1 That is, when the point charge and conductor are held fixed in space, and when the charges on the conductor are allowed to come to electrostatic equilibrium, the resulting charge distribution on the conductor will attract the point charge.
Footnote 1: It may be ambiguous what is meant by attractive, but in cases where there is an azimuthal rotational symmetry of the conductor about some axis of symmetry, and where there is a plane normal to this axis dividing the point charge on one side and the conductor on the other, this is clear.
One might naturally ask if these are examples of a more general phenomenon -- whether a point charge is _always_ attracted to a conductor (whether grounded, or isolated and neutral). Ref. Dukes (2007) answered this in the negative for the case of an isolated, neutral conductor, giving a particular geometry and explicitly showing the repulsion.
In this paper we arrive at another geometry where there is repulsion between an isolated, neutral conductor and a point charge. However, we do so through different means. While Ref. Dukes (2007) explicitly computed potential energy as a function of the position of the point charge along an axis of symmetry, we instead use general properties of vectors and operators on an inner-product space of scalar-valued functions defined on a two-dimensional surface.2 Our treatment begins general, casting electrostatics in this language, but we eventually specify a particular geometry based on constraints that naturally arise in defining the force on a point charge or dipole in this setting. In addition to this method resulting in interesting repulsive geometries, we hope that this methodology can be generalized and applied to different problems altogether.
Footnote 2: We are not the first to cast electrostatics in this vector-space language — see, for example, Ref. Dukes (2007).
The structure of the paper is as follows: In Section II we introduce the inner-product space (which we'll also call a "vector space"), a class of operators on the space, and some vectors in the space; this also serves to set up some notation. We also discuss how charge distributions and electric potentials can be considered vectors in this vector space, and we introduce associated operators with physical interpretation. In Section III, we consider the force on a point charge (and, afterwards, a dipole) off the surface -- how it can be cast in the language of this vector space, and how it can be repulsive in some special cases. We conclude in Section IV with a summary and possibilities for future work.
## II Vector space; application to electrostatics
### Vector Space Definitions, Notation
Let \(\Omega\subset\mathbb{R}^{3}\setminus\{\mathbf{0}\}\) be a bounded surface, not necessarily closed. Let \(\mathcal{F}\) be the space of \(L^{2}\) square-integrable, real scalar functions \(\Omega\to\mathbb{R}\). For two such functions \(f,g\), define an inner product
\[\langle f|g\rangle=\int_{\vec{r}\in\Omega}f(\vec{r})g(\vec{r})\ dA \tag{1}\]
As we explore in more detail in this section, elements \(|\psi\rangle\in\mathcal{F}\) can represent electric potentials or charge densities on \(\Omega\). The bilinear scalar product (Eq. (1)) induces a dual space \(\mathcal{F}^{*}\) with elements \(\langle\psi|\). Furthermore, \(\mathcal{F}\) is in fact a Hilbert space, but most of the arguments presented in this paper only require general properties of an inner-product space and do not require the additional structure provided by a Hilbert space.
For every (well-enough behaved) function \(O(\vec{r},\vec{s}):\Omega\times\Omega\to\mathbb{R}\), we can define a linear operator \(O:\mathcal{F}\to\mathcal{F}\), such that
\[O\left|f\right\rangle\equiv\left|g\right\rangle,\ \text{where}\ g(\vec{r})=\int_{ \vec{s}\in\Omega}O(\vec{r},\vec{s})f(\vec{s})\ dA \tag{2}\]
Every function \(O(\vec{r},\vec{s})\) corresponds to an operator, but the converse is not true: there are linear operators \(O:\mathcal{F}\rightarrow\mathcal{F}\) that cannot be written as integrals via Eq. (2). We give an example in the next subsection with \(O=S^{-1}\).
Then, define the following3 on \(U\equiv\mathbb{R}^{3}\setminus\{\mathbf{0}\}\):
Footnote 3: The functions \(m\), \(d\), and \(c\) are labeled as such to reference “monopole,” “dipole,” and “constant,” respectively. Also, Eq. (4) is defined with the negative sign since, in Section III, we will be primarily be interested in surfaces \(\Omega\) that exist below the \(xy\)-plane. This polar angle \(\theta\) is the angle that \(\vec{r}\in U\) makes with the \(-\hat{z}\) direction.
\[r =\sqrt{x^{2}+y^{2}+z^{2}} \tag{3}\] \[\cos\theta =\frac{-z}{r}\] (4) \[m(\vec{r}) =\frac{1}{r}\] (5) \[d(\vec{r}) =\frac{\cos\theta}{r^{2}}\] (6) \[c(\vec{r}) =1 \tag{7}\]
And define the following on \(U\times U\) (minus the diagonal):
\[S(\vec{r}_{1},\vec{r}_{2})=\frac{1}{||\vec{r}_{1}-\vec{r}_{2}||}\qquad(\vec{r }_{1}\neq\vec{r}_{2}) \tag{8}\]
For every (well-enough behaved) function \(f(\vec{r}):U\rightarrow\mathbb{R}\), there exists a vector \(|f\rangle\), given by the restriction of the domain of \(f(\vec{r})\) to \(\Omega\). Likewise, for functions \(O(\vec{r}_{1},\vec{r}_{2}):U\times U\rightarrow\mathbb{R}\), there is an operator \(O\), defined via Eq. (2), restricting the domain of \(O(\vec{r}_{1},\vec{r}_{2})\) to \(\Omega\times\Omega\). This defines vectors \(|m\rangle\), \(|d\rangle\), and \(|c\rangle\), and the operator \(S\).
### Charge Densities and Electric Potentials as Vectors
\(S\) has the property that its action on a charge density (via Eq. (2)) gives back the surface charge density's contribution to the electric potential on the surface:4
Footnote 4: This sets the potential at infinity to be zero.
\[S\left|\sigma\right\rangle=\left|\phi\right\rangle,\text{ where }\phi(\vec{r})=\int_{\vec{s}\in\Omega}\frac{\sigma(\vec{s})\, dA}{||\vec{r}-\vec{s}||} \tag{9}\]
\(S(\vec{r},\vec{s})\) is a Green's function of the Laplacian operator, which lets us write Eq. (9). However, we emphasize that \(S\) is also an _operator_ in our vector space.
As discussed in Ref. [4], \(S\) is compact and self-adjoint since \(S(x,y)\) is a weakly singular kernel which is real-valued and symmetric with respect to \(x\) and \(y\). Note \(S\) being self-adjoint implies Green's reciprocity theorem:
\[\langle\sigma_{1}|S|\sigma_{2}\rangle=\langle\sigma_{1}|\phi_{2}\rangle= \langle\phi_{1}|\sigma_{2}\rangle \tag{10}\]
\(S^{-1}\), if it exists, maps an electric potential to the charge distribution that produces that potential on the surface:
\[S^{-1}\left|\phi\right\rangle=\left|\sigma\right\rangle \tag{11}\]
An explicit form of \(S^{-1}\), as in the form of Eq. (2), would involve the Laplacian. However, trying to do so is a little awkward given our framework: \(\nabla^{2}\phi(\vec{r})=\rho(\vec{r})\) (\(\vec{r}\in\mathbb{R}^{3}\)) makes reference to the electric potential off the surface, and so we can't write \(S^{-1}\) explicitly over the surface alone.
The operator \(S\) is injective but not surjective:5
Footnote 5: One might expect \(S\) to be bijective in light of existence/uniqueness theorems. Several of these theorems require continuity and/or smoothness of \(\Omega\) and of the boundary value data, which we do not impose here. See, for example, Ref. [5].
* Injectivity: if there exist two distinct charge distributions that induce the same potential, \[S\left|\sigma_{1}\right\rangle=S\left|\sigma_{2}\right\rangle=\left|\phi \right\rangle,\qquad(\sigma_{1}\neq\sigma_{2})\] (12) then their difference would be a nonzero charge distribution that produces zero electric potential everywhere on the surface. This is impossible, and so \(S\) is one-to-one.
* (non-)Surjectivity: there exist functions which may be charge distributions but cannot be potentials induced by a surface charge distribution: for example, any discontinuous function. Therefore, not every element of \(\mathcal{F}\) is in the image of \(S\), and \(S\) is not onto.
Therefore, there exists an inverse \(S^{-1}\) that is well-defined so long as we restrict the domain of \(S^{-1}\) to the image of \(S\). These are "physical potentials," defined to be those potentials which result from a physical charge distribution on \(\Omega\). We discuss the matter further and review uses of the operator \(S^{-1}\) in Appendix A.
### Conducting Surfaces
We focus now on surfaces where \(\left|\sigma\right\rangle\) is not specified, but rather takes on whatever value it must to minimize the potential energy of the system. That is, we focus on conducting surfaces \(\Omega\).
Suppose \(\Omega\) is a neutral conducting surface (or the surface of a conducting object). Then, charges on that surface will arrange themselves so as to minimize total electric potential energy. If there are no charges outside of the surface, this is, up to a constant,
\[U =\frac{1}{2}\int_{\vec{r}\in\Omega}\int_{\vec{s}^{\prime}\in \Omega}\frac{\sigma(\vec{r})\sigma(\vec{s}^{\,\prime})}{||\vec{r}-\vec{s}^{ \,\prime}||}\ dA\ dA^{\prime} \tag{13}\] \[=\frac{1}{2}\left\langle\sigma|S|\sigma\right\rangle \tag{14}\]
Since this quadratic form achieves its minimum only when \(\left|\sigma\right\rangle=\left|0\right\rangle\), \(S\) is positive definite. This is a physical argument: the energy can be rewritten as an integral over the (non-negative) energy-density, which is non-negative and zero iff \(\left|\sigma\right\rangle=\left|0\right\rangle\).
If a unit point charge is placed at the origin,6 then, in addition to the expression for the potential energy given in Eq. (14), there is another term. This term, the potential energy stored in the interaction between the point charge at the origin and the surface distribution on \(\Omega\), is linear in \(\sigma\). Any linear functional \(g:\mathcal{F}\rightarrow\mathbb{R}\) may be expressed as an inner product \(\left\langle g|\sigma\right\rangle\); in this case, we have
Footnote 6: A _unit_ point charge at the origin normalizes the charge-distribution and electric potential on the surface \(\Omega\).
\[U=\frac{1}{2}\langle\sigma|S|\sigma\rangle+\left\langle m|\sigma\right\rangle \tag{15}\]
First, we consider conductors with no restrictions on \(\left|\sigma\right\rangle\). The potential energy Eq. (15) is a convex quadratic form that is minimized when \(\left|\sigma\right\rangle\) satisfies
\[S\left|\sigma\right\rangle+\left|m\right\rangle=\left|0\right\rangle \tag{16}\]
This is equivalent to grounding the conductor -- when energy is minimized, the electric potential due to the charge distribution on \(\Omega\) and due to the point charge off the surface cancel for every point on the surface.
When the conductor is isolated and neutral, we can use 1. the fact that the conductor is an equipotential, and 2. the fact that the total charge is zero, to write an explicit form for \(\left|\sigma\right\rangle\) in terms of \(\left|c\right\rangle\), \(\left|m\right\rangle\), and \(S^{-1}\).
The constant potential on the surface of the conductor allows us to write
\[S\left|\sigma\right\rangle+\left|m\right\rangle=\kappa\left|c\right\rangle, \tag{17}\]
where \(\kappa\) is the constant electric potential on the conductor. Solving for \(\left|\sigma\right\rangle\),
\[\left|\sigma\right\rangle=\kappa S^{-1}\left|c\right\rangle-S^{-1}\left|m\right\rangle \tag{18}\]
Total charge is another linear functional \(\mathcal{F}\rightarrow\mathbb{R}\); the constraint that the total charge on the conductor is zero is equivalent to requiring \(\left\langle c|\sigma\right\rangle=0\). We can take the inner product of Eq. (18) with \(\left|c\right\rangle\) to utilize this constraint and solve for \(\kappa\):
\[\left\langle c|\sigma\right\rangle=0 =\kappa\left\langle c|S^{-1}|c\right\rangle-\left\langle c|S^{-1} |m\right\rangle \tag{19}\] \[\kappa =\frac{\left\langle c|S^{-1}|m\right\rangle}{\left\langle c|S^{- 1}|c\right\rangle} \tag{20}\]
Plugging this expression for \(\kappa\) into Eq. (18) gives
\[\left|\sigma\right\rangle=S^{-1}\left|c\right\rangle\frac{\left\langle c|S^{- 1}|m\right\rangle}{\left\langle c|S^{-1}|c\right\rangle}-S^{-1}\left|m\right\rangle \tag{21}\]
This gives an explicit solution for the induced charge distribution. Unfortunately, there is no explicit expression for how to compute \(S^{-1}\) on the surface, so this is difficult to calculate in general.
## III Repulsive geometries
### Point Charge Repulsion
A neutral, isolated conducting surface \(\Omega\) that exists entirely below the \(xy\)-plane (i.e., negative \(z\)) is considered _repulsive_ if, when a unit point charge is placed at the origin, the charge distribution induced on the surface (from the point charge) exerts a force on that point charge whose \(z\)-component is positive.
As before, we use the fact that the \(z\)-component of the force exerted on a unit point charge at the origin is a linear functional \(\mathcal{F}\rightarrow\mathbb{R}\); in this case, this is \(\left\langle d|\sigma\right\rangle\). Substituting the expression for \(\left|\sigma\right\rangle=Q\left|m\right\rangle\) from Eq. (21), this becomes
\[F_{z}=\left\langle d|Q|m\right\rangle, \tag{22}\]
where \(Q\) is self-adjoint and defined via
\[Q\equiv\frac{S^{-1}\left|c\right\rangle\left\langle c\right|S^{-1}}{\left\langle c |S^{-1}|c\right\rangle}-S^{-1} \tag{23}\]
The operator \(Q\) is negative semi-definite: \(\left\langle x|Q|x\right\rangle\leq 0\) for all \(\left|x\right\rangle\). Additionally, its null space is spanned by \(\left|c\right\rangle\). Proof: Since \(S^{-1}\) is positive definite (on its domain of physical potentials), it defines an inner product. With respect to this inner product, \(\left\langle x|Q|x\right\rangle\) is the norm of the projection of \(\left|x\right\rangle\) onto \(\left|c\right\rangle\) minus the norm of \(\left|x\right\rangle\); the latter must be greater if \(\left|x\right\rangle\neq k\left|c\right\rangle\), and the difference is zero iff \(\left|x\right\rangle=k\left|c\right\rangle\).
This explains why the force is almost always attractive (i.e., why \(F_{z}=\left\langle d|Q|m\right\rangle\) is almost always negative). The vectors \(\left|m\right\rangle\) and \(\left|d\right\rangle\) have a lot of overlap: the functions are each positive everywhere below the \(xy\)-plane, and each decreases asymptotically to zero with greater distance from the origin. It therefore makes intuitive sense that, if \(Q\) maps every identical pair of vectors to a number \(\leq 0\), the same would hold for most pairs that are "close enough."
However, this is not always true -- the force is not always attractive. If, given the surface \(\Omega\), there exist positive constants \(k_{1}\) and \(k_{2}\) such that
\[k_{1}\left|m\right\rangle+k_{2}\left|d\right\rangle=k_{3}\left|c\right\rangle \tag{24}\]
(\(k_{3}>0\)), then the surface is repulsive. The negative semi-definiteness of \(Q\), along with the fact that \(\left|c\right\rangle\) is in its null space, can be used to show this:7
Footnote 7: Note that, in Eq. (26), we took the inner product with \(\left|m\right\rangle\), but using \(\left|d\right\rangle\) would have worked as well.
\[Q\left|k_{1}m+k_{2}d\right\rangle =\left|0\right\rangle \tag{25}\] \[\left\langle m|Q|k_{1}m+k_{2}d\right\rangle =0\] (26) \[\left\langle m|Q|d\right\rangle =-\frac{k_{1}}{k_{2}}\left\langle m|Q|m\right\rangle \tag{27}\]
\(\langle m|Q|m\rangle\) is negative and \(-k_{1}/k_{2}\) is also negative, and so \(\langle m|Q|d\rangle=F_{z}>0\). Therefore, \(\Omega\) is repulsive.
These surfaces are parameterized by the constraint in Eq. (24). Restricting to negative \(z\), they are defined by
\[\frac{k_{1}}{r}+\frac{k_{2}\cos\theta}{r^{2}}=k_{3} \tag{28}\]
We can divide by \(k_{3}\) and rescale the constant coefficients to make them dimensionless:
\[k_{1}^{\prime}\frac{R}{r}+(k_{2}^{\prime})^{2}\left(\frac{R}{r}\right)^{2} \cos\theta=1 \tag{29}\]
We are left with a family of repulsive surfaces parameterized by \(\{k_{1}^{\prime},k_{2}^{\prime}\}\), where \(k_{1}^{\prime}=k_{1}/(k_{3}R)\) and \(k_{2}^{\prime}=\sqrt{k_{2}/(k_{3}R^{2})}\). Under the transformation \(\{k_{1}^{\prime},k_{2}^{\prime}\}\rightarrow\{\lambda k_{1}^{\prime},\lambda k _{2}^{\prime}\}\), the surface is rescaled, but its shape remains the same. We can fix the scale by setting the maximum distance of the surface from the origin, achieved when \(\theta=0\), to \(R\), so that the shape is controlled by a single parameter \(k=k_{1}^{\prime}=1-(k_{2}^{\prime})^{2}\), where \(k\in(0,1)\):
\[k\frac{R}{r}+(1-k)\left(\frac{R}{r}\right)^{2}\cos\theta=1 \tag{30}\]
We look at the scale-dependence of \(F_{z}\) and introduce some dimensionless notation in Appendix B; the force \(F_{z}\) is proportional to \(1/R^{2}\).
This phenomenon may be explained using the same intuition as Ref. [3]: their example began with a hemisphere, then moved the hemisphere down (in the \(-z\) direction) some arbitrarily small amount, causing the rim to be negatively charged and the bottom positively charged. These negative charges are closer to the point charge by an arbitrarily small amount, but their effect is attenuated by their shallow angle, so the repulsion from the positive charges dominates. These surfaces do essentially the same thing: starting with a hemisphere when \(k\) is near 1, decreasing \(k\) warps the rim toward the point charge, which causes the same effect. One interesting consequence that we find here is that it is not mandatory for the surface to contain all points \(z<0\) that satisfy the constraint.
Our constraint gives a parameterization of the surface \(r(\theta)\) that seems to imply that it must be open. However, we recognize the fact that any physical material will have a closed surface boundary. To resolve the discrepancy, we give the surface a small thickness \(t\ll R\). This is similar in spirit to Ref. [3], where they gave their hemispherical bowl a small thickness as well.
### Dipole Repulsion, Casimir Repulsion
We now turn our attention to a _dipole_ placed at the origin instead of a point charge. Again, we consider a conducting surface below the \(xy\) plane.8 Equations (17) to (21) apply in this scenario with the replacement \(\left|m\right\rangle\rightarrow\left|d\right\rangle\); Eq. (21) with this replacement gives the induced charge distribution on the conductor due to the dipole placed at the origin:
Footnote 8: If the dipole has any finite spatial extent (which we assume is \(\ll R\)), we require that the conductor lie entirely below the dipole.
\[\left|\sigma\right\rangle=Q\left|d\right\rangle \tag{31}\]
(\(Q\) is the same as before.) Note this assumes that the dipole moment is pointing in the negative \(z\) direction.
To help in our discussion of the force on the dipole, we define one more function \(q(\vec{r})\) (and associated vector \(\left|q\right\rangle\)) via
\[q(\vec{r})=\frac{3\cos^{2}\theta-1}{2r^{3}} \tag{32}\]
The \(z\)-component of the force on the dipole is \(F_{z}=-F_{z}^{\text{cond}}\), where \(F_{z}^{\text{cond}}\) is the \(z\)-component of the force on the conductor due to the dipole. By considering the \(z\)-component of the electric field contribution from a dipole, we can write
\[F_{z}=-F_{z}^{\text{cond}} =\int_{\vec{r}\in\Omega}\left(\frac{3\cos^{2}\theta-1}{r^{3}} \right)\sigma(\vec{r})\,dA \tag{33}\] \[F_{z} =2\left\langle q|\sigma\right\rangle=2\left\langle q|Q|d\right\rangle \tag{34}\]
The argument is then similar to before. If, given the surface \(\Omega\), there exist positive constants \(k_{1}\) and \(k_{2}\) such that
\[k_{1}\left|d\right\rangle+k_{2}\left|q\right\rangle=k_{3}\left|c\right\rangle, \tag{35}\]
(\(k_{3}>0\)) then the force on the dipole is repulsive. These surfaces can be parameterized by
\[k\left(\frac{R}{r}\right)^{2}\cos\theta+(1-k)\left(\frac{R}{r}\right)^{3} \left(\frac{3\cos^{2}\theta-1}{2}\right)=1 \tag{36}\]
References [3] and [6] explain the relevance of this system to Casimir repulsion. If our dipole at the origin is replaced with a neutral, conducting needle (lying along
Figure 1: Plots of \(\Omega_{0}\) (dimensionless, see App. B) for point charge repulsion. FIG. 1a (left): 3D plots of surfaces restricted to below the \(xy\) plane. Plotted are \(k=0.75\) (blue, outer surface) and \(k=0.25\) (gold, inner surface). FIG. 1b (right): 2D cross-sections of the 3D surfaces (colors the same as the 3D plot and the blue \(k=0.75\) line is dashed). The limit \(k\to 1\) is a hemispherical bowl.
the \(z\) axis), then the instantaneous dipole-dipole forces between the needle and the conductor are repulsive, and the system exhibits the same sort of Casimir repulsion as described in these references. Through a similar analysis as that of Appendix B, the force on the dipole is \(\sim 1/R^{4}\), as expected for a Casimir force.
The surfaces parameterized by Eq. (36) have one solution for \(r/R\) when \(\theta\in[0,\theta_{\rm crit})\), two solutions when \(\theta\in(\theta_{\rm crit},\theta_{\rm max})\), and zero solutions when \(\theta\in(\theta_{\rm max},\pi/2)\). The critical value \(\theta_{\rm crit}\equiv\cos^{-1}(1/\sqrt{3})\approx 54.7^{\circ}\) is independent of \(k\), but \(\theta_{\rm max}(k)\) depends on \(k\).9 In Figures 2 and 3, the "second solution" (with smaller radius) is shown in blue; this portion of the surface is a "cap" to the (now closed) surface, extending all the way to the origin. However, we emphasize that in Eq. (33) we assumed that every part of the surface was very far away from the dipole. If the dipole has any spatial extent, then a portion of the cap of the surface must be removed, which does not change the fact that the surface is repulsive since it still satisfies the constraint.
Footnote 9: \(\theta_{\rm max}(k=0.75)\approx 66.2^{\circ}\) and \(\theta_{\rm max}(k=0.25)\approx 55.8^{\circ}\). \(\theta_{\rm max}(k)\) approaches \(\pi/2\) in the \(k\to 1\) limit and \(\theta_{\rm crit}\) in the \(k\to 0\) limit.
## IV Summary and Future Work
In this letter we have introduced an inner-product space and cast scalar-valued functions in electrostatics as vectors in this inner-product space. Using general properties of particular operators in this space, we were able to derive a class of conductive geometries that can repel a point charge. Another class of geometries can repel a point dipole, which can lead to Casimir repulsion.
There are a few clear opportunities for future work:
* Either numerically or analytically, find the value of \(k\) that would extremize \(F_{z}\).
* One might want to redefine the vector space \(\mathcal{F}\), and possibly use a different space for charge distributions and for electric potentials, as discussed in Appendix A.
* Instead of defining \(\mathcal{F}\) as a single space of scalar-valued functions on \(\Omega\), one could instead could have a family of surfaces that depend continuously on some parameter space, and then make a vector bundle over that parameter space, so that each vector space fiber in it varies continuously with the parameters.
###### Acknowledgements.
The authors thank Prof. Jeffrey M. Rabin for very helpful comments on an earlier version of the manuscript.
## Appendix A Invertibility of \(S\)
The domain of \(S^{-1}\) is not the entire space, but some subspace of "valid" potential functions, referring to functions which may represent the potential generated by a charge distribution confined to the surface. We now verify that \(S^{-1}\) only acts on such vectors.
Figure 2: Plot of \(\Omega_{0}^{k=0.75}\) (dimensionless, see App. B) for point dipole repulsion. FIG. 2a (left): 3D plot of \(\Omega_{0}^{k=0.75}\) restricted to below the \(xy\) plane. The blue cap is the smaller-radius solution for those \(\theta\in(\theta_{\rm crit},\theta_{\rm max})\) that have two solutions. (See text for details.) FIG. 2b (right): 2D cross-section.
Figure 3: Similar to FIG. 2, but for \(k=0.25\).
Figure 4: Level curves \(m_{0}(\vec{u})=1\) (blue, dashed), \(d_{0}(\vec{u})=1\) (gold, solid), and \(q_{0}(\vec{u})=1\) (green, dot-dashed). Point charge repulsion is possible for level curves of convex combinations of the first two functions, and point dipole repulsion is possible for level curves of convex combinations of the last two.
Throughout this paper, \(S^{-1}\) acts only on \(S\left|\sigma\right>\), \(\left|c\right>\), \(\left|m\right>\), and \(\left|d\right>\). The first of these is the potential on \(\Omega\) produced by the charge density \(\left|\sigma\right>\), so this is certainly a valid physical potential. The second is a constant potential \(c(\vec{r})=1\) on \(\Omega\) -- this is the equilibrium situation for an isolated conductor with the appropriate amount of charge on it to produce this unit potential, absent of any external sources. The remaining two of these are potentials \(\left|\phi_{\text{ext}}\right>\) caused by an external source (a point charge or dipole at the origin). Suppose we were to ground the conductor with the external source held fixed. In that case, the induced charge distribution on \(\Omega\) would exactly cancel the external potential. In other words, a charge density \(\left|\sigma_{\text{ind}}\right>\) is induced on \(\Omega\) such that
\[S\left|\sigma_{\text{ind}}\right>=-\left|\phi_{\text{ext}}\right> \tag{30}\]
This implies that the charge density \(\left|\sigma\right>=-\left|\sigma_{\text{ind}}\right>\) can produce the potential \(\left|\phi_{\text{ext}}\right>\) on \(\Omega\), and therefore these are "valid" potential functions in the domain of \(S^{-1}\). Furthermore, any linear combination of valid potentials is a valid potential.
In this paper, we have restricted the domain of \(S^{-1}\). An alternative approach might be to find some way to treat charge distributions and potentials as two different vector spaces, where the transpose, rather than the operator \(S\), maps charge distributions to the potentials they induce. We have not done this because it would prevent the map from charge to potential from having well-defined eigenvectors. Though the spectrum of \(S\) is not considered here, we believe it may have interesting properties for future investigation.
## Appendix B Scale Dependence of \(F_{z}\)
Here we explore how \(F_{z}\) for point charge repulsion depends on the length scale, \(R\). Rewriting the expression for \(F_{z}\) in Eq. (27) in terms of the rescaled constants, we have (expressing \(-F_{z}\) in terms of both \(\left|m\right>\) and \(\left|d\right>\) to emphasize the symmetry between the two)
\[-F_{z} =\frac{k_{1}}{k_{2}}\left<m|Q|m\right>=\frac{k_{2}}{k_{1}}\left< d|Q|d\right> \tag{31}\] \[=\frac{k_{1}^{\prime}R}{k_{2}^{\prime 2}R^{2}}\left<m|Q|m\right>= \frac{k_{2}^{\prime 2}R^{2}}{k_{1}^{\prime}R}\left<d|Q|d\right>\] (32) \[=R^{-1}\frac{k}{1-k}\left<m|Q|m\right>=R\frac{1-k}{k}\left<d|Q|d\right> \tag{33}\]
We have extracted the scale-dependence of the constants \(k_{1}\) and \(k_{2}\), but not yet the vectors, operators, nor inner products.10 We separate the scale-dependence by defining the dimensionless functions/vectors
Footnote 10: For example, note \(m\sim R^{-1}\) and \(d\sim R^{-2}\). In addition, operators and inner products also carry some scale-dependence.
\[m(\vec{r}) =\frac{1}{r}=\frac{1}{R}\frac{1}{u}\equiv\frac{1}{R}m_{0}(\vec{u}) \tag{34}\] \[d(\vec{r}) =\frac{\cos\theta}{r^{2}}=\frac{1}{R^{2}}\frac{\cos\theta}{u^{2}} \equiv\frac{1}{R^{2}}d_{0}(\vec{u}) \tag{35}\]
\(\vec{u}\) is the vector from the origin to the point on the "unit surface" \(\Omega_{0}\), parameterized by Eq. (30) with \(u\equiv r/R\). There is an analogous definition for \(S_{0}\) such that11\(S=RS_{0}\), and therefore \(S^{-1}=S_{0}^{-1}/R\) and \(Q=Q_{0}/R\). The inner product carries some dimensions with it via the measure \(dA_{\Omega}=R^{2}dA_{\Omega_{0}}\); denote inner products over the unit surface via \(\left<\cdot\right|\right>_{0}\). Making all scale dependence explicit leaves us with
Footnote 11: We are saying that the operator \(S\sim R\). If you expected that \(S\sim 1/R\) because the function \(S(\vec{r}_{1},\vec{r}_{2})\) scales as \(\sim 1/R\), note the _operator_\(S\) also includes an integral over the surface, and so there is an additional factor \(\sim R^{2}\).
\[F_{z}=-R^{-2}\frac{k}{1-k}\left<m_{0}|Q_{0}|m_{0}\right>_{0}=-R^{-2}\frac{1-k} {k}\left<d_{0}|Q_{0}|d_{0}\right>_{0} \tag{36}\]
As expected from dimensional analysis, the force is proportional to \(1/R^{2}\).
|
2306.16454 | States, symmetries and correlators of $T\bar{T}$ and $ J\bar{T} $
symmetric orbifolds | We derive various properties of symmetric product orbifolds of $T\bar{T}$ and
$J\bar{T}$ - deformed CFTs from a field-theoretical perspective. First, we
generalise the known formula for the torus partition function of a symmetric
orbifold theory in terms of the one of the seed to non-conformal
two-dimensional QFTs; specialising this to seed $T\bar{T}$ and $J\bar{T}$ -
deformed CFTs reproduces previous results in the literature. Second, we show
that the single-trace $T\bar{T}$ and $J\bar{T}$ deformations preserve the
Virasoro and Kac-Moody symmetries of the undeformed symmetric product orbifold
CFT, including their fractional counterparts, as well as the KdV charges.
Finally, we discuss correlation functions in these theories. By extending a
previously-proposed basis of operators for $J\bar{T}$ - deformed CFTs to the
single-trace case, we explicitly compute the correlation functions of both
untwisted and twisted-sector operators and compare them to an appropriate set
of holographic correlators. Our derivations are based mainly on Hilbert space
techniques and completely avoid the use of conformal invariance, which is not
present in these models. | Soumangsu Chakraborty, Silvia Georgescu, Monica Guica | 2023-06-28T18:00:02Z | http://arxiv.org/abs/2306.16454v2 | # States, symmetries and correlators of \(T\bar{T}\) and \(J\bar{T}\) symmetric orbifolds
###### Abstract
We derive various properties of symmetric product orbifolds of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs from a field-theoretical perspective. First, we generalise the known formula for the torus partition function of a symmetric orbifold theory in terms of the one of the seed to non-conformal two-dimensional QFTs; specialising this to seed \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs reproduces previous results in the literature. Second, we show that the single-trace \(T\bar{T}\) and \(J\bar{T}\) deformations preserve the Virasoro and Kac-Moody symmetries of the undeformed symmetric product orbifold CFT, including their fractional counterparts, as well as the KdV charges. Finally, we discuss correlation functions in these theories. By extending a previously-proposed basis of operators for \(J\bar{T}\) - deformed CFTs to the single-trace case, we explicitly compute the correlation functions of both untwisted and twisted-sector operators and compare them to an appropriate set of holographic correlators. Our derivations are based mainly on Hilbert space techniques and completely avoid the use of conformal invariance, which is not present in these models.
###### Contents
* 1 Introduction
* 2 The spectrum and the entropy
* 2.1 Review of the \(T\bar{T}\) and \(J\bar{T}\)-deformed spectrum and partition function
* 2.2 Torus partition function of general symmetric product orbifold QFTs
* 2.3 Spectrum of \(T\bar{T}\) and \(J\bar{T}\) symmetric product orbifolds
* 2.4 Comments on the entropy
* 3 Flow of the states and symmetries
* 3.1 Brief review of the symmetries of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs
* 3.2 Brief review of the symmetries of symmetric orbifold CFTs
* 3.3 Flow of the states in single-trace \(T\bar{T}\) and \(J\bar{T}\)-deformed CFTs
* 3.4 Symmetries of single-trace \(T\bar{T}\) and \(J\bar{T}\)-deformed CFTs
* 4 Correlation functions
* 4.1 Review of correlation functions in \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs
* 4.2 Correlation functions in single-trace \(J\bar{T}\) - deformed CFTs
* 4.3 Comparison with holographic results
* 5 Conclusions
## 1 Introduction
The study of symmetric product orbifolds of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs is interesting for a number of reasons. First, symmetric product orbifolds of two-dimensional QFTs play an important role in holography, as their large \(N\) behaviour is compatible with that of a gravitational dual where quantum-gravitational corrections are supressed [1]. When the seed theory is a CFT, they enter concrete realisations of the AdS\({}_{3}\)/CFT\({}_{2}\) correspondence [2, 3, 4, 5, 6, 7]. According to the proposals of [8, 9, 10], symmetric product orbifolds of \(T\bar{T}\)[11, 12] and \(J\bar{T}\) - deformed CFTs [13] - a set of non-local, yet UV-complete and solvable two-dimensional QFTs - should provide tractable models of three-dimensional non-AdS holography. More precisely, the \(T\bar{T}\) symmetric orbifold should be related to a spacetime that is asymptotically flat with a linear dilaton, whereas the \(J\bar{T}\) one should correspond to a warped AdS\({}_{3}\) background, which is relevant to understanding the Kerr/CFT correspondence [14, 15].
The study of symmetric product orbifolds of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs is also interesting from the point of view of the original motivation of [11, 12] - namely, to understand the space of integrable two-dimensional QFTs. The existence of exactly solvable irrelevant deformations of two-dimensional QFTs whose UV behaviour is not governed by a standard UV CFT fixed point, yet is entirely under control [16], is quite remarkable. The orbifold construction provides a simple way to enlarge the set of tractable examples of such QFTs. The properties of the resulting theories are similar - though not exactly the same - as those of the seed QFTs. It is a useful exercise to work them out explicitly from first principles, which is the main goal of this article.
Another motivation for studying this problem is that neither \(T\bar{T}\), nor \(J\bar{T}\) - deformed CFTs possess (full) conformal invariance, which is nevertheless omnipresent in the symmetric product orbifold literature. We would therefore like to use these examples to illustrate the fact that many observables in symmetric orbifold QFTs _can_ be obtained without the conformality asssumption. Depending on the specifics of the system under study, these observables can even include twisted-sector correlation functions, as we show explicitly for the case of single-trace \(J\bar{T}\) - deformed CFTs.
The analysis presented in this article is purely field-theoretical, and the QFTs under study are _exact_ symmetric product orbifolds of \(T\bar{T}\) or \(J\bar{T}\) - deformed CFTs, obtained via a single-trace \(T\bar{T}\)/\(J\bar{T}\) deformation of an _exact_ symmetric orbifold of two-dimensional CFTs. As a result, the large \(N\) holographic duals of these theories are highly stringy. Our setup is thus different from that used in the holographic proposals [8, 9, 10], who deformed an _approximate_ symmetric product orbifold of CFTs - namely, the CFT dual to the near horizon of several NS5 branes and a large number of F1 strings1 - by an operator whose action resembles that of the single-trace \(T\bar{T}\) or \(J\bar{T}\) operator2. These deformations were argued to correspond to exactly marginal deformations of the worldsheet string theory, which can be studied with a variety of techniques [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Most of the results obtained so far in the "single-trace \(T\bar{T}\)" and \(J\bar{T}\) literature were in fact derived using worldsheet methods that, given the only approximate identification of the boundary deformation with single-trace \(T\bar{T}\)/\(J\bar{T}\), may or may not agree with the exact symmetric product orbifold calculations. Thus, yet another motivation for this work is to provide an _independent_ derivation of various properties of these theories that were previously predicted via holography.
Footnote 1: See [17] for a proposed concrete realisation of this CFT.
Footnote 2: Throughout this article, the single-trace \(T\bar{T}\)/\(J\bar{T}\) operator will simply denote the sum over copies, in a symmetric orbifold QFT, of the corresponding Smirnov-Zamolodchikov operator. By contrast, in [8, 9, 10] “single-trace \(T\bar{T}\)/\(J\bar{T}\)” is a nickname given to a certain operator of dimension \((2,2)/(1,2)\) that is single-trace (in the sense of corresponding to a single-particle bulk excitation) and some of whose properties resemble those of \(T\bar{T}\)/\(J\bar{T}\).
The first observable we study is the finite-size spectrum of the orbifolded theories. This has been first computed using worldsheet methods, by studying the effect of the exactly marginal deformations on the spectrum of long strings in the massless BTZ background [9, 10, 18]. More precisely, it was shown that the spectrum of singly-wound long strings in the deformed backgrounds precisely coincides with the \(T\bar{T}\) and, respectively, \(J\bar{T}\) - deformed spectrum, which provided a non-trivial check of the proposed duality; the string theory prediction for the spectrum of multiply-wound strings was then naturally conjectured to represent the contribution of the twisted sectors of the symmetric product orbifold in this specific example. The \(T\bar{T}\) result has been recently confirmed by the field-theoretical analysis of [29], who fixed the partition function by requiring it to be modular invariant in a generalised \(T\bar{T}\) sense [30].
As already noted in [30], this modular invariance is an automatic property of the partition function of any (UV - complete) QFT with a single dimensionful scale; the generalization to several parameters, including non Lorentz-invariant ones, is straightforward [31]. In this article, we provide a general expression for the partition function of the symmetric orbifold of such theories, based on a slight generalisation of Bantay's formula [32, 33, 34] for the case of CFTs; its modular invariance follows automatically from that of the seed QFT. When applied to the case of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, this partition function precisely reproduces or generalises previous results in the literature.
Given the partition function, one may analyse the thermodynamic properties of the symmetric product orbifold of \(T\bar{T}/J\bar{T}\) - deformed CFTs. The \(T\bar{T}\) case has been analysed in detail in [29]. We use these results to compare the entropy of a single-trace to that of a double-trace \(T\bar{T}\) deformation [35] of a symmetric orbifold CFT and note that while they agree - as they should - in the universal high-energy regime discussed in [29], they disagree outside it. We also discuss the entropy of single/double-trace \(J\bar{T}\) - deformed CFTs, showing there exists a regime of real high energies where the behaviour of the entropy is either Cardy-like or Hagedorn, depending on the chirality properties of the \(U(1)\) current.
Next, we study the extended symmetries of single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs. A non-trivial property of the standard \(T\bar{T}\) and \(J\bar{T}\) deformations is that they preserve the Virasoro and, if present, the Kac-Moody symmetries of the undeformed CFT [36, 37, 38]. That the same is true of the single-trace deformation is strongly suggested by the results of the asymptotic symmetry group analysis of the linear dilaton spacetime [39], which uncovered an infinite set of symmetries, whose algebra closely resembles the \(T\bar{T}\) symmetry algebra.
In this article, we provide a purely field-theoretical proof that these symmetries are indeed preserved, closely following the argument used in the double-trace case [37, 38]. This argument requires understanding the operator that drives the flow of the energy eigenstates under the single-trace \(T\bar{T}/J\bar{T}\) deformation, which is technically more complicated than the corresponding double-trace flow in that many of the initial CFT degeneracies are broken when the deformation is first turned on. We also discuss other bases of symmetry generators, which are non-linearly related to the Virasoro one, and argue that they may be preferred at a global level in the single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFT. Working out the corresponding non-linear symmetry algebra in single-trace \(T\bar{T}\) - deformed CFTs, we show the result agrees precisely with the holographic calculation [39]. In addition, we show that the KdV charges and the fractional Virasoro and Kac-Moody modes are preserved by the deformation; the fate of the higher spin symmetries such as those discussed in [40] is less clear.
Finally, we turn our attention to correlation functions. For standard \(J\bar{T}\) - deformed CFTs, these have been understood in [41] (see also [42]), and recently have also been computed in \(T\bar{T}\) - deformed CFTs [43] (see also [44]), using rather different methods. In addition, several holographic calculations of two-point functions - using either worldsheet or supergravity techniques - were performed in [20, 21, 22, 23, 24]. We provide explicit expressions for the correlation functions of a proposed set of both untwisted and twisted-sector operators in single-trace \(J\bar{T}\) - deformed CFTs, which we then compare with a holographic computation of the two-point functions of long string vertex operators - the only worldsheet operators that are described by a symmetric product orbifold - performed using the methods of [20, 24]. The two results are found to slightly differ, and we comment on possible reasons for this.
This article is organised as follows. In section 2, we study the torus partition function of symmetric product orbifolds of general two-dimensional QFTs and show that it can be obtained via a slight generalisation of Bantay's formula; we work out the \(T\bar{T}\) and \(J\bar{T}\) case as an example. We also comment on the thermodynamics of single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs. In section 3, we study the flow of the states and of the Virasoro (- Kac-Moody) generators, including their fractional counterparts, in single-trace \(T\bar{T}\) and \(J\bar{T}\) deformed CFTs and show that they are still conserved, as are the KdV charges. We also discuss other possible bases of symmetry generators. Finally, in section 4 we compute correlation functions in single-trace \(J\bar{T}\) - deformed CFTs and compare them with an appropriate holographic result. We end with a summary in section 5. For completeness, each section contains an introductory subsection that summarizes the relevant results from the double-trace case.
## 2 The spectrum and the entropy
In this section, we explain in a simple fashion how to obtain the finite-size spectrum of a symmetric product orbifold of any two-dimensional QFT whose partition function is modular invariant in an appropriately generalised sense. Our results are exemplified by symmetric orbifolds of \(T\bar{T}\) - deformed CFTs in the Lorentz-invariant case, and symmetric orbifolds of \(J\bar{T}\) - deformed CFTs in the non-Lorentz-invariant one. For completeness, we start this section with a brief review of the spectrum and partition function of standard (double-trace) \(T\bar{T}\) and \(J\bar{T}\) - deformed QFTs.
### Review of the \(T\bar{T}\) and \(J\bar{T}\)-deformed spectrum and partition function
One remarkable feature of \(T\bar{T}\), \(J\bar{T}\) deformations and their generalisations is that the spectrum of the deformed QFT on a cylinder of circumference \(R\) is entirely determined by the finite-size spectrum of the undeformed QFT, as we now review.
#### \(T\bar{T}\) - deformed QFTs
The \(T\bar{T}\) deformation is a universal irrelevant deformation of a two-dimensional QFT by an operator constructed from the components of the stress tensor
\[\partial_{\mu}S=\frac{1}{2}\int d^{2}x\,{\cal O}_{T\bar{T}}^{[\mu]}\;,\;\;\;\; \;\;{\cal O}_{T\bar{T}}=\epsilon^{\alpha\beta}\epsilon^{\gamma\delta}T_{ \alpha\gamma}T_{\beta\delta} \tag{2.1}\]
which enjoys nice factorization properties in energy eigenstates [45, 11]. These properties imply that the energies \(E_{n}^{[\mu]}(R)\) of the eigenstates of the deformed theory on a cylinder of circumference \(R\) obey Burger's equation
\[\partial_{\mu}E_{n}^{[\mu]}(R)=E_{n}^{[\mu]}(R)\frac{\partial E_{n}^{[\mu]}( R)}{\partial R}+\frac{P_{n}^{2}(R)}{R} \tag{2.2}\]
This equation can be solved via the method of characteristics. For \(P=0\), the solution is simply given by \(E_{n}^{[\mu]}(R)=E_{n}^{[0]}(R+\mu E_{n}^{[\mu]})\), where \(E_{n}^{[0]}\) are the undeformed energies; the solution for \(P\neq 0\) is a slight generalisation of this result [12]. Thus, if the spectrum of the undeformed QFT is known explicitly as a function of \(R\), then so is the spectrum of the corresponding \(T\bar{T}\) - deformed QFT. A well-studied example where this is the case is that of \(T\bar{T}\) - deformed CFTs, where the undeformed energies are inversely proportional to \(R\), and the solution for the deformed spectrum is
\[E_{n}^{[\mu]}(R)=\frac{R}{2\mu}\bigg{(}-1+\sqrt{1+\frac{4\mu E_{n}^{[0]}(R)}{ R}+\frac{4\mu^{2}P_{n}^{2}}{R^{2}}}\bigg{)} \tag{2.3}\]
This solution can also be written in terms of the conformal dimension \(\Delta\) and spin \(s\) of the corresponding operator by plugging in the expressions for \(E_{n}^{[0]},P_{n}\) as a function of \(\Delta\) and \(s\).
The torus partition function of the deformed QFT is defined as usual via the Hilbert space trace
\[Z^{[\mu]}(\tau,\bar{\tau},R)=\sum_{n}e^{-\tau_{2}R\,E_{n}^{[\mu]}(R)+i\,\tau_ {1}R\,P_{n}} \tag{2.4}\]
where \(\tau=\tau_{1}+i\tau_{2}\) is the complex structure modular parameter and \(R\) is the length of the \(a\)-cycle of the torus, here designated as the spatial one. For a \(T\bar{T}\) - deformed CFT, \(Z^{[\mu]}\) only depends on \(R\) via the dimensionless combination \(\mu/R^{2}\), since \(\mu\) is the only dimensionful parameter in the theory.
Let us now discuss the modular transformation properties of this partition function. The metric on the torus can be written as
\[ds^{2}=R^{2}|dx+\tau dy|^{2}=R^{2}dzd\bar{z} \tag{2.5}\]
where \(x,y\) are real coordinates of unit periodicity and the complex coordinates \(z,\bar{z}\) are defined as \(z=x+\tau y\), \(\bar{z}=x+\bar{\tau}y\). This metric is invariant under large \(PSL(2,\mathbb{Z})\) diffeomorphisms of the torus
\[\begin{pmatrix}x\\ y\end{pmatrix}\mapsto\begin{pmatrix}a&-b\\ -c&d\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}\;,\qquad\tau\mapsto\frac{a\tau+b}{c\tau+d}\;,\qquad ad-bc=1 \tag{2.6}\]
which leave the coordinate periodicities intact, provided we also transform
\[R\rightarrow|c\tau+d|R \tag{2.7}\]
Note this ensures that the area of the torus, \(R^{2}\tau_{2}\), is invariant. Under (2.6), the complex coordinates change as
\[z\mapsto\frac{z}{c\tau+d}\,\qquad\bar{z}\mapsto\frac{\bar{z}}{c\bar{\tau}+d} \tag{2.8}\]
Assuming the partition function (2.4) can also be computed via an Euclidean path integral over the torus, which is naturally invariant under the diffeomorphisms discussed above, we conclude that
\[Z^{[\mu]}\bigg{(}\frac{a\tau+b}{c\tau+d},\frac{a\bar{\tau}+b}{c\bar{\tau}+d},R| c\tau+d|\bigg{)}=Z^{[\mu]}(\tau,\bar{\tau},R) \tag{2.9}\]
While we wrote this relation with \(T\bar{T}\) - deformed CFTs in mind, whose partition function depends on a single dimensionful parameter \(\mu\), it should hold in any UV-complete two-dimensional QFT with dimensionful scalar couplings - collectively denoted as '\([\mu]\)' - whose partition function can be computed via a path integral over the euclidean torus. In a CFT, the radius dependence drops out by scale invariance, resulting in the usual modular invariance requirement; (2.9) may then be referred to as "generalised modular invariance". It simply states the invariance of the partition function under a relabeling of the torus coordinates, and as such it is natural that these transformations relate theories defined on tori with different sizes of the \(a\)-cycle, where the scalar couplings (which may be dimensionful) are held fixed.
In a \(T\bar{T}\) - deformed CFT, the partition function \(Z^{[\mu]}(\tau,\bar{\tau},R)=Z_{T\bar{T}}(\tau,\bar{\tau},\mu/R^{2})\), and so the above relation reads
\[Z_{T\bar{T}}\bigg{(}\frac{a\tau+b}{c\tau+d},\frac{a\bar{\tau}+b}{c\bar{\tau}+d },\frac{\mu}{R^{2}|c\tau+d|^{2}}\bigg{)}=Z_{T\bar{T}}\bigg{(}\tau,\bar{\tau}, \frac{\mu}{R^{2}}\bigg{)} \tag{2.10}\]
Thus, in this case one may reinterpret (2.9) as relating theories on a circle of the same radius, but with different dimensionless couplings. The above relation was checked explicitly in [46].
The density of states of a \(T\bar{T}\) - deformed CFT follows from the adiabaticity of the deformation, which implies that the number of states is unchanged along the flow. We thus have
\[S_{T\bar{T}}(E)=S_{Cardy}(E^{[0]}(E))=2\pi\sqrt{\frac{cE_{L}(R+2\mu E_{R})}{12 \pi}}+2\pi\sqrt{\frac{cE_{R}(R+2\mu E_{L})}{12\pi}} \tag{2.11}\]
where the relation between \(E^{[0]}\) and \(E,P\) was obtained by inverting (2.3), \(E_{L,R}\equiv(E\pm P)/2\) and, for simplicity, we have dropped the \(\mu\) label for the deformed energies. Note that the high-energy behaviour is Hagedorn.
Finally, let us remind the reader that reality of the deformed ground state energy (2.3) implies that \(T\bar{T}\) - deformed CFTs can only be defined on cylinders whose circumference satisfies
\[R\geq R_{min}=\sqrt{\frac{2\pi\mu c}{3}} \tag{2.12}\]
The high-energy behaviour of the entropy implies in turn that the thermal partition function only makes sense below the Hagedorn temperature \(T_{H}=R_{min}^{-1}\), which amounts to the same constraint. More generally, we have
\[\lim_{E_{L,R}\rightarrow\infty}Z_{T\bar{T}}\approx e^{-\beta_{L}E_{L}-\beta_{R }E_{R}}e^{2\sqrt{\frac{2\pi\mu c}{3}E_{L}E_{R}}}\leq e^{2\sqrt{E_{L}E_{R}}(R_ {min}-\sqrt{\beta_{L}\beta_{R}})} \tag{2.13}\]
where \(\beta_{L,R}\) are the left/right-moving temperatures, which satisfy \(\beta_{L}\beta_{R}=R^{2}|\tau|^{2}\). The partition function is thus well defined provided also \(R|\tau|>R_{min}\). One may check - by appropriately choosing the integer part of \(\tau_{1}\) - that modular transformations do not take us out of this regime.
### \(J\bar{T}\) - deformed QFTs
The \(J\bar{T}\) deformation, as well as all Smirnov-Zamolodchikov deformations involving \(U(1)\) currents and the stress tensor, can be treated in an entirely analogous manner. The only new element is that now the coupling has non-trivial transformation properties under diffeomorphisms, which need to be taken into account when discussing the modular invariance properties of the partition function.
We start our discussion with a general \(JT^{a}\) deformation of a two-dimensional QFT, defined via the flow equation
\[\partial_{\lambda^{a}}S=\int d^{2}x\,T^{\alpha}{}_{a}\epsilon_{\alpha\beta}J^ {\beta} \tag{2.14}\]
The coupling parameters \(\lambda^{a}\) are vectors with dimensions of length; these deformations thus break Lorentz invariance. The spectrum of the deformed QFT coupled to certain background fields is again simply related to the undeformed spectrum in a shifted background [47, 48]
\[E_{n}^{[\lambda^{a}]}(R,v,a^{\sigma})=E_{n}^{[0]}\left(R-\lambda^{\sigma}q_{n }^{[0]},\frac{vR-\lambda^{t}q_{n}^{[0]}}{R-\lambda^{\sigma}q_{n}^{[0]}},a^{ \sigma}+\frac{\lambda^{\sigma}(P_{n}+vE_{n}^{[\lambda^{a}]})-\lambda^{t}E_{n} ^{[\lambda^{a}]}}{R-\lambda^{\sigma}q_{n}^{[0]}}\right) \tag{2.15}\]
where \(v\) is a background vielbein, \(q_{n}^{[0]}\) is the undeformed \(U(1)\) charge of the state, and \(a^{\sigma}\) is a background gauge field, which may be set to zero at the end of the computation. The \(J\bar{T}\) deformation corresponds to the case when \(\lambda^{a}\) is a null vector with \(\lambda^{t}=\lambda^{\sigma}=\lambda\) (or, equivalently, \(\lambda^{z}=2\lambda,\lambda^{z}=0\)). If the seed theory is a CFT, then the \(J\bar{T}\) deformation has the special property of preserving locality and conformal invariance on the left-moving side, which leads to great simplifications in the study of the deformed theory. The deformed spectrum is obtained by applying (2.15) to a seed CFT, and is best expressed in terms of the deformed right-moving energy
\[E_{R}^{[\lambda]}(R)=\frac{4\pi}{\lambda^{2}k}\left(R-\lambda q^{[0]}-\sqrt{ \left(R-\lambda q^{[0]}\right)^{2}-\frac{\lambda^{2}kE_{R}^{[0]}R}{2\pi}}\right) \tag{2.16}\]
where \(E_{R}^{[0]},q^{[0]}\) are the right-moving energies and \(U(1)\) charges of the corresponding state in the undeformed CFT, and we have dropped the label '\(n\)' on the eigenstates. The left-moving \(U(1)\) charge also changes non-trivially with \(\lambda\), and is given by
\[q^{[\lambda]}=q^{[0]}+\frac{\lambda k}{4\pi}E_{R}^{[\lambda]}=\frac{1}{ \lambda}\bigg{(}R-\sqrt{\left(R-\lambda q^{[0]}\right)^{2}-\frac{\lambda^{2} kE_{R}^{[0]}}{2\pi}}\bigg{)} \tag{2.17}\]
Note that the deformed spectrum will become imaginary if the states in the undeformed CFT have large right-moving energy at fixed \(q^{[0]}\), a behaviour that resembles that of \(T\bar{T}\)-deformed CFTs with \(\mu<0\). At the same time, reality of the deformed energy results into an upper bound3 on \(q^{[\lambda]}\), suggesting it is the latter that should be held fixed as \(E_{R}^{[0]}\) is taken to be large. The relationship between the deformed and undeformed right-moving energies at fixed \(q^{[\lambda]}\) is given by
Footnote 3: One may increase \(q^{[\lambda]}\) beyond the limiting value \(R/\lambda\) by choosing a different branch of the square root. A similar behaviour was found for \(T\bar{T}\) - deformed CFTs with \(\mu<0\)[49].
\[E_{R}^{[\lambda]}=\frac{4\pi}{\lambda^{2}k}\left(\sqrt{\left(R-\lambda q^{[ \lambda]}\right)^{2}+\frac{\lambda^{2}kE_{R}^{[0]}R}{2\pi}}-\left(R-\lambda q^{ [\lambda]}\right)\right) \tag{2.18}\]
From (2.17), \(q^{[\lambda]}\) will be real provided the undeformed dimensionless energies \(RE_{R}^{[0]}/2\pi\) lie below the parabola \((R/\lambda-q^{[0]})^{2}/k\) depicted in figure 1, which still allows access to infinite energies. In addition, there are lower bounds on the allowed energy. For example, if the seed CFT also posesses a right-moving \(U(1)\) symmetry, the cosmic censorship bound on the right-moving side \(RE_{R}^{[0]}/2\pi\geq(\bar{q}^{[0]})^{2}/k\) indicates it should lie above the parabola \((q^{[0]}-\mathrm{w})^{2}/k\), where \(\mathrm{w}\equiv q^{[0]}-\bar{q}^{[0]}\) is the winding of the state, which is to be held fixed as we vary \(q^{[0]}\). As long as \(R>\lambda\mathrm{w}\) (for \(\lambda>0\)), there is always a sliver in the \(E_{R}^{[0]}\), \(q^{[0]}\) plane so that both conditions are satisfied.
The allowed values of \(E_{R}^{[0]}\) are further restricted by the cosmic censorship bound on the left-moving energy and charge, which requires that \(RE_{R}^{[0]}/2\pi\geq(q^{[0]})^{2}/k-PR/2\pi\). It is easy to check that for \(\mathrm{w}<R/\lambda\), there is always a region, depicted in figure 0(b), that extends to infinite energies and obeys all three constraints. If the \(U(1)\) current is chiral, then the second constraint is replaced by positivity of the energy, and we are in the situation of figure 0(a). Within the allowed region, we will be interested in the regime where \(E_{R}^{[0]}\) is large and \(q^{[0]}\) is large and negative, which corresponds via (2.18) to a large deformed right-moving energy.
The full deformed spectrum may be understood as a spectral flow by the right-moving Hamiltonian, as discussed in [50]. This observation also extends to the spectrum of \(SL(2,\mathbb{R})_{L}\) conformal dimensions on the plane, which in \(J\bar{T}\) - deformed CFTs are well-defined thanks to the fact that the theory enjoys full left conformal invariance. This spectrum may be obtained by applying an infinite boost to (2.16), and reads, as a function of the right-moving energy, now denoted \(\bar{p}\)[42]
\[h^{[\lambda]}(\bar{p})=h^{[0]}+\frac{\lambda}{2\pi}q^{[0]}\bar{p}+\frac{ \lambda^{2}k}{16\pi^{2}}\bar{p}^{2}\;,\qquad q^{[\lambda]}(\bar{p})=q^{[0]}+ \frac{\lambda k}{4\pi}\bar{p} \tag{2.19}\]
where \(h^{[0]},q^{[0]}\) are the left-moving conformal dimension and charge in the undeformed CFT. These dimensions can also be obtained via conformal perturbation theory [42]. Note this spectrum is manifestly real, indicating that the problems associated with the imaginary energy states disappear in infinite volume, in agreement with their physical interpretation put forth in [51].
The torus partition function of a \(J\bar{T}\) - deformed CFT is given by
\[Z_{J\bar{T}}\left(\tau,\bar{\tau},\nu,\frac{\lambda}{R}\right)=\sum_{n}e^{- \tau_{2}RE_{n}^{[\lambda]}(R)+i\tau_{1}RP_{n}+2\pi i\nu q_{n}^{[\lambda]}(R)} \tag{2.20}\]
where \(\nu\) is the chemical potential that couples to the chiral \(U(1)\) current. We wrote the coupling \(\lambda\) as an argument - rather than a label - because it changes under diffeomorphisms, due to its vectorial nature. Since imaginary energy modes are present for any value of the radius, it is currently not well understood to what extent this partition function is well defined; however, the fact that the theory admits a non-perturbative definition [48] yields hope that its study is meaningful.
The modular transformation properties of this partition function were discussed in [31]. Since \(\lambda^{a}\) transforms as a vector, \(\lambda^{\bar{z}}\), under modular transformations, which has the same transformation properties as \(R\bar{z}\), it follows that the dimensionless combination \(\lambda/R\) transforms exactly as \(\bar{z}\)
\[\frac{\lambda}{R}\mapsto\frac{\lambda}{R(c\bar{\tau}+d)} \tag{2.21}\]
Consequently, the dimensionful deformation parameter changes \(\lambda\mapsto\frac{\lambda|c\tau+d|}{(c\bar{\tau}+d)}\). A similar argument can be used to derive the well-known transformation properties of the chemical potential4\(\nu\)
Figure 1: Range of undeformed right-moving energies (shaded region) that lead to real energies in \(J\bar{T}\) - deformed CFTs and are allowed by the cosmic censorship bounds (green/blue parabolae). Note this range extends to infinite energies.
\[\nu\mapsto\frac{\nu}{c\tau+d} \tag{2.22}\]
With this in mind, the partition function has the standard anomalous transformation under diffeomorphisms of the torus
\[Z_{J\bar{T}}\bigg{(}\frac{a\tau+b}{c\tau+d},\frac{a\bar{\tau}+b}{c\bar{\tau}+d}, \frac{\nu}{c\tau+d},\frac{\lambda}{R(c\bar{\tau}+d)}\bigg{)}=\exp\bigg{(}\frac {2i\pi kc\nu^{2}}{c\tau+d}\bigg{)}Z_{J\bar{T}}\left(\tau,\bar{\tau},\nu,\frac{ \lambda}{R}\right) \tag{2.23}\]
One may also consider a slightly redefined partition function, which is invariant under these transformations [52]
\[Z_{J\bar{T}}^{inv}\left(\tau,\bar{\tau},\nu,\frac{\lambda}{R}\right)\equiv e^{ \frac{\pi k\nu^{2}}{\tau_{2}}}Z_{J\bar{T}}\left(\tau,\bar{\tau},\nu,\frac{ \lambda}{R}\right) \tag{2.24}\]
This transformation law can be readily extended to QFTs that may have various couplings that transform non-trivially under Lorentz transformations.
Finally, let us discuss the density of states. The entropy is again estimated by using the fact that the number of states does not change in fixed units. One may distinguish two cases: if the current in the seed CFT is not chiral, then
\[S_{J\bar{T}}(E,q) = S_{Cardy}(E^{[0]},q^{[0]})=2\pi\sqrt{\frac{c}{6}\bigg{(}\frac{ RE_{L}^{[0]}}{2\pi}-\frac{q^{[0]\,2}}{k}\bigg{)}}+2\pi\sqrt{\frac{c}{6} \bigg{(}\frac{RE_{R}^{[0]}}{2\pi}-\frac{\bar{q}^{[0]\,2}}{k}\bigg{)}} \tag{2.25}\] \[=\ 2\pi\sqrt{\frac{c}{6}\bigg{(}\frac{RE_{L}}{2\pi}-\frac{q^{2}}{k }\bigg{)}}+2\pi\sqrt{\frac{c}{6}\bigg{(}\frac{(R-\lambda\mathrm{w})E_{R}}{2 \pi}-\frac{\bar{q}^{2}}{k}\bigg{)}}\]
where we assumed, as explained above, that \(q\equiv q^{[\lambda]}\) and \(\bar{q}\equiv\bar{q}^{[\lambda]}\) are to be fixed in the deformed theory, as well as their difference, \(\mathrm{w}\). Note that the difference from the standard Cardy formula in presence of \(U(1)\) charge is rather minimal. If, on the other hand, the current \(J\) is chiral, then effectively \(\bar{q}^{[0]}=0\), and we obtain instead
\[S_{J\bar{T}}=2\pi\sqrt{\frac{c}{6}\bigg{(}\frac{RE_{L}}{2\pi}-\frac{q^{2}}{k }\bigg{)}}+2\pi\sqrt{\frac{c}{12\pi}\bigg{(}E_{R}(R-\lambda q)+\frac{\lambda^{ 2}kE_{R}^{2}}{8\pi}\bigg{)}} \tag{2.26}\]
Taking \(E_{R}\) large with \(q\) fixed, one finds Hagedorn behaviour at large energies. The above formula can be alternatively rewritten in terms of \(q^{[0]}\) using (2.17), but then the limit of large \(E_{R}\) with \(q^{[0]}\) fixed is problematic because the square roots become imaginary.
### Torus partition function of general symmetric product orbifold QFTs
In this subsection, we review and slightly generalize the well-known group-theoretical derivation [32] of the torus partition function of a symmetric product orbifold of two-dimensional QFTs. While this discussion is usually particularised to symmetric orbifolds of _C_FTs, we point out that the derivation only mildly depends on the conformal property of the seed. We can thus apply this method to general two-dimensional QFTs, and in particular to \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs.
We thus consider a two-dimensional QFT on a cylinder of circumference \(R\). This will be referred to as the _seed_ QFT and will be denoted as \(\mathcal{M}\). The theory obtained by taking a \(N\)-fold tensor product \(\mathcal{M}^{N}\) admits a natural action of the permutation group, \(S_{N}\). Quotienting it by the permutation group, one obtains the symmetric product orbifold theory, denoted as \(\mathcal{M}^{N}/S_{N}\).
The Hilbert space of \(\mathcal{M}^{N}/S_{N}\) is organized into twisted sectors [53], labeled by the conjugacy classes, denoted \([g]\), of \(S_{N}\)
\[\mathcal{H}(\mathcal{M}^{N}/S_{N})=\oplus_{[g]}\mathcal{H}^{[g]} \tag{2.27}\]
Each \(S_{N}\) conjugacy class is entirely specified by the lengths \((n)\) and multiplicities \((N_{n})\) of the cycles of the permutation, with \(\sum_{n}nN_{n}=N\). Within each conjugacy class, one keeps the states invariant under the centralizer (a.k.a commutant) of \(g\), which does not depend on the chosen representative. The resulting structure of the factors \(\mathcal{H}^{[g]}\) is
\[{\cal H}^{[g]}=\otimes_{n>0}\left({\cal H}^{{\mathbb{Z}}_{n}}\right)^{N_{n}}/S_{ N_{n}} \tag{2.28}\]
where \({\cal H}^{{\mathbb{Z}}_{n}}\) is the Hilbert space associated with a \({\mathbb{Z}}_{n}\) cyclic orbifold of the seed QFT, and the symmetrization is performed with respect to all cycles of the same length \(n\). The untwisted sector of this Hilbert space, which corresponds to the conjugacy class of the identity, is simply \(({\cal H}_{seed})^{N}/S_{N}\). The twisted sectors are characterized by the basic fields having twisted boundary conditions around the spatial cycle of the cylinder. States belonging to different twisted sectors are orthogonal, as is clear from the direct sum structure (2.27).
The twisted sectors can be understood by mapping to corresponding covering spaces. This is particularly simple to implement for the torus partition function, as the relevant covering spaces are again tori, allowing one to express the partition function of the orbifold QFT solely in terms of the seed partition function. The first results on the torus partition function (or, rather, elliptic genus) of a symmetric product orbifold were obtained in [53] using string-theoretical methods. In a series of articles [32, 33, 34] that built upon this work, an explicit group-theoretical construction of the torus partition function of a symmetric product orbifold of a generic CFT in terms of that of the seed was given. Importantly, this derivation - detailed below - does not involve conformal invariance, but only relies on the modular invariance of the seed partition function.
### Review and slight generalisation of Bantay's formula
The basic idea is the following: the partition function of the symmetric product orbifold QFT receives contributions from the different twisted sectors of the theory. Rather than considering fields with twisted boundary conditions on the original torus - denoted \({\cal T}^{2}\) - one can equivalently work with fields with standard boundary conditions on a covering space of the torus. The latter are unramified coverings with \(N\) sheets - not necessarily connected - for which the monodromy group - which encodes how the various sheets permute as one goes around a loop in base space - is a subgroup of the permutation group, \(S_{N}\)[54]. Permuting the sheets of the covering space under the monodromy action of the fundamental group corresponds to permuting the copies in the symmetric product orbifold, thus implementing the action of \(S_{N}\) in the QFT in a geometrical fashion.
Connected components of such covering spaces are associated to orbits of elements of the set \(\{1,2,...,N\}\) under the action of \(S_{N}\). We generically denote these orbits by5\(\xi\). By the Riemann-Hurwitz theorem, each such connected component is a torus, denoted \({\cal T}^{2}_{\xi}\), on which the seed theory lives and which covers the base \({\cal T}^{2}\)\(|\xi|\) times.
Footnote 5: For example, for \(N=5\) and the monodromy group generated by the permutations (12) and (345), namely \(((12),(345))=\{e,(12),(345),(354),(12)(345),(12)(354)\}\subset S_{5}\), the orbit of e.g. the element 3 is \(\xi=\{3,4,5\}\).
Each covering torus \({\cal T}^{2}_{\xi}\) can be written as the quotient of the complex plane by its fundamental group \(\pi_{1}({\cal T}^{2}_{\xi})\cong{\mathbb{Z}}\oplus{\mathbb{Z}}\), which is a subgroup of index6\(|\xi|\) of the fundamental group of the base torus \(\pi_{1}({\cal T}^{2})\cong{\mathbb{Z}}\oplus{\mathbb{Z}}\) or, equivalently, a sublattice of index \(|\xi|\) of the lattice associated to the base torus. These subgroups are labeled by three integers \(m_{\xi},r_{\xi},\ell_{\xi}\), with \(\ell_{\xi}>r_{\xi}\geq 0\) and \(m_{\xi}\ell_{\xi}=|\xi|\). Given a basis of generators for \(\pi_{1}({\cal T}^{2})\), usually denoted as the \(a\) and \(b\) cycles, these integers determine the generators of \(\pi_{1}({\cal T}^{2}_{\xi})\) - namely, the \(a_{\xi},b_{\xi}\) cycles - as
Footnote 6: The index of a subgroup \(H\subset G\) is the number, \(|G/H|\), of cosets of \(H\) in G.
\[a_{\xi}=\ell_{\xi}a\qquad\qquad b_{\xi}=r_{\xi}a+m_{\xi}b \tag{2.29}\]
Hence, the modular parameters of the covering tori can be written as
\[\tau_{\xi}=\frac{m_{\xi}\tau+r_{\xi}}{\ell_{\xi}} \tag{2.30}\]
In addition, if the length of the \(a\)-cycle on the base torus is \(R\), it follows that the length of the \(a_{\xi}\)-cycle on the covering torus \({\cal T}^{2}_{\xi}\) is
\[R_{\xi}=\ell_{\xi}R \tag{2.31}\]
An explicit example of the covering tori can be found in the pedagogical exposition of [55]. The area of the covering torus is \(R^{2}_{\xi}(\tau_{\xi})_{2}=\ell_{\xi}m_{\xi}R^{2}\tau_{2}=|\xi|R^{2}\tau_{2}\), in agreement with the fact that it covers
the base torus \(|\xi|\) times. This size information will be important when discussing the partition function of a non-conformal QFT, such as \(T\bar{T}\)-deformed CFTs, whose partition function depends explicitly on the length, \(R\), of the \(a\)-cycle.
Note that in the above, we have made a specific choice of parametrization, i.e. choice of basis of the generators of the fundamental group of the base and the covering tori. However, this choice should be immaterial as long as the quantities we compute are modular invariant.
Let us now reformulate these geometric data in group-theoretical language. The covering spaces discussed above are in one-to-one relation with the homomorphisms \(\phi:\pi_{1}({\cal T}^{2})\to S_{N}\). Using this correspondence, one can rewrite the covering space data in terms of permutations. Any such homomorphism is fully specified by two commuting permutations, \(\phi(a),\phi(b)\in S_{N}\) that generate \(\phi(\pi_{1}({\cal T}^{2}))\subset S_{N}\), corresponding to the choice of the two loops that generate \(\pi_{1}({\cal T}^{2})\). From the perspective of the QFT on the base \({\cal T}^{2}\), \(\phi(a)\) and \(\phi(b)\) correspond to the monodromies acquired by the fields as they circle around the \(a\) and \(b\)-cycle. The covering tori \({\cal T}^{2}_{\xi}\) correspond to the orbits \(\xi\) under the action of \(\phi(\pi_{1}({\cal T}^{2}))\subset S_{N}\). Each covering torus is determined by its fundamental group which, as explained in [32], is isomorphic to the stabilizer associated to the orbit
\[S_{\xi}=\{\phi(x)\in\phi(\pi_{1}({\cal T}^{2}))\mid\phi(x)\xi^{*}=\xi^{*}, \forall\xi^{*}\in\xi\}\cong\pi_{1}({\cal T}^{2}_{\xi}) \tag{2.32}\]
Intuitively, the elements of the stabilizer act by definition as identity on \(\xi\), thus mapping each sheet of the associated covering space into itself. Under this trivial monodromy action, all loops in the base space are lifted to loops in the covering space, i.e. elements of the fundamental group of the covering. In particular, for the choice (2.29), the generators of \(\pi_{1}({\cal T}^{2}_{\xi})\) are mapped by this isomorphism into
\[\phi(a)^{\ell_{\xi}}\mbox{ and }\phi(a)^{r_{\xi}}\phi(b)^{m_{\xi}} \tag{2.33}\]
providing a group-theoretical interpretation of the integers \(m_{\xi},r_{\xi},\ell_{\xi}\) that determine the complex structure of the covering tori: \(m_{\xi}\) is the number of \(\phi(a)\) orbits in \(\xi\), \(\ell_{\xi}\) is their common length and \(r_{\xi}\) is the smallest nonnegative integer such that \(\phi(a)^{r_{\xi}}\phi(b)^{m_{\xi}}\) belongs to the stabilizer7. See e.g. [55] for more examples.
Footnote 7: Let us give an example: in \(S_{5}\), we consider again the covering space associated to the homomorphism \(a\mapsto\phi(a)=(345),b\mapsto\phi(b)=(12)\). Clearly, it has two connected components. We first consider the one associated to the orbit \(\xi=\{3,4,5\}\), that should give a covering space with \(|\xi|=3\) sheets. The corresponding stabilizer is \(S_{\xi}=\{e,(12)\}\cong\mathbb{Z}_{2}\). The number of \(\phi(a)\) orbits in \(\xi\) is \(1\) and its length is \(3\), which means \(m_{\xi}=1,\ell_{\xi}=3\) and \(r_{\xi}=0\), leading to a modular parameter \(\tau_{\xi}=\tau/3\) for the covering torus. Note that \(\phi(a)^{\ell_{\xi}}=(345)^{3}=e,\phi(a)^{r_{\xi}}\phi(b)^{m_{\xi}}=(12)\) are the elements of \(S_{\xi}\). Similarly, for the orbit \(\xi=\{1,2\}\), the stabilizer is \(\{e,(345),(354)\}\). The number of \(\phi(a)\) orbits is \(m_{\xi}\)=2 and their common length is \(\ell_{\xi}=1\) because \(\phi(a)\) acts as identity on \(1\) and \(2\); again \(r_{\xi}=0\), implying \(\tau_{\xi}=2\tau\).
Putting everything together, one can express the partition function of the symmetric product orbifold on a torus with modular parameter \(\tau\) and length of the \(a\)-cycle \(R\) in terms of the seed partition function on tori of different modular parameters (2.30) and radii (2.31) as [32]
\[Z^{S_{N}}(\tau,\bar{\tau},R)=\frac{1}{N!}\sum_{\phi:\pi_{1}({\cal T}^{2}) \to S_{N}}\prod_{\xi\mbox{ orbit}}Z^{seed}(\tau_{\xi},\bar{\tau}_{\xi},R_{\xi}) \tag{2.34}\]
where we suppressed for now the possible dependence of the partition function on other parameters. This should be sufficient for constructing the partition function of a symmetric product orbifold of arbitrary Lorentz-invariant QFTs. The Lorentz-breaking case will be discussed when treating symmetric orbifolds of \(J\bar{T}\) - deformed CFTs.
This formula can be further massaged by considering the generating function
\[\sum_{N=0}^{\infty}p^{N}Z^{S_{N}}(\tau,\bar{\tau},R)=\exp\sum_{n=1}^{\infty}p^ {n}{\cal Z}^{(n)},\hskip 14.226378pt{\cal Z}^{(n)}=\frac{1}{n}\ \sum_{{\cal T}^{2}_{\xi} \mid_{|\xi|=n}}\hskip-14.226378ptZ^{seed}({\cal T}^{2}_{\xi}) \tag{2.35}\]
where the sum in \({\cal Z}^{(n)}\) runs over connected covering space of \({\cal T}^{2}\) with \(n\) sheets, which are the tori \({\cal T}^{2}_{\xi}\) discussed previously, with \(|\xi|=n\). Collecting the coefficient of \(p^{N}\) on the right-hand-side of the
first sum in(2.35), the formula (2.34) for the partition function of the \({\cal M}^{N}/S_{N}\) orbifold theory can be written more compactly as
\[Z^{S_{N}}(\tau,\bar{\tau},R)=\frac{1}{N!}\sum_{x\in S_{N}}\prod_{\ell\;\text{ orbit}}\;\;|\xi|{\cal Z}^{(|\xi|)} \tag{2.36}\]
As noted in [34], in the above formula the sum runs over all permutations in \(S_{N}\), which is much simpler to handle than the previous sum (2.34) over homomorphisms, namely over pairs of commuting permutations in \(S_{N}\). Since twisted sectors correspond to permutations up to conjugation, (2.36) provides an easy way to read off the contributions of the different sectors.
The individual contributions \({\cal Z}^{(n)}\) are given explicitly by
\[{\cal Z}^{(n)}=\frac{1}{n}\sum_{\ell|n}\sum_{0\leq r<\ell}Z^{seed}\bigg{(} \frac{n\tau}{\ell^{2}}+\frac{r}{\ell},\frac{n\bar{\tau}}{\ell^{2}}+\frac{r}{ \ell},R\ell\bigg{)} \tag{2.37}\]
The above formula gives the contribution of all sets of equal-length cycles whose lengths sum to \(n\), where \(\ell\) is the length of the cycles and \(n/\ell\) gives the number of cycles of that length. Choosing \(\ell=1\), we obtain the contribution to this term of the states from the untwisted sector (in the form of \(n\) identical copies of the same state in the seed QFT), while choosing \(\ell=n\) we obtain the contribution of the twisted sector of a single cycle of length \(n\).
### Comments on modular invariance
As we already discussed, well-definiteness of the torus partition function of the seed QFT requires it to be modular invariant in the generalised sense we reviewed. We would now like to show that modular invariance of the symmetric orbifold partition function (2.36) automatically follows from that of the seed.
In the simplest case of a CFT seed, the partition function depends only on the modular parameter of the torus. It is then not hard to recognise the 'connected' \({\cal Z}^{(n)}\) contribution as the action of the \(n^{th}\) Hecke operator, \(T_{n}\), on the seed partition function
\[{\cal Z}^{(n)}_{{}_{CFT}}=T_{n}Z^{seed}_{{}_{CFT}} \tag{2.38}\]
where, by definition, the action of a Hecke operator \(T_{n}\) on a modular form of weight \((\kappa,\bar{\kappa})\) is
\[T_{n}f(\tau,\bar{\tau})=\frac{1}{n}\sum_{\begin{subarray}{c}r,\ell\in{\mathbb{ Z}},\;\ell|n\\ 0\leq r<\ell\end{subarray}}\frac{1}{\ell^{\kappa+\bar{\kappa}}}f\bigg{(}\frac{n \tau}{\ell^{2}}+\frac{r}{\ell},\frac{n\bar{\tau}}{\ell^{2}}+\frac{r}{\ell} \bigg{)} \tag{2.39}\]
and produces another modular form of the same weight. The seed partition function is modular invariant, i.e. it is simply a modular form of weight zero. Then, (2.38) implies that \({\cal Z}^{(n)}_{{}_{CFT}}\), and thus the full symmetric orbifold partition function, is also modular invariant.
More generally, if the theory only possesses scalar dimensionful couplings, the partition function would depend on the dimensionless combinations built from these couplings and \(R\). If this partition function allows for a Taylor expansion in terms of this dimensionless coupling, as is the case for e.g. \(T\bar{T}\)-deformed CFTs
\[Z^{seed}_{{}_{TT}}\left(\tau,\bar{\tau},\frac{\mu}{R^{2}}\right)=\sum_{\kappa =0}^{\infty}\left(\frac{\mu}{R^{2}}\right)^{\kappa}Z_{\kappa}(\tau,\bar{\tau}) \tag{2.40}\]
then, as already discussed in [30], the coefficients of this expansion are all modular forms of weight \((\kappa,\kappa)\). Acting with the \(n^{th}\) Hecke operator on each term and then resumming yields precisely (2.37)
\[T_{n}Z^{seed}_{{}_{TT}}\left(\tau,\bar{\tau},\frac{\mu}{R^{2}}\right) =\frac{1}{n}\sum_{\kappa}\Big{(}\frac{\mu}{R^{2}}\Big{)}^{\kappa }\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Hence, we can again write \({\cal Z}^{(n)}=T_{n}Z^{seed}(\tau,\bar{\tau},R)\) and the modular invariance of the seed implies the modular invariance of the symmetric product. The same reasoning applies when the couplings have non-trivial transformation properties; examples will be given in the following section, where we will be discussing in detail the case of \(J\bar{T}\)-deformed CFTs with a chemical potential.
\(T\bar{T}\) - deformed CFTs are, in a certain sense, the next simplest case to consider beyond just CFT, since the fact that the coupling \(\mu\) has a negative mass dimension allows for an expansion in terms of standard modular forms of positive weight. More generally, there is no reason to expect that the partition function would be analytic in the given coupling, and therefore the above argument using the Taylor expansion would not hold. It is, nevertheless, possible to argue for modular invariance of \({\cal Z}^{(n)}\) directly from the modular properties of the seed partition function: the \(T\) transformation \(\tau\to\tau+1\) of the base QFT simply reshuffles the terms in the sum (2.40), whereas the \(S\) transformation \(\tau\to-1/\tau\) can be undone by a modular transformation of the covering tori, together with a reshuffling of the terms in the sum, as argued in [56] for the CFT case. Including the radius dependence is straightforward8.
Footnote 8: The \(S\) transformation on the base torus maps \(\tau_{\xi}\mapsto\tau^{\prime}_{\xi}=(r_{\xi}\tau-m_{\xi})/(\ell_{\xi}\tau)\) and \(R_{\xi}\mapsto R^{\prime}_{\xi}=|\tau|\ell_{\xi}R\). Using equation (12) of [56], one can show that \(\tau^{\prime}_{\xi},R^{\prime}_{\xi}\) are related by a modular transformation to \(\tilde{\tau}_{\xi}=(m^{*}_{\xi}\tau-r^{*}_{\xi})/\ell^{*}_{\xi},\tilde{R}_{\xi }=\ell^{*}_{\xi}R\), where \(\{\ell^{*}_{\xi},m^{*}_{\xi},r^{*}_{\xi}\}\) are integers with \(\ell^{*}_{\xi}m^{*}_{\xi}=|\xi|\) and \(0\leq r^{*}_{\xi}<\ell^{*}_{\xi}\), that parametrize the orbit \(\xi\), just like \(\{\ell_{\xi},m_{\xi},r_{\xi}\}\). The relation between the two parametrization is explained in [56]. Using the modular invariance of the seed partition function, it follows that the sum in (2.37) is invariant under the \(S\) transformation of the base torus.
### General features of the spectrum of symmetric product orbifolds
The spectrum of the symmetric product orbifold can be readily extracted from the partition function (2.36), by giving it a Hilbert space interpretation
\[Z^{S_{N}}(\tau,\bar{\tau},R)=\sum_{n}d_{n}e^{-\beta E_{n}+iP_{n}\theta}\;, \;\;\;\;\beta=R\tau_{2}\;,\;\;\theta=R\tau_{1} \tag{2.42}\]
where we have now explicitly included the ventral degeneracies, \(d_{n}\), of the energy levels. We would like to express the finite-size energies and momenta \(E_{n},P_{n}\) of the symmetric product orbifold, as well as \(d_{n}\), in terms of those of the seed QFT, denoted \(E^{(s)}_{n},P^{(s)}_{n},d^{(s)}_{n}\)
\[Z^{seed}(\tau,\bar{\tau},R)=\sum_{n}d^{(s)}_{n}e^{-\beta E^{(s)}_{n}+iP^{(s) }_{n}\theta} \tag{2.43}\]
As a warm-up, it it useful to first work out the contribution to the partition function of the twisted sector associated to a single cycle of length \(w\), which we will refer to as the \(w\)-twisted sector. It corresponds to the \(\ell=w\) contribution to \({\cal Z}^{(w)}\) and will be denoted \(Z^{(w)}\)
\[Z^{(w)}\equiv\left.{\cal Z}^{(w)}(\tau,\bar{\tau},R)\right|_{\ell=w}=\frac{1} {w}\sum_{0\leq r<w}Z_{seed}\left(\frac{\tau+r}{w},\frac{\bar{\tau}+r}{w},Rw\right) \tag{2.44}\]
One immediately notes that in this sector, the contributions of the seed partition are evaluated at the same inverse temperature, \(\beta=R\tau_{2}\), as that of the full orbifold, even though the length of the spatial circle is \(w\) times larger. This implies that
\[E^{(w)}_{n}(R)=E^{(s)}_{n}(Rw) \tag{2.45}\]
where \(E^{(w)}_{n}(R)\) represent the finite-size energy levels in the \(w\) - twisted sector. Note this result follows without any use of conformal invariance, but only of the modular invariance properties of the partition function we have been assuming throughout this section. For the case of a CFT, the energies on the cylinder can be related to the conformal dimensions of the corresponding operators on the plane via the usual conformal map, which yields
\[E^{(w)}_{n,\,CFT}(R)=\frac{2\pi(\Delta^{(w)}_{n}-\frac{cw}{12})}{R}\;,\;\;\;\; \;\;\;E^{(s)}_{n,\,CFT}(wR)=\frac{2\pi(\Delta^{(s)}_{n}-\frac{c}{12})}{Rw} \tag{2.46}\]
where \(c\) is the central charge of the seed CFT. Note that the gap above the ground state (\(\Delta^{(s)}=0\)) is, as is well-known, \(w\) times smaller in the twisted sector than in the untwisted one. The \(cw/12\) shift between the energy and the dimension in the twisted sector follows from the fact that the effective central charge of the latter is \(cw\). Combining (2.45) and (2.46), we obtain
\[\Delta^{(w)}_{n}=\frac{\Delta^{(s)}_{n}}{w}+\frac{c}{12}\left(w-\frac{1}{w}\right) \tag{2.47}\]
which reproduces the known result for the twisted sector operator dimensions [57]. Note that in the above, conformal invariance was only used to translate the cylinder energies into operator conformal dimensions, but is otherwise not needed to derive (2.45), which holds equally well in a non-conformal theory.
Let us now also take into account the momentum dependence of the \(w\)-twisted sector partition function (2.44). The quantity \(\theta=\tau_{1}R\) being the same as in the seed, we again have
\[P^{(w)}_{n}(R)=P^{(s)}_{n}(wR)=\frac{2\pi p_{n}}{wR}\,\ \ \ \ p_{n}\in\mathbb{Z} \tag{2.48}\]
where \(p_{n}\) is the integer-quantized momentum of the corresponding state in the seed QFT. The above appears to imply that the twisted-sector momentum may be a fractional multiple, \(p_{n}/w\), of the inverse radius, which would be inconsistent with modular invariance. This is resolved by the sum present in (2.44), since for every energy-momentum eigenstate in the seed, the full contribution to \(Z^{(w)}\) is
\[\frac{1}{w}\sum_{r=0}^{w-1}e^{-\beta E_{n}+i(\theta+rR)P_{n}}=\frac{1}{w}e^{- \beta E_{n}+i\theta P_{n}}\sum_{r=0}^{w-1}e^{2\pi ir\{\frac{p_{n}}{w}\}}=e^{- \beta E_{n}+i\theta P_{n}}\,\delta(p_{n}=0\;\mathrm{mod}\;w) \tag{2.49}\]
where in the second step we noted that only the fractional part of \(p_{n}/w\) contributes, and in the third we trivially summed the geometric series. Thus, the momentum in the twisted sector is an integer, as expected, and only seed momenta that are multiples of \(w\) will end up contributing to the \(w\)-twisted sector partition sum.
To summarize, each state in the seed QFT gives rise to a state in the \(w\) - twisted sector, whose energy and momentum are
\[E^{(w)}(R)=E^{(s)}(wR)\,\ \ \ \ P^{(w)}(R)=P^{(s)}(wR)\ \ \ \ \ \mathrm{iff}\ \ \ \ P^{(s)}(R)\in\frac{2\pi}{R}w\,\mathbb{Z} \tag{2.50}\]
In particular, the degeneracies of these states are the same, provided the constraint on the momentum is satisfied.
The contributions of the terms with \(\ell\neq w\) to \(\mathcal{Z}^{(w)}\) - namely, of sectors with \(w/\ell\) cycles of length \(\ell\) - can be analysed in an analogous manner. We find
\[E^{(w)}_{n}(R)=\frac{w}{\ell}E^{(s)}_{n}(\ell R)\,\ \ \ \ P^{(w)}_{n}(R)=\frac{w}{\ell}P^{(s)}_{n}(\ell R)\ \ \ \ \mathrm{iff}\ \ \ \ P^{(s)}(R)\in\frac{2\pi}{R}\ell\, \mathbb{Z} \tag{2.51}\]
These states correspond to \(w/\ell\) identical copies of the same state from the \(\ell\) - twisted sector, in agreement with the selection rule on the momentum.
The full spectrum of the symmetric orbifold is given by putting together these elements inside the partition function. It it useful to work out explicity the full partition function (2.36) for the the simplest example \(N=2\), as higher \(N\) work qualitatively similarly. In this case, there are only two sectors, one untwisted and one 2-twisted. Applying Bantay's formula (2.36), we have
\[Z^{S_{2}}(\tau,\bar{\tau},R)=\frac{1}{2}(\mathcal{Z}^{(1)})^{2}+ \mathcal{Z}^{(2)}=\] \[= \frac{1}{2}Z^{2}_{seed}(\tau,\bar{\tau},R)+\frac{1}{2}Z_{seed}(2 \tau,2\bar{\tau},R)+\frac{1}{2}Z_{seed}\bigg{(}\frac{\tau}{2},\frac{\bar{\tau }}{2},2R\bigg{)}+\frac{1}{2}Z_{seed}\bigg{(}\frac{\tau+1}{2},\frac{\bar{\tau}+ 1}{2},2R\bigg{)}\] \[= \frac{1}{2}\sum_{m,n}d^{(s)}_{m}d^{(s)}_{n}e^{-\beta(E^{(s)}_{m}+ E^{(s)}_{n})+i\theta(P^{(s)}_{m}+P^{(s)}_{n})}+\frac{1}{2}\sum_{m}d^{(s)}_{m}e^{-2 \beta E^{(s)}_{m}+2i\theta P^{(s)}_{m}}+\sum_{m}d^{(s)}_{m}e^{-\beta E^{(2)}_{ m}+i\theta P^{(2)}_{m}}\]
where in the last term we have used our previous result on the twisted sector spectrum and allowed momenta. The degeneracies of the various states simply follow from the seed degeneracies, with the given restriction. The first two terms contribute to the untwisted sector partition function, as can be seen by further massaging them into
\[Z^{S_{2}}\big{|}_{untw}=\sum_{m<n}d^{(s)}_{m}d^{(s)}_{n}e^{-\beta(E^{(s)}_{m}+E^ {(s)}_{n})+i\theta(P^{(s)}_{m}+P^{(s)}_{n})}+\sum_{m}\frac{d^{(s)}_{m}(d^{(s)}_ {m}+1)}{2}\,e^{-2\beta E^{(s)}_{m}+2i\theta P^{(s)}_{m}} \tag{2.53}\]
The contributing states belong to the symmetrized tensor product \(({\cal H}_{seed})^{2}/S_{2}\) and they take the form \((|E_{n}\rangle|E_{m}\rangle+|E_{m}\rangle|E_{n}\rangle)/\sqrt{2}\) for the first term, and \(|E_{m}\rangle|E_{m}\rangle\) for the second. The degeneracies precisely correspond to those in the symmetrized tensor product of seed Hilbert spaces. Note that integer degeneracies are obtained only after including all contributions to the partition function. In the twisted sector, the degeneracies are the same as in the seed, subject to the projection (2.48).
All higher \(N\) cases work similarly. The full energy spectrum is given by sums of the form
\[\sum_{cycles}E^{(w_{i})}(R)\;,\qquad\sum_{cycles}P^{(w_{i})}(R) \tag{2.54}\]
which run over all the cycles in the various conjugacy classes \([g]\). The ground state energy in each sector varies from \(-\pi cN/(6R)\) in the untwisted sector to \(-\pi c/(6NR)\) in the maximally twisted one.
### Spectrum of \(T\bar{T}\) and \(J\bar{T}\) symmetric product orbifolds
We would now like to apply these considerations to the specific examples of interest, namely \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs.
#### \(T\bar{T}\) - deformed CFTs
As reviewed in section 2.1, the partition function of a \(T\bar{T}\) - deformed CFT - a Lorentzian QFT with a single dimensionful coupling, \(\mu\) - depends not only on \(\tau,\bar{\tau}\), but also on \(R\) through the dimensionless combination \(\mu/R^{2}\). This partition function is modular invariant in the generalised sense (2.10).
The partition function of the symmetric product orbifold of \(T\bar{T}\) - deformed CFTs is obtained via a trivial application of Bantay's formula (2.36) to this seed, where the 'connected' contributions \({\cal Z}^{(n)}\) are given by particularizing (2.37) to the specific dependence on \(R\) of the \(T\bar{T}\) seed partiton function. Explicitly,
\[{\cal Z}^{(n)}_{TT}=\frac{1}{n}\sum_{\ell|n}\sum_{0\leq r<\ell}Z^{seed}_{TT} \bigg{(}\frac{n\tau}{\ell^{2}}+\frac{r}{\ell},\frac{n\bar{\tau}}{\ell^{2}}+ \frac{r}{\ell},\frac{\mu}{\ell^{2}R^{2}}\bigg{)} \tag{2.55}\]
As explained in the previous section, the modular invariance of \({\cal Z}^{(n)}\) follows from that of the seed partition function. It can be made particularly evident by rewriting \({\cal Z}^{(n)}\) in terms of Hecke operators. This result is in full agreement with the previous worldsheet computations [25, 26] and the recent derivation [29].
As explained in our general analysis, this allows us to obtain the spectrum in the various twisted sectors. In particular, the energies in the \(w\) - twisted sector are given by
\[E^{(w)}_{TT}(R)=E^{(s)}_{TT}(Rw)=\frac{Rw}{2\mu}\bigg{(}\sqrt{1+\frac{4\mu E^ {(s)}_{CFT}(Rw)}{wR}+\frac{4\mu^{2}\left(P^{(s)}(wR)\right)^{2}}{R^{2}w^{2}}} -1\bigg{)} \tag{2.56}\]
where we have opted to sometimes use the subscript 'CFT' to denote the undeformed fields, either in the seed or in the symmetric orbifold. The momenta are given by (2.50), which includes the projection. One may further plug in the expression (2.46) for \(E^{(s)}_{CFT}(wR)\) in terms of the conformal dimensions in the seed CFT, obtaining perfect agreement with the spectrum previously worked out in the literature [25, 26]; note this brings additional powers of \(w\) to the denominators. Alternatively, we may use (2.45) to replace \(E^{(s)}_{CFT}(wR)\) by \(E^{(w)}_{CFT}(R)\), and interpret (2.56) instead as the solution to a universal flow equation in the twisted sector with an effective parameter \(\mu/w\), as was previously
observed in [29]. The full spectrum of the symmetric orbifold of \(T\bar{T}\) - deformed CFTs is given by sums over this kind of terms, as in (2.54), and is thus entirely determined by the spectrum of the seed undeformed CFT. Note that since the twisted sectors are equivalent to the seed theory on a cylinder of radius \(Rw\), the torus partition function of the symmetric orbifold is well-defined provided the seed is, namely if the circumference of the torus satisfies (2.12).
Note the deformed spectrum may also be obtained directly from the flow equation. As usual, first order perturbation theory implies that
\[\partial_{\mu}E=\langle n|\sum_{I=1}^{N}T_{I}\bar{T}_{I}|n\rangle \tag{2.57}\]
In the untwisted sector, \(E=\sum_{I}E_{I}\), where each \(E_{I}\) obeys the \(T\bar{T}\) flow equation in the given copy. In the twisted sectors, one may uplift the flow equation to the covering space, which is a cylinder of circumference \(Rw\). Since the right-hand-side of the flow equation is inversely proportional to the radius, it will pick up an overall factor of \(1/w\)
\[\partial_{\mu}E^{(w)}=\frac{1}{w}(E^{(w)}\partial_{R}E^{(w)}-P^{(w)2}/R) \tag{2.58}\]
The solution will be given by the usual \(T\bar{T}\) solution, but with \(R\to Rw\) or, equivalently, \(\mu\to\mu/w\). This agrees with (2.56), provided one takes into account the fact that the undeformed energies and momenta are already in the twisted sector, and thus are related via (2.50) to the ones of the seed.
#### \(J\bar{T}\)-deformed CFTs
The case of \(J\bar{T}\)-deformed CFTs is more interesting, since the dependence on the couplings must be explicitly included in the partition function, as they transform non-trivially under modular transformations. This concerns both the \(J\bar{T}\) coupling, \(\lambda\), and the external chemical potential, \(\nu\), for the left-moving charge. In addition, the partition function of the seed is not modular invariant, but instead transforms (2.23) as a Jacobi form of weight \((0,0)\) and index \((k,0)\), where \(k\) is the level of the \(U(1)\) Kac-Moody algebra.
Our goal is to understand the dependence on the parameters \(\lambda/R\) and \(\nu\) of the seed theories on the covering tori. Let us first treat the case of the left-moving chemical potential \(\nu\). As explained, \(\nu=\beta a^{z}=\tau_{2}Ra^{z}\), where \(a^{z}\) is the gauge field that couples to the chiral left current. This coupling is held fixed when placing the seed theory on a covering torus; as a result, the chemical potential \(\nu_{\xi}\) on the covering tori is given by
\[\nu_{\xi}=(\tau_{\xi})_{2}R_{\xi}a^{z}_{\xi}=\frac{n}{\ell^{2}}\tau_{2}R\ell a ^{z}=\frac{n}{\ell}\nu \tag{2.59}\]
On the other hand, the dimensionless coupling \(\lambda/R\) simply picks up the factor of \(\ell\) that follows from dimensional analysis, \(\lambda\) itself being the same.
The partition function of the symmetric product orbifold of \(J\bar{T}\) - deformed CFTs is again given by Bantay's formula (2.36), where the individual contributions read
\[{\cal Z}^{(n)}_{J\bar{T}}\left(\tau,\bar{\tau},\nu,\frac{\lambda}{R}\right)= \frac{1}{n}\sum_{\ell|n}\sum_{0\leq r<\ell}Z^{seed}_{J\bar{T}}\left(\frac{n \tau}{\ell^{2}}+\frac{r}{\ell},\frac{n\bar{\tau}}{\ell^{2}}+\frac{r}{\ell}, \frac{n\nu}{\ell},\frac{\lambda}{\ell R}\right) \tag{2.60}\]
Given the modular transformation properties (2.23) of the seed partition function, we would now like to show that the partition function of the symmetric orbifold transforms in the same manner, but with \(k\to Nk\), as follows from the fact that the level of the \(U(1)\) current in the symmetric product is \(N\) times larger than that of the seed. Remember from (2.24) that the seed partition function differs from a modular-invariant one by a factor of \(\exp(\frac{\pi k\nu^{2}}{\tau_{2}})\). On the covering tori, we have
\[\frac{k\nu_{\xi}^{2}}{(\tau_{\xi})_{2}}=\frac{nk\nu^{2}}{\tau_{2}} \tag{2.61}\]
which is \(\ell\) - independent. Thus, each \({\cal Z}^{(n)}\) will differ from a modular-invariant contribution by the exponential of such a factor. Since the symmetric orbifold partition function is a sum of products
\(\prod_{n}({\cal Z}^{(n)})^{N_{n}}\) and \(\sum_{n}nN_{n}=N\), we immediately note that the lack of modular invariance of each term in the sum in (2.36) is \(kN\nu^{2}/\tau_{2}\), which is the same for every possible partition of the integer \(N\). Thus, the transformation properties of the seed partition function under modular transformations determine those of the symmetric orbifold one, which transforms as in (2.23), but with \(k\to Nk\). This connection can be made explicit by rewriting the result using Hecke operators, whose action can also be defined on the Jacobi forms of weight \((0,\kappa)\) and index \((k,0)\) relevant to \(J\bar{T}\) as [58]
\[T_{n}\phi(\tau,\bar{\tau},\nu)=\frac{1}{n}\sum_{\begin{subarray}{c}r,\ell\in {\cal Z},\ell|n\\ 0\leq r<\ell\end{subarray}}\frac{1}{\ell\kappa}\phi\bigg{(}\frac{n\tau}{\ell^ {2}}+\frac{r}{\ell},\frac{n\bar{\tau}}{\ell^{2}}+\frac{r}{\ell},\frac{n\nu}{ \ell}\bigg{)} \tag{2.62}\]
and yields a Jacobi form of the same weight and index \((nk,0)\). Expanding the \(J\bar{T}\)-deformed CFT partition function in a Taylor series in \(\lambda\), the coefficient of \(\lambda^{\kappa}\) is \(R^{-\kappa}\) times a \((0,\kappa)\) Jacobi form of index \((k,0)\). Thus, (2.60) can be written as
\[{\cal Z}^{(n)}_{JT}=T_{n}Z^{seed}_{JT}\bigg{(}\tau,\bar{\tau},\nu,\frac{ \lambda}{R}\bigg{)} \tag{2.63}\]
while the whole partition function is given by the right-hand side of (2.36).
Let us now understand the consequences of this formula for the spectrum of single-trace \(J\bar{T}\) - deformed CFTs. We focus first on the \(w\)-twisted sector, for which \(\ell=w\) and thus \(\nu_{\xi}=\nu\), implying that the spectrum of left-moving charges is the same as in the seed. According to our general formula (2.45), the right-moving energies in the \(w\)-twisted sector read
\[E^{(w)}_{R,J\bar{T}}(R)=E^{(s)}_{R,J\bar{T}}(Rw)=\frac{4\pi}{k\lambda^{2}} \left(Rw-\lambda q^{[0]}-\sqrt{\left(Rw-\lambda q^{[0]}\right)^{2}-\frac{ \lambda^{2}k}{2\pi}Rw\,E^{(s)}_{R,CFT}(Rw)}\right)\]
\[q^{(w)}=q^{[0]}+\frac{\lambda k}{4\pi}E^{(w)}_{R,J\bar{T}}(R) \tag{2.64}\]
where \(q^{[0]}\) is the charge in the undeformed seed and \(P^{(w)}(R)\) is given as before by (2.50), which entails a selection rule on the seed momenta. One can rewrite (2.64) in terms of the seed conformal dimensions by plugging in the explicit expressions (2.46) for the CFT finite-size energies. Alternatively, one can reinterpret \(E^{(s)}_{R,CFT}(Rw)\) as \(E^{(w)}_{R,CFT}(R)\) and view this expression as the solution to the \(J\bar{T}\) flow equation in the \(w\) - twisted sector, where the flow parameter is effectively \(\lambda/w\) and the effective \(U(1)\) level is \(kw\). The above formula matches the worldsheet analysis of [9, 27, 59]9.
Footnote 9: In [9, 59] a different convention for the winding is used, such that \(w_{here}=-w_{\rm there}\). The charges in [9] are also related to ours by \(\bar{q}=-q^{[0]},\bar{Q}=-q^{(w)}\) and the definitions of left and right are exchanged.
For the contributions to \({\cal Z}^{(w)}\) that have \(\ell\neq w\), note that the chemical potential on the covering torus is \(w/\ell\) times that of the full symmetric orbifold. This implies that the chages in this sector are \(w/\ell\) times the seed ones. This is in agreement with the fact that the states contributing to these terms take the form \((\otimes|E^{(\ell)},q^{(\ell)}))^{w/\ell}\).
Finally, since the single-trace \(J\bar{T}\) deformation also preserves the left conformal symmetry on the plane, we may again compute the corresponding left conformal dimensions, following the same steps as in the double-trace analysis of [42]. We obtain
\[h^{(w)}_{JT}(\bar{p})=h^{(w)}_{CFT}+\frac{\lambda\bar{p}\,q^{[0]}}{2\pi w}+ \frac{k\lambda^{2}\bar{p}^{2}}{16\pi^{2}w}=\frac{h^{(s)}_{JT}(\bar{p})}{w}+ \frac{c}{24}\bigg{(}w-\frac{1}{w}\bigg{)} \tag{2.65}\]
where \(h^{(w)}_{CFT}\) is the undeformed conformal dimension in the \(w\)-twisted sector, related to that in the seed CFT via (2.47), \(h^{(s)}_{JT}(\bar{p})\) is the momentum-dependent conformal dimension (2.19) in the \(J\bar{T}\) seed QFT, and \(\bar{p}\leftrightarrow E^{(w)}_{R}\) is the right-moving energy on the boosted cylinder. Overall, we obtain the standard CFT formula (2.47) for the orbifolded conformal dimensions, now taking into account the fact that the seed left-moving conformal dimension has been modified to (2.19). Another possible interpretation of this formula is as fractional spectral flow [60, 61] with parameter \(\lambda\bar{p}/w\) in the \(w\) - twisted sector, where the level is \(k^{(w)}=kw\). The left-moving charge is simply given by (2.64) with \(E^{(w)}_{R}\to\bar{p}\). This observation will be important for constructing the single-trace \(J\bar{T}\) correlation functions in section 4.2.
### Comments on the entropy
Given the partition function of the symmetric orbifold, the density of states can be readily extracted from it. In this section, we comment upon the entropy of both single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, as well as its relation to that of the respective double-trace deformations.
#### \(T\bar{T}\) - deformed CFTs
In the \(T\bar{T}\) case, the entropy of both the single-trace and double-trace deformation has been discussed in detail in the recent work [29], so we will be brief. For simplicity, we will set \(P=0\).
The analysis of [29] closely follows that of [62] for the case of two-dimensional CFTs. One of their results is that the entropy of a large \(N\) symmetric product orbifold of \(T\bar{T}\) - deformed CFTs presents two regimes10
Footnote 10: Note that our conventions differ from those in [29] by \(\mu_{here}=2\pi\mu_{there}\) and also \(R_{here}=2\pi R_{there}\).
\[S_{SPO\,of\,TT}(E)=\left\{\begin{array}{cc}R(E-E_{vac})&\mbox{for}\quad E_ {vac}\lesssim E<E_{c}\\ 2\pi\sqrt{\frac{c^{(s)}E}{6\pi}}(RN+\mu E)&\mbox{for}\quad\quad E>E_{c}\end{array}\right. \tag{2.66}\]
with a sharp transition between them. Here
\[E_{c}=-\frac{E_{vac}}{1+2\mu E_{vac}/NR}\;,\;\;\;\mbox{with}\;\;\;\;\;E_{vac} =\frac{NR}{2\mu}\left(\sqrt{1-\frac{2\pi\mu\,c^{(s)}}{3R^{2}}}-1\right) \tag{2.67}\]
and, in this subsection only, \(c^{(s)}\) is the central charge of the seed CFT.
Thus, the behaviour of the entropy is Hagedorn in an intermediate range of energies and then transitions to the universal \(T\bar{T}\) behaviour (Cardy \(\to\) Hagedorn) at high energies. Note that, since the partition function of single-trace \(T\bar{T}\) - deformed CFTs only makes sense on a circle of circumference \(R>R_{min}\), with \(R_{min}\) given in (2.12), the slope of the high-energy Hagedorn regime is always less than the slope of the intermediate Hagedorn one. It is interesting to ask whether the two Hagedorn regimes need to be separated by a Cardy one. The crossover between Cardy and Hagedorn behaviour in the universal regime \(E>E_{c}\) occurs at an energy scale \(E_{T}\sim NR/\mu\). The ratio of this scale11 to \(E_{c}\) is
Footnote 11: In units of \(N\pi c/3R\), the energies are \(E_{T}=2/x\), \(E_{c}=(1/\sqrt{1-x}-1)/x\) and \(E_{vac}=(\sqrt{1-x}-1)/x\).
\[\frac{E_{T}}{E_{c}}=\frac{RN}{\mu E_{c}}=\frac{2\sqrt{1-x}}{1-\sqrt{1-x}}\;, \qquad x\equiv\frac{2\pi\mu\,c^{(s)}}{3R^{2}}\leq 1 \tag{2.68}\]
which is a monotonously decreasing function of \(x\propto\mu\), from infinity at \(\mu=0\) to zero at the maximum allowed \(\mu\) for that compactification radius (\(x=1\)). For \(x<8/9\), the transition from the Cardy to the second Hagedorn regime occurs after the high-energy universal regime sets in (see figure 2a). However, when \(8/9<x\leq 1\), then the Hagedorn term dominates from the beginning, and we have a Hagedorn to Hagedorn transition (see figure 2b). This regime is possible precisely because the value of \(E_{c}\) at which the universal regime kicks in depends on \(\mu\); otherwise, the above ratio would be \(4/x\), which never becomes less than one in the given range.
As shown in [62] (henceforth HKS) using modular invariance, in a two-dimensional CFT with a large central charge and a sparse light spectrum, the entropy is universally given by Cardy's formula for energies \(E>\pi c/6R\), and satisfies a Hagedorn upper bound for smaller energies, which is saturated by symmetric product orbifold CFTs. Closely following this analysis, [29] (henceforth ASY) showed that a similar statement holds in double-trace \(T\bar{T}\)-deformed CFTs with a large central charge and an appropriately sparse light spectrum: the high-energy density of states is given by the universal formula (2.11) for \(E>E_{c}\), where \(E_{c}\) is given by (2.67) with \(N=1\) and \(c^{(s)}\) replaced by \(c\). Below \(E_{c}\), the entropy satisfies an upper bound, given by
\[S_{ASY\,\,TT\,\,bnd.}(E)=R(E-E_{vac}),\;\;\;\;\;E<E_{c} \tag{2.69}\]
where \(E_{vac}\) is given by (2.67) with \(N=1,c^{(s)}\to c\). From (2.66), one can see that the symmetric product orbifold of \(T\bar{T}\) - deformed CFTs can be thought of precisely saturating the bound for
\(c=Nc^{(s)}\), provided we replace \(\mu_{s.tr}\to N\mu_{d.tr}\), as follows from the matching of the entropies in the high-energy universal regime \(E>E_{c}\). Thus, the red curves in the plots above can also be interpreted as upper bounds on the entropy of double-trace \(T\bar{T}\) - deformed CFTs with central charge \(Nc\), coupling \(\mu/N\) and a certain sparseness condition on their light states.
Another way to obtain an upper bound on the density of states of a \(T\bar{T}\) - deformed CFT is to use the fact that in such a theory the degeneracies are identical, at leading order, to those in the seed CFT, but they are measured in a different variables. The degeneracies of a CFT with a sparse light spectrum satisfy, as discussed, the HKS bound [62], which is saturated by a symmetric orbifold CFT. Simply plugging in the relationship between the undeformed and deformed energies into this universal bound, one obtains
\[S_{T\bar{T}\,of\,HKS\,bnd.}(E)=\left\{\begin{array}{ll}E(R+\mu E)+\frac{ \pi c}{6}&\mbox{for}\quad E<E^{\prime}_{c}\\ 2\pi\sqrt{\frac{cE}{6\pi}(R+\mu E)}&\mbox{for}\quad E>E^{\prime}_{c}\end{array}\right. \tag{2.70}\]
where \(E^{\prime}_{c}\) is given by
\[E^{\prime}_{c}=\frac{R}{2\mu}\left(\sqrt{1+\frac{2\pi\mu c}{3R^{2}}}-1\right) \tag{2.71}\]
For \(c=Nc^{(s)}\), this represents the entropy of a double-trace \(T\bar{T}\) - deformed symmetric product orbifold CFT, which was studied in [35]. From (2.70) it follows that this exhibits an intermediate super-Hagedorn regime in the microcanonical ensemble, smoothly crossing over to the usual \(T\bar{T}\) Hagedorn behaviour at high energies. The specific heat in the intermediate super-Hagedorn regime is negative, and the system exhibits a first-order phase transition when coupled to a heat bath.
Let us now check whether this super-Hagedorn intermediate regime is consistent with the bound derived in [29]. From (2.71) and (2.67) with \(N=1\), \(c^{(s)}\to c\), it is easy to check that \(E^{\prime}_{c}<E_{c}\), so the universal high-energy regime is reached by the \(T\bar{T}\) - deformed HKS bound before the prediction (2.66) of the ASY bound. By evaluating (2.66) and (2.70) at \(E^{\prime}_{c}\), we obtain:
\[S_{ASY\,\,T\bar{T}\,\,bnd}(E^{\prime}_{c})>S_{T\bar{T}\,of\,HKS\,bnd}(E^{ \prime}_{c}). \tag{2.72}\]
One can check that this is also the case at \(E=0\). Since the double-trace \(T\bar{T}\) entropy is monotonously increasing in this interval, and at the end of the super-Hagedorn regime the double-trace \(T\bar{T}\) entropy is smaller than the AS bound, we can conclude that
\[S_{ASY\,T\bar{T}\,bnd}(E)>S_{T\bar{T}\,of\,HKS}(E),\quad\forall E\leq E_{c}, \tag{2.73}\]
as depicted in figure 3. To obtain this result, it was important that the transition between the two regimes in (2.66) is given by (2.67), as opposed to simply replacing the CFT energies by the \(T\bar{T}\) - deformed ones.
Figure 2: Plot of the entropy as a function of energy for a large \(N\) symmetric orbifold of \(T\bar{T}\) - deformed CFTs (red), as compared to the entropy of a symmetric orbifold of CFTs with the same central charge (blue).
Thus, the entropy bound obtained by \(T\bar{T}\) - deforming the HKS bound for a CFT with a large central charge and a sparse light spectrum is tighter than that obtained in [29] directly from modular invariance in the \(T\bar{T}\) - deformed CFT. This can be explained by the fact that the sparseness condition imposed in [29] on the light spectrum of \(T\bar{T}\) - deformed CFTs is less restrictive than the \(T\bar{T}\) deformation of the HKS sparseness condition on the seed CFT, which results in a less strong bound at intermediate energies. Given that these bounds can also be interpreted as the entropy of a single-trace and, respectively, double-trace \(T\bar{T}\) - deformed SPO CFT at appropriately scaled parameters, the above relation can also be written as
\[S_{{}_{SPO\,of\,T\bar{T}}}(E)>S_{T\bar{T}\,of\,SPO}(E),\quad\forall E\leq E_{c}, \tag{2.74}\]
We explicitly note that, while the entropies of the deformed theories agree at large energy, they differ in the intermediate regime.
#### \(J\bar{T}\) - deformed CFTs
Let us now discuss the single-trace \(J\bar{T}\) deformation, concentrating on the case of a purely chiral \(U(1)\) current, for concreteness. As discussed in section 2.1 for the double-trace case, there exists a sliver of energies in the undeformed CFT, given in figure 1, for which the deformed energies are real and can become arbitrarily high; in the chiral case, the high-energy behaviour of the entropy in the right-moving sector is Hagedorn. The starting point of the Hagedorn regime may be determined, as in our previous discussion, by translating the onset of the universal regime in CFT to \(J\bar{T}\) variables. We will henceforth assume that the energies in the deformed theory are sufficiently high, so that the Hagedorn formula (2.26) applies, and deduce the high-energy behaviour of the entropy in single-trace \(J\bar{T}\) - deformed CFTs from the fact that the twisted-sector degeneracies are directly determined from the seed \(J\bar{T}\) degeneracies via (2.60).
Let us concentrate on the contribution of a single twisted sector of length \(n\), which is given by the \(\ell=n\) term in (2.60). We will moreover only write the right-moving piece of the entropy, as the left-moving one is identical to that of a CFT with a fixed charge. Since in this sector, the degeneracy is the same as that of the seed at the same right-moving energy and charge, but on a cylinder \(n\) times larger, we find
\[S_{R}^{(n)}(E_{R},q)=2\pi\sqrt{\frac{cE_{R}}{12\pi}\left((nR-\lambda q)+\frac {\lambda^{2}kE_{R}}{8\pi}\right)} \tag{2.75}\]
It is interesting to compare the degeneracy of the maximally-twisted sector, \(S_{R}^{(N)}(E_{R},q)\), to that of the untwisted one with the same total energy and charge
Figure 3: Comparison of the ASY (red) and the \(T\bar{T}\) deformation of the HKS (blue) bounds on the entropy of \(T\bar{T}\) - deformed CFTs with a large central charge. The latter bound is stronger due to the associated stricter sparseness condition on the light states. These plots can also be interpreted as the entropy of a single-trace (red) and double-trace (blue) \(T\bar{T}\) - deformed entropy of a symmetric product orbifold CFT.
\[S_{R}^{untw}(E_{R},q)=2\pi N\sqrt{\frac{c}{12\pi}\left(\frac{E_{R}}{N}(R-\frac{ \lambda q}{N})+\frac{\lambda^{2}kE_{R}^{2}}{8\pi N^{2}}\right)}=2\pi\sqrt{\frac {cE_{R}}{12\pi}\left((NR-\lambda q)+\frac{\lambda^{2}kE_{R}}{8\pi}\right)} \tag{2.76}\]
where we have assumed that, on average, each copy posesses an equal share of the energy and charge, as this distribution maximizes the entropy. Thus, we find the same leading behaviour across sectors, similarly to the case of CFTs and \(T\bar{T}\) - deformed CFTs.
In order to establish which sector dominates, one would need to consider subleading corrections to the entropy, which can be analysed by translating the CFT results [62] to deformed \(J\bar{T}\) variables. In addition, one would need to ascertain that all sectors may in principle compete in the given energy range: for example, if the energies in the maximally twisted sector are real, then one should check that so are the contributing energies from the untwisted one, at least on average. In the maximally twisted sector, the reality constraint on the undeformed (twisted) energy and charge is
\[\frac{(q^{[0]})^{2}}{k}-\frac{PRN}{2\pi}\leq\frac{RNE_{R}^{[0]}}{2\pi}\leq \frac{1}{k}\left(\frac{RN}{\lambda}-q^{[0]}\right)^{2} \tag{2.77}\]
where we have simply translated the double-trace constraints discussed in section 2.1 to a cylinder of circumference \(RN\). This should be compared with the reality condition for a state in the seed CFT that has, on average, energy \(E_{R}^{[0]}/N\), momentum \(P/N\) and charge \(q^{[0]}/N\); it is straightforward to check that the two conditions are the same. We conclude that if we concentrate on a range of energies and charges in the undeformed symmetric product orbifold CFT such that the deformed energies in the untwisted sector are real, the ones in the maximally twisted sector will also be real, and viceversa.
It would be interesting to understand the validity of these formulae at lower energies, as well as to establish alternate universal bounds based on modular methods, analogous to that of [29] for \(T\bar{T}\). Given the relation (2.63) between the full \({\cal Z}_{J\bar{T}}^{(n)}\) and generalised Hecke operators, one would expect its behaviour to be universal and fixed by modular invariance. However, such arguments are harder to invoke in \(J\bar{T}\) - deformed CFTs, despite the formal modular covariance (2.23) of the partition function, due to the generic presence of imaginary energy states. Should one be able to apply such methods, it would be interesting to understand the energy at which different twisted sectors enter the universal regime. Note also that, as can be seen from (2.60), \({\cal Z}_{J\bar{T}}^{(n)}\) not only captures the contributions from the \(n\)-twisted sector, but also special contributions from sectors of lower twist; however, these other contributions with \(\ell<N\) are exponentially subleading, since their degeneracy is given by \(d(E\ell/N,R\ell)\) i.e. the degeneracy of tensor products of _identical_ states of lower energy.
Finally, if the \(U(1)\) current \(J\) is non-chiral, then the results are very similar to those in a CFT with both left- and right-moving \(U(1)\) charges, one simply needs to ensure that the undeformed energies lie in the correct regions, given in figure 0(b), to yield real deformed energies and charges.
## 3 Flow of the states and symmetries
One of the most remarkable properties of the \(T\bar{T}/J\bar{T}\) deformation of a two-dimensional CFT is that it preserves the Virasoro - and, if present, Kac-Moody - symmetries of the undeformed theory, despite rendering it non-local. These symmetries were studied from both a holographic and field-theoretic perspective in [36, 37, 38, 39, 49, 50, 63]. The most rigorous way to date to ascertain their presence is by transporting the symmetry generators of the undeformed CFT along the irrelevant flow12.
Footnote 12: For \(T\bar{T}\) - deformed CFTs, this perspective clears up a number of ambiguities present in the earlier analysis [36].
In this section we show, as intuited in [37], that an analogous analysis predicts the existence of Virasoro (\(\times\) Kac-Moody) symmetries in single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs. For completeness, we start this section with a brief review of these symmetries in double-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, then discuss the flow operator and the symmetries of the single-trace variant. In addition, we show that the fractional Virasoro and Kac-Moody generators present in the undeformed symmetric orbifold CFT can also be defined in the deformed theories and are conserved.
### Brief review of the symmetries of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs
The simplest way to show that \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs possess Virasoro (and, if initially present, also Kac-Moody) symmetries is by transporting the original Virasoro and Kac-Moody generators along the \(T\bar{T}/J\bar{T}\) flow. The operator used to define the transport is precisely the one that controls the flow of the energy eigenstates under the \(T\bar{T}/J\bar{T}\) deformation.
Concretely, since the \(T\bar{T}/J\bar{T}\) deformations are adiabatic, one can define for each a flow operator
\[\partial_{\lambda}|n_{\lambda}\rangle={\cal X}|n_{\lambda}\rangle \tag{3.1}\]
where \(|n_{\lambda}\rangle\) are the energy eigenstates in the deformed theory with generic coupling \(\lambda\), which in this section can represent either the coupling \(\mu\) of \(T\bar{T}\), or \(\lambda\) of \(J\bar{T}\). The flow operator is determined via first order perturbation theory of the states \(|n_{\lambda}\rangle\), and can be shown to satisfy
\[[H,{\cal X}]=\int d\sigma\,{\cal O}_{T\bar{T}/J\bar{T}}-diag \tag{3.2}\]
where "diag" stands for the diagonal elements of the integrated \(T\bar{T}/J\bar{T}\) operator in the energy eigenbasis. If the energy levels are degenerate, as is always the case if we start from a CFT, then it represents the matrix elements of the \(T\bar{T}\)/ \(J\bar{T}\) operator on the union of the degenerate subspaces. These matrix elements are only non-zero on the diagonal, as follows from the fact that the \(T\bar{T}\) and the \(J\bar{T}\) deformation do not break any existing degeneracies. The operators \({\cal X}_{T\bar{T}}\), \({\cal X}_{J\bar{T}}\) are known explicitly, at least in the classical limit [37, 38, 50, 64], though their specific expression is not needed for the argument that follows.
The flow operator can be used to define the generators \(\widetilde{L}^{\lambda}_{m},\widetilde{L}^{\lambda}_{m}\) (and their Kac-Moody counterparts \(\widetilde{J}^{\lambda}_{m},\widetilde{J}^{\lambda}_{m}\)) as solutions of the flow equations [37, 65]
\[\partial_{\lambda}\widetilde{L}^{\lambda}_{m}=[{\cal X},\widetilde{L}^{ \lambda}_{m}]\hskip 28.452756pt\partial_{\lambda}\widetilde{\widetilde{L}}^{ \lambda}_{m}=[{\cal X},\widetilde{\widetilde{L}}^{\lambda}_{m}] \tag{3.3}\]
with the initial condition \(\widetilde{L}^{0}_{m}=L^{{}_{CFT}}_{m},\widetilde{\widetilde{L}}^{0}_{m}= \widetilde{L}^{{}_{CFT}}_{m}\), etc. From this definition it follows that at any point along the flow, the algebra of these generators will consist of two commuting copies of the Virasoro (\(\ltimes\) Kac-Moody) algebra, with the same central charge as that of the undeformed CFT.
So far, this is just a definition. In order for these operators to generate symmetries of \(T\bar{T}\) and \(J\bar{T}\)- deformed CFTs, one needs to show that they are conserved. The conservation equation for the Schrodinger picture operators reads
\[\frac{\partial\widetilde{L}^{\lambda}_{m}}{\partial t}+\frac{i}{\hbar}\,[H, \widetilde{L}^{\lambda}_{m}]=0 \tag{3.4}\]
The commutator of the flowed generators with the Hamiltonian can be computed from first principles using the universality of the spectrum of \(T\bar{T}/J\bar{T}\)- deformed CFTs, and takes the simple form [37]
\[[\widetilde{L}^{\lambda}_{m},H]=\alpha_{m}\widetilde{L}^{\lambda}_{m}\;, \hskip 28.452756pt[\widetilde{\widetilde{L}}^{\lambda}_{m},H]=\bar{\alpha}_{m} \widetilde{\widetilde{L}}^{\lambda}_{m} \tag{3.5}\]
where \(\alpha_{m},\bar{\alpha}_{m}\) are operator-valued functions that depend on the Hamiltonian and other conserved charges. Their expressions for the two types of theories we consider are
\[T\bar{T} : \alpha_{m}(H,P)=\bar{\alpha}_{m}(H,-P)\,=\,\frac{1}{2\mu}\left( \sqrt{(R+2\mu H)^{2}+\frac{4\mu m\hbar(R+2\mu P)}{R}+\frac{4\mu^{2}m^{2}\hbar ^{2}}{R^{2}}}-(R+2\mu H)\right)\] \[J\bar{T} : \alpha_{m}=\frac{m\hbar}{R}\,,\hskip 22.762205pt\bar{\alpha}_{m}(Q) =2\frac{R-\lambda Q-\sqrt{(R-\lambda Q)^{2}-\hbar km\lambda^{2}}}{k\lambda^{2} }\;,\hskip 22.762205ptQ\equiv\bar{J}_{0}+\frac{\lambda k}{2}H_{R} \tag{3.6}\]
where \(\hbar\) is Planck's constant. Note that in \(J\bar{T}\)-deformed CFTs, \(\alpha_{m}\) is a \(c\)-number, which is related to the fact that these theories are local on the left-moving side. The conservation equation (3.4) is immediately satisfied if we assign the following explicit time dependence to the Schrodinger picture generators
\[\widetilde{L}^{\lambda}_{m}(t)=e^{i\alpha_{m}t}\widetilde{L}^{\lambda}_{m}(0) \hskip 28.452756pt\widetilde{\widetilde{L}}^{\lambda}_{m}(t)=e^{i\bar{\alpha}_{ m}t}\widetilde{\widetilde{L}}^{\lambda}_{m}(0) \tag{3.7}\]
where \(\bar{L}_{m}^{\lambda}(0)\) are the solutions to the flow equation (3.5). The Kac-Moody generators are treated in an entirely analogous manner. Thus, the operators we constructed correspond to conserved charges of the theory, and the Virasoro(\(\ltimes\) Kac-Moody) algebra they obey represents its symmetry algebra. Note this conservation argument is the same as the one used in standard two-dimensional CFTs; the only difference is that now \(\alpha_{m},\bar{\alpha}_{m}\) are operators, whereas in the CFT case they were \(c\)-numbers.
The above argument, while valid at the full quantum-mechanical level, is somewhat abstract, and does not lead to an intuitive picture of the action of these symmetries. It is also not clear whether this basis of generators is the one that acts most naturally on the fields in the theory. For example, in the case of \(J^{1}\wedge J^{2}\) deformation of two-dimensional CFTs - an exactly marginal deformation of the Smirnov-Zamolodchikov type - the Virasoro generators obtained via an analogous flow argument can be explicitly shown to _differ_ from the Virasoro generators of standard conformal symmetries [41].
In both \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, this question can be addressed very concretely at the classical level, by explicitly constructing the flow operators. In \(J\bar{T}\) - deformed CFTs, which are conformal in the standard sense on the left-moving side, one again finds that the flowed left-moving Virasoro and Kac-Moody generators \(\widetilde{L}_{n},\widetilde{J}_{n}\)_differ_ from the Virasoro - Kac-Moody generators \(L_{n},J_{n}\) of left conformal and affine transformations. This difference may be characterised as a "spectral flow by \(\lambda H_{R}\)", where \(H_{R}\) is the right-moving Hamiltonian
\[L_{n}=\widetilde{L}_{n}+\frac{\lambda H_{R}\widetilde{J}_{n}}{R}+\frac{\lambda ^{2}kH_{R}^{2}}{8\pi R}\,\delta_{n,0}\qquad\quad J_{n}=\widetilde{J}_{n}+ \frac{\lambda k}{4\pi}H_{R}\,\delta_{n,0} \tag{3.8}\]
where we have dropped the \(\lambda\) index on the generators and assumed that the explicit classical relation can be extended to the full quantum level. Note that in our conventions, the Virasoro generators are dimensionful; in particular \(L_{0}=H_{L},\bar{L}_{0}=H_{R}R_{v}/R\). A similar relation holds for the right-movers
\[\bar{L}_{n}=\widetilde{L}_{n}+\frac{\lambda:H_{R}\,\widetilde{J}_{n}\,\vdots} {R}+\frac{\lambda^{2}kH_{R}^{2}}{8\pi R}\,\delta_{n,0}\qquad\qquad\bar{J}_{n} =\widetilde{J}_{n}+\frac{\lambda k}{4\pi}H_{R}\,\delta_{n,0} \tag{3.9}\]
At least classically, the right-moving generators \(\bar{L}_{m}\) implement infinitesimal field-dependent coordinate transformations, as one may note from their expression
\[\bar{L}_{m}=\frac{R_{v}}{R}\int d\sigma\,e^{-2\pi im\hat{v}}{\cal H}_{R}\;, \quad v\sim\sigma-t-\lambda\phi \tag{3.10}\]
where \(\phi\) is roughly the bosonisation of the \(U(1)\) current with its zero mode removed - see [37] for the full expression - \(R_{v}\) is the (field-dependent) circumference of the above field-dependent coordinate and \(\hat{v}=v/R_{v}\). The \(\bar{J}_{m}\) implement similar field-dependent affine \(U(1)\) transformations. As discussed at length in [41], it is the \(L_{n}\), \(\bar{L}_{n}\) - rather than their tilded counterparts - that act naturally on the operators in the theory, and which should therefore be considered as the 'physical' symmetry generators in the theory; in particular, it is the \(L_{m},\bar{L}_{m}\) that have a simple integral expression in terms of the conserved currents in the theory, at least at the classical level. The algebra of these generators follows from their definition (3.8) - (3.9) and the fact that the tilded generators satisfy two copies of the Virasoro \(\ltimes\) Kac-Moody commutation relations. One finds that the algebra of the left-moving generators \(L_{n},J_{n}\) is again Virasoro \(\ltimes\) Kac-Moody, while that of \(\bar{L}_{n},\bar{J}_{n}\) is a non-linear modification of the right Virasoro \(\ltimes\) Kac-Moody algebra that does not commute with the left generators. It has been worked out explicitly in [37] using identities such as
\[[\bar{L}_{m},\bar{\alpha}_{n}]=(\bar{\alpha}_{m+n}-\bar{\alpha}_{m}-\bar{ \alpha}_{n})\bar{L}_{m} \tag{3.11}\]
which follow from the special properties of the functions \(\bar{\alpha}_{m}\) defined in (3.6).
The \(T\bar{T}\) case appears to work similarly, though to date is less understood. In the classical limit, the flowed generators take the form
\[\widetilde{L}_{m}^{cls}=\frac{R_{u}}{R}\int d\sigma e^{2\pi im\hat{u}}{\cal H }_{L}\qquad\quad\widetilde{L}_{m}^{cls}=\frac{R_{v}}{R}\int d\sigma e^{-2\pi im \hat{v}}{\cal H}_{R} \tag{3.12}\]
where \({\cal H}_{L,R}\) are the left/right-moving Hamiltonian densities, \(R_{u,v}\equiv R+2\mu H_{R/L}\), \(\hat{u}\equiv\frac{u}{R_{u}}\), \(\hat{v}\equiv\frac{v}{R_{v}}\) and \(u,v\) are the \(T\bar{T}\) field-dependent coordinates that emerge13 from the solution to the flow equation
Footnote 13: More precisely, one can view \(\widetilde{L}_{m}^{cls}\) as the Fourier modes of a current \({\cal H}_{L}(\sigma)\) that satisfies the flow equation \(\partial_{\mu}{\cal H}_{L}=[{\cal X}_{T\bar{T}},{\cal H}_{L}]\) and is chiral. The full solution is given in [38]. Upon integrations by parts, the Fourier modes can be put in the form (3.12), where the expressions for \(\hat{u},\hat{v}\) simply follow from the flow equation. In this sense, the field-dependent coordinates are “emergent”.
\[u\sim\sigma+t+2\mu\int^{\sigma}\!{\cal H}_{R}\;,\;\;\;\;\;\;\;v\sim\sigma-t+2 \mu\int^{\sigma}\!\!{\cal H}_{L} \tag{3.13}\]
whose full definition can be found in [38]. Classically, (3.12) generate field-dependent coordinate transformations. Note, however, that the generators of the symmetries in the natural Fourier basis are, rather, the so-called "unrescaled" generators [38]
\[Q_{m}=\frac{R\widetilde{L}_{m}}{R_{u}}\;\;\;\;\;\;\;\;\;\;\bar{Q}_{m}=\frac{R \widetilde{L}_{m}}{R_{v}} \tag{3.14}\]
Their algebra is given by a non-linear modification of the Virasoro algebra
\[[Q_{m},Q_{n}]=\frac{2\pi\hbar}{R_{u}}(m-n)Q_{m+n}+\frac{8\pi\hbar\mu^{2}H_{R} }{RR_{H}R_{u}}(m-n)Q_{m}Q_{n}+\frac{\pi^{2}c\hbar\,m^{3}}{3R_{u}^{2}}\delta_{m +n}+{\cal O}(\hbar^{2}) \tag{3.15}\]
where \(R_{H}=R+2\mu H\) and the \({\cal O}(\hbar^{2})\) terms and higher can be computed once the full quantum relation between the \(Q_{m}\) and \(\widetilde{L}_{m}\) is given14. This result was also confirmed holographically by the analysis of [39].
Footnote 14: An example of a fully quantum relation between the two that takes into account operator ordering is given by the symmetrization of (3.14), by letting \(\widetilde{L}_{m}=Q_{m}+\mu H_{R}Q_{m}+\mu Q_{m}H_{R}\). Note that the algebra of these \(Q_{m}\) generators is entirely determined by their definition and the fact that the \(\widetilde{L}_{m}\) satisfy a Virasoro algebra. It can be worked out using identities of the form (3.11), which also apply to the \(T\bar{T}\) generators for the appropriate choice of \(\alpha_{m}\).
We end our review of the extended symmetries of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs by discussing the charges associated to integrability, which have been known to be preserved by these deformations since the original work of [11]. In a CFT\({}_{2}\), these charges correspond to the so-called KdV charges [66], which are associated to currents constructed from higher powers of the (anti)holomorphic stress tensor. For example, the first two non-trivial KdV charges are given by (for \(R=2\pi\))
\[I_{3}=2\sum_{n=1}^{\infty}L_{-n}L_{n}+f(L_{0}) \tag{3.16}\]
\[I_{5}=\!\!\sum_{n_{1}+n_{2}+n_{3}=0}:L_{n_{1}}L_{n_{2}}L_{n_{3}}:+\sum_{n=1}^ {\infty}\left(\frac{c+11}{6}n^{2}\!-\!1\!-\!\frac{c}{4}\right)\!L_{-n}L_{n}\!+ \!\frac{3}{2}\sum_{n=1}^{\infty}L_{1-2n}L_{2n-1}\!+\!g(L_{0}) \tag{3.17}\]
where \(f(L_{0}),g(L_{0})\) are quadratic functions of \(L_{0}\), whose explicit expressions are given e.g. in [67], but are not relevant for our purposes. As discussed in [38], one can easily define the flowed KdV charges \(\widetilde{I}_{s}\) by requiring that they satisfy a homogenous flow equation analogous to (3.3); its solution is simply given by replacing \(L_{n}\to\widetilde{L}_{n}\) in the expressions above. A rescaled version of these charges satisfies the flow equations discussed in [68].
For this prescription to make sense, we should also show that the flowed KdV charges are conserved, i.e. they commute with the Hamiltonian. This commutator can be computed using (3.5) and the commutation relation (3.11), which turns out to also hold in \(T\bar{T}\) - deformed CFTs for the corresponding \(\alpha_{m}\). One can check explicitly that the commutator of \(H\) with any product of \(\widetilde{L}_{m}\) whose indices sum to zero is proportional to \(\alpha_{0}\), which vanishes. Since this is true term by term, we conclude that all the KdV charges constructed via the flow remain conserved.
### Brief review of the symmetries of symmetric orbifold CFTs
Let us now review a few facts about the extended symmetries of symmetric product orbifolds of two-dimensional CFTs. These include the standard Virasoro symmetries, the fractional Virasoro generators that act in the twisted sectors, as well as higher spin symmetries [40], whose structure has yet to be fully understood.
In this subsection, we concentrate mostly on the Virasoro symmetries and their fractional counterparts, which we will subsequently generalise to single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs. Given that the action of the symmetric product orbifold CFT is simply the sum over individual CFT copies, the (e.g., holomorphic) stress tensor of the theory (classically denoted \({\cal H}_{L}^{[0]}\)) is a sum over copies
\[T(\sigma)=\sum_{I=1}^{N}T^{I}(\sigma) \tag{3.18}\]
where we are working on a fixed time slice, say \(t=0\). The Virasoro generators correspond to the Fourier modes of the stress tensor, whose quantisation depends on the boundary conditions the latter obeys. In the untwisted sector, the stress tensor in each copy obeys periodic boundary conditions, so we can write
\[L_{m}=\sum_{I=1}^{N}L_{m}^{I} \tag{3.19}\]
Note that in our conventions, the Virasoro generators are dimensionful (with dimensions of mass) for reasons that will become clear in the sequel.
In the twisted sectors, the boundary conditions relate operators from the different copies. We will focus on \(\mathbb{Z}_{w}\) cyclic orbifolds, which are the building blocks of the twisted-sector operators. Since we will only be interested in single-trace operators, we will consider only one cycle of length \(w\); for simplicity, we will assume that the copies of the symmetric orbifold involved are \(1,...,w\), in this order. The final expressions need of course to be symmetrized over all possible choices of the \(w\) copies, to ensure \(S_{N}\) - invariance. Since we will be working on a fixed time slice, all dependence on the time coordinate will be dropped.
We thus consider \(w\) copies, \(\phi_{I}(\sigma)\), of a (bosonic) field \(\phi\) - now taken to be generic - with the boundary condition \(\phi_{I}(\sigma+R)=\phi_{I+1}(\sigma),\;\forall I\) defined mod \(N\). This twisted boundary condition can be thought of as being due to the insertion of a \(w\)-twist operator at Euclidean time \(\tau=-\infty\) on the cylinder. It is natural to consider the combinations
\[\phi^{(k)}(\sigma)=\sum_{I=1}^{w}\phi_{I}(\sigma)\,e^{-2\pi ik(I-1)/w}\;,\;\; \;\;\phi^{(k)}(\sigma+R)=e^{2\pi ik/w}\phi^{(k)}(\sigma)\;,\;\;\;\;k\in[0,w-1] \tag{3.20}\]
which transform diagonally under \(\mathbb{Z}_{w}\). Thus, each field in the theory will give rise to \(w\) fields with twisted boundary conditions. The moding of each of them will be \(n-k/w\) for \(k\in[0,w-1]\) and \(n\in\mathbb{Z}\); ultimately, this will result in a field with all possible fractional modes. The fractional modes can be expressed via a Fourier transform as
\[\phi_{m/w}=\int_{0}^{R}d\sigma\;e^{2\pi i\sigma m/Rw}\phi^{(k)}(\sigma)\Big{|} _{k=-m\;mod\;w}=\int_{0}^{R}d\sigma\sum_{I=1}^{w}\phi_{I}(\sigma)\,e^{2\pi im (\sigma+(I-1)R)/Rw} \tag{3.21}\]
Conversely, the modes of the field in a single copy can be obtained by inverting the relation above
\[\phi_{I}(\sigma)=\frac{1}{wR}\sum_{m=-\infty}^{+\infty}\phi_{m/w}\,e^{-2\pi im \sigma/Rw}e^{-2\pi im(I-1)/w} \tag{3.22}\]
This is not quite an operatorial relation, for two reasons: first, it can only be used when acting on states from the \(w\) - twisted sector; second, the left-hand-side is not an operator in the symmetric orbifold, because it is not gauge-invariant. This relation can nevertheless be used to construct operators that act on the twisted sector. For example, the action of the single-trace untwisted operator \(\sum_{I}\phi_{I}(\sigma)\) on this sector is given by the Fourier sum where only integer modes appear,
as all \(m\) in (3.22) that are not multiples of \(w\) will be projected out by the sum over \(I\). Even so, note that the integer modes in this sector are not associated to any particular copy, but rather correspond to Fourier modes of the entire sum over operators, which satisfies periodic boundary conditions by construction.
The modes (3.21) can be related to the integer modes on a cylinder of circumference \(Rw\). This is simply achieved by letting the coordinate \(\tilde{\sigma}\) on this larger cylinder equal \(\tilde{\sigma}=\sigma+(I-1)R\) in the \(I^{th}\) patch given by \(\tilde{\sigma}\in[R(I-1),RI)\), and defining a field on this covering space via \(\phi^{cov}(\tilde{\sigma})=\phi_{I}(\sigma)\) on the given patch. One can easily see that on the \(I^{th}\) patch, \(\phi^{cov}(\tilde{\sigma})=\phi_{I}(\sigma)=\phi_{1}(\sigma+(I-1)R)\), \(\forall I\), and thus \(\phi^{cov}(\tilde{\sigma})\) is simply the field of the seed QFT, defined on this larger cylinder with periodic boundary conditions. The fractional Fourier modes discussed above become simply the Fourier modes of the field of the seed QFT on this larger cylinder
\[\phi_{m/w}=\int_{0}^{Rw}d\tilde{\sigma}\,\phi^{cov}(\tilde{\sigma})\,e^{2\pi im \tilde{\sigma}/wR}=\phi^{cov}_{m} \tag{3.23}\]
Note that so far no use of conformal symmetry was made, but rather we simply sewed the various copies of the fields into a single copy on the covering space15.
Footnote 15: The standard discussions of the covering map consider the CFT on the plane, and the map is of the form \(z_{base}=t^{w}_{cov}\). The Fourier modes in radial quantization are integrated over a circle of circumference \(2\pi\). To relate these to the Fourier modes on the RHS of (3.23), which are defined on a circle \(w\) times larger, one needs to perform a conformal rescaling \(\tilde{\sigma}\to\sigma=\tilde{\sigma}/w\). Under it, if \(\phi\) is a primary field of weight \(h\), then \(\phi^{\{wR\}}(\tilde{\sigma})=w^{-h}\phi^{\{R\}}(\sigma)\), where the superscript indicates the size of the cylinder on which the theory is defined. It follows that \(\phi_{m/w}=\phi^{\{wR\}}_{m}=w^{1-h}\phi^{\{R\}}_{m}\) where for \(R=2\pi\), \(\phi^{\{R\}}_{m}\) are the standard Fourier modes of the field; this reproduces the expressions in the literature. We emphasize the fact that the conformal transformation is not needed to relate the fractional modes of the twisted field to those of the seed on the covering cylinder, but only in resizing that covering cylinder back to the initial length \(R\). For the non-conformal models we are interested in, we will prefer to work directly with the cylinder of size \(wR\).
This procedure may be applied to any field in the theory; in particular, it may be applied to the stress tensor, which results in an infinite set of fractional Virasoro modes \(L_{m/w}\). The algebra of these modes defined on the \(t=0\) circle of the cylinder can be obtained using (3.23)
\[[L_{\frac{m}{w}},L_{\frac{n}{w}}]=[L^{cov}_{m},L^{cov}_{n}]=\frac{2\pi(m-n)} {wR}L_{m}^{cov}+\frac{4\pi^{2}c\,m^{3}}{12R^{2}w^{2}}\delta_{m+n}=\frac{2\pi} {R}\cdot\frac{m-n}{w}L_{\frac{m+n}{w}}+\frac{4\pi^{2}}{R^{2}}\cdot\frac{c\,m ^{3}}{12w^{2}}\delta_{m+n} \tag{3.24}\]
where we have used the fact that in our conventions, the generators are dimensionful, and thus explicit factors of \(R\) appear in their algebra (\(Rw\) if we are on the covering space). The above is known as the fractional Virasoro algebra and, by construction, it is isomorphic to the Virasoro algebra of the seed CFT. This algebra may be brought to a more standard form by rescaling the fractional generators by \(R/2\pi\), or simply setting \(R=2\pi\). The reason that the central term simply involves \(m^{3}\) is that we work on the cylinder, where the eigenvalue of \(L_{0}\) is shifted with respect to that on the plane by an amount proportional to the central charge in that sector.
The global Virasoro symmetry generators correspond to choosing \(m\in w\mathbb{Z}\) which, as we already explained, are the only modes that survive in the action of the gauge-invariant operator (3.18) on this sector. The fractional Virasoro modes can be used to build fractional descendants, which may be Virasoro primaries under certain conditions [69]. Relatedly, if in the seed CFT on the covering space, two fields are related via the action of \(L_{-n}\), then their images in the twisted sector will be related via \(L_{-n/w}\).
A similar discussion holds for the case of other symmetries of the seed CFT, such as an \(U(1)\) affine symmetry. The commutation relations of the fractional current modes are found to be
\[[J_{m/w},J_{n/w}]=\frac{k}{2}m\,\delta_{n+m,0}\;,\;\;\;\;\;\;[L_{m/w},J_{n/w}]=- \frac{2\pi n}{wR}\,J_{\frac{m+n}{w}} \tag{3.25}\]
where, as before, \(k\) represents the \(U(1)\) level of the seed CFT. Note that since the mode number is \(m/w\), the level of this algebra, as it appears in the position space OPE, is \(k^{(w)}=kw\), in agreement with the fact that \(w\) copies of the seed CFT are involved in the computation. More generally, for a primary operator \(\mathcal{O}\) from the seed CFT with left conformal dimension \(h^{(s)}\) and \(U(1)\) charge
\(q^{(s)}\), we find that the seed commutation relations on the covering cylinder descend to the following commutation relations with the fractional Virasoro and Kac-Moody generators on the base
\[[L_{m/w},{\cal O}_{n/w}]=\frac{2\pi}{Rw}\bigg{(}(h^{(s)}-1)m-n\bigg{)}{\cal O}_{( m+n)/w} \tag{3.26}\]
\[[J_{m/w},{\cal O}_{n/w}]=q^{(s)}{\cal O}_{(m+n)/w} \tag{3.27}\]
where we have used the relation (3.23) between the fractional modes on the base and Fourier modes on the covering cylinder. These are nothing but the momentum-space commutation relations on the cylinder, particularized to fractional momenta.
Note that the conformal dimension of the operator is \(h^{(s)}\), corresponding to a standard untwisted-sector operator acting on a sector with twisted boundary conditions; in particular, its short-distance behaviour is governed by \(h^{(s)}\). Just as for the energy-momentum modes, the twisted boundary conditions can be thought of as generated by the insertion of a \(w\)-twist operator at \(\tau=-\infty\), which allows the modes of any operator acting on the cylinder to be fractional.
We would like to draw a distinction - which will become important in section 4.2 - between these operators, which act in the presence of a twist inserted at a different location, and genuine twisted operators, denoted \({\cal O}^{(w)}\), which contain a twist at their own location. To obtain the Ward identities of the latter with the Virasoro generators one may simply start on the plane, and then lift the result to the covering space using the standard map \(z=z_{0}+(t-t_{0})^{w}\), where \(z_{0}\) is the insertion point of the operator. Following the steps in [69] and suppressing, for simplicity, the right-moving labels, we find
\[[L_{n},{\cal O}^{(w)}(z_{0})]=\oint_{t_{0}}\frac{dt}{2\pi i}\,\frac{dz}{dt}z^{ n+1}\frac{1}{z^{\prime}(t)^{2}}\left(T(t)-\frac{c}{12}\{z,t\}\right){\cal O}(t_{0}) \tag{3.28}\]
Using the OPE on the covering space and the fact that the Schwarzian derivative is \(\{z,t\}=\frac{1-w^{2}}{2(t-t_{0})^{2}}\), the above reduces to
\[[L_{n},{\cal O}^{(w)}(z_{0})]=\oint_{t_{0}}\frac{dt}{2\pi i}\,(z_{0}+(t-t_{0}) ^{w})^{n+1}\bigg{[}\frac{h^{(w)}{\cal O}(t_{0})}{(t-t_{0})^{w+1}}+\frac{ \partial_{z}{\cal O}(t_{0})}{t-t_{0}}\bigg{]} \tag{3.29}\]
where \(h^{(w)}\) is given in (2.47). Integrating and redescending the result to the base space, we obtain
\[[L_{n},{\cal O}^{(w)}(z)]=h^{(w)}(n+1)z^{n}{\cal O}^{(w)}(z)+z^{n+1}\partial_{ z}{\cal O}^{(w)}(z) \tag{3.30}\]
where we have dropped the index '0' from the base space coordinate. In contrast with our previous computation, the conformal dimension which appears in the Ward identity is now \(h^{(w)}\), due to the presence of a twist at the location of \({\cal O}^{(w)}\). The covering space considered is also different from the larger cylinder previously used and, in particular, in this case the Schwarzian derivative does yield a non-trivial contribution. One may then use the plane to cylinder map on the base, \(z=e^{2\pi\zeta/R}\) with \(\zeta=\tau+i\sigma\), to obtain the standard cylinder Ward identities
\[[L_{n},{\cal O}^{(w)}(\zeta,\bar{\zeta})]=e^{\frac{2\pi n\zeta}{R}}\bigg{(} \frac{2\pi nh^{(w)}}{R}{\cal O}^{(w)}(\zeta,\bar{\zeta})+\partial_{\zeta}{ \cal O}^{(w)}(\zeta,\bar{\zeta})\bigg{)} \tag{3.31}\]
\[[J_{n},{\cal O}^{(w)}(\zeta,\bar{\zeta})]=q^{(s)}\,e^{\frac{2\pi n\zeta}{R}}{ \cal O}^{(w)}(\zeta,\bar{\zeta}) \tag{3.32}\]
These commutation relations correspond to the cylinder Ward identities for a (properly periodic) operator of conformal dimension \(h^{(w)}\) and charge \(q^{(w)}=q^{(s)}\). Note that we could have also considered the commutation relations of this twisted operator with the fractional Virasoro modes, which would have simply amounted to replacing \(n\to n/w\) - or, equivalently, \(R\to Rw\) - in he expressions above. However, since the operator creating the twist is located at \((\zeta,\bar{\zeta})\), and not at \(-\infty\), these Ward identities would have only held provided no other operator was inserted between these two points, which is a very restrictve requirement. On the other hand, the commutation relations with the globally-defined Virasoro generators are entirely general. The right-moving commutation relations take an analogous form.
### Flow of the states in single-trace \(T\bar{T}\) and \(J\bar{T}\)-deformed CFTs
To uncover the Virasoro and Kac-Moody symmetries of single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, we follow exactly the same strategy as in the double-trace case. For this, we first need to understand the structure of the operator that drives the flow of the energy eigenstates via the single-trace analogue of (3.1), where now \(|n_{\lambda}\rangle\) represent the energy eigenstates of the \(T\bar{T}/J\bar{T}\) symmetric product orbifold. For simplicity, we restrict our discussion to the single-trace \(T\bar{T}\) deformation; a virtually identical analysis holds in the \(J\bar{T}\) case.
To determine the flow operator, it is sufficient to consider first order degenerate quantum-mechanical perturbation theory about the instantaneous energy eigenstates \(|n_{\lambda}\rangle\). Ultimately, we will show that the flow operator in single-trace \(T\bar{T}\) - deformed CFTs simply corresponds to the standard \(T\bar{T}\) flow operator on the covering space, namely on the cylinder of circumference \(Rw\). However, to arrive at this result, we first need to understand the degeneracies that are broken by the deformation, as these play a role in determining the flow operator. To make the discussion self-contained, we start by reminding the reader a few facts about degenerate perturbation theory, following e.g. [70].
#### Brief review of degenerate perturbation theory
Assume we have a quantum-mechanical system where a subspace - denoted \({\cal H}_{n}\) - of the Hilbert space is degenerate. In this subspace, the undeformed energy levels will be denoted as \(|n^{(0)},k\rangle\), where \(k\in\{1,\ldots,dim_{{\cal H}_{n}}\}\), and their energy as \(E_{n}^{(0)}\). The energy levels in the orthogonal part of the Hilbert space will be denoted as \(|p^{(0)}\rangle\). The \({\cal O}(\lambda)\) change in the energy eigenstates and eigenvalues under a perturbation \(\delta H=\lambda H^{(1)}+\lambda^{2}H^{(2)}+\ldots\) is given by
\[|n,k\rangle=|n^{(0)},k\rangle+\lambda\left(\sum_{p\neq n}|p^{(0)}\rangle\, \frac{\langle p^{(0)}|H^{(1)}|n^{(0)},k\rangle}{E_{n}^{(0)}-E_{p}^{(0)}}+\sum _{l\neq k}a_{k,l}|n^{(0)},l\rangle\right)+{\cal O}(\lambda^{2}) \tag{3.33}\]
\[E_{n,k}(\lambda)=E_{n}^{(0)}+\lambda\langle n^{(0)},k|H^{(1)}|n^{(0)},k \rangle+{\cal O}(\lambda^{2}) \tag{3.34}\]
We note that the first order correction, denoted \(|n^{(1)},k\rangle\), to the state corresponds simply to the action of the flow operator \({\cal X}\) defined as in (3.1) on \(|n^{(0)},k\rangle\) at leading order in perturbation theory. We can then easily show that, at this order
\[[H,{\cal X}]|n^{(0)},k\rangle=H|n^{(1)},k\rangle-E_{n}^{(0)}|n^{(1)},k\rangle =-\sum_{p\neq n}|p^{(0)}\rangle\,\langle p^{(0)}|H^{(1)}|n^{(0)},k\rangle \tag{3.35}\]
Since the state \(|n^{(0)},k\rangle\) is arbitrary, we conclude that, to this order
\[[H,{\cal X}]=-(H^{(1)}-diag) \tag{3.36}\]
where '\(diag\)' represents the diagonal matrix elements of the deforming operator on the union, denoted \({\cal H}_{D}\), of all the degenerate subspaces. Thus, the commutator of \({\cal X}\) with \(H\) is determined by the fully off-diagonal pieces of the perturbation, whether the degeneracy is lifted or not.
The basis \(|n^{(0)},k\rangle\) and the coefficients \(a_{k,l}\) depend on whether the perturbation breaks or not the degeneracy of the undeformed theory. If it does, then the basis \(|n^{(0)},k\rangle\) must be _chosen_ so that \(\delta H\) is diagonal, and the \(a_{k,l}\) - which correspond to the matrix elements of \({\cal X}\) that lie inside the initially degenerate subspace - are determined by the \({\cal O}(\lambda^{2})\) analysis to be
\[a_{k,l}=\frac{1}{E_{n,k}^{(1)}-E_{n,l}^{(1)}}\left(\sum_{p}\frac{H_{nl,p}^{(1 )}H_{p,nk}^{(1)}}{E_{n}^{(0)}-E_{p}^{(0)}}+\langle n^{(0)},l|H^{(2)}|n^{(0)}, k\rangle\right) \tag{3.37}\]
where \(H^{(2)}\) is an eventual \({\cal O}(\lambda^{2})\) correction to the Hamiltonian (not considered in the analysis of [70]) and \(H_{p,nk}^{(1)}\equiv\langle p^{(0)}|H^{(1)}|n^{(0)},k\rangle\), \(H_{nk,p}^{(1)}=(H_{p,nk}^{(1)})^{*}\) and we have assumed, for simplicity, that the breaking of the degeneracy is complete. If, on the other hand, the perturbation does not break the degeneracy at \({\cal O}(\lambda)\), but rather at some higher order \(b\), then the coefficients \(a_{k,l}\) are determined by the analysis at \({\cal O}(\lambda^{b+1})\). If the degeneracy is never broken, then one is free to
choose any basis on the degenerate subspace. Note also that the matrix elements of the perturbing operator \(\delta H\) are diagonal on \({\cal H}_{D}\), either because the basis had to be chosen so that this holds, or because \(\delta H\) is proportional to the identity on this subspace, as is the case when the degeneracies are not broken.
### Degeneracies of single-trace \(T\bar{T}\) - deformed CFTs
The main lesson of the discussion above is that, whenever a deformation breaks an existing degeneracy, the elements of the flow operator in the broken subspace of the Hilbert space are fixed. Thus, in order to understand the structure of the single-trace \(T\bar{T}\) flow operator, we need to check whether the deformation breaks - or not - the existing degeneracies. This can be determined by the exact knowledge of the spectrum of the deformed theory. For example, in the case of the double-trace \(T\bar{T}\) deformation, the fact that the deformed energies are only functions of the undeformed ones implies that any degeneracy initially present in the CFT will not be lifted by the deformation. Consequently, the elements of \({\cal X}_{T\bar{T}}\) on the degenerate subspace of the corresponding double-trace \(T\bar{T}\) - deformed CFT are not fixed. We can in particular choose, as in [38], the same expression for \({\cal X}_{T\bar{T}}\) that is given by the assumption of non-degenerate eigenstates; this amounts to a particular way to continue the arbitrarily-chosen basis of degenerate eigenstates from the undeformed CFT to the deformed one.
The case of the single-trace \(T\bar{T}\) deformation is different from the double-trace one in that - as remarked in [29] - the degeneracy is partially lifted when we turn on the deformation. This breaking of the degeneracies can be easily seen from the energy formula (2.54)
\[E=\sum_{cycles}E^{(w_{i})}\left(\mu,R,E^{(w_{i})}_{CFT},P^{(w_{i})}_{CFT}\right) \tag{3.38}\]
where \(E^{(w_{i})}_{CFT}\) represents the undeformed CFT energy in that sector.
Let us first discuss the untwisted sector, where the the undeformed energies are of the schematic form \(\sum_{I=1}^{N}\Delta_{I}+n_{I}\), with \(\Delta_{I}\) the primary operator dimensions (we omit writing the shift by \(c/12\), and set the radius to one) and \(n_{I}\) the total level of the descendants. One type of degeneracies are those within a single copy (fixed \(n_{I}\)), which are the same as in the double-trace case, and thus are not lifted. Another type of degeneracies are those among different copies (fixed \(\sum_{I}n_{I}\)) of the seed CFT. These degeneracies are generically broken when the deformation is first turned on, as the first order correction to the energy is \(\mu\sum_{I}(\Delta_{I}+n_{I})^{2}\). For generic operator dimensions of the seed CFT, the \(\sum\Delta_{I}n_{I}\) term completely breaks any initial degeneracy; if some of the \(\Delta_{I}\) happen to coincide, then the \(\sum n_{I}^{2}\) term will break the degeneracy. A similar discussion holds for the twisted sectors, where the energy at the CFT point is given by \(E=\sum_{i\in cycles}(\Delta_{i}+n_{i})/w_{i}\). Degeneracies within a single cycle will not be broken at any order in perturbation theory, because the flow equation (2.58) within a single cycle is nothing but the standard \(T\bar{T}\) flow equation with an effective parameter \(\mu/w_{i}\). However, the degeneracies corresponding to different ways of distributing the energy among the different cycles, with \(\sum_{i}n_{i}/w_{i}\) fixed, will be broken once we turn on the single-trace \(T\bar{T}\) perturbation. We will ignore the possibility of level crossing at finite \(\mu\).
Thus, the degeneracies are only broken when the deformation is first turned on. Denoting the degenerate subspace at \(\mu=0\) by \({\cal H}_{D^{(0)}}\), and the smaller degenerate subspace at \(\mu\neq 0\) by \({\cal H}_{D}\), then the matrix elements of the single-trace flow operator on \({\cal H}_{D^{(0)}}\setminus{\cal H}_{D}\) are fixed and are given by (3.37) ; the matrix elements on \({\cal H}_{D}\) can be chosen at will, since further tuning \(\mu\) away from zero is not expected to break any additional degeneracies. For this statement to be true, we assume that the spectrum of the seed CFT is generic, i.e. the primary dimensions are arbitrary real numbers subject to the unitarity and all other consistency constraints.
Note that the above result should be consistent with that obtained from _instantaneous_ perturbation theory \(\lambda\rightarrow\lambda+\delta\lambda\), in the limit \(\lambda\to 0\). The expression for the instantaneous correction to the state vector is given by (3.33), with \(|n^{(0)}\rangle\) and \(E^{(0)}_{n}\) now representing the state and energy at a given finite \(\lambda\), and the sum in the first term running over the states outside \({\cal H}_{D}\). As \(\lambda\to 0\), the energy difference between some of these states (namely, those who have inter-cycle degeneracies at \(\lambda=0\)) becomes \({\cal O}(\lambda)\), which implies that the numerator of that expression will receive contributions from different orders in the \(\lambda\) expansion. The structure of these terms is similar to the expected
contribution (3.37), and we expect that the matrix elements that get fixed on \({\cal H}_{D^{(0)}}\setminus{\cal H}_{D}\) precisely coincide with those predicted by general non-degenerate perturbation theory outside \({\cal H}_{D}\), and we need not worry about them. The elements inside \({\cal H}_{D}\) are not fixed, and can be chosen conveniently.
The case of the single-trace \(J\bar{T}\) deformation can be treated in an exactly analogous manner. Since the first order correction to the energy is \(\lambda\sum_{I}q_{I}(\bar{h}_{I}+n_{I})\), with \(\bar{h}_{I}\) the undeformed right-moving dimensions, it follows that the degeneracy among the different cycles will generically be broken as soon as the deformation is turned on. The rest of our conclusions follow straightforwardly.
### The single-trace \(T\bar{T}\) flow operator
Let us now apply this to the single-trace \(T\bar{T}\) deformation. Using (3.36), the matrix elements of the single-trace flow operator, schematically denoted as \({\cal X}_{T_{j}\bar{T}_{I}}\), satisfy
\[[H,{\cal X}_{T_{j}\bar{T}_{I}}]=\sum_{I}\int d\sigma\,{\cal O}_{T_{j}\bar{T}_{ I}}-diag \tag{3.39}\]
where '\(diag\)' now refers to the diagonal matrix elements of the integrated single-trace \(T\bar{T}\) operator on \({\cal H}_{D}\). As discussed in the previous subsection, even though the single-trace \(T\bar{T}\) operator itself is a sum over copies, its integrated version is not, except in the untwisted sector. Since then also \(H=\sum_{I}H_{I}\), it is natural that
\[{\cal X}_{T_{j}\bar{T}_{I}}=\sum_{I}{\cal X}^{I}_{T\bar{T}}\ \ \ \ \ \mbox{in the untwisted sector, outside }\,{\cal H}_{D} \tag{3.40}\]
where \({\cal X}^{I}_{T\bar{T}}\) is the flow operator associated with a single copy of the \(T\bar{T}\) - deformed CFT, which is worked explicitly in [38], at least at the classical level. This relation only need to hold outside \({\cal H}_{D}\), which consists of only in-cycle degeneracies; however, as discussed, the elements of \({\cal X}_{T_{l}\bar{T}_{I}}\) can be chosen at will inside it.
The fact that the flow operator takes this form in the untwisted sector can also be seen very explicitly by studying the flow equation for states in this sector, which are built as sums over all possible permutations \(|n_{\lambda}^{\sigma(1)}\rangle\otimes\ldots|n_{\lambda}^{\sigma(N)}\rangle\) of \(N\) fixed energy eigenstates, \(|n_{\lambda}^{I}\rangle\), in the seed theory. Plugging this into the right-hand-side of (3.33) and using the fact that \(\delta H\) consists of a sum over copies in this sector, we find that the instantaneous energy eigenstates \(|m_{\lambda}\rangle\) (corresponding to \(|p^{(0)}\rangle\) in (3.33)), which in this sector take the form \(|m_{\lambda}^{1}\rangle\otimes\ldots|m_{\lambda}^{N}\rangle\), fully symmetrized, should "click" with \(|n_{\lambda}\rangle\) in all its entries but one, where the particular copy of the \(T\bar{T}\) operator acts. More precisely
\[\partial_{\lambda}|n_{\lambda}\rangle=\sum_{m\neq n}|m_{\lambda}\rangle\, \frac{\langle m_{\lambda}|\int{\cal O}_{T_{l}\bar{T}_{I}}|n_{\lambda}\rangle} {E_{n}^{\lambda}-E_{m}^{\lambda}}=\sum_{perm}\sum_{I}|n_{\lambda}^{1}\rangle \otimes\ldots\sum_{m^{\prime}\neq n^{I}}|m_{\lambda}^{I}\rangle\,\frac{\langle m _{\lambda}^{I}|\int{\cal O}_{T_{l}\bar{T}_{I}}|n_{\lambda}^{I}\rangle}{E_{n}^ {I,\lambda}-E_{m}^{I,\lambda}}\otimes\ldots|n_{\lambda}^{N}\rangle \tag{3.41}\]
where in the second step we used the fact that the energies in the untwisted sector are just sums of the energies of the individual copies. Both sums run only over non-degenerate eigenstates; the sum over permutations should be properly normalised. We have also set all ambiguities in the state due to (intra-cycle) degeneracies to zero; including them would shift the individual copy contributions, without affecting the general structure. Near \(\lambda=0\), one could also check that the fixed matrix elements of \({\cal X}_{T_{j}\bar{T}_{I}}\) do not break the symmetric product orbifold structure, as would be implied by having \(a_{k,l}\neq 0\) between different cycles; however, this is ensured by the fact that the \(\lambda\to 0\) case can be embedded in the generic \(\lambda\neq 0\) one, which does respect the symmetric orbifold structure. Since the terms in the intermediate sums may be written as \(\partial_{\lambda}|n_{\lambda}^{I}\rangle={\cal X}^{I}_{T\bar{T}}|n_{\lambda}^ {I}\rangle\), we find that
\[\partial_{\lambda}|n_{\lambda}\rangle=\sum_{perm}\sum_{I}|n_{\lambda}^{1} \rangle\otimes\ldots\partial_{\lambda}|n_{\lambda}^{I}\rangle\otimes\ldots|n_ {\lambda}^{N}\rangle \tag{3.42}\]
and so the deformed state is just the tensor product of \(T\bar{T}\) - deformed states - thus confirming the fact that single-trace \(T\bar{T}\) preserves the symmetric orbifold structure16
Footnote 16: By contrast, the action of the double-trace \(T\bar{T}\) deformation on the symmetric orbifold is given by
\[\partial_{\lambda}|n_{\lambda}\rangle=\sum_{perm}\sum_{I,J}|n_{\lambda}^{1} \rangle\otimes\ldots\sum_{m^{\prime}\neq n^{I}}|m_{\lambda}^{I}\rangle\,\frac {\langle m_{\lambda}^{I}|T|n_{\lambda}^{I}\rangle}{E_{m}^{\lambda}-E_{n}^{ \lambda}}\otimes\ldots\otimes\sum_{m^{\prime}\neq n^{J}}|m_{\lambda}^{J} \rangle\,\frac{\langle m_{\lambda}^{J}|\bar{T}_{J}|n_{\lambda}^{J}\rangle}{E_{m}^ {\lambda}-E_{n}^{\lambda}}\otimes\ldots|n_{\lambda}^{N}\rangle \tag{3.43}\]
and that the flow operator
in this sector takes the form (3.40).
Footnote 10: This is the case of the \(W\)-twisted sector, which implies that \({\cal X}_{T\bar{T}\,on\,SPO}\) has a bilocal structure, which does not respect the symmetric product form of the state.
In the twisted sector, it is no longer true that the deforming operator is a sum over copies; nevertheless, an explicit expression can still be easily obtained for \({\cal X}_{T_{I}\bar{T}_{I}}\), at least at the (semi)classical level, by following the steps of [38]. More precisely, we have
\[\int d\sigma\,{\cal O}_{T_{I}\bar{T}_{I}}=\left[H,i\!\int d\sigma d\bar{ \sigma}\,G(\sigma-\tilde{\sigma}){\cal H}^{I}(\sigma){\cal P}^{I}(\tilde{ \sigma})\right]+\frac{1}{R}\int d\sigma d\bar{\sigma}\left({\cal H}^{I}( \sigma)T^{I}_{\sigma\sigma}(\tilde{\sigma})-{\cal P}^{I}(\sigma){\cal P}^{I}( \tilde{\sigma})\right) \tag{3.44}\]
Upon summing over the various copies, the left-hand-side of this equation, minus its diagonal piece, corresponds precisely to \([H,{\cal X}_{T_{I}\bar{T}_{I}}]\). To obtain a closed-form expression for it, one may use the \(T\bar{T}\) trace relation in each copy
\[T^{I}_{\sigma\sigma}(\sigma)={\cal H}^{I}(\sigma)-2\mu{\cal O}_{T_{I}\bar{T} _{I}}(\sigma) \tag{3.45}\]
which holds classically as is, and quantum-mechanically up to a total derivative. Plugging this into (3.44) and acting with the whole equation on a twisted sector, in which the expansion of the fields takes the form (3.22) (we are ignoring for now the copies that do not participate in the twist), the integral over \({\cal H}^{I}(\sigma)\) that multiplies \({\cal O}_{T_{I}\bar{T}_{I}}\) in the second term yields a factor of \(H^{(w)}/w\), where \(H^{(w)}\) is the Hamiltonian restricted to the \(w\)-twisted sector and the index '\(w\)' is now a shorthand for the individual \(w\) copies that enter the cycle. Ultimately we find, at the classical level
\[{\cal X}^{(w),cls}_{T_{I}\bar{T}_{I}}=\frac{1}{1+2\mu H^{(w)}/(Rw)}\int d\sigma d \tilde{\sigma}\,G(\sigma-\tilde{\sigma}){\cal H}^{I}(\sigma){\cal P}^{I}( \tilde{\sigma}) \tag{3.46}\]
This expression corresponds to nothing but the seed \({\cal X}_{T\bar{T}}\) on the covering space. One way of showing this is by expanding \({\cal H}^{I}(\sigma)\) above as a function of the undeformed CFT generators \({\cal H}^{(0)}_{I}(\sigma)\) and \({\cal P}^{I}(\sigma)\), using the closed-form expression given in [38]. When acting upon a twisted-sector state, which we choose again to be a single cycle of length \(w\), each of the fields can be expanded in a sum over fractional modes as in (3.22). We then note that the integral over the sum of \(\sigma\), \(\tilde{\sigma}\) ensures that the \(I\) dependence will drop out from the integral, leading to an overall \(w\) factor. The dependence on the Fourier modes of \({\cal H}^{(0)}\) and \({\cal P}\) will be the same as in the double-trace expression, but with \(m\to m/w\). Since each factor of \(\mu\) in the expansion is accompanied by a factor of \({\cal H}^{(0)}_{I}\) or \({\cal P}^{I}\), the overall factor of \(w\) that multiplies the \(\mu^{p}\) term in the expansion is \(w^{-p}\times w\times w\), where the last factor comes from the Fourier transform of the Green's function. The Fourier integrals also bring a factor of \(R^{-(p-2)}\), by dimensional analysis. We therefore notice that the \(w\) and \(R\) dependence is such that it combines into precisely a dependence on \(Rw\), which coincides, upon lifting to the covering space, with the flow operator of the double-trace \(T\bar{T}\) - deformed CFT. Including quantum corrections amounts to replacing factors of \(L_{m/w}\) in the expansion by factors of \(n/wR\) that would result from commuting two such fractional Virasoro modes, in perfect agreement with the general counting. Thus, in the \(w\)-twisted sector, \({\cal X}_{T_{I}\bar{T}_{I}}\) acts just as the descent of \({\cal X}_{T\bar{T}}\) on the covering space to the base cylinder.
More generally, if there are several twisted and untwisted sectors present - as determined by the conjugacy class, \([g]\), of the permutation group - then (3.44) reduces to a sum over sectors which, using the orthogonality of the various subsectors of the Hilbert space, leads to the following expression for the total flow operator \({\cal X}_{T_{I}\bar{T}_{I}}\) when acting on \({\cal H}^{[g]}\)
\[{\cal X}^{[g]}_{T_{I}\bar{T}_{I}}=\sum_{\sigma\in S_{N}}{\cal X}^{(\sigma(1) \ldots\sigma(w_{1}))}_{T_{I}\bar{T}_{I}}+{\cal X}^{(\sigma(w_{1}+1)\ldots \sigma(w_{1}+w_{2}))}_{T_{I}\bar{T}_{I}}+\ldots \tag{3.47}\]
Here \(w_{i}\) are the lengths of the cycles that appear in \([g]\), each contributing with a flow operator constructed along the lines of the previous paragraph, which lifts to the flow operator of the seed \(T\bar{T}\) - deformed CFT on the cylinder of circumference \(Rw_{i}\). The sum over permutations considers all combinations in which the copies enter the cycles of \([g]\), making the full operator well-defined on the Hilbert subspace \({\cal H}^{[g]}\). Note that when all cycles have length one, this reduces to the expression (3.40) for \({\cal X}\) in the untwisted sector.
Being obtained from the commutator (3.39), the above expression for \({\cal X}_{T_{I}\bar{T}_{I}}\) holds except possibly on the diagonal degenerate subspaces. However, as we have just argued, we shouldn't expect the matrix elements on \({\cal H}_{D}\) to be fixed; our expression (3.47) corresponds to a particular choice.
### Symmetries of single-trace \(T\bar{T}\) and \(J\bar{T}\)-deformed CFTs
Having discussed the flow operator, we would now like to study the extended symmetries of single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, following the steps of the double-trace analysis. We therefore define a set of Virasoro generators as17
Footnote 17: Even though we use exactly the same notation as in the double-trace case, it should be clear from the context that these are the Virasoro generators in single-trace \(T\bar{T}/J\bar{T}\) deformed CFTs.
\[\partial_{\lambda}\widetilde{L}^{\lambda}_{m}=[{\cal X}_{T_{\bar{T}}\bar{T}_{ \bar{T}}},\widetilde{L}^{\lambda}_{m}]\;,\;\;\;\;\;\widetilde{L}^{\lambda=0}_{ m}=L^{{}_{CFT}}_{m} \tag{3.48}\]
### Untwisted sector analysis
This equation is very easy to solve in the untwisted sector: since \({\cal X}_{T_{\bar{T}}\bar{T}_{\bar{T}}}\) is a single-trace operator and so is the initial \(L^{{}_{CFT}}_{m}\), it follows that the flowed \(\widetilde{L}^{\lambda}_{m}\) will simply be
\[\widetilde{L}^{\lambda}_{m}=\sum_{I}\widetilde{L}^{I,\lambda}_{m} \tag{3.49}\]
where the \(\widetilde{L}^{I,\lambda}_{m}\) represent the solution to the flow equation in a single copy of a \(T\bar{T}\) / \(J\bar{T}\) - deformed CFT. The same definition can be extended to all the other Virasoro and Kac-Moody generators of the seed symmetric product orbifold. Their algebra consists of two-commuting copies of the Virasoro (\(\ltimes\) Kac-Moody) algebra by construction. As explained in the introductory subsection, the non-trivial check that these generators represent symmetries of the theory is to prove that they are conserved. For this, we compute their commutator with the Hamiltonian
\[[H,\widetilde{L}^{\lambda}_{m}]=\bigg{[}\sum_{I}H_{I},\sum_{J}\widetilde{L}^{ J,\lambda}_{m}\bigg{]}=\sum_{I}\alpha^{I}_{m}\widetilde{L}^{I,\lambda}_{m}\;, \;\;\;\;\;\alpha^{I}_{m}\equiv\alpha_{m}(H_{I},P_{I}) \tag{3.50}\]
where the \(\alpha_{m}\) for the cases of interest are given in (3.6). This immediately implies that \(\sum_{I}e^{i\alpha^{I}_{m}t}\widetilde{L}^{I,\lambda}_{m}\) satisfies the conservation equation (3.4). While one may worry that different operators \(\alpha^{I}_{m}\) appear in the time-dependent factors above, their expectation value in any state of the symmetric orbifold is the same, as it cannot depend on the particular copy.
### Twisted sector analysis
In the twisted sectors on the cylinder, \(\widetilde{L}^{\lambda}_{m}\) and \({\cal X}_{T_{\bar{T}}\bar{T}_{\bar{T}}}\) no longer take the form of a sum over copies. As explained in the introductory subsection, in this sector we will generically obtain fractionally-moded generators, of which the integer-moded generators are a particular case. We will therefore discuss the preservation of the fractional Virasoro and Kac-Moody symmetries generically. We again define them via the flow equation
\[\partial_{\lambda}\widetilde{L}^{\lambda}_{m/w}=[{\cal X}_{T_{\bar{T}}\bar{T} _{\bar{T}}},\widetilde{L}^{\lambda}_{m/w}]\;,\;\;\;\;\;\;\widetilde{L}^{ \lambda=0}_{m/w}=\widetilde{L}^{{}_{CFT}}_{m/w} \tag{3.51}\]
This is a perfectly well-defined equation in the Hilbert space of the \(T\bar{T}/J\bar{T}\) symmetric orbifold. Since \({\cal X}_{T_{\bar{T}}\bar{T}_{\bar{T}}}\) belongs to the untwisted sector, it infinitesimally takes a \(w\)-twisted sector operator to another one. The standard Virasoro generators are obtained when \(m\) is a multiple of \(w\).
We would now like to show that the solution to this flow equation corresponds precisely to the \(T\bar{T}\) solution for the \(\widetilde{L}^{\lambda}_{m}\) on the covering space - which is a cylinder of size \(Rw\) - reinterpreted on the base via (3.23). As discussed at the beginning of this section, this map makes no use of conformal invariance, but only sews together the different copies. For simplicity, we assume again that the twist acts on the first \(w\) copies of the seed, and as identity on the remaining \(N-w\) ones. A summation over \(S_{N}\) will render the final result gauge-invariant.
As we have already discussed, in the \(w\)-twisted sector, the flow operator on the base simply corresponds to the flow operator \({\cal X}_{T\bar{T}}\) of the seed theory on the covering cylinder. Also, in the undeformed CFT, the fractional Virasoro modes correspond to integer modes on the covering. It follows that the solution to (3.51) will be the same as the solution to (3.3) for the seed theory, but on a cylinder of radius \(Rw\). Since we particularized our discussion to a given number of copies on
which the single-cycle twist acts, only one of the \({\cal X}^{(w)}\) factors in (3.47) will act non-trivially on the generator, but the final expressions will be symmetrized with respect to \(S_{N}\). For example, in the trivial case \(w=1\), we obtain \(\widetilde{L}^{A}_{m}\) in the seed for the first copy and the symmetrization will yield the untwisted sector result (3.49). The same procedure can be applied to the right-moving Virasoro generators, as well as to the Kac-Moody currents.
In order for the solution to (3.51) to define a set of conserved operators, we first need to compute its commutator with the Hamiltonian. This may be simply evaluated on the cover, where it yields \(\alpha_{m}(H^{cov},...,wR)\widetilde{L}^{cov}_{m}\), where the dots stand for the other conserved quantities that enter the definition (3.6) of \(\alpha_{m}\) (i.e., momentum for \(T\bar{T}\), and also \(U(1)\) charge for \(J\bar{T}\)) and we have dropped the index '\(\lambda\)' from the generator, to lighten the notation. Translating this to the base cylinder, we find \(\alpha_{m}(H^{(w)},...,wR)\widetilde{L}_{m/w}\). One may also derive these expressions from the non-linear relation (2.56) between the undeformed and deformed spectrum in the twisted sector, which implies a (non-linear and \(w\) - dependent) relation between \(H^{(w)}\) and \(\widetilde{L}^{(w)}_{0}\). That this relation depends on the particular sector is not surprising, given that the definition of \(\widetilde{L}_{0}\) is sector-dependent.
Adding the appropriate time-dependence to ensure conservation, and taking into account all the possible choices of copies that can enter the cycle, the full answer for the fractional Virasoro generators then takes the form
\[\widetilde{L}_{m/w}(t)=\sum_{\sigma\in S_{N}}e^{i\alpha_{m}(H^{(\sigma(1)... \sigma(w))},...,wR)t}\widetilde{L}^{(\sigma(1)...\sigma(w))}_{m/w} \tag{3.52}\]
where the superscripts indicate the particular copies entering the non-trivial cycle. Note this reduces to the correct answer in the untwisted sector (\(w=1\)). The same holds for the right-moving Virasoro and Kac-Moody generators. The algebra of the fractional Virasoro generators above is the same as in the undeformed CFT, as follows from the definition (3.51). In particular, for \(m\) a multiple of \(w\) we obtain the integer Virasoro modes, which are present in any sector of the theory.
Note that in principle, these modes could also be obtained by flowing \({\cal H}_{L,R}(\sigma)\) and then performing the Fourier transform, as in [38]. Even though the operators we start from are single-trace operators, \(\sum_{I}{\cal H}^{I}_{L,R}(\sigma)\), the solution to the flow equation looks differently in the different sectors because \({\cal X}_{T_{I}\bar{T}_{I}}\) does. The result of the flow equation will lead to a notion of emergent field-dependent coordinates, which will also be defined in the twisted sectors; from the structure of the flow we see they will correspond precisely to the field-dependent coordinates \(u,v\) on the covering space.
To conclude this section, we have explicitly shown that single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs contain operators that are conserved and satisfy two commuting copies of the Virasoro (\(\ltimes\) Kac-Moody) algebra, showing they possess the corresponding symmetry. The central charge and \(U(1)\) level of the algebra are simply \(N\) times those of the seed theories. In twisted sectors, one finds also fractional Virasoro (\(\ltimes\) Kac-Moody) conserved modes.
### Other bases of generators
As discussed in section 3.1, the flowed Virasoro generators may not provide the most natural basis of generators of the extended symmetries of these theories. In single-trace \(J\bar{T}\) - deformed CFTs, the most natural basis of left-moving generators is given by the standard generators of conformal and affine \(U(1)_{L}\) symmetries. In the untwisted sector, they take the form
\[L_{m}=\sum_{I}L^{I}_{m}=\sum_{I}\widetilde{L}^{I}_{m}+\frac{\lambda}{R}H^{I}_{ R}\tilde{J}^{I}_{m}+\frac{\lambda^{2}k}{8\pi R}H^{2}_{R,I}\,\delta_{m,0}\;, \;\;\;\;\;J_{m}=\sum_{I}J^{I}_{m}=\sum_{I}\widetilde{J}^{I}_{m}+\frac{\lambda k }{4\pi}H^{I}_{R}\,\delta_{m,0} \tag{3.53}\]
and similarly for the right-moving generators, where we are now working in the convention in which the \(L_{m}\) are dimensionful. In the \(w\)-twisted sector, one can write similar relations by considering the seed on the cylinder of circumference \(Rw\), as instructed by the solution of the flow equations
\[L_{m/w}=\widetilde{L}_{m/w}+\frac{\lambda}{Rw}H^{(w)}_{R}\tilde{J}_{m/w}+\frac {\lambda^{2}k}{8\pi Rw}H^{(w)2}_{R}\,\delta_{m,0}\;,\;\;\;\;\;\;J_{m/w}= \widetilde{J}_{m/w}+\frac{\lambda k}{4\pi}H^{(w)}_{R}\,\delta_{m,0} \tag{3.54}\]
where \(H^{(w)}_{R}\) is the (globally-defined) right-moving Hamiltonian, restricted to the \(w\) - twisted sector, with eigenvalues the \(w\)-twisted sector energies (2.64). One may easily check that for \(m=0\), these
yield the correct expression (2.64) for the deformed energies in the \(w\) - twisted sector, taking into account the fact that the eigenvalue of \(\widetilde{L}_{0}\) is identical to that in the undeformed CFT. Note that, throughout this section, the twisted generators will be built from the first \(w\) copies of the seed, and symmetrization is assumed only for the final expressions.
Rewriting the relations above in terms of the effective Kac-Moody level in the \(w\)-twisted sector \(k^{(w)}=wk\), we obtain that the relation between the two sets of generators is given by spectral flow with \(\lambda H_{R}^{(w)}/w\). Since the relation is non-linear, the Poisson algebra spanned by the untilded generators is non-linear in these generators.
Note that for \(m\) a multiple of \(w\), we obtain the global generators of extended symmetries, which correspond to the Fourier integrals of state (and thus sector) - independent quantities over the base cylinder. This can be established, at least classically, as follows: the solution for the generators \(L_{m},\bar{L}_{m}\) is the same as the double-trace solution, uplifted to the covering space. E.g., for the left-movers, we have, using (3.23)
\[L_{m}=L_{mw}^{cov}=\int_{0}^{wR}d\tilde{\sigma}\,e^{2\pi im\tilde{\sigma}/R} \mathcal{H}_{L}(\tilde{\sigma}) \tag{3.55}\]
with \(\mathcal{H}_{L}(\tilde{\sigma}+wR)=\mathcal{H}_{L}(\tilde{\sigma})\). This integral can be reduced to the integral on an interval of size \(R\) of \(\sum_{I=0}^{w-1}\mathcal{H}_{L}(\tilde{\sigma}+RI)\), which is periodic with period \(R\). Thus, the integral above becomes a local integral of the periodic current on the base space. The same argument can also be applied to the right-movers, the only difference being that the integrand is now a non-linear function of the fields. We also expect it to extend to the quantum case; note it implies that \(L_{m},\bar{L}_{m}\) (whose most appropriate quantum definition may or may not exactly coincide with (3.8), (3.9) with \(\lambda\to\lambda/w\)) are the physical symmetry generators in the full theory.
If \(L_{m}\), \(J_{m}\) and their right-moving counterparts are the global symmetry generators in single-trace \(J\bar{T}\) - deformed CFTs then, given that the relation between them and \(\widetilde{L}_{m}\), \(\widetilde{J}_{m}\), etc. is that of spectral flow with parameter \(\lambda H_{R}/w\), it follows that the flowed generators are explicitly sector-dependent. This fact is not suprising, given that the flow operator used to define these generators also depends explicitly on the sector of the theory. It appears possible that requiring at most implicit sector dependence of the symmetry generators in single-trace \(T\bar{T}/J\bar{T}\) - deformed CFTs may provide a criterion for selecting the physical basis of generators in these theories.
In single-trace \(T\bar{T}\) - deformed CFTs, one may argue, at least classically, that the natural generators of "unrescaled" field-dependent coordinate transformations in the untwisted sector of the symmetric product orbifold should be
\[Q_{m}=\sum_{I}Q_{m}^{I}=\sum_{I}R\,\widetilde{L}_{m}^{I}/R_{u}^{I}\;,\quad \mbox{with}\quad\quad R_{u}^{I}=R+2\mu H_{R}^{I} \tag{3.56}\]
where the relevant classical expression for \(\widetilde{L}_{m}\) for the \(T\bar{T}\) deformation is given in (3.12); dividing by the field-dependent radius factors yields an expression for \(Q_{m}\) that is the integral of a quasi-local current. Once the appropriate quantum definition of the "unrescaled" generators of the field-dependent symmetries is understood in the seed theory, the result generalizes trivially in the untwisted sector as above.
The algebra of these generators is a sum over copies of the non-linearly-deformed Virasoro algebras obtained in the double-trace case, given up to \(\mathcal{O}(\hbar^{2})\) by
\[[Q_{m},Q_{n}]=2\pi\hbar(m-n)\sum_{I}\frac{Q_{m+n}^{I}}{R+2\mu H_{R}^{I}}+(m-n )\sum_{I}\frac{8\pi\hbar\,\mu^{2}H_{R}^{I}Q_{m}^{I}Q_{n}^{I}}{R\,R_{u}^{I}R_{ H}^{I}}+\frac{\pi^{2}c\hbar\,m^{3}}{3}\sum_{I}\frac{1}{(R_{u}^{I})^{2}} \delta_{m+n} \tag{3.57}\]
It is interesting to note that _new_ operators appear on the right-hand-side in the single-trace case, rather than a product of operators that were already among the generators, as in the double-trace. As we already discussed, it would be good to understand more deeply whether this basis of operators may be preferred for physical reasons and, if so, what is the significance of this non-linear algebra.
In the \(w\)-twisted sector, one can introduce new fractional generators by performing the division
on the covering space of circumference \(Rw\), with the result
\[Q_{m/w}=\frac{Rw\,\widetilde{L}_{m/w}}{R_{u}^{(w)}} \tag{3.58}\]
with \(R_{u}^{(w)}=Rw+2\mu H_{R}^{(w)}\). For \(m\) multiple of \(w\) we obtain the integer versions on the base. These operators may be interpreted as implementing field-dependent coordinate transformations on the covering space, using a Fourier basis. The algebra of these operators is obtained by replacing \(Q^{I},H_{R}^{I}\) by \(Q^{(w)},H_{R}^{(w)}\) in the above, \(m\) by \(m/w\), the sums over copies with sums over cycles. Interestingly, if we take the expectation value of this algebra (restricted to its integer modes) in a high-energy state, the result precisely coincides with that of the asymptotic symmetry group analysis of the asymptotically linear dilaton black hole backgrounds performed in [39]. Note this effectively amounts to replacing \(\mu\rightarrow\mu/N\) in the double-trace algebra (3.15), which is precisely what was found by the holographic analysis.
### Higher spin currents
As discussed in the previous section, \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs also preserve the KdV charges associated with integrability. In their single-trace version, it is natural to introduce the single-trace analogues of the \(\tilde{I}_{s}\), which in the untwisted sector are given as a sum over copies of the corresponding expressions (3.16) - (3.17) with \(L_{n}^{I}\rightarrow\widetilde{L}_{n}^{I}\). Their conservation follows trivially from the conservation of the flowed KdV charges in the seed \(T\bar{T}/J\bar{T}\) - deformed CFT, given that the Hamiltonian is a sum over the Hamiltonians in each copy. In the twisted sectors, their conservation follows from that on the covering space.
The symmetric product orbifold of a two-dimensional CFT possesses, however, many other higher spin conserved currents, associated to its much larger symmetry algebra. As explained in e.g. [40], these higher-spin primary currents only exist thanks to multi-trace contributions to an otherwise non-primary field. One such higher-spin current that has been given as an example therein is, for \(R=2\pi\) and specialising to the untwisted sector
\[(W_{4})_{n}=\sum_{I=1}^{N}\sum_{m}L_{m}^{I}L_{n-m}^{I}-\frac{3}{10}(n+2)(n+3) \sum_{I}L_{n}^{I}-\frac{\frac{22}{5c}+1}{N-1}\sum_{m,I\neq J}L_{m}^{I}L_{n-m} ^{J} \tag{3.59}\]
One may wonder whether this current could be transported along the single-trace \(T\bar{T}/J\bar{T}\) flow in a similar manner to how the Virasoro and KdV currents were transported, i.e. by requiring it to be covariantly constant along the flow, which simply amonts to replacing the Virasoro modes by their flowed counterparts. Taking \(n=0\) for simplicity, we would like to show that this simple procedure does not lead to a conserved charge.
For this, we compute the commutator of \((\widetilde{W}_{4})_{0}\) with the Hamiltonian. The terms that correspond to single-trace sums are conserved, following the same steps as we used to show the conservation of the KdV charges. However, the multitrace terms lead to a different result, due to the fact that the commutator \([L_{m}^{I},\alpha_{n}^{J}]\) only takes the form (3.11) if \(I=J\), and is zero otherwise. Using this, we find, for \(I\neq J\)
\[[\widetilde{L}_{m}^{I}\widetilde{L}_{-m}^{J},H]=\widetilde{L}_{m}^{I}\alpha_{ -m}^{J}\widetilde{L}_{-m}^{J}+\alpha_{m}^{I}\widetilde{L}_{m}^{I}\widetilde{L} _{-m}^{J}=(\alpha_{m}^{I}+\alpha_{-m}^{J})\widetilde{L}_{m}^{I}\widetilde{L} _{-m}^{J} \tag{3.60}\]
In a CFT, \(\alpha_{m}^{I}=m\hbar=-\alpha_{-m}^{J}\) so this term vanishes; however, in \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, this is no longer the case. To leading order in \(\mu\), \(\alpha_{m}^{I}=m\hbar-2\mu m\hbar(H^{I}-P^{I})\), implying that the leading term breaking the conservation is proportional to \(\mu\hbar\sum_{m}m(H_{R}^{I}-H_{R}^{J})\widetilde{L}_{m}^{I}\widetilde{L}_{-m} ^{J}\), a combination that does not appear to vanish. Similar results hold in single-trace \(J\bar{T}\) - deformed CFTs, on the right-moving side. Thus, the naive flowed charge associated to this higher-spin symmetry is not conserved in the deformed theory, thus confirming the idea that the conservation of the flowed Virasoro generators and KdV charges is a special feature of these operators, which does not extend to arbitrary currents in the theory. This analysis points towards the conclusion that the associated higher spin symmetries are broken by the single-trace \(T\bar{T}\) / \(J\bar{T}\) deformations; one should ascertain though that it is not possible to construct corrections to the charges that would restore their conservation.
## 4 Correlation functions
In this section, we would like to discuss correlation functions of operators in symmetric product orbifolds of \(T\bar{T}\) and, especially, \(J\bar{T}\) - deformed CFTs. Since these theories are neither conformal, nor local, our focus and methods will naturally be quite different from the standard discussion of correlation functions in symmetric product orbifolds of CFTs [71, 72, 73, 74], which is centered around computing correlators of twist operators using non-trivial covering maps, and makes essential use of the conformal transformation properties of the operators in question. We will instead concentrate on momentum-space operators, which are natural to consider in a non-local theory, and our main goal will be to understand how to choose a special basis of these operators and compute their correlation functions in terms of the correlators of the undeformed symmetric orbifold CFT.
Our main focus will be \(J\bar{T}\) - deformed CFTs, for whose double-trace version [41] has proposed a concrete basis of "primary operator analogues" and computed their correlation functions exactly. The main goal of this section will be to adapt this prescription to the single-trace \(J\bar{T}\) case and use it to compute correlation functions of both untwisted and twisted-sector operators. It is worth mentioning that, quite recently, [43] has also put forth a special basis of operators in \(T\bar{T}\) - deformed CFTs and computed their correlation functions, obtaining similar expressions. While a generalisation of these results to the single-trace case would be both interesting and likely possible using their formalism, we do not address this problem here, mainly because it would require a very different method than the one we use for the \(J\bar{T}\) case.
The correlation functions in the symmetric product orbifold of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs can then be compared to the correlation functions evaluated using worldsheet techniques in the holographic setups that have been related to these deformations. More precisely, they are expected to match the correlation functions of vertex operators associated to long strings18 in these backgrounds, which are well described by a symmetric product orbifold. Such worldsheet correlation functions were recently computed in [24] for the case of the asymptotically linear dilaton background, which is related to the single-trace \(T\bar{T}\) deformation; their large-momentum behaviour in the untwisted sector agrees with that found by [43]. One may similarly compute correlation functions of long string vertex operators in warped AdS\({}_{3}\) by adapting the results of [20] along the lines of [24], and then compare them with our single-trace \(J\bar{T}\) result; we find a slight disagreement on the non-local side that we comment upon.
Footnote 18: By contrast, the short string sector is not described by a symmetric product orbifold, and neither their spectrum, nor their correlation functions [21, 22, 23] match those in \(T\bar{T}/J\bar{T}\) - deformed CFTs.
For completeness, we start this section with a review of the correlation functions in double-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, focusing on the explicit proposal of [41] for the latter case.
### Review of correlation functions in \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs
In local QFTs, there is a special set of operators whose correlation functions are interesting to study, namely local operators. This set is further specialised in CFTs, where much of the focus is on operators that transform as primaries under conformal transformations, as their correlation functions are highly constrained by conformal invariance.
Since \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs are non-local, it is not a priori clear which are the natural operators to consider, if a preferred basis exists at all [75]. Indeed, due to the non-locality of these theories, operators are best defined in momentum space. The computation of their correlation functions using e.g. conformal perturbation theory yields divergences, which need to be subtracted via counterterms. Since in a non-local QFT the structure of the allowed counterterms is in general not known, the finite part of the correlator may itself become ambiguous. The situation is under better control in \(J\bar{T}\)-deformed CFTs, whose locality and \(SL(2,\mathbb{R})\) invariance on the left-moving side single out a set of primary operators under these symmetries, for which at least the left-moving part of the correlation function is fixed [42]; however, these questions remain for the right-moving, non-local piece of the correlator.
Despite these concerns and complications, there has been much recent progress in the computation of correlation functions of both \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs. Upon making a judicious choice of the operators whose correlation functions one would like to study, it can be shown that the end
result is a simple integral transform of the correlation functions of the original CFT. For the case of two-and-three point functions, this integral simply yields the CFT momentum-space correlator, with the conformal dimensions replaced by certain momentum-dependent combinations. Thus, the deformed correlators, though explicitly non-local, are directly and universally determined by the correlation functions in the undeformed CFT, indicating that both \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs have a structure that is as rigid as that of two-dimensional CFTs.
### \(T\bar{T}\) - deformed CFTs
The effect of the \(T\bar{T}\) perturbation on correlation functions can be studied at small coupling using conformal perturbation theory, where one can easily note that it induces a momentum-dependent correction to the conformal dimension [76]. The first all-orders analysis of the correlation functions of \(T\bar{T}\)-deformed CFTs has been performed by [44], who also studied their leading UV divergences and showed they can be absorbed into a non-local renormalization of the operators. More recently, [43] approached this problem using the path integral formulation of the \(T\bar{T}\) deformation in terms of coupling the undeformed CFT to JT gravity [77, 78]. As it is most clear from its vielbein formulation, this description relates the \(T\bar{T}\) - deformed dynamics to that of the original CFT, but seen through a set of "dynamical coordinates" that are related to the \(T\bar{T}\) ones in a universal, but field-dependent fashion. These coordinates parametrize a flat target space, identified with the space of the undeformed CFT. The basis of operators considered in [43] corresponds to the original CFT operators on this space or, alternatively, their Fourier transform with respect to the target space coordinates. The correlation function of these momentum-space operators is obtained by carefully performing the JT path integral and absorbing the UV divergences into a renormalization of the operators. The procedure is rather subtle and involved, and the final result takes the form
\[\langle{\cal O}_{1}(p_{1})\ldots{\cal O}_{n}(p_{n})\rangle\propto\delta(\sum_ {i=1}^{n}p_{i})\int\prod_{i}d^{2}\sigma_{i}\,\delta(\sigma_{1})e^{i\sum_{i}p_ {i}\sigma_{i}}{\cal F}(p_{i})\langle{\cal O}_{1}(\sigma_{1})\ldots{\cal O}_{n }(\sigma_{n})\rangle_{CFT}\prod_{i<j}\left(\Lambda|\sigma_{ij}|\right)^{\frac {p_{pi}\cdot p_{j}}{\pi}} \tag{4.1}\]
where \(\Lambda\) is a renormalization scale and \({\cal F}(p_{i})\) is a smooth function of the momenta that grows at most polynomially at large \(p_{i}\). If one ignores this term and considers e.g. the two-point function, one finds it corresponds precisely to the CFT momentum-space correlator
\[\langle{\cal O}(p,\bar{p}){\cal O}(-p,-\bar{p})\rangle=\frac{(2\pi)^{2}}{2^{2 (h+\bar{h})}\sin(\pi(h+\bar{h}))}\,\frac{p^{2h-1}\bar{p}^{2\bar{h}-1}}{\Gamma (2h)\Gamma(2\bar{h})} \tag{4.2}\]
but with the dimensions replaced by the momentum-dependent combinations (in our conventions)
\[h=h(p,\bar{p})=h_{{ CFT}}+\frac{\mu}{\pi}p\bar{p}\,,\qquad\bar{h}=\bar{h}(p, \bar{p})=\bar{h}_{{ CFT}}+\frac{\mu}{\pi}p\bar{p} \tag{4.3}\]
This is precisely the answer obtained in [24] by computing the correlation functions of long string worldsheet vertex operators, as we review in section 4.3. It is interesting to ask whether a different renormalization of the operators in [43] could reproduce this exact formula.
### \(J\bar{T}\) - deformed CFTs
As already stated, the main goal of this section is to compute correlation functions of both untwisted and twisted-sector operators in single-trace \(J\bar{T}\) - deformed CFTs. For this, we will adapt the prescription of [41] for defining appropriate analogues of primary operators and computing their correlation functions. In order to facilitate the generalisation of these results to the single-trace case, we present the construction of [41] in a slightly different fashion, which we hope also makes its physical interpretation more transparent.
This construction relies on the interplay of physical and flowed Virasoro generators in \(J\bar{T}\) - deformed CFTs. Let us start by discussing the left-moving sector, where the meaning of the various operators is very clear. As we already explained, the standard \(SL(2,\mathbb{R})_{L}\) conformal symmetry of
the theory unambigously identifies a set of left primary states whose conformal dimensions become momentum-dependent due to the irrelevant deformation [79]
\[h(\bar{p})=h^{[0]}+\lambda q^{[0]}\bar{p}+\frac{\lambda^{2}k\bar{p}^{2}}{4}\;, \;\;\;\;\;\;q(\bar{p})=q^{[0]}+\frac{\lambda k\bar{p}}{2} \tag{4.4}\]
where \(h^{[0]},q^{[0]}\) are the undeformed left conformal dimension and charge. Note this relation takes precisely the form of a spectral flow by \(\lambda\bar{p}\). To simplify the expressions in this section, we have rescaled the deformation parameter \(\lambda\to 2\pi\lambda\) with respect to the previous ones. We also set \(R=2\pi\).
The state-operator correspondence, which is still valid [80], then predicts the existence of a set of primary operators with these conformal dimensions, which obey the standard Ward identities with respect to the true conformal generators \(L_{m},J_{m}\), and for which the left-moving part of the correlation function is fixed by \(SL(2,\mathbb{R})_{L}\) in the standard way. To construct them, one starts by formally defining the 'flowed' operators [64]
\[\partial_{\lambda}\widetilde{\mathcal{O}}^{\lambda}(\zeta,\bar{\zeta})=[ \mathcal{X}_{J\bar{T}},\widetilde{\mathcal{O}}^{\lambda}(\zeta,\bar{\zeta})] \tag{4.5}\]
where \(\zeta,\bar{\zeta}\) are the coordinates on the cylinder and \(\mathcal{X}_{J\bar{T}}\) is the operator that drives the flow of the energy eigenstates in the \(J\bar{T}\) - deformed CFT19. By construction, their correlation functions in the flowed vacuum state and their commutation relations with \(\widetilde{L}_{n},\widetilde{J}_{n}\) are identical to those in the undeformed CFT
Footnote 19: The flow is defined on a fixed time slice, say \(\tau=0\). The operator at \(\tau\neq 0\) is simply defined as \(\mathcal{O}_{CFT}(\tau,\sigma)=e^{iH_{CFT}\tau}\mathcal{O}_{CFT}(0,\sigma)e^{ -iH_{CFT}\tau}\), and the whole expression is then flowed.
\[[\widetilde{L}_{n},\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})]=e^{n\zeta} \bigg{(}n\,h^{[0]}\,\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})+\partial_{ \zeta}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})\bigg{)}\;,\;\;\;\;\;\;[ \widetilde{J}_{n},\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})]=e^{n\zeta}q^{[0 ]}\,\widetilde{\mathcal{O}}(\zeta,\bar{\zeta}) \tag{4.6}\]
and similarly on the right-moving side. Despite this fact, the \(\tilde{\mathcal{O}}(\zeta,\bar{\zeta})\) do not correspond to physical operators in the theory and are non-local even on the left-moving side; thus, \(\zeta,\bar{\zeta}\) should be simply viewed as labels inherited from the undeformed CFT.
The operators we are interested in should instead be primary with respect to the untilded Virasoro generators (3.8), which in our new conventions read
\[L_{m}=\widetilde{L}_{m}+\lambda H_{R}\widetilde{J}_{m}+\frac{\lambda^{2}k}{4} \,H_{R}^{2}\,\delta_{m,0}\;,\;\;\;\;\;J_{m}=\widetilde{J}_{m}+\frac{\lambda k }{2}\,H_{R}\,\delta_{m,0} \tag{4.7}\]
with left dimensions given by (4.4). In order for these dimensions to make sense, they should also be eigenoperators of \(H_{R}\), with eigenvalue \(\bar{p}\). We will find it convenient to work in a mixed basis, \((\zeta,\bar{p})\). The operators should satisfy
\[[H_{R},\mathcal{O}(\zeta,\bar{p})]=\bar{p}\,\mathcal{O}(\zeta,\bar{p}) \tag{4.8}\]
Since the relation between the undeformed and deformed dimension is given by spectral flow, the relation between the physical and tilded operator should involve dressing by an appropriate vertex operator. Note, however, that \(\widetilde{\mathcal{O}}\) already carries the correct charges with respect to \(J_{0}\) and \(L_{0}\), implying that we should remove by hand the zero mode of the vertex operator involved. Our Ansatz is, therefore
\[\mathcal{O}(\zeta,\bar{p})=\int d\bar{\zeta}\,e^{-\bar{\zeta}\bar{p}}: \widetilde{\mathcal{V}}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta}):e^{\mathcal{ V}\mathcal{O}\zeta+\widetilde{\mathcal{V}}\mathcal{O}\bar{\zeta}} \tag{4.9}\]
where the normal-ordered dressed operator is given by
\[:\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})\!:\;\equiv \;\widetilde{\mathcal{V}}^{+}_{\eta}(\zeta)\widetilde{\mathcal{V}}^{\mp}_{\eta }(\bar{\zeta})\,\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})\,\widetilde{ \mathcal{V}}^{-}_{\eta}(\bar{\zeta})\widetilde{\mathcal{V}}^{-}_{\eta}(\zeta) \tag{4.10}\]
Here, \(\widetilde{\mathcal{V}}^{\pm}\) represent the positive and, respectively, negative-frequency parts of the dressing operator and the \(\widetilde{\ }\) stands for the fact that they are simply the \(J\bar{T}\) flow of an identical expression in the undeformed CFT. With foresight, we have included vertex operator dressings for both the left and right-movers, and have left the spectral flow parameter arbitrary in order to be able to reuse this
computation in the single-trace case. The argument above yields the following expression for the left dressing operators
\[\widetilde{\mathcal{V}}^{+}_{\eta}(\zeta)=e^{\eta\sum_{n=1}^{\infty}\frac{1}{n} \widetilde{J}_{-n}e^{n\zeta}}\;,\qquad\widetilde{\mathcal{V}}^{-}_{\eta}(\zeta) =e^{\eta\left(\widetilde{J}_{0}\,\zeta-\sum_{n=1}^{\infty}\frac{1}{n} \widetilde{J}_{n}e^{-n\zeta}\right)} \tag{4.11}\]
Note in (4.9) we have also allowed for a correction, \(\mathcal{Y}_{\mathcal{O}}\), to the zero mode of the current, which commutes with all the left generators and will be fixed by the commutation relations of \(\mathcal{O}(\zeta,\bar{p})\) with the physical Virasoro generators.
In order to prepare the ground for generalizing these results to the single-trace case, it will be useful to compute of the Ward identities obeyed by \(\mathcal{O}(\zeta,\bar{p})\) in two steps: first, we compute the commutation relations of \(:\!\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}\) with the left-moving flowed generators using the explicit expressions (4.11) for the dressings; then, we assemble them into commutation relations of the non-linear combinations (4.7) with \(\mathcal{O}(\zeta,\bar{p})\). The advantage of performing the first step separately is that the commutation relations of \(:\!\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}\!:\) with the flowed generators are _identical_ to the corresponding expressions in the undeformed CFT, given that the flow acts by simple conjugation. The result is
\[[\widetilde{J}_{m},\,;\!\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}(\zeta, \bar{\zeta})\!:]=e^{m\zeta}\bigg{(}q^{[0]}+\frac{\eta k}{2}(1-\delta_{m,0}) \bigg{)}\,;\!\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta}) \!\!: \tag{4.12}\]
\[[\widetilde{L}_{m},\,;\!\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})\!:]=e^{m\zeta}\left(m(h^{[0]}+\eta q^{[0]}+\frac{k\eta^{2}}{4})- \frac{\eta^{2}k}{4}(1-\delta_{m,0})+\partial_{\zeta}\right):\!\widetilde{ \mathcal{V}}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})\!\!:-\eta\,;\! \widetilde{\mathcal{V}}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta})\!\!:\, \widetilde{J}_{m}\]
and corresponds, as it is clear from (4.11), to the CFT Ward identities for a normal-ordered left vertex operator of charge \(q^{[0]}+\eta k/2\), but with part of its zero mode left out (so that the charge carried with respect to \(\widetilde{J}_{0}\) is just \(q^{[0]}\)). Note the \(\widetilde{\widetilde{\mathcal{V}}}_{\eta}\) (constructed from right-moving current modes only) do not affect the left Ward identities.
It is then not hard to check that the commutation relations of \(\mathcal{O}(\zeta,\bar{p})\) with the generators (4.7) are precisely those of a primary of the expected dimension and charge, provided we set \(\eta=\lambda\bar{p}\) and
\[\mathcal{Y}_{\mathcal{O}}=\lambda qH_{R}+\lambda\bar{p}q^{[0]}+\frac{k\lambda ^{2}\bar{p}^{2}}{4} \tag{4.13}\]
This can also be seen from the fact that the left-moving piece of \(\mathcal{O}(\zeta,\bar{p})\) organises into a left vertex operator of charge \(q\) in the deformed theory, i.e. constructed from the current modes \(J_{m}\): the dressings (4.11) shift the charge from \(q^{[0]}\) to \(q\) for the non-zero modes, whereas the the first term in the expression for \(\mathcal{Y}_{\mathcal{O}}\) shifts the \(\widetilde{J}_{0}\) term in (4.9) (whose total coefficient is \(2q/k\)) to \(J_{0}\). The last two terms correspond to the difference \(h(\bar{p})-h^{[0]}\) in the operator dimensions, which becomes important when translating the operators defined on the cylinder to those on the plane via the standard map \(\mathcal{O}_{(cyl)}(\zeta)=e^{h\zeta}\mathcal{O}_{(pl)}(z)\). Finally, the total charge of the operator, as measured by \(J_{0}\), is \(q\). One may check that, upon performing a conformal transformation to the plane, the left-moving piece of (4.9) corresponds precisely to a local left vertex operator in the deformed theory with charge \(q\), with no mismatch between the coefficients of the zero and non-zero modes.
Having understood in detail the construction of the left-moving piece of the operator - which is entirely fixed by conformal symmetry - via the dressing discussed above, the proposal of [41] consists in choosing the dressing operators for the right-movers to have an identical form, but with all quantities barred. Requiring that the resulting operator be an eigenoperator of \({H_{R}}^{20}\), we obtain the following expression for \(\widetilde{\mathcal{Y}}_{\mathcal{O}}\)
\[\widetilde{\mathcal{Y}}_{\mathcal{O}}=\lambda\bar{p}(\widetilde{J}_{0}- \widetilde{J}_{0})+\lambda qH_{R}+\lambda\bar{p}q^{[0]}+\frac{k\lambda^{2}\bar{ p}^{2}}{4} \tag{4.14}\]
The right-moving piece of \(:\!\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}:\) can again be interpreted as a vertex operator in the undeformed CFT, of charge \(\bar{q}^{[0]}+k\eta/2\), but with a discrepancy between the coefficient of the zero and non-zero modes, which is then trivially flowed to the deformed theory. A factor of \(\lambda\bar{q}H_{R}\) in the expression for \(\widetilde{\mathcal{Y}}_{\mathcal{O}}\) could again be understood as a correction to the zero mode of the right-moving current. However, since this term does not commute with the right current generators, we no longer have a useful interpretation for the full operator as a vertex operator in the deformed theory. In addition,
there are a number of discrepancies involving winding between the operator we obtain and a naive spectrally-flowed operator on the right-moving side, at least at finite \(R\). These discrepancies can presumably be traced back to the fact that \(\bar{L}_{0}\) and \(H_{R}\) differ by winding terms. In any case, given the explicit form (4.9) of the operator, the the commutation relations with the generators (3.9) can be shown to take the form of CFT Ward identities in the limit \(R\to\infty\), with
\[\bar{h}(\bar{p})=\bar{h}^{[0]}+\lambda\bar{q}^{[0]}\bar{p}+\frac{k\lambda\bar {p}^{2}}{4}\,\qquad\bar{q}(\bar{p})=\bar{q}^{[0]}+\frac{k\lambda\bar{p}}{2} \tag{4.15}\]
which resembles a spectral flow transformation by \(\lambda\bar{p}\) of the right-moving conformal dimensions and charges.
Let us now compute correlation functions of these operators. Since \({\cal Y}_{\cal O},\bar{\cal Y}_{\cal O}\) only involve \(H_{R}\) and the winding and the \({\cal O}(\zeta,\bar{p})\) are eigenoperators of both, the correlation function can be simplified to
\[\langle{\cal O}_{1}(\zeta_{1},\bar{p}_{1})\ldots{\cal O}_{n}( \zeta_{n},\bar{p}_{n})\rangle=\int\prod_{i=1}^{n}d\bar{\zeta}_{i}\,e^{-\sum_{ i}\bar{p}_{i}\bar{\zeta}_{i}}\,e^{\sum_{i}\langle\lambda\bar{p}_{i}q_{i}^{[0]}+ \frac{k\lambda^{2}\bar{p}_{i}^{2}}{4}\rangle(\zeta_{i}+\bar{\zeta}_{i})}\,e^ {\lambda H_{R}^{vac}q_{i}(\zeta_{i}+\bar{\zeta}_{i})}\times\] \[\times\ \ e^{\sum_{i<j}(\lambda\bar{p}_{i}(q_{j}^{[0]}-\bar{q}_{j}^{ [0]})\bar{\zeta}_{i}+\lambda q_{i}\bar{p}_{j}(\zeta_{i}+\bar{\zeta}_{i}))}\, \langle\widetilde{\cal Y}_{1}\widetilde{\cal O}_{1}(\zeta_{1},\bar{\zeta}_{1} )\!:\ldots\!:\!\widetilde{\cal W}_{n}\widetilde{\cal O}_{n}(\zeta_{n},\bar{ \zeta}_{n})\!:\rangle \tag{4.16}\]
where \(H_{R}^{vac}\) represents the eigenvalue of \(H_{R}\) in the flowed vacuum. The last term in the integrand is almost a correlation function of vertex operators of charges \(q_{i}\) in the undeformed CFT, up to the missing zero modes. Using the explicit expressions for the vertex operators to evaluate the zero mode contribution, we obtain
\[\langle:\!\widetilde{\cal V}_{1}\widetilde{\cal O}_{1}(\zeta_{1}, \bar{\zeta}_{1})\!:\ldots\!:\!\widetilde{\cal V}_{n}\widetilde{\cal O}_{n}( \zeta_{n},\bar{\zeta}_{n})\!:\rangle\,=\,e^{-\sum_{i<j}\lambda\bar{p}_{j}(q_{ i}\zeta_{i}+\bar{q}_{i}\bar{\zeta}_{i})}e^{-\sum_{i}(\lambda\bar{p}_{i}(q_{i}^{[0]} \zeta_{i}+\bar{q}_{i}^{[0]}\bar{\zeta}_{i})+\frac{k\lambda^{2}\bar{p}_{i}^{2} }{4}(\zeta_{i}+\bar{\zeta}_{i}))}\times\] \[\times\prod_{i<j}\left(e^{\frac{\zeta_{i}j}{2}}-e^{-\frac{\zeta_ {i}j}{2}}\right)^{\frac{2}{2}(q_{i}q_{j}-q_{i}^{[0]}q_{j}^{[0]})}\left(e^{ \frac{\zeta_{i}j}{2}}-e^{-\frac{\zeta_{i}j}{2}}\right)^{\frac{2}{2}(\bar{q}_{i }\bar{q}_{j}-\bar{q}_{i}^{[0]}\bar{q}_{j}^{[0]})}\langle\widetilde{\cal O}_{1} (\zeta_{1},\bar{\zeta}_{1})\ldots\widetilde{\cal O}_{n}(\zeta_{n},\bar{\zeta}_ {n})\rangle \tag{4.17}\]
where \(\zeta_{ij}=\zeta_{i}-\zeta_{j}\), \(\bar{\zeta}_{ij}=\bar{\zeta}_{i}-\bar{\zeta}_{j}\). The second line simply represents the spectrally-flowed correlation function on the cylinder. The first exponential factor on the first line corresponds to the correction due to the missing zero modes, while the second one accounts for the change in the definition of the cylinder operators due to the shift in their dimensions. The same result can be obtained by commuting the current modes until they annihilate the (flowed) vacuum. Plugging in this expression into (4.16) and reinstating the factors of the radius, the final result for the \(n\)-point function simplifies to
\[\langle{\cal O}_{1}(\zeta_{1},\bar{p}_{1})\ldots{\cal O}_{n}( \zeta_{n},\bar{p}_{n})\rangle=\int\prod_{i=1}^{n}d\bar{\zeta}_{i}\,e^{-\sum_{ i}\bar{p}_{i}\bar{\zeta}_{i}}\,e^{\frac{2\pi}{R}{\cal A}}\prod_{i<j}\left(e^{\frac{ \pi\zeta_{ij}}{R}}-e^{-\frac{\pi\zeta_{ij}}{R}}\right)^{\frac{2}{2}(q_{i}q_{j} -q_{i}^{[0]}q_{j}^{[0]})}\times\] \[\times\ \left(e^{\frac{\pi\zeta_{i}j}{R}}-e^{-\frac{\pi\zeta_{ij}}{R}} \right)^{\frac{2}{2}(\bar{q}_{i}\bar{q}_{j}-\bar{q}_{i}^{[0]}\bar{q}_{j}^{[0]}) }\langle{\cal O}_{1}^{CFT}(\zeta_{1},\bar{\zeta}_{1})\ldots{\cal O}_{n}^{CFT}( \zeta_{n},\bar{\zeta}_{n})\rangle \tag{4.18}\]
where we plugged in the fact that the correlation function of the \(\widetilde{\cal O}\) is identical to the corresponding undeformed CFT correlator and the exponent \({\cal A}\) can be written, using charge conservation, as
\[{\cal A}=\sum_{i=1}^{n}\lambda H_{R}^{vac}q_{i}(\zeta_{i}+\bar{\zeta}_{i})+\sum _{i<j}\lambda\bar{p}_{j}(q_{i}-\bar{q}_{i})\bar{\zeta}_{ij} \tag{4.19}\]
If we ignore this exponential factor, then the result of the integral is simply the spectrally-flowed CFT correlation function in the mixed \((\zeta_{i},\bar{p}_{i})\) basis. Note that the spectral flow only affects the operator dimensions, but not the OPE coefficients of appropriate Virasoro-Kac-Moody primaries. In particular, for the case of two-and three-point function, the spectral flow will simply shift the operator dimensions to the momentum-dependent combinations (4.4), (4.15) in the corresponding mixed position/momentum-space correlator. Including the exponential factor will simply shift the
coefficients of the \(\bar{\zeta}_{i}\) in the Fourier integral by a factor that, importantly, is proportional to \(1/R\). The result will be the spectrally-flowed CFT momentum-space correlator evaluated at momenta that differ from \(\bar{p}_{i}\) by terms that vanish in the decompactification limit.
Note that on the left-moving side, the only term that could potentially upset the conformal invariance of the spectrally-flowed correlator is the shift proportional to \(H_{R}^{ae}\zeta_{i}\) in the expression for \({\cal A}\). As discussed in [41], this discrepancy is related to the lack of standard \(SL(2,\mathbb{R})_{L}\) invariance of the flowed vacuum in finite size, which also disappears in the \(R\to\infty\) limit.
### Correlation functions in single-trace \(J\bar{T}\) - deformed CFTs
Let us, for completeness, start with some generalities. There are several classes of operators one may discuss in a symmetric product orbifold QFT: untwisted or twisted, single-trace or multitrace. Operators that are untwisted take the form
\[\sum_{\sigma\in S^{N}}{\cal O}^{1}_{\sigma(1)}\otimes{\cal O}^{2}_{\sigma(2)} \otimes\ldots\otimes{\cal O}^{N}_{\sigma(N)} \tag{4.20}\]
where all insertions are at the same position \((\zeta,\bar{\zeta})\). In the particular case where all but one operator are the identity, this reduces to \(\sum_{I}{\cal O}_{I}(\zeta,\bar{\zeta})\), which is a single-trace untwisted operator. When more than one insertions are non-trivial, the operator is called multitrace. The Fourier transform of these operators can be written as a sum over copies, e.g. \(\sum_{I}{\cal O}_{I}(p,\bar{p})\) in the single-trace case, provided one acts on a state from the untwisted sector. The multi-trace untwisted operators take a similar form, but with several sums over copies and an integral over all the possible ways to distribute the momenta among them: e.g., in the double-trace case, the relevant operators take the form \(\sum_{I,J}\int d^{d}k\,{\cal O}^{1}_{I}(k){\cal O}^{2}_{J}(p-k)\). The untwisted-sector correlation functions of such operators can be straightforwardly computed using those of the seed: in the single-trace case, they are simply proportional to seed correlators; for multi-trace operators, one obtains a momentum-space convolution of seed correlation functions, which may be evaluated if the latter are known explicitly.
Thus, all the new correlators with respect to the seed lie in the twisted sector. One may again classify the twisted-sector operators into single-trace and multi-trace. For the single-trace operators, the twist involves only one non-trivial cycle; for the multitrace ones, several cycles should be considered. The end result should of course be symmetrized with respect to all permutations of the various copies.
The construction of twisted-sector correlation functions in single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs should depend on the particular method used to build the corresponding double-trace correlators. In the \(T\bar{T}\) case, the method of [43] uses the JT formulation of the deformation, which involves maps from the space where the theory is defined to a flat target space. Given the similarity between this formalism and one involving the string worldsheet, for example in what regards the computation of the finite-size spectrum [25, 26], it appears reasonable to assume that the correlation functions in the twisted sectors will be captured by considering "worldsheets" in the JT formulation that wind around the target space. The string worldsheet computation of [24] suggests the effect of the winding on the correlation functions is extremely straightforward, as it simply amounts to replacing the \(T\bar{T}\) coupling \(\mu\) by \(\mu/w\) in the twisted sectors.
The main goal of this section is to put forth an appropriate basis of twisted-sector operators in single-trace \(J\bar{T}\) - deformed CFTs and to compute their correlation functions. We will be following step-by-step the procedure used in the double-trace case. We will concentrate on single-trace operators, which contain a twist corresponding to a single non-trivial cycle of length \(w\), inserted at their location. The operators of interest are primary on the left-moving side, with the following momentum-dependent \(SL(2,\mathbb{R})_{L}\) dimensions and \(U(1)\) charges
\[h^{(w)}(\bar{p})=h^{(w)}_{CFT}+\frac{\lambda\bar{p}}{w}\,q^{[0]}+\frac{k \lambda^{2}\bar{p}^{2}}{4w}\,\ \ \ \ q=q^{[0]}+\frac{\lambda k\bar{p}}{2} \tag{4.21}\]
which were computed in section 2.3 via an infinite boost of the twisted-sector deformed energies. \(h^{(w)}_{CFT}\) is the dimension of the twisted-sector operator in the undeformed symmetric product orbifold CFT, which is related to a seed dimension by the holomorphic counterpart of (2.47). Note \(\lambda\) has been rescaled by a factor of \(2\pi\) with respect to (2.65). The extension to several non-trivial cycles is straightforward, as the dimensions are additive.
The relation between the deformed and undeformed twisted-sector dimensions corresponds to a transformation known as _fractional_ spectral flow [60, 61], defined as the base space transformation that lifts to the standard spectral flow on the covering space. It acts as
\[h^{(w)}\to h^{(w)}+\eta\,q^{(w)}+\frac{kw\,\eta^{2}}{4}\,\qquad q^{(w)} \to q^{(w)}+\frac{kw\,\eta}{2} \tag{4.22}\]
on the base. In the standard discussions, \(\eta\) is taken to be fractional, hence the name. In our case, \(\eta=\lambda\bar{p}/w\) is a continuous parameter, so its fractionality is not very important; what is important is that the level that enters the spectral flow transformation is the effective level in the \(w\)-twisted sector, \(k^{(w)}=kw\).
As argued in section 3.4, the physical generators of conformal symmetries in the \(w\) - twisted sectors are the integer modes21 of (3.54), which are related to the flowed generators in that sector by a transformation resembling fractional spectral flow with parameter \(\lambda H_{R}/w\)
Footnote 21: In principle, one could also consider fractional Virasoro modes around the location of the twist operator. The cylinder fractional modes appearing in (3.54) are associated to a twist operator inserted at \(-\infty\), whereas in this section we will be considering twists inserted at finite distance on the cylinder. The two sets of fractional generators can be related provided no other operator insertion is present - an overly restrictive requirement. On the other hand, the integer-moded Virasoro generators are everywhere well-defined.
\[L_{m}=\widetilde{L}_{m}+\frac{\lambda}{w}\,H_{R}\widetilde{J}_{m}+\frac{ \lambda^{2}k}{4w}H_{R}^{2}\,\delta_{m,0}\,\qquad J_{m}=\widetilde{J}_{m}+\frac{\lambda k}{2}H_{R}\,\delta_{m,0} \tag{4.23}\]
where we have again set \(R=2\pi\), rescaled \(\lambda\) by a \(2\pi\) factor and dropped the index indicating the twist sector from \(H_{R}\). Throughout this section, we will work with operators corresponding to the first \(w\) copies of the seed QFT, and symmetrization is assumed only for the final expressions. Our first task is to construct a basis of operators that are primary with respect to the generators above, with conformal dimensions and charges given in (4.21).
We may again proceed by defining a flowed operator \(\widetilde{\mathcal{O}}^{(w)}(\zeta,\bar{\zeta})\) via an equation of the form (4.5), with the initial condition that at \(\lambda=0\), the operator is a \(w\)-twisted operator in the undeformed CFT22. This operator will have the same conformal dimension and will satisfy the same Ward identities with respect to the flowed Virasoro and Kac-Moody currents as in the undeformed theory. As in the previous subsection, one should add an appropriate dressing to this operator, so it becomes primary with respect to the standard conformal generators, with the expected dimension (2.65). However, since the relation between the deformed and undeformed dimension is now given by _fractional_ spectral flow, we can no longer write a simple, explicit expression for the dressing vertex as in (4.11); instead, any explicit expression would involve fractional current modes. Since the fractional modes are defined only locally around operators of generically different twists, it does not make sense to commute them as we did in the double-trace case, and thus we need a different way to evaluate the correlation functions of interest.
Footnote 22: Strictly speaking, since \(\mathcal{O}^{(w)}\) is only associated with the first \(w\) copies of the QFT, it should be flown with the flow operator associated with these copies - the quantum analogue of (3.46). While this analogue of (4.5) is a well-defined operator equation, it is difficult to lift it to the covering space due to the presence of the twist operator at the location of \(\widetilde{\mathcal{O}}^{(w)}\). In particular, the mapping to the larger cylinder used in section 3.4, which did not rely on conformal invariance, cannot be used anymore.
The approach that we will instead take will be to reformulate, to the largest extent possible, the computations in terms of undeformed CFT ones - where the lift to the appropriate covering space is standard - and a \(J\bar{T}\) flow, which does not affect the end result for correlators and commutators. Concretely, we will write, in analogy with (4.9)
\[\mathcal{O}^{(w)}(\zeta,\bar{p})=\int d\bar{\zeta}\,e^{-\bar{\zeta}\bar{p}}: \widetilde{\mathcal{V}}_{\eta}\widetilde{\mathcal{O}}^{(w)}(\zeta,\bar{\zeta })\colon e^{\mathcal{V}^{(w)}_{\mathcal{O}}\zeta+\bar{\mathcal{V}}^{(w)}_{ \mathcal{O}}\bar{\zeta}} \tag{4.24}\]
where \(:\widetilde{\mathcal{V}}_{\eta}\widetilde{\mathcal{O}}^{(w)}\colon\) now represents the single-trace \(J\bar{T}\) flow of an operator in the undeformed theory that is almost given by the fractional spectral flow, with parameter \(\eta=\lambda\bar{p}/w\), of the original \(w\)-twisted CFT operator, up to some missing zero modes and some rescaling factors. The fractional spectral flow in the undeformed CFT may be implemented by lifting to the covering space, where it
becomes usual spectral flow. Guided by the double-trace case, one should subtract by hand23 from the result the zero mode of the dressing vertex operator, so that its commutation relations with the undeformed CFT generators - which, upon \(J\bar{T}\) flow, will be the same as those of the flowed generators with \(:\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}:\) - take precisely the form (4.12), but with \(k\to kw\). Note that, in order to reach this conclusion, we do not absolutely need the explicit form of \(\widetilde{\mathcal{V}}\).
Footnote 23: Note that this procedure is not as innocuous as it may sound. In the CFT, the transformation under the covering map is understood for local operators, but once we remove some zero modes e.g. from the operator on the covering space, it is no longer clear how to map it back to the base.
Using the fact that, by assumption, \(\mathcal{O}^{(w)}(\zeta,\bar{p})\) is an eigenoperator of the right Hamiltonian \(H_{R}\), with eigenvalue \(\bar{p}\), and requiring that the commutation relations with the generators (4.23) are CFT Ward identities for operators with conformal dimension (2.65), we find \(\eta=\lambda\bar{p}/w\), and
\[\mathcal{Y}^{(w)}_{\mathcal{O}}=\frac{1}{w}\bigg{(}\lambda qH_{R}+\lambda\bar {p}q^{[0]}+\frac{k\lambda^{2}\bar{p}^{2}}{4}\bigg{)} \tag{4.25}\]
Assuming that, as in the double-trace case, the commutation relation of \(\widetilde{\mathcal{L}}_{0}\) with \(:\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}(\zeta,\bar{\zeta}):\) still yields a simple \(\bar{\zeta}\) derivative of the operator, we also deduce that
\[\bar{\mathcal{Y}}^{(w)}_{\mathcal{O}}=\frac{1}{w}\bigg{(}\lambda qH_{R}+ \lambda\bar{p}q^{[0]}+\frac{k\lambda^{2}\bar{p}^{2}}{4}+\lambda\bar{p}(\bar{J }_{0}-\tilde{J}_{0})\bigg{)} \tag{4.26}\]
Thus, choosing \(:\widetilde{\mathcal{V}}\widetilde{\mathcal{O}}:\) to be a fractionally spectrally flowed operator on the left (with some zero modes removed) reproduces the correct left Ward identities, even in absence of explicit expressions for the dressing factors. Our proposal for the right-moving piece of this operator - which will fix the basis (4.24) of operators we consider in single-trace \(J\bar{T}\) - deformed CFTs - is that it behaves exactly the same in the undeformed CFT, i.e. it is a fractional spectral flow on the right with parameter \(\lambda\bar{p}/w\). The commutation relations with the flowed right-moving generators then follow from the CFT ones, and are given by (4.12) with all quantities barred and \(k\to kw\). The commutation relations of \(\mathcal{O}^{(w)}(\zeta,\bar{p})\) with \(\bar{L}_{m},\bar{J}_{m}\), whose relationship to the flowed right-moving generators is given by (4.23) with all quantities barred, then follows straightforwardly from their definitions. It is not hard to show that in the \(R\to\infty\) limit, they correspond to standard CFT right-moving Ward identities with
\[\bar{h}^{(w)}(\bar{p})=\bar{h}^{(w)}_{{}_{CFT}}+\frac{\lambda\bar{q}^{[0]} \bar{p}}{w}+\frac{k\lambda^{2}\bar{p}^{2}}{4w}\;,\;\;\;\;\;\bar{q}(\bar{p})= \bar{q}^{[0]}+\frac{k\lambda\bar{p}}{2} \tag{4.27}\]
which resembles a right-moving spectral flow with parameter \(\lambda\bar{p}/w\).
Let us now discuss the correlations functions of these operators, which take the form
\[\langle\mathcal{O}^{(w_{1})}_{1}(\zeta_{1},\bar{p}_{1})\dots\mathcal{O}^{(w_{ n})}_{n}(\zeta_{n},\bar{p}_{n})\rangle \tag{4.28}\]
where each \(\mathcal{O}^{(w_{i})}_{i}\) is now a gauge-invariant operator in the \(w_{i}\)-twisted sector of the theory, and we assume that the twists are such that the correlation function is non-vanishing. For simplicity, we will consider only correlation functions of single-trace twist operators.
As in the previous section, we can evaluate the correlation function in two steps: first, we use the fact that (4.24) are eigenoperators of \(\mathcal{Y}^{(w_{i})}_{\mathcal{O}},\widetilde{\mathcal{Y}}^{(w_{i})}_{\mathcal{ O}}\) to reduce the computation to the evaluation of a correlation function of "almost CFT vertex operators", as in (4.16)
\[\langle\mathcal{O}^{(w_{1})}_{1}(\zeta_{1},\bar{p}_{1})\dots \mathcal{O}^{(w_{n})}_{n}(\zeta_{n},\bar{p}_{n})\rangle = \int\prod_{i=1}^{n}d\bar{\zeta}_{i}\,e^{-\sum_{i}\bar{p}_{i}\bar{ \zeta}_{i}}\,e^{\sum_{i}(\lambda\bar{p}_{i}q^{[0]}_{i}+\frac{k\lambda^{2}p^{2 }_{i}}{4})\frac{(\zeta_{i}+\bar{\zeta}_{i})}{w_{i}}}\,e^{\lambda H^{wa_{i}}_{R} _{R}q_{i}\frac{(\zeta_{i}+\bar{\zeta}_{i})}{w_{i}}}\times \tag{4.29}\] \[\times\,e^{\sum_{i<j}\frac{1}{w_{i}}(\lambda\bar{p}_{i}(q^{[0]}_{ j}-\bar{q}^{[0]}_{j})\bar{\zeta}_{i}+\lambda q_{i}\bar{p}_{j}(\zeta_{i}+\bar{ \zeta}_{i}))}\,(:\!\widetilde{\mathcal{V}}_{1}\widetilde{\mathcal{O}}^{(w_{1})} _{1}(\zeta_{1},\bar{\zeta}_{1})\!:\dots\!:\!\widetilde{\mathcal{V}}_{n} \widetilde{\mathcal{O}}^{(w_{n})}_{n}(\zeta_{n},\bar{\zeta}_{n})\!:)\]
The computation of the correlator of \(:\widetilde{\mathcal{V}}_{i}\widetilde{\mathcal{O}}_{i}:\) can then be traced back (by inverting the \(J\bar{T}\) flow) to the cylinder correlation function of the original CFT operators, fractionally spectrally flowed by an amount \(\lambda\bar{p}_{i}/w_{i}\), up to some missing zero modes. Note that, unlike for the standard spectral flow, whose action on a general correlation function is known in closed form (4.17), for fractional
spectral flow involving operators from different twisted sectors no such formula appears to exist. On the other hand, there does exist a well-defined prescription for computing such a correlation function by mapping the CFT correlators to the covering space, so we may consider this part of the problem as being in principle solved.
The additional issue of subtracting the zero modes is complicated by the fact that: i) unlike in the double-trace case, we no longer have a globally defined explicit expression for the dressing vertex operators and ii) the maps to the covering space are for local operators, but locality is spoiled by the removal of the zero modes. Note one constraint on the corrections is that they should be consistent with translation invariance on the cylinder, which the prefactor in (4.29) currently violates, as did the analogous prefactor in (4.16). We will not attempt to estimate the zero-mode contribution herein. Instead, we argue, guided by the explicit expressions in the double-trace case, that the corrections from such zero modes will take the form of exponential factors, which are all suppressed in the \(R\to\infty\) limit, and will thus not contribute to the final correlation function on the plane. Thus, in this limit we are left with usual CFT \(n\)-point function in mixed position/ momentum space, for twisted-sector operators with fractionally spectrally flowed conformal dimensions given by (4.21), (4.27)
\[\langle{\cal O}_{1}^{(w_{1})}(\zeta_{1},\bar{p}_{1})\ldots{\cal O }_{n}^{(w_{n})}(\zeta_{n},\bar{p}_{n})\rangle_{plane} =\] \[=\ \int\prod_{i=1}^{n}d\bar{\zeta_{i}}\,e^{-\sum_{i}\bar{p}_{i} \bar{\zeta_{i}}}\,\langle:({\cal V}_{1}{\cal O}_{1}^{(w_{1})})_{CFT}(\zeta_{1 },\bar{\zeta_{1}}):\ldots:({\cal V}_{n}{\cal O}_{n}^{(w_{n})})_{CFT}(\zeta_{n },\bar{\zeta_{n}}):\rangle_{plane}\]
Thus, we have succeeded in evaluating correlation functions of a particular basis of of twisted-sector operators in single-trace \(J\bar{T}\) - deformed CFTs in terms of correlation functions of the undeformed theory. In order to compare with the holographic computation of the next section, it is useful to fully transform the result to momentum space and directly consider the Fourier transform of the euclidean correlator. For the specific case of the two-point function, the result is
\[\langle{\cal O}^{(w)\dagger}(p,\bar{p}){\cal O}^{(w)}(-p,-\bar{p})\rangle= \frac{(2\pi)^{2}}{2^{2(h^{(w)}+\bar{h}^{(w)})}\sin(\pi(h^{(w)}+\bar{h}^{(w)}) )}\,\frac{p^{2\bar{h}^{(w)}-1}\bar{p}^{2\bar{h}^{(w)}-1}}{\Gamma(2\bar{h}^{(w )})\Gamma(2\bar{h}^{(w)})} \tag{4.31}\]
where \(h^{(w)},\bar{h}^{(w)}\) stand for the momentum-dependent combinations (4.21), (4.27).
### Comparison with holographic results
The single-trace \(T\bar{T}\) and \(J\bar{T}\) deformations have been linked to holography for certain non-AdS backgrounds, namely an asymptotically linear dilaton spacetime for the case of \(T\bar{T}\)[8], and warped AdS\({}_{3}\) for \(J\bar{T}\)[9, 10]. Both of these backgrounds are supported by pure NS-NS flux and flow to AdS\({}_{3}\) in the interior; the full spacetimes can be viewed as non-normalizable deformations thereof. Perturbative worldsheet string theory in these backgrounds is given by the \(SL(2,\mathbb{R})\) WZW model (for level \(N_{5}>1\)), deformed by a certain class of exactly marginal current-current operators. Such deformations are exactly solvable. In certain examples, the deformed string background can be thought of as the near-horizon geometry of a stack of \(N_{5}\) NS5 branes and \(N_{1}\gg 1\) F1 strings [8, 81].
For \(N_{5}>1\), the full CFT dual to the infrared AdS\({}_{3}\) is known to _not_ be described by a symmetric product orbifold. It is a somewhat singular theory [5], due to the presence of states with a continuous spectrum known as long strings, which wind around the asymptotic AdS\({}_{3}\) boundary. Its long string subsector has, on the other hand, been argued to be described by a symmetric orbifold [5, 82]; note, however, that it captures only a small fraction of the system, at least in the regime of interest [82]. Given this structure, it should be clear that the symmetric product orbifold of \(T\bar{T}/J\bar{T}\)-deformed CFTs cannot be exactly dual to these non-AdS backgrounds for \(N_{5}>1\), because the remaining sectors do not possess a symmetric orbifold structure24; at the same time, the long string subsector
survives in the deformed backgrounds and, moreover, its dynamics is well-described by single-trace \(T\bar{T}\) and, respectively, \(J\bar{T}\) - deformed CFTs. Concretely, the spectrum of long strings has been shown [8, 9, 10, 18] to perfectly match that of single-trace \(T\bar{T}/\)\(J\bar{T}\) (as derived in this article, or also in [29]); more recently, correlation functions of long string vertex operators in the asymptotically linear dilaton background have been computed [29] and appear to match well with the recent \(T\bar{T}\) results of [43].
The aim of this section is to compare the correlation functions of our proposed set of "primary operator analogues" in single-trace \(J\bar{T}\) - deformed CFTs with the those of long string vertex operators in warped AdS\({}_{3}\). The latter will be estimated by adapting the short-string computation of [20] for the same background to long strings, along the lines of [24]. The two results turn out to not exactly match. To explain this, we first review some relevant prior work.
Let us start by considering worldsheet vertex operators of type II superstring theory in \(AdS_{3}\times{\cal N}\) in the presence of pure NS-NS flux. Here \({\cal N}\) is a 7-dimensional compact manifold. Long strings correspond to worldsheet vertex operators that belong to the continuous series representation of \(SL(2,\mathbb{R})\). Their worldsheet dimension is given by
\[\begin{split}\Delta&=-\frac{j(j+1)}{N_{5}}-w\left(h +\frac{N_{5}w}{4}\right)+\Delta_{\cal N}+N\\ \bar{\Delta}&=-\frac{j(j+1)}{N_{5}}-w\left(\bar{h}+ \frac{N_{5}w}{4}\right)+\bar{\Delta}_{\cal N}+\bar{N}\end{split} \tag{4.32}\]
where \(j\in-1/2+i\mathbb{R}\) labels the Casimir of the global \(SL(2,\mathbb{R})\) algebra, \(w\geq 1\) denotes the integer spectral flow in \(AdS_{3}\) - identified with the winding of the long string around the AdS\({}_{3}\) boundary, \((\Delta_{\cal N},\bar{\Delta}_{\cal N})\) are left/right vertex operator dimensions of the worldsheet CFT in \({\cal N}\), \((N,\bar{N})\) are the left/right oscillator numbers in \(AdS_{3}\) and, finally, \((h,\bar{h})\) represent the eigenvalues of the \(J^{3}\), \(\bar{J}^{3}\) zero modes of the worldsheet \(SL(2,\mathbb{R})\), which for continuous series representations are unrelated to the eigenvalue of the Casimir. In global AdS\({}_{3}\), \((h,\bar{h})\) are identified with the left/right energies of the state on the cylinder, and thus to the dual operator dimensions via the standard state-operator map. The physical on-shell condition for these superstring vertex operators is \(\Delta=\bar{\Delta}=1/2\). They can be constructed explicitly [84], and their correlation function takes the form25
Footnote 25: In this article, we normalize the worldsheet operators such that the two-point function of the dual CFT operators takes the form \(x_{12}^{-2h}\bar{x}_{12}^{-2h}\). Note this normalization is different from the standard convention in string theory in \(AdS_{3}\).
\[\langle V(z_{1},x_{1})V(z_{2},x_{2})\rangle=\frac{1}{z_{12}^{2\Delta}\bar{z}_ {12}^{2\Lambda}z_{12}^{2h}\bar{x}_{12}^{2\bar{h}}} \tag{4.33}\]
where \(x,\bar{x}\) are auxiliary coordinates that become identified with the space where the dual CFT lives. Integrating over the worldsheet coordinates, one obtains the standard correlation function of CFT operators on the boundary. One may also perform a Fourier transform of the latter, to obtain the momentum-space boundary correlator. For example, the momentum space two-point function takes the form
\[\langle V(z_{1},p)V(z_{2},-p)\rangle=\frac{(2\pi)^{2}p^{2h-1}\bar{p}^{2\bar{h}- 1}}{2^{2(h+\bar{h})}\sin(\pi(h+\bar{h}))\Gamma(2h)\Gamma(2\bar{h})}\frac{1}{z_ {12}^{2\Delta}\bar{z}_{12}^{2\bar{\Delta}}} \tag{4.34}\]
which will be useful in this section.
The asymptotically linear dilaton and warped AdS\({}_{3}\) backgrounds can both be obtained from AdS\({}_{3}\)\(\times\)\({\cal N}\) via a transformation known as TsT: T-duality, shift, T-duality. In both cases, the effect of the TsT transformation can be encoded in a non-local coordinate transformation, which is equivalent to twisting the boundary conditions of the fields on the AdS\({}_{3}\) worldsheet theory in a charge-dependent way [85]. This mildly affects the relationship between \(\Delta\) and \(h\), by adding
\[\delta_{T\bar{T}}\Delta=\delta_{T\bar{T}}\bar{\Delta}=\frac{\mu}{\pi}\,p\bar{p }\;,\;\;\;\;\;\;\delta_{J\bar{T}}\Delta=\delta_{J\bar{T}}\bar{\Delta}=\lambda \bar{p}\left(q^{[0]}+\frac{\lambda k\bar{p}}{4}\right) \tag{4.35}\]
on the left-hand side of (4.32). If \((h,\bar{h})\) are interpreted as the left/right global energy, then these shifts yield the correct deformed energy formulae to match to single-trace \(T\bar{T}/J\bar{T}\).
The vertex operators in the deformed theory may be expressed in terms of the vertex operators in the undeformed theory, with some appropriate dressing. This has been worked out in [20] for the warped \(AdS_{3}\) background and in [24] for the asymptotically linear dilaton one. It turns out that the correlation function of the appropriate vertex operators still takes the form (4.34), but the relationship between \(h\) and \(\Delta\) is modified by the shifts (4.35). Since \(\Delta\) must equal \(1/2\), this translates into a shift of \(h,\bar{h}\) that takes the form:
\[T\bar{T} :h\to h+\frac{\mu}{w\pi}p\bar{p}\;, \bar{h}\to\bar{h}+\frac{\mu}{w\pi}p\bar{p}\] \[J\bar{T} :h\to h+\frac{\lambda q^{[0]}\bar{p}}{w}+\frac{\lambda^{2}k\bar{p}^{2} }{4w}\;, \bar{h}\to\bar{h}+\frac{\lambda q^{[0]}\bar{p}}{w}+\frac{\lambda^{2}k \bar{p}^{2}}{4w} \tag{4.36}\]
and can be obtained by combining (4.32) with the appropriate shift in (4.35). Thus, we obtain a two-point function that is identical to the momentum-space two-point function in a CFT, but with shifted dimensions26. The \(T\bar{T}\) shift of \(h,\bar{h}\) nicely agrees with [43] for the case \(w=1\). More generally, one obtaines the shifts in the twisted sectors to be the same, but controlled by \(\mu/w\).
Footnote 26: Note that this argument appears to imply that, if the normalization of the undeformed AdS\({}_{3}\) vertex operators is rescaled by some function of the dimensions, then all these factors would become momentum-dependent. This is _not_ what happens if we perform a similar rescaling in the field-theory analysis of section 4.1, where the only functions of \(h\) that acquire a momentum dependence are those that result from the Fourier transform. This suggests that the argument of [24] may be more subtle than it naively appears.
Note that the shift in the left-moving single-trace \(J\bar{T}\) dimension precisely agrees with our previous analysis (2.65), including the \(w\) dependence. However, the right-moving piece of the correlator (4.34) does not agree with (4.31), which involves \(\bar{q}^{[0]}\) instead of \(q^{[0]}\) in the right-moving dimension \(\bar{h}^{(w)}(\bar{p})\). It would be interesting to understand the origin of this mismatch. Remember, in particular, that in our field-theory analysis we did encounter shifts of the right-movers that involved \(J_{0}\) instead of \(\bar{J}_{0}\); however, the discrepancies produced by these terms disappeared in the \(R\to\infty\) limit27. It would be interesting to perform a more careful worldsheet analysis, possibly on the cylinder, in order to track this discrepancy. Of course, it could also be that the string theory vertex operators simply are a different set of operators from those for which we computed correlation functions in field theory. Our criterion for fixing the right-moving piece of the operators in field theory was based on symmetries, i.e. by requiring that they satisfy CFT-like Ward identities with respect to the right-moving generators (3.9), which were related to the flowed right-moving Virasoro generators by an operator-dependent spectral flow. Since these Virasoro symmetries are not yet understood from the worldsheet perspective, the analogous way to fix the operator basis is not yet available on the string theory side. Conversely, it could be that consistency of the worldsheet vertex operators (e.g., mutual locality) would single out a different set of constraints on the operators that would be natural to impose. In any case, it would be interesting to further explore the properties of the two sets of operators and check whether one may be preferred to the other.
Footnote 27: The reason was that these discrepancies only appeared in a single exponential factor. Had they appeared in a sinh factor, they would have contributed to the shift in dimension, which would have then agreed with the long string result.
Finally, let us mention that exactly the same method may be used to compute correlation functions of the short string vertex operators [21, 22, 23]; one simply needs to reinterpret the relation between \(\Delta\) and \(h\) for the discrete \(SL(2,\mathbb{R})\) representations on the worldsheet. In this case, one most commonly considers \(w=0\) short strings. Since now the representation is lowest-weight, the spacetime dimension is related to the worldsheet Casimir as \(h=j+1\). Taking into account the shift in the relation between \(\Delta\) and the worldsheet dimension, one finds
\[h_{ALD}=\frac{1}{2}+\sqrt{\left(h-\frac{1}{2}\right)^{2}+\frac{\mu N_{5}}{2\pi }\,p\bar{p}}\;,\;\;\;\;\;h_{AdS}=\frac{1}{2}+\sqrt{\left(h-\frac{1}{2}\right) ^{2}+\lambda N_{5}q^{[0]}\,\bar{p}+\frac{\lambda^{2}N_{5}k}{4}\,\bar{p}^{2}} \tag{4.37}\]
Since the information about the \(T\bar{T}\) deformed symmetric product is only captured by the long string sector, it is not surprising that the momentum dependence of the conformal dimensions is different from [43]. The same comment applies to \(J\bar{T}\), where the short string sector conformal dimensions do not match with our symmetric product orbifold results.
## 5 Conclusions
In this article, we have studied various properties of symmetric product orbifolds of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs - namely the spectrum, the symmetries and the correlation functions - from a purely field-theoretical perspective. Our derivations relied mostly on Hilbert space techniques and made no use of conformal invariance, which is not present in these models.
The first observable we discussed was the torus partition function. We showed that the group-theoretical techniques [32, 33, 34] that were previously developed to determine the partition function of a symmetric product orbifold of _C_FTs in terms of that of the seed can be easily generalised to two-dimensional QFTs (not necessarily Lorentz-invariant) by appropriately taking into account the dependence of the partition function on the size of the circle on which the theory is defined. We also showed that the modular properties of the symmetric product orbifold followed from those of the seed theory. We then applied these results to the symmetric product orbifold of \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs, reproducing the finite-size spectra that were previously computed using worldsheet techniques. It would be interesting to find other classes of UV-complete QFTs with dimensionful couplings whose symmetric product orbifold can be studied with this method.
Our second result was a proof that the full Virasoro \(\times\) Virasoro (\(\times\) Kac-Moody2) symmetries of the symmetric product orbifold CFT, including their fractional counterparts, survive the single-trace \(T\bar{T}\)/ \(J\bar{T}\) deformation. As in the double-trace case [37], the argument was based on transporting the extended symmetry generators of the undeformed CFT along the irrelevant flow, and then showing that they remain conserved. The "physical" symmetry generators may now be singled out by the fact that they correspond to integrals of quasi-local current densities descended from the covering space, whereas the flowed ones explicitly depend on the twist sector. We further exploited these symmetries - following the steps of the double-trace analysis [41] - to single out a special basis of operators in single-trace \(J\bar{T}\) - deformed CFTs, both from the untwisted and the twisted sector, and compute arbitrary correlation functions thereof. It seems reasonable to hope that similar techniques may be used in the future to construct the correlation functions of single-trace \(T\bar{T}\) - deformed CFTs, provided the construction of correlation functions in the double-trace \(T\bar{T}\)-deformed CFTs can be recast in the same language as in the \(J\bar{T}\) case, namely using an interplay of the symmetries and the flow equation.
Footnote 28: We replaced the integrability requirement of [11] by UV-completeness, because for just the standard \(T\bar{T}\)/ \(J\bar{T}\) deformations, the universal modification of the S-matrix does not require integrability of the underlying theory.
Our results show that the 'QFT data' of single-trace \(T\bar{T}\) and \(J\bar{T}\) - deformed CFTs are rigidly determined by the corresponding observables in the seed double-trace-deformed theories, which in turn are universally determined by those of the undeformed CFT. Thus, from the point of view of the program set forth by [11] - of understanding the space of UV-complete29 two-dimensional quantum field theories - these theories are not significantly more general than their double-trace counterparts, which themselves can be understood as mostly kinematical deformations of the underlying CFT [86]. On the other hand, symmetric product orbifold CFTs do sometimes allow for non-universal exactly marginal deformations that break the symmetric orbifold structure; if such deformations could also be applied to a \(T\bar{T}\)/\(J\bar{T}\) symmetric product orbifold in such a way that its UV completeness is preserved, then this would have the potential of significantly enlarging the space of known UV-complete, yet non-local QFTs.
Footnote 29: We replaced the integrability requirement of [11] by UV-completeness, because for just the standard \(T\bar{T}\)/ \(J\bar{T}\) deformations, the universal modification of the S-matrix does not require integrability of the underlying theory.
Another important motivation for understanding the detailed properties of single-trace \(T\bar{T}\)/\(J\bar{T}\) - deformed CFTs is their application to non-AdS holography. The exact symmetric orbifolds we studied should be holographically dual to a highly stringy spacetime, corresponding to \(N_{5}=1\) in our analysis of section 4.3. The worldsheet theory for a string propagating in such a background is no longer described by the RNS formalism, but could in principle be studied with the methods of [87, 88], which would be very interesting to adapt to this non-AdS setting. Alternatively, one could concentrate on the dual description of the weakly-curved spacetimes that are usually of most interest in holography, which involve deformations of these theories that break the symmetric orbifold structure. It would be very interesting to understand which features of the single-trace \(T\bar{T}\)/\(J\bar{T}\) - deformed CFTs we studied - the entropy, the symmetries, the structure of the correlators - remain universal once one moves off the symmetric orbifold point. The results of [8, 39] suggest that at least the entropy and the extended symmetries should be part of the list.
### Acknowledgements
We are grateful to Luis Apolo, Brando Bellazzini, Alex Maloney, Sylvain Ribault and especially Alex Belin for insightful conversations. The work of SC received funding under the "Horizon 2020" Framework Program for Research and innovation under the Marie Sklodowska-Curie grant agreement number 945298. The work of SG is supported by the PhD track fellowship of the Ecole Polytechnique. The work of SG and MG was supported in part by the ERC starting grant 679278 Emergent-BH.
|
2310.15863 | Metric Clustering and MST with Strong and Weak Distance Oracles | We study optimization problems in a metric space $(\mathcal{X},d)$ where we
can compute distances in two ways: via a ''strong'' oracle that returns exact
distances $d(x,y)$, and a ''weak'' oracle that returns distances
$\tilde{d}(x,y)$ which may be arbitrarily corrupted with some probability. This
model captures the increasingly common trade-off between employing both an
expensive similarity model (e.g. a large-scale embedding model), and a less
accurate but cheaper model. Hence, the goal is to make as few queries to the
strong oracle as possible. We consider both so-called ''point queries'', where
the strong oracle is queried on a set of points $S \subset \mathcal{X} $ and
returns $d(x,y)$ for all $x,y \in S$, and ''edge queries'' where it is queried
for individual distances $d(x,y)$.
Our main contributions are optimal algorithms and lower bounds for clustering
and Minimum Spanning Tree (MST) in this model. For $k$-centers, $k$-median, and
$k$-means, we give constant factor approximation algorithms with only
$\tilde{O}(k)$ strong oracle point queries, and prove that $\Omega(k)$ queries
are required for any bounded approximation. For edge queries, our upper and
lower bounds are both $\tilde{\Theta}(k^2)$. Surprisingly, for the MST problem
we give a $O(\sqrt{\log n})$ approximation algorithm using no strong oracle
queries at all, and a matching $\Omega(\sqrt{\log n})$ lower bound. We
empirically evaluate our algorithms, and show that their quality is comparable
to that of the baseline algorithms that are given all true distances, but while
querying the strong oracle on only a small fraction ($<1\%$) of points. | MohammadHossein Bateni, Prathamesh Dharangutte, Rajesh Jayaram, Chen Wang | 2023-10-24T14:22:29Z | http://arxiv.org/abs/2310.15863v1 | # Metric Clustering and MST with Strong and Weak Distance Oracles
###### Abstract
We study optimization problems in a metric space \((\mathcal{X},d)\) where we can compute distances in two ways: via a "strong" oracle that returns exact distances \(d(x,y)\), and a "weak" oracle that returns distances \(\tilde{d}(x,y)\) which may be arbitrarily corrupted with some probability. This model captures the increasingly common trade-off between employing both an expensive similarity model (e.g. a large-scale embedding model), and a less accurate but cheaper model. Hence, the goal is to make as few queries to the strong oracle as possible. We consider both so-called "point queries", where the strong oracle is queried on a set of points \(S\subset\mathcal{X}\) and returns \(d(x,y)\) for all \(x,y\in S\), and "edge queries" where it is queried for individual distances \(d(x,y)\).
Our main contributions are optimal algorithms and lower bounds for clustering and Minimum Spanning Tree (MST) in this model. For \(k\)-centers, \(k\)-median, and \(k\)-means, we give constant factor approximation algorithms with only \(\tilde{O}(k)\) strong oracle point queries, and prove that \(\Omega(k)\) queries are required for any bounded approximation. For edge queries, our upper and lower bounds are both \(\bar{\Theta}(k^{2})\). Surprisingly, for the MST problem we give a \(O(\sqrt{\log n})\) approximation algorithm using no strong oracle queries at all, and a matching \(\Omega(\sqrt{\log n})\) lower bound. We empirically evaluate our algorithms, and show that their quality is comparable to that of the baseline algorithms that are given all true distances, but while querying the strong oracle on only a small fraction (\(<1\%\)) of points.
## 1 Introduction
Large-scale similarity models are ubiquitous in modern machine learning, where they are used to generate real-valued distances for non-metric data, such as images, text, and videos. A popular example is embedding models [39, 48, 29, 18], which transform a data point \(x\) into a point \(f(x)\) in a metric space \((\mathcal{X},d)\), such that the similarity between \(x,y\) can be inferred by the distance \(d(f(x),f(y))\). However, as the scale and quality of these models grow, so too increases the resources required to run them. Thus, a common component of many ML pipelines is to additionally employ an efficient but less precise similarity model to reduce the number of expensive distance comparisons made with the more accurate model [34, 33]. Common examples of such "weak" secondary similarity models include hand-crafted models based on simple features (location, timestamp, bitrate, etc.),
lightweight neural network, models trained on cheap but sometimes inaccurate data [33], metadata obtained in video transcoding [33, 44], previously computed similarities from historical data [41], and the retrieve-then-rerank architecture for recommendation systems [35], text retrieval [52], question-answering [8] and vision-applications [53].
Understanding the complexity of computational tasks in the presence of noisy or imprecise oracles is a fundamental problem dating back multiple decades [22], and many problems such as clustering, sorting, and nearest neighbor search have been intensively studied therein [10, 11, 37, 26, 36]. However, despite the popularity of combining two oracles in practice, the majority of this work considers only a single imprecise oracle, whereas less work has been done to understand the complexity of tasks using _both_ a noisy (weak) oracle, and an exact (strong) oracle. In this paper, we initiate a formal study of this setting for metric optimization problems.
Specifically, we introduce the _Weak-Strong Oracle Model_: here, we are given a metric space \((\mathcal{X},d)\) of \(|\mathcal{X}|=n\) points, where \(d:\,\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) is the underlying metric, representing the output of an expensive but accurate similarity model,1 as well as a corruption probability \(\delta\in(0,1/2)\). The metric \(d\) is not known to the algorithm a priori, but can be accessed through two types of queries: _strong_ and _weak oracle_ queries. For the strong oracle, we consider two possible forms of queries: edge queries and point queries. These queries are defined as follows:
Footnote 1: We often use “similarity” and “distance” interchangeably, as similarity models, especially embedding-based models, can usually be easily converted to distance models and vice versa.
* **Weak oracle queries (\(\mathsf{WO}(x,y)\)):** given \((x,y)\in\mathcal{X}^{2}\), the weak oracle returns a value \(\widetilde{d}(x,y)\) such that: with probability \(1-\delta\) we have \(\tilde{d}(x,y)=d(x,y)\), and otherwise, with probability \(\delta\), the value \(\tilde{d}(x,y)\) is set arbitrarily.2 The randomness is _independent_ across different pairs \((x,y)\), and drawn _exactly once_ (i.e., repeated queries to \(\tilde{d}(x,y)\) will yield the same result). Footnote 2: A pair \((x,y)\) such that the weak oracle distance \(\tilde{d}(x,y)\) can be set arbitrarily is called _corrupted_.
* **Strong oracle (point) queries (\(\mathsf{SO}(x)\)):** given a point \(x\in\mathcal{X}\), a strong point oracle returns a symbolic value \(\mathsf{SO}(x)\). The value \(\mathsf{SO}(x)\) gives no information on its own. However, given any two values \(\mathsf{SO}(x),\mathsf{SO}(y)\), the algorithm can compute the true distance \(d(x,y)\).
* **Strong oracle (edge) queries (\(\mathsf{SO}(x,y)\)):** given \(x,y\in\mathcal{X}\), a strong edge oracle returns the true distance \(\mathsf{SO}(x,y)=d(x,y)\).
The weak oracle distances \(\tilde{d}\) capture a cheap but less precise distance model, whereas the strong oracle is considered to be significantly more expensive. As a result, our goal is to produce a high-quality solution to an optimization problem (e.g. clustering) for the underlying metric \((\mathcal{X},d)\) while _minimizing_ the number of queries made to the strong oracle. We even allow the corruptions that occur to the weak oracle to be _adversarial_ (see Section 2 for precise model definitions); this captures a very general class of "imprecise weak oracles", allowing them to produce arbitrary bad distances with some probability.
Depending on the context, it may make sense to allow only one of the two types of strong oracle queries. Therefore, we consider two models: **(1)** where only strong oracle point queries are allowed, and **(2)** where only strong oracle edge queries are allowed. Note that the two types of strong oracle queries are closely related. In particular, any algorithm that makes \(q\) strong oracle point queries can be simulated by an algorithm that makes \(q^{2}\) strong oracle edge queries. In this paper, we will give algorithms and lower bounds for both strong oracle query models. We recall that an important motivation for strong oracle point queries is the example of an expensive embedding model: in this case, \(\mathsf{SO}(x)\) represents the embedding into \((\mathcal{X},d)\). Conversely, the strong oracle edge query
model is natural in settings where the expensive model can compute pair-wise similarities, such as cross-attention models [12, 47].
We focus on clustering, which is one of the most fundamental unsupervised learning tasks, and the classic metric minimum spanning tree (MST) problem, which has applications to network design and hierarchical clustering. Both tasks have been studied extensively in the literature on noisy oracles [7, 37, 5, 20, 43, 46]. However, given the strong type of inaccuracies allowed by our weak oracle, a priori it is not clear whether we can solve these foundational tasks without querying the strong oracle for essentially all the distances. Specifically, we pose the following question:
_Is it possible to solve metric optimization tasks, like clustering and MST, in the Weak-Strong Oracle Model while making fewer that \(\Omega(n)\) strong oracle point queries (or \(\Omega(n^{2})\) edge queries)?_
### Contributions
Our main contribution is to answer the above question in the affirmative. Specifically, we design constant factor approximation algorithms for \(k\)-centers, \(k\)-means, \(k\)-medians with \(\tilde{O}(k)\)3 point queries to the strong oracle. For MST, we design an algorithm that achieves a \(O(\sqrt{\log n})\) approximation without _any_ strong oracle queries. For both problems, we prove matching or nearly matching lower bounds, demonstrating the optimality of our algorithms. Our results for \(k\)-clustering hold for any corruption probability \(\delta\in(0,1/2)\) bounded away from \(1/2\) by a constant, and for MST our results hold for any \(\delta\in(0,1)\) bounded away from \(1\) by a constant.
Footnote 3: Throughout, we write \(\tilde{O}\) to suppress \(\log n\) factors.
Clustering.We begin with our results for \(k\)-clustering. Here, our goal is to produce a set of \(k\)_centers_\(c_{1},\ldots,c_{k}\in\mathcal{X}\), as well as a mapping \(\mathcal{C}:\mathcal{X}\to\{c_{i}\}_{i=1}^{k}\), so as to minimize the \(k\)-clustering cost _with respect to the original metric_\((\mathcal{X},d)\). Recall that for \(k\)-centers, the objective is to minimize \(\max_{p\in\mathcal{X}}d(p,\mathcal{C}(p))\). For the other two objectives, the goal is to minimize \(\sum_{p\in\mathcal{X}}d^{q}(p,\mathcal{C}(p))\), where \(q=1\) for \(k\)-median and \(q=2\) for \(k\)-means. Our results for \(k\)-clustering tasks are as follows:
**Theorem 1 and 8** (Clustering Upper Bounds).: _There exists algorithms in the weak-strong oracle model that, with high probability, obtain \(O(1)\) approximations to \(k\)-centers, \(k\)-means, and \(k\)-median. The algorithms use \(\tilde{O}(k)\) strong oracle point queries, or \(\tilde{O}(k^{2})\) edge queries, and run in time \(\tilde{O}(nk)\)._
Despite the similar query-complexities, the clustering algorithms from Theorems 1 and 8 require very different techniques. Moreover, since it is NP-Hard to give better than a \(2\) approximation to any of the above clustering tasks [31, 13, 14], our algorithm's approximations are optimal up to a constant. Next, we show that the query complexity of our algorithms are nearly optimal, even for arbitrarily large approximations, which settles the complexity of this problem up to \(\log n\)-factors.
**Theorem 25** (Clustering Lower Bound).: _Any algorithm which obtains a multiplicative \(c\)-approximation, for any approximation factor \(c\), with probability at least \(1/2\), to either \(k\)-centers, \(k\)-means, or \(k\)-medians, must make at least \(\Omega(k^{2})\) strong oracle edge queries, or \(\Omega(k)\) strong oracle point queries._
Minimum Spanning Tree.In the classic metric MST problem, the goal is to produce a spanning tree \(T\) of the points in \(\mathcal{X}\) so as to minimize the weight of the tree in the original metric \((\mathcal{X},d)\): namely \(w(T)=\sum_{\deg{(x,y)}\in T}d(x,y)\). We consider the problem in two settings, corresponding to whether or not the weak oracle distances \(\tilde{d}:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) are themselves a metric over \(\mathcal{X}\). We refer to the case where \((\mathcal{X},\tilde{d})\) is restricted to being a metric as the _metric-weak oracle_ setting. This
setting is especially motivated by weak oracles which are themselves embedding models, such as lighter-weight embeddings or pre-computed embeddings trained on stale or possibly inaccurate data. We demonstrate that, perhaps surprisingly, given a metric weak oracle, we can obtain a good approximation to the optimal MST without resorting to the strong oracle at all.
**Theorem 15**.: _There is an algorithm that, given only access to the distances \(\tilde{d}\) produced by a metric weak oracle (namely, \((\mathcal{X},\tilde{d})\) is metric), produces a tree \(\hat{T}\) such that \(\mathbb{E}[w(\hat{T})]\leq O(\sqrt{\log n})\cdot\min_{T}w(T)\)._
A natural question that arises following our MST algorithm is whether a constant approximation is possible, perhaps by allowing for a small number of strong oracle queries as well. We demonstrate, however, that this is impossible in a strong sense: any algorithm that achieves a better than \(O(\sqrt{\log n})\) approximation must essentially query the strong oracle for all the distances in \(\mathcal{X}\).
**Theorem 26**.: _There exists a constant \(c\) such that any algorithm that outputs a spanning tree \(\hat{T}\) such that \(\mathbb{E}[w(\hat{T})]\leq c\sqrt{\log n}\cdot\min_{T}w(T)\) must make at least \(\Omega(n/\sqrt{\log n})\) queries to the strong oracle, and this holds even when \(\tilde{d}:\mathcal{X}^{2}\to\mathbb{R}\) is restricted to being a metric._
Thus, Theorems 15 and 26 prove tight bounds for the approximation of MST in the metric weak oracle setting. A final question is whether a \(O(\sqrt{\log n})\) approximation is possible without the metric restriction on \(\tilde{d}\). We demonstrate that this too is impossible, by proving a \(\Omega(\log n)\) lower bound on the approximation in the general case. Taken with our upper bound in Theorem 15, this proves a strong separation between the metric and non-metric weak oracle models.
**Theorem 29**.: _There exists a constant \(c\) such that any algorithm which outputs a spanning tree \(\hat{T}\) such that \(\mathbb{E}[w(\hat{T})]\leq c\log n\cdot\min_{T}w(T)\) must make at least \(\Omega(n)\) point queries to the strong oracle._
We conjecture that the lower bound from Theorem 29 is tight, and that an algorithm exists with _no_ strong oracle queries and a \(O(\log n)\) approximation. We leave this as an open question for future work to determine the exact query complexity of non-metric MST in the weak-strong oracle model.
Experiments.We empirically evaluate the performance of our algorithms on both synthetic and real-world datasets. For the synthetic data experiments, we use the extensively studied Stochastic Block Model [30, 19, 16, 1, 2, 28, 42], which has a natural interpretation as clustering with faulty oracles [37]. For the real-world dataset, we run experiments to cluster embeddings of the MNIST dataset [17]; specifically, we consider both the SVD and t-SNE embeddings [48]. Our experiments demonstrate that our algorithms achieve clustering costs that are competitive with standard benchmark algorithms that have access to strong oracle queries on the _entire dataset_, while our algorithms only make strong oracle queries on a small fraction of the points (i.e. 1-2% of the points). Furthermore, we show that benchmark algorithms with _no_ strong oracle queries produce much significantly worse clusterings than our algorithms, demonstrating the necessity of exploiting the strong oracle.
### Other Related Work
Our paper studies metric optimization problems where we have easy access to corrupted distances, but accessing the true distances is expensive. This is closely related to both _active learning_ and _clustering under budget constraints_, which limit the number of pair-wise comparisons. For the two oracle setting, active learning with both weak and strong labelers [51, 50], as well as active learning with diverse labelers [32] have been studied. In the budget constrained clustering case, a line of work considered spectral clustering on partially sampled matrices [23, 45, 49], and [24] devise correlation
clustering algorithms with approximation depending on the query budget. Two other closely related lines of work are _clustering with noisy oracles_ and _algorithms with predictions_ (see [41] for a survey of the latter4 ). Many tasks, including correlation clustering and signed edge prediction [37, 40, 26], \(k\)-clustering [7, 5, 43, 3, 20], and MST [21, 9], have been studied.
Footnote 4: Also see the website [https://algorithms-with-predictions.github.io/](https://algorithms-with-predictions.github.io/)
The key difference between all the aforementioned settings and ours is that they are given immediate access to the true similarities (i.e. \((\mathcal{X},d)\) for us), and their noisy queries provide access to the _optimal_ clustering (or ground truth labels); for instance, their oracles can be asked queries like "should \(x\) and \(y\) be clustered together"? Comparatively, in our setting the strong oracle simply provides non-noisy access to the input distances. For such oracles, perhaps the most closely related work is the recent paper [46], which studies _correlation clustering_ with weak and strong oracles akin to ours. However, their model is limited to correlation clustering (where the input is a graph with binary labels), whereas our model is based in a metric space, and thus captures the entire span of metric optimization algorithms. For the setting where we only employ a weak oracle (such as our MST algorithm), perhaps the most closely related work is [36], which studies finding nearest neighbors when distances are corrupted by Gaussian noise, which is an incomparable noise setting to ours.
## 2 Preliminaries
A full instance of the weak-strong oracle model is specified by the triple \((\mathcal{X},d,\tilde{d})\), where \(\tilde{d}:\mathcal{X}^{2}\to\mathbb{R}\) are the distances returned by the weak oracle. We write \(\textsc{Corrupt}\subset\binom{n}{2}\) to denote the set of "corrupted" distances (where \(\tilde{d}(x,y)\neq d(x,y)\)). We allow the values of \(\tilde{d}(x,y)\) for \((x,y)\in\textsc{Corrupt}\) can be chosen arbitrarily and by an adversary who knows the full metric \((\mathcal{X},d)\) as well as the set Corrupt.
We write \(\Delta\geq 1\) to denote the _aspect ratio_ of the original metric space \((\mathcal{X},d)\). Without loss of generality (via scaling), we can assume that \(1\leq d(x,y)\leq\Delta\) for all \(x,y\in\mathcal{X}\). Note that this bound only applies to the strong oracle - the weak oracle distances \(\tilde{d}\) can of course be arbitrarily larger than \(\Delta\) or smaller than \(1\). Throughout, we will assume that the aspect ratio is polynomially bounded, namely that \(\Delta\leq n^{c}\) for any arbitrarily large constant \(c\geq 0\) - a natural assumption in the literature. We discuss the generalization of our work to arbitrary aspect ratio in Appendix A.2.
Our algorithms in Sections 3 and 4 will only use strong oracle point queries; since any algorithm that makes \(\tilde{O}(k)\) point queries can be transformed into an algorithm that makes at most \(\tilde{O}(k^{2})\) edge queries (simply by querying all stances between the set \(S\subset\mathcal{X}\) of point-queries), the edge query complexity follows as a corollary. Thus, in what follows, "strong oracle query" refers to strong oracle point queries, and edge queries will be explicitly specified as such. For simplicity, we present our clustering algorithms for the case of \(\delta=1/3\), and describe in Appendix B how we can generalize our algorithms for \(\delta\in(0,1/2)\).
**Notation.** For a set \(S\subset\mathcal{X}\), we write \(d(u,S)=\min_{y\in S}d(x,y)\), and we define \(\widetilde{d}(u,S)\) analogously. For a metric space \((\mathcal{X},d)\), a point \(x\in\mathcal{X}\), and a radius \(r>0\), we write \(\mathcal{B}_{d}(x,r)=\{y\in\mathcal{X}\mid d(x,y)\leq r\}\) to denote the closed metric ball centered at \(x\) with radius \(r\) under \(d\). When \(d\) is the original metric \((\mathcal{X},d)\), we simply write \(\mathcal{B}(x,r)=\mathcal{B}_{d}(x,r)\). We write \(\textsc{OPT}_{\text{k-center}}(d)\), \(\textsc{OPT}_{\text{k-means}}(d)\), and \(\textsc{OPT}_{\text{k-median}}(d)\) to denote the optimal clustering cost of \(k\)-center, \(k\)-means and \(k\)-median on \(\mathcal{X}\) with distance metric \(d\). When the problem of study is clear, we also simply use \(\textsc{OPT}\) to denote the optimal cost. For the metric minimum spanning tree problem, given a tree \(T=(\mathcal{X},E)\) spanning the points in \(\mathcal{X}\), we write \(w_{d}(T):=\sum_{(x,y)\in E}d(x,y)\) to denote the cost of the tree in the metric \(d\), and
\(w_{\tilde{d}}(T):=\sum_{\text{edge}(x,y)\in T}\tilde{d}(x,y)\) to denote the cost with respects to \(\tilde{d}\). For clarity, we will sometimes write \(w(T)=w_{d}(T)\) and \(\tilde{w}(T)=w_{\tilde{d}}(T)\).
## 3 \(k\)-Center Clustering in the Weak-Strong Oracle Model
We first consider the problem of \(k\)-center clustering, where the goal is to find \(k\) centers \(c_{1},\ldots,c_{k}\in\mathcal{X}\) and a mapping \(\mathcal{C}:\mathcal{X}\to\{c_{i}\}_{i=1}^{k}\) such that the maximum distance of a point in \(\mathcal{X}\) to its respective center, namely \(\max_{p\in\mathcal{X}}d(p,\mathcal{C}(p))\), is minimized. We give an algorithm for this problem that achieves \(O(1)\)-approximate solution using \(\widetilde{O}(k)\) strong oracle queries.
**Theorem 1**.: _For any \(\varepsilon>0\) and metric space \((\mathcal{X},d)\), there is an algorithm in the weak-strong oracle model that, with high probability, obtains a \((14+\varepsilon)\)-approximation to the k-center problem using \(O(k\log^{2}n\cdot\log\frac{\log n}{\varepsilon})\) strong oracle queries, \(O(kn\log^{2}n\cdot\log\frac{\log n}{\varepsilon})\) weak oracle queries, and running in time \(\tilde{O}(nk\log\frac{1}{\varepsilon})\)._
At a high level, our algorithm is a recursive procedure that, at each step, attempts to correctly cluster (up to a constant approximation) all of the points \(p\) whose optimal cluster contains at least a \(\frac{1}{10k}\)-fraction of the current set of points. We then remove these clustered points and recurse on the un-clustered points. In what follows, we suppose that we are given a number \(R\) such that \(2\mathsf{OPT}\leq R\leq(2+\varepsilon)\mathsf{OPT}\) (we will later find \(R\) by guessing it in powers of \((1+\varepsilon)\)).
Let \(\mathcal{B}_{1}^{*},\ldots,\mathcal{B}_{k}^{*}\) be the optimal clusters. Our algorithm first samples a set \(S\) of \(O(k\log n)\) points uniformly from \(\mathcal{X}\) and queries the strong point oracle on \(S\); by standard concentration bounds, for any optimal cluster \(\mathcal{B}_{i}^{*}\) containing at least a \(\frac{1}{10k}\)-fraction of the points, \(S\) will contain at least \(\Omega(\log n)\) points from \(\mathcal{B}_{i}^{*}\) with high probability. We call such a cluster _heavy_. Our goal will be to cluster the points in each heavy ball \(\mathcal{B}_{i}^{*}\), remove the clustered points, and recurse on the remaining points. Since at least a \(\frac{9}{10}\) fraction of points are in heavy balls, the recursion will complete after \(O(\log n)\) iterations.
The main challenge is now to identify the points that belong to a heavy cluster \(\mathcal{B}_{i}^{*}\). Notice that even if we knew the center \(c_{i}^{*}\) of \(\mathcal{B}_{i}^{*}\), we cannot use \(\tilde{d}(x,c_{i}^{*})\) alone to determine if a point \(x\) should be in \(\mathcal{B}_{i}^{*}\), since \(\tilde{d}(x,c_{i}^{*})\) could have been arbitrarily corrupted. Instead, suppose that we were given a sufficiently large set \(U\subset\mathcal{B}_{i}^{*}\) of points. Then we observe that one can estimate \(d(x,c_{i}^{*})\) by using the _median_ of the weak oracle distance between \(x\) and the points in \(U\). Since each distance \(\tilde{d}(x,u)\) is corrupted with probability \(\delta<1/2\) for \(u\in U\), we expect that the median distance should be between \(\min_{u\in U}d(x,u)\) and \(\max_{u\in U}d(x,u)\).
We now give the formal description and analysis of the algorithm as in Algorithm1. For simplicity, we first state the algorithm using monotonically increasing values of \(R\), and we lately describe how this can be sped up via binary searching for a usable value of \(R\).
```
Input: Set of points \(\mathcal{X}\), number of centers \(k\). Output: Clustering \(\mathcal{C}\) with centers \(C=\{c_{1},\cdots,c_{k}\}\) and assignment of each point \(x\in\mathcal{X}\) for\(R=(1+\varepsilon)^{\ell}\), \(\ell\in[O(\log n)]\)do Run sample and cover (Algorithm3) on \(\mathcal{X}\) to obtain candidate centers; Run greedy ball carving (Algorithm2) on candidate centers with threshold \(R\); If number of centers obtained from the above procedure is \(>k\), increase \(\ell\) and repeat.
```
**Algorithm 1**The \(k\)-center algorithm
Before describing the procedures and their guarantees, we first introduce some notations. Let
\(\mathcal{B}_{1}^{*}=\mathcal{B}_{1}^{*}(c_{1}^{*},\mathsf{OPT}),\cdots,\mathcal{B} _{k}^{*}=\mathcal{B}_{k}^{*}(c_{k}^{*},\mathsf{OPT})\) centered at \(c_{1}^{*},\cdots,c_{k}^{*}\) be the clusters corresponding to the optimal k-center solution on \((\mathcal{X},d)\). We define the "cover" of a point \(x\in\mathcal{X}\) as follows.
**Definition 1**.: _For any pair of point \((x,y)\in\mathcal{X}\), we say \(y\) is covered by the ball \(\mathcal{B}(x,r)\) centered at point \(x\) with radius \(r\) if \(d(x,y)\leq r\) (namely, if \(y\in\mathcal{B}(x,r)\))._
We also define the following notion of "heavy ball" that captures the balls with a sufficiently large number of points inside.
**Definition 2**.: _Let \((\mathcal{X},d)\) be a metric space, and fix any \(x\in\mathcal{X}\) and \(r>0\). We say that the ball \(\mathcal{B}(x,r)\) is heavy if \(|\mathcal{B}(x,r)\cap\mathcal{X}|\geq\frac{n}{10k}\)._
For \(\ell\leq k\), denote the corresponding set of _heavy balls_ from optimal solution for \(k\)-center by
\[\mathcal{B}_{\mathcal{H}}^{*}=\{\mathcal{B}_{1}^{*}(c_{1}^{*},\mathsf{OPT}), \cdots,\mathcal{B}_{\ell}^{*}(c_{\ell}^{*},\mathsf{OPT})\}.\]
**Observation 2**.: _The union of all heavy balls from \(\mathcal{B}_{H}^{*}\) cover at least \(\frac{9n}{10}\) points in \(\mathcal{X}\)_
Proof.: Consider the balls in optimal solution of \(k\)-center on \((\mathcal{X},d)\) that are not heavy as per definition 2. There are at most \(k-1\) such balls each covering at most \(\frac{n}{10k}\) points in \(\mathcal{X}\) which amounts to at most \(n/10\) points. As a result, heavy balls cover \(\frac{9n}{10}\) points in \(\mathcal{X}\).
We define the "cover" of set of points by a set of balls as follows.
**Definition 3**.: _For a set of points \(U\), we say \(U\) is covered by \(\mathcal{B}_{C}=\{\mathcal{B}(c_{1},r_{1}),\cdots,\mathcal{B}(c_{\ell},r_{\ell })\}\), a set of \(\ell\) balls, if for all \(u\in U\) these exists \(\mathcal{B}(c_{i},r_{i})\in\mathcal{B}_{C}\) such that \(\mathcal{B}(c_{i},r_{i})\) covers \(u\)._
We first describe the procedures "Greedy Ball Carving" and "Sample and Cover" in greater detail, whose guarantees will ultimately be used to complete the proof for theorem 1. We start by assuming knowledge of an estimate \(2\mathsf{OPT}\leq R\leq 2(1+\varepsilon)\mathsf{OPT}\). In the proof for theorem 1 we will guess \(R\) in powers of \((1+\varepsilon)\).
We first discuss the procedure greedy ball carving (Algorithm 2), which serves as a witness in estimating \(R\). For a set of points with strong oracle queries and some \(R\), the procedure first randomly picks a point as center. Removing all points in the set that are at distance \(\leq R\) from center, it recurses on the remaining points. We will use the centers generated by Algorithm 2 in identification of heavy balls and points within.
```
Input: Set of points \(S\), radius \(R\) Output: Set of centers \(C=\{c_{1},\cdots,c_{m}\}\) and assignment of \(s\in S\) to respective centers. Init:\(C=\{\}\) while\(S\) is not emptydo Pick arbitrary point \(c\in S\) Treating \(c\) as center, assign \(s\) to \(c\) if \(d(c,s)\leq R\) for all \(s\in S\). Add \(c\) to \(C\) and remove assigned points.
```
**Algorithm 2**Greedy Ball Carving
Note that greedy ball carving will _always_ be used by our algorithm on sets with strong oracle queries and as a result operates with true distances. The following observation helps us to use greedy ball carving as a witness for guessing \(R\), as discussed in appendix A.2.
**Observation 3**.: _If \(\mathsf{OPT}\) is the optimal k-center cost for \((\mathcal{X},d)\) and \(2\mathsf{OPT}\leq R\), then Algorithm 2 run on any subset of points \(S\subseteq\mathcal{X}\), with radius \(R\) returns \(m\leq k\) centers._
Proof.: The proof is by contradiction. If the number of centers returned by Algorithm 2 is \(m>k\), this implies there are \(k+1\) points with pairwise distance more than \(R\) which contradicts the assumption \(\mathsf{2OPT}\leq R\).
We now describe the recursive procedure sample and cover (Algorithm 3) which at each step aims to cluster points in each heavy balls.
```
Input: Set of points \(\mathcal{X}\), weak distance oracle \(\mathsf{WO}\), strong oracle \(\mathsf{SO}\), estimate \(R\) Output: Clustering \(\mathcal{C}\) with centers \(C=\{c_{1},\cdots,c_{m}\}\) and assignment of each point \(c\in\mathcal{X}\), and \(\mathcal{X}_{\mathsf{SO}}\) the set of points with \(\mathsf{SO}\) Init:\(C=\{\}\) and \(\mathcal{X}_{\mathsf{SO}}=\{\}\) while\(\mathcal{X}\) is not emptydo Sample \(|S|=100k\log n\) and \(|T|=2000k\log n\) points u.a.r. from \(\mathcal{X}\) Query \(\mathsf{SO}\) for \(S\cup T\) and set \(\mathcal{X}=\mathcal{X}\setminus\{S\cup T\}\). Step 1. Run Algorithm 2 on \(S\) with radius \(R\) and let the set of centers obtained be \(C\) Check if \(|C|>k\) Step 2. Identify complete balls for \(c_{i}\in C\) using \(T\) and add \(T\cup\{S\setminus C\}\) to \(\mathcal{X}_{\mathsf{SO}}\). for\(x\in\mathcal{X}\setminus\{S\cup T\}\)do Step 3. Compute \(d^{\text{est}}(x,c_{i})=\mathsf{Median}\{\widetilde{d}(x,y)|y\in T\cap \mathcal{B}(c_{i},3R)\}\) for all complete \(\mathcal{B}(c_{i},3R)\). Step 4. Call \(x\) covered by \(\mathcal{B}(c_{i},3R)\) if \(d^{\text{est}}(x,c_{i})\leq 6R\) and assign \(x\) to \(c_{i}\). Remove all covered points form \(\mathcal{X}\).
```
**Algorithm 3**The recursive sample and cover procedure
We will prove the approximation guarantee for the set of candidate centers generated by Algorithm 3 in the following order, corresponding to respective steps in Algorithm 3.
1. Centers returned by Algorithm 2 on set \(S\) can be used to identify heavy balls from \(\mathcal{B}^{*}_{\mathcal{H}}\). Step 2. \(T\) provides sufficient points in heavy balls identified in Step 1 for distance estimation. Step 3. Distance estimated using heavy balls is accurate enough for constant factor approximation. Step 4. Using heavy balls and distances from step 3, we cover a constant fraction of points each iteration.
Step 1.The combined objective of step 1 and 2 in the algorithm is to identify the heavy balls and the points belonging to these heavy balls. We by start sampling two sets of points (\(S\) and \(T\)), each of size \(O(k\log n)\) uniformly at random from \(\mathcal{X}\) and querying strong oracle on them. We first make following observations that help formalize the process of identification of heavy balls.
**Observation 4**.: _Let \(n\) be the number of points in \(\mathcal{X}\) and let \(S\) be a set of \(100\cdot k\log n\) points sampled uniformly at random from \(\mathcal{X}\). Then, with high probability, for every \(\mathcal{B}^{*}(c_{i}^{*},\mathsf{OPT})\in\mathcal{B}^{*}_{H}\), there are at least \(\log n\) points in \(S\) covered by \(\mathcal{B}^{*}(c_{i}^{*},\mathsf{OPT})\), i.e. \(|\mathcal{B}^{*}(c_{i}^{*},\mathsf{OPT})\cap S|\geq\log n\)._
Proof.: As each heavy \(\mathcal{B}^{*}_{i}(c_{i}^{*},\mathsf{OPT})\in\mathcal{B}^{*}_{H}\) covers at least \(n/10k\) points in \(\mathcal{X}\), in expectation we have at least \(10\log n\) points in \(S\) that are covered by \(\mathcal{B}^{*}_{i}(c_{i}^{*},\mathsf{OPT})\). By Chernoff bounds, for \(\mathcal{B}^{*}_{i}(c_{i}^{*},\mathsf{OPT})\)
\[\Pr\left(|\mathcal{B}^{*}_{i}(c_{i}^{*},\mathsf{OPT})\cap S|\leq\log n\right) \leq\exp\left(-\frac{81}{200}\cdot 10\log n\right)\leq n^{-4}.\]
Using union bound over at most \(k\) heavy balls gives us that each \(\mathcal{B}^{*}(c_{i}^{*},\mathsf{OPT})\in\mathcal{B}_{H}^{*}\) covers at least \(\log n\) points in \(S\) w.p. \(1-k/n^{4}\).
This tells us that \(S\) has at least \(\log n\) points from each heavy ball in optimal solution w.h.p. We now use Algorithm 2 to cover all points in \(S\) with at most \(k\) centers using radius \(R\), and use the centers to form heavy balls. Let \(C\) be the set of centers that are returned by Algorithm 2 for set \(S\). For each \(c_{i}\in C\), consider the balls \(\mathcal{B}(c_{i},3R)\). As \(2R+\mathsf{OPT}\leq 3R\), using \(3R\) as radius for \(\mathcal{B}(c_{i},3R)\) ensures that the union of these balls cover all points that are covered by \(\mathcal{B}_{H}^{*}\) (\(\frac{9n}{10}\) points in \(\mathcal{X}\)). We now define \(\mathcal{B}_{\mathcal{H}}\) as the set of \(\mathcal{B}(c_{i},3R)\) that are heavy by definition 2 among all \(c_{i}\in C\), i.e.,
\[\mathcal{B}_{H}=\{\mathcal{B}(c_{i},3R)\,\mid\,c_{i}\in C,\,|\mathcal{B}(c_{i},3R)|\geq\frac{n}{10k}\}. \tag{1}\]
Following similar argument as Observation 2, non-heavy balls \(\mathcal{B}(c_{i},3R)\) covers at most \(n/10\) points in \(\mathcal{B}_{H}^{*}\), which equates to \(\mathcal{B}_{\mathcal{H}}\) covering at least \((\frac{9n}{10}-\frac{n}{10})=\frac{8n}{10}\) points in \(\mathcal{X}\).
Step 2.We now argue that \(T\) has enough points for each heavy ball in \(\mathcal{B}_{H}\), which can be used for distance estimation for the remainder of points in \(\mathcal{X}\). We show that it suffices to have \(O(\log n)\) points in \(T\) that are covered by a heavy ball to get a good estimate to its center. Following this, we call \(\mathcal{B}(c_{i},3R)\)_'complete'_ if \(T\) contains at least \(100\log n\) points that are covered by \(\mathcal{B}(c_{i},3R)\). Note that this can be verified by the algorithm as we use strong oracle queries for \(S\) and \(T\). If any of the points from \(S\) or \(T\) are uncovered, we just add them to the list of candidate centers (complete balls centers), and since we run greedy ball carving at the end on candidate centers they will be covered. We start by proving the following claim.
**Claim 5**.: _For the set of centers \(C\) returned by Algorithm 2 on \(S\) and \(|T|=2000\cdot k\log n\) set of points sampled uniformly at random from \(\mathcal{X}\), every \(\mathcal{B}(c_{i},3R)\in\mathcal{B}_{H}\) for \(c_{i}\in C\) is complete w.h.p._
Proof.: The argument is similar to Observation 4, where we first show for a particular \(c_{i}\in C\) and then union bound over at most \(k\) such centers. From eq (1), in expectation \(T\) has at least \(200\log n\) points covered by a heavy \(\mathcal{B}(c_{i},3R)\). Using Chernoff bounds,
\[\Pr\left(|\mathcal{B}_{i}(c_{i},3R)\cap T|\leq 100\log n\right)\leq\exp\left(- \frac{200}{8}\log n\right)\leq n^{-25}.\]
Using union bound over at most \(k\) centers, w.p. \(1-(k/n^{25})\) every heavy \(\mathcal{B}(c_{i},3R)\) is complete.
Step 3.At this stage we have at most \(k\) complete balls \(\mathcal{B}(c_{i},3R)\), each with \(O(\log n)\) points. We now use median of weak oracle queries to these points in each heavy ball for distance estimation of remaining points \(x\in\mathcal{X}\setminus S\cup T\). For each \(x\in\mathcal{X}\setminus\{S\cup T\}\) and each complete \(\mathcal{B}(c_{i},3R)\), let \(d^{\text{est}}(x,c_{i})=\mathsf{Median}\{\widetilde{d}(x,q)|q\in T\cap\mathcal{ B}(c_{i},3R)\}\) where \(\widetilde{d}(v,q)\) denotes the weak oracle queries. These distance estimates form the 'workhorse' of sample and cover for assigning points to heavy balls. The following lemma quantifies the accuracy of the estimated distance.
**Lemma 6**.: _Fix any point \(u\in\mathcal{X}\), radius \(r\geq 0\), and set \(U\subset\mathcal{B}(u,r)\) with \(|U|=\Omega(\log n)\). For any \(x\in\mathcal{X}\), define \(d^{\text{est}}(x,u)=\mathsf{Median}\{\widetilde{d}(x,q)|q\in U\}\). Then with high probability, for all \(x\in\mathcal{X}\) we have \(|d^{\text{est}}(x,u)-d(x,u)|\leq r\)._
Proof.: Recall that the weak oracle returns arbitrary value for \(\widetilde{d}(x,y)\) w.p. \(1/3\), independently for each \(x,y\in\mathcal{X}\). For \(|U|\geq c\cdot\log n\) for a sufficiently large constant \(c\), in expectation \(\frac{2}{3}c\cdot\log n\) weak oracle calls are not corrupted. For using median as an estimate, we only need more than half queries
to not be corrupted. Let \(\mathbf{X}_{u}\) denote the number of uncorrupted weak oracle queries \(U\). Using Chernoff bounds,
\[\Pr\left(\mathbf{X}_{u}\leq\frac{c}{2}\log n\right)\leq\exp\left(-\frac{c\log n}{ 16\cdot 3}\right)\leq n^{-c^{\prime}}\]
Thus, for sufficiently large \(c\), with high probability the median distance lies between \(\min_{u\in U}d(x,u)\) and \(\max_{u\in U}d(x,u)\). It follows from union bound over at most \(n\) points and triangle inequality that \(|d^{\text{est}}(x,u)-d(x,u)|\leq r\) for all \(x\in\mathcal{X}\).
Considering Lemma 6 for estimating distance to centers of complete balls, for each complete \(\mathcal{B}(c_{i},3R)\) we have \(200\cdot\log n\) points in \(T\). For a particular complete \(\mathcal{B}(c_{i},3R)\) we have w.p. \(n^{-4}\), \(|d^{\text{est}}(x,c_{i})-d(x,c_{i})|\leq 3R\) for \(x\in\mathcal{X}\setminus\{S\cup T\}\). Taking union bound over at most \(k\) complete balls and distance estimates for at most \(n\) points, the estimation guarantee holds w.p. \(1-\frac{k}{n^{3}}\) for all distance estimates of \(x\in\mathcal{X}\setminus\{S\cup T\}\) to all complete balls \(\mathcal{B}(c_{i},3R)\).
Step 4.With the distance estimates from Lemma 6, the algorithm assigns \(x\) to a complete \(\mathcal{B}(c_{i},3R)\) if \(d^{\text{est}}(x,c_{i})\leq 6R\), with ties broken arbitrarily. Recall that union of all complete \(\mathcal{B}(c_{i},3R)\) balls cover \(8/10\) fraction of points in \(\mathcal{X}\). As the distance estimates are accurate upto \(\pm 3R\) and we assign if distance to centers is at most \(6R\), at each step \(8/10\) fraction of points are covered by the complete balls. This is achieved by making \(O(k\log n)\) strong oracle queries (recall the algorithm uses strong oracle queries only for set \(S\) and \(T\)).
Thus, at each iteration we cover a constant fraction of points in \(\mathcal{X}\) using at most \(k\) centers. For covering all points in \(\mathcal{X}\), the recursion will proceed for \(O(\log n)\) rounds, generating \(O(k\log n)\) centers. We now prove the approximation guarantee obtained by these set of centers.
**Lemma 7**.: _Suppose \(2\mathsf{OPT}\leq R\leq 2\cdot(1+\varepsilon)\mathsf{OPT}\), then with high probability, Algorithm 3 computes a set of \(O(k\log n)\) centers and the clustering assignment for each point \(x\in\mathcal{X}\) using \(O(k\log^{2}n)\) strong oracle queries, \(O(nk\log^{2}n)\) weak oracle queries such that each point is at most \(12\cdot(1+\varepsilon)\mathsf{OPT}\) from its respective center._
Proof.: At each iteration, Algorithm 3 produces at most \(k\) centers that cover constant fraction of points in \(\mathcal{X}\) and the procedure runs for \(O(\log n)\) iterations. The total number of strong oracle queries we make is \((\log n\cdot(|S|+|T|))=O(k\log^{2}n)\). For number of weak oracle queries, note at each iteration we have \(O(\log n)\) points in at most \(k\) complete balls. For estimating distances of at most \(n\) remaining points at each iteration we make \(O(nk\log n)\) weak oracles queries. For \(O(\log n)\) such iterations, we make total \(O(nk\log^{2}n)\) weak oracle queries.
At the end of recursion, any point \(x\in\mathcal{X}\) is at most \(6R\) from respective center in set of \(O(k\log n)\) centers, and \(2\mathsf{OPT}\leq R\leq 2\cdot(1+\varepsilon)\mathsf{OPT}\), this gives us \(12\cdot(1+\varepsilon)\)-approximate solution.
In order to obtain a solution for \(k\)-center problem, we now run Algorithm 2 on the set of \(O(k\log n)\) centers (also uncovered points with strong oracle query, if any, from \(S\) and \(T\) at any iteration).
From Observation 3 we obtain at most \(k\) centers. Using the approximation guarantee from Lemma 7 and additional iterations for estimation of \(R\), we now wrap up the proof for theorem 1.
Proof of theorem 1.: We first analyze the total number of strong and weak oracle queries used by the algorithm, by looking at additional overhead from estimating \(R\). First, while there are \(O(\frac{1}{\varepsilon}\log n)\) possible values of \(R\) to try, instead of trying each in increasing order as in theorem 1, we claim that we
can binary search to find the correct value. Specifically, for the range \(r\in\{1,(1+\varepsilon),(1+\varepsilon)^{2},\ldots,\Delta\}\), we simply need to find a value of \(R\) such that running Algorithm 1 on that value of \(R/(1+\varepsilon)\) returns more than \(k\) centers, (thus implying that \(R<2(1+\varepsilon)\mathsf{OPT}\)), and such that running Algorithm 1 on \(R\) returns at most \(k\) centers (thus implying that we get a valid solution from Algorithm 1 with a value of \(R\)). Importantly, this need not be the smallest \(R\) such that the above occurs. Thus, we can find such an \(R\) via binary search, which requires a total of \(O(\log(\frac{\log n}{\varepsilon}))\) rounds of running Algorithm 1. Since each run of Algorithm 1 uses \(O(k\log^{2}n)\) strong oracle queries and \(O(nk\log^{2}n)\) weak oracle queries, after the binary search it follows that the total query complexity is \(O(k\log^{2}n\log\left(\frac{\log n}{\varepsilon}\right))\) strong oracle queries and \(O(nk\log^{2}n\log\left(\frac{\log n}{\varepsilon}\right))\) weak oracle queries.
Now we look at the approximation factor. Let \(R=(1+\varepsilon)^{j}\) be such a value found via the above binary search. As above, we have that \(R\leq 2(1+\varepsilon)\mathsf{OPT}\), and that we obtained a valid solution from Algorithm 1 with a value of \(R\). For covering the points, since we assign points to a complete \(\mathcal{B}(c_{i},3R)\) with distance to center at most \(6R\) and later run Algorithm 2 with threshold \(R\) on \(O(k\log n)\) centers, any point is at most \(7R\) away from the center which gives a \(14(1+\varepsilon)\)-approximate solution (since \(R\leq 2(1+\varepsilon)\mathsf{OPT}\)).
For analyzing the run-time of our algorithm, we first look at one iteration of our algorithm. Note, greedy ball carving on a set of \(O(k\log n)\) points for at most \(k\) centers takes \(O(k^{2}\cdot\operatorname{polylog}n)\) time. Then, conditioning on the high probability event of Claim 5, for at most \(k\) complete balls with \(O(\log n)\) points each, estimating median of remaining points to respective centers takes \(O(nk\cdot\operatorname{polylog}n)\) time. At most \(O(\log n)\) such iterations coupled with iterations for estimating \(R\), the total run time is \(O\left(k^{2}\cdot\operatorname{polylog}n\log\left(\frac{\log n}{\varepsilon} \right)+nk\cdot\operatorname{polylog}n\log\left(\frac{\log n}{\varepsilon} \right)\right)=\tilde{O}\left(nk\log\left(\frac{1}{\varepsilon}\right)\right)\).
Theorem 1\(\Box\)
## 4 \(k\)-Means and \(k\)-Median Clustering in the Weak-Strong Oracle Model
We now consider algorithms for the \(k\)-means and \(k\)-median clustering problems, which will differ significantly from our \(k\)-centers algorithm. Our main algorithmic result is as follows.
**Theorem 8**.: _There exists an algorithm that given a metric space \((\mathcal{X},d)\) and the oracles under the weak-strong oracle model, with high probability computes an \(O(1)\)-approximate solution to the \(k\)-means and \(k\)-median problems using \(O(k\log^{2}n)\) strong oracle queries and \(O(nk\log^{2}n)\) weak oracle queries. Furthermore, the algorithms runs in \(O((nk+k^{3})\cdot\operatorname{polylog}n)\) time._
The algorithm proceeds by first constructing a _coreset_\(S\subset\mathcal{X}\) of at most \(O(k\log^{2}n)\) points, which contains a set of \(k\) centers that are \(O(1)\)-approximation to \(\mathsf{OPT}\). To build the coreset \(S\), we arbitrarily order the points, and for point \(p\) (in order), we sample \(p\) into \(S\) with probability proportional the the distance \(d(p,S)\). Since the weak-oracle distances can be arbitrarily corrupted, we cannot naively use the weak oracle to compute the \(d(p,S)\). Instead, we design a _proxy distance_ which we will use to estimate \(d(p,S)\), based on taking medians of weak-oracle distances from \(p\) to pair-wise close subsets of points in \(S\). Specifically, we define the "heavy balls distance" as follows:
**Definition 4** (Heavy-ball nearest distance).: _Let \(\mathcal{X}\) be a set of points and \(d\) be the underlying metric. Furthermore, let \(y\in\mathcal{X}\) be a point and \(S\subseteq\mathcal{X}\) be a set of points such that \(y\not\in S\) and \(|S|\geq 100\log n\). For any vertex \(x\in S\) and radius \(r>0\), we set \(Q(x,S,y,r)=\mathsf{Median}\{\widetilde{d}(z,y)\mid z\in S,d(x,z)\leq r\}+6 \cdot r\)
and define the heavy-ball nearest distance as_
\[Q(y,S)=\min_{\begin{subarray}{c}x\in S,r>0\\ |\mathcal{B}(x,r)\cap S|\geq 100\log(n)\end{subarray}}Q(x,S,y,r).\]
We prove that, with high probability, we always have \(Q(y,s)\geq d(y,S)\). In order to compute the heavy-ball distance, we will need to know the distance between all pairs of points within \(S\), thus we will query the strong oracle on all points added to the coreset \(S\). Our full \(k\)-means (and \(k\)-median) algorithm is then presented in in Algorithm 4. We demonstrate that, for the correct guess of \(\widehat{\mathsf{OPT}}\), the algorithm samples at most \(O(k\log^{2}n)\) points (therefore bounding the query complexity), and that this coreset contains a set of \(k\)-centers which are a \(O(1)\)-approximation for the \(k\)-means (or \(k\)-medians) objective. We can then run a point-weighted \(k\)-clustering algoithm on the coreset \(S\) to obtain this \(O(1)\)-approximate solution.
```
Input: Set of points \(\mathcal{X}\); weak oracle WO, strong oracle SO; estimation of the optimal cost \(\widehat{\mathsf{OPT}}\) Output: A clustering \(\mathcal{C}\), which includes a set of centers \(C=\{c_{1},\cdots,c_{m}\}\) and the assignment of each point \(x\in\mathcal{X}\) Init: Label the points as \(\{x_{1},\cdots,x_{n}\}\) by an arbitrary order; let \(S=\{x_{1}\}\); a counter \(w(x_{1})=1\). Compute the value \(f=\frac{1}{20}\cdot\widehat{\mathsf{OPT}}_{\frac{1}{k\log^{2}n}}\). for\(x_{i}\in\mathcal{X}\)do if\(|S|<100\log n\)then Add \(x_{i}\) to \(S\) and query the strong oracle SO for \(x_{i}\). else Sample \(x_{i}\) with probability \(\min\{1,Q(x_{i},S)/f\}\). if\(x_{i}\) is sampledthen Add \(x_{i}\) to \(S\), i.e. \(S\gets S\cup\{x_{i}\}\), make the strong oracle query SO on \(x_{i}\), and add to the counters \(w(x_{i})=1\). else Assign \(x_{i}\) to the cluster induced by \(x^{\prime}\), which is defined as the center of the ball that attains the heavy-ball nearest distance as in Definition 4. Increase \(w(x^{\prime})\) by \(1\). Run a weighted \(k\)-means (resp. \(k\)-median) clustering algorithm on \(S\), e.g., algorithms of [38, 4].
```
**Algorithm 4**The \(k\)-means (and \(k\)-median) algorithm
The analysis.We now proceed to analyze this algorithm and evetually prove Theorem 8. Algorithm 4 requires an approximation \(\widehat{\mathsf{OPT}}\),of the optimal cost \(\mathsf{OPT}\). We will first assume we have such a value \(\widehat{\mathsf{OPT}}\) satisfying \(2^{i}\leq\widehat{OPT}\leq 2^{i+1}\), and later describe how we can find it via binary search. Specifically, we will terminate any run of Algorithm 4 whenever it samples more than \(1800k\log^{2}n\) points to add to \(S\), and conduct a binary search by maintaining upper and lower bounds of the indices \(i\). In the end, we output the subroutine induced by an \(i^{\star}\) such that \((i).\) the subroutine with \(2^{i^{\star}}\) returns a clustering (i.e. did not terminate) and \((ii).\) the subroutine with \(2^{i^{\star}-1}\) is terminated without return. We then set \(\widehat{\mathsf{OPT}}=2^{i^{\star}}\). We will show that this value of \(\widehat{\mathsf{OPT}}\) satisfies the desired properties. As such, our analysis focus on the case when \(\mathsf{OPT}\leq\widehat{\mathsf{OPT}}\leq 2\mathsf{OPT}\).
For the simplicity of presentation, we focus on the analysis of \(k\)-median as it does not involved the square terms on the distances. We show in Remark 14 how the analysis also works for the \(k\)-means clustering with a slightly larger constant factor.
To proceed with the analysis, we also introduce some self-contained notations used in this analysis. We let \(c_{1}^{*},\cdots,c_{k}^{*}\) be the centers for the optimal \(k\)-median solution, and we let \(r_{1}^{*},r_{2}^{*},\cdots,r_{k}^{*}\) be the _average_ cost of the points in each cluster, i.e.,
\[r_{i}^{*}=\frac{1}{|\{x\mid\mathcal{C}(x)=c_{i}^{*}\}|}\cdot\sum_{x.s.t.\, \mathcal{C}(x)=c_{i}^{*}}d(x,c_{i}^{*}).\]
Based on this, we can define \(B_{i}^{j}\) as the ball centered at \(c_{i}^{*}\) and with distance at most \(2^{j}r_{i}^{*}\). We further define \(A_{i}^{j}\) to be the set of points with distance \((2^{j-1}r_{i}^{*},\ 2^{j}r_{i}^{*}]\) from the center of \(C_{i}\), i.e. an "annulus" between \(B_{i}^{j-1}\) and \(B_{i}^{j}\). We also define the following event:
\(\mathcal{E}_{A_{i,j}\text{-heavy}}\): the set \(A_{i}^{j}\) has at least \(100\log n\) points in \(S\), i.e. \(|A_{i}^{j}\cap S|\geq 100\log n\).
We let the points being sampled in \(S\) before \(\mathcal{E}_{A_{i,j}\text{-heavy}}\) (also denote as \(\neg\mathcal{E}_{A_{i,j}\text{-heavy}}\)) as \(S_{i,j}^{\text{light}}\), and let its complement be \(S_{i,j}^{\text{heavy}}\), i.e. the set of points sampled after \(\mathcal{E}_{A_{i,j}\text{-heavy}}\). In what follows, we will first show the approximation guarantees and the number of centers in \(S\).
**Lemma 9**.: _Suppose \(\text{OPT}_{\text{$k$-median}}\leq\widehat{\text{OPT}}\leq 2\text{OPT}_{ \text{$k$-median}}\). Then, with probability at least \(1-\frac{1}{n^{3}}\), the clustering outputted by \(S\) in Algorithm 4 gives an \(O(1)\)-approximation of the \(\text{OPT}_{\text{$k$-median}}\)._
Proof.: For each of the set \(A_{i}^{j}\), we analyze the cost it pays in the formation of \(S\) by looking at \(\neg\mathcal{E}_{A_{i,j}\text{-heavy}}\) (before \(A_{i}^{j}\) becomes heavy) and \(\mathcal{E}_{A_{i,j}\text{-heavy}}\) (after \(A_{i}^{j}\) becomes heavy), respectively as in Claim 10 and Claim 11.
**Claim 10**.: _For each set \(A_{i}^{j}\), with probability at least \(1-\frac{1}{n^{4}}\), the cost induced by all points in \(A_{i}^{j}\cap S_{i,j}^{\text{light}}\) is at most \(200f\cdot\log n\)._
Proof of Claim 10.: For any point \(x\in A_{i}^{j}\cap S_{i,j}^{\text{light}}\), we define random variable \(X_{x}\) as the indicator random variable for \(x\) to be sampled. Since we sample each point with probability at most \(Q(x,S)/f\), there is
\[\mathbb{E}\left[X_{x}\right]\leq Q(x,S_{:x})/f,\]
where we use \(S_{:x}\) to denote the sampled set before \(x\) is visited. Suppose \(\mathcal{E}_{A_{i,j}\text{-heavy}}\) happens after sampling \(N\) points from \(A_{i}^{j}\), which means up till the \(N-1\) point, we have
\[\mathbb{E}\left[\sum_{i\in[N-1]}X_{x_{i}}\right]\leq\sum_{i\in[N-1]}Q(x,S_{:x _{i}})/f<100\log n.\]
The last inequality holds since we condition on the event that the number of sampled points is (deterministically) less than \(100\log n\). Similarly, we can also get a lower bound on the expectation by using the fact that we are only one point short of reaching \(100\log n\) (assuming \(n\geq 10\)).
\[\mathbb{E}\left[\sum_{i\in[N-1]}X_{x_{i}}\right]\geq 99\log n.\]
On the other hand, note that if a point \(X_{x}\) is not sampled, the cost induced by \(X_{x}\) is exactly \(Q(x,S_{:x_{i}})\). As such, for the induced cost of the points in \(A_{i}^{j}\cap S_{i,j}^{\text{light}}\) to be more than \(200f\cdot\log n\), there must be
\[\sum_{i\in[N-1]}Q(x,S_{:x_{i}})>200f\cdot\log n.\]
Comparing the two inequalities, and using the fact that \(\sum_{i\in[N-1]}X_{x_{i}}\) is a summation of \(0/1\) random variables, we have that for the second inequality to happen, the probability is
\[\Pr\left(\text{cost induced by }A_{i}^{j}\cap S_{i,j}^{\text{light}}>2 00f\cdot\log n\right) =\Pr\left(\sum_{i\in[N-1]}Q(x,S_{:x_{i}})>200f\cdot\log n\right)\] \[\leq\Pr\left(\sum_{i\in[N-1]}X_{x_{i}}\geq 2\cdot\mathbb{E} \left[\sum_{i\in[N-1]}X_{x_{i}}\right]\right)\] \[\leq\exp\left(-\frac{1}{3}\cdot\mathbb{E}\left[\sum_{i\in[N-1]}X _{x_{i}}\right]\right)\] (by multiplicative Chernoff) \[\leq\frac{1}{n^{7}},\]
as desired.
We now turn to the cost of the points in \(A_{i}^{j}\) after the set becomes heavy, i.e. conditioning on \(\mathcal{E}_{A_{i,j}\text{-heavy}}\) happens.
**Claim 11**.: _For each set \(A_{i}^{j}\), with probability at least \(1-\frac{1}{n^{4}}\), the cost induced by all points in \(A_{i}^{j}\cap S_{i,j}^{\text{heavy}}\) is at most \(13\)-multiplicative of the cost induced by \(A_{i}^{j}\cap S_{i,j}^{\text{heavy}}\) on the optimal clustering._
Proof.: Note that the moment \(\mathcal{E}_{A_{i,j}\text{-heavy}}\) happens, we have the ball \(B_{i}^{j}\) becomes heavy as well. As such, by the same argument we used in Lemma 6, the cost of adding any \(x\in A_{i}^{j}\cap S_{i,j}^{\text{heavy}}\) to its nearest heavy-ball cluster is at most \(Q(x,S)\leq d(x,c_{i}^{*})+6\cdot 2^{j}r_{i}^{*}\) with probability at least \(1-\frac{1}{n^{4}}\). Furthermore, we have \(d(x,c_{i}^{*})\geq 2^{j-1}r_{i}^{*}\) by the fact that \(x\in A_{i}^{j}\). Therefore, we can charge the cost of \(x\) induced in \(S\) to at most \(13\cdot d(x,c_{i}^{*})\). This implies a \(13\)-multiplicative approximation for all points in \(A_{i}^{j}\cap S_{i,j}^{\text{heavy}}\), as desired. Claim 11\(\Box\)
We now finalize the proof of Lemma 9. We first argue that we can apply Claim 10 to _all_\(A_{i}^{j}\) sets. Note that there are at most \(O(k\log n)\leq O(n\log n)\) such sets, as for any cluster \(c_{i}^{*}\), there is no point whose distance is more than \(n\,r_{i}^{*}\). Therefore, we can apply a union bound to conclude that with probability at least \(1-\frac{1}{n^{5}}\), the bound of Claim 10 applies to all such \(A_{i}^{j}\) sets. Collectively, they induce at most \(200\cdot fk\log^{2}n\leq 10\,\widehat{OPT}\leq 20\,\mathsf{OPT}_{\text{k- median}}\). Furthermore, by Claim 11, the total cost we induce on \(A_{i}^{j}\cap S_{i,j}^{\text{heavy}}\) for all \(i,j\) is at most \(13\mathsf{OPT}_{\text{k-median}}\). As such, the total cost induced by \(S\) is at most \(O(1)\cdot\mathsf{OPT}_{\text{k-median}}\), as desired.
We now turn to the analysis of the number of centers that are ever sampled in \(S\).
**Lemma 12**.: _Suppose \(\mathsf{OPT}_{\text{k-median}}\leq\widetilde{OPT}\leq 2\mathsf{OPT}_{\text{k- median}}\). Then, with probability at least \(1-\frac{1}{n^{3}}\), the set \(S\) output in Algorithm 4 has at most \(O(k\log^{2}n)\) points._
Proof.: We again analyze the number of sampled points before and after the event \(\mathcal{E}_{A_{i,j}\text{-heavy}}\). Note that before \(\mathcal{E}_{A_{i,j}\text{-heavy}}\), we deterministically only provide at most \(100\log n\) centers from \(A_{i}^{j}\). After \(\mathcal{E}_{A_{i,j}\text{-heavy}}\), the ball \(B_{i}^{j}\) becomes heavy. As such, define \(X_{x}\) be the indicator random variable for sampling \(x\) from \(A_{i}^{j}\), we can condition on the high probability event of Claim 11, and argue that with probability at least \(1-\frac{1}{n^{3}}\), there is
\[\mathbb{E}\left[X_{x}\mid\mathcal{E}_{A_{i,j}\text{-heavy}}\right] \leq 260\cdot\frac{d(x,c_{i}^{*})\,k\,\log^{2}n}{\mathsf{OPT}}\] \[\leq 260\cdot\frac{d(x,c_{i}^{*})\,k\,\log^{2}n}{\mathsf{OPT}_{ \text{k-median}}}.\]
Therefore, we can add up all the centers and their \(j\) values, which gives us
\[\mathbb{E}\left[\sum_{i,j}\sum_{x\in A_{j}^{i}}X_{x}\mid\mathcal{ E}_{A_{i,j}\text{-heavy}}\right] =\sum_{i,j}\sum_{x\in A_{j}^{i}}\mathbb{E}\left[X_{x}\mid\mathcal{ E}_{A_{i,j}\text{-heavy}}\right]\qquad\qquad\text{(linearity of expectation)}\] \[\leq 260\cdot\frac{k\log^{2}n}{\mathsf{OPT}_{\text{k-median}}} \cdot\sum_{i,j}\sum_{x\in A_{j}^{i}}d(x,c_{i}^{*})\] \[=260\cdot k\log^{2}n.\qquad\qquad\quad(\sum_{i,j}\sum_{x\in A_{j} ^{i}}d(x,c_{i}^{*})=\mathsf{OPT}_{\text{k-median}})\]
As such, we can bound the _expectation_ of the total number of points that are ever sampled as:
\[\mathbb{E}\left[\sum_{i,j}\sum_{x\in A_{j}^{i}}X_{x}\right] \leq\mathbb{E}\left[\sum_{i,j}\sum_{x\in A_{j}^{i}}X_{x}\mid \mathcal{E}_{A_{i,j}\text{-heavy}}\right]+\mathbb{E}\left[\sum_{i,j}\sum_{x\in A _{j}^{i}}X_{x}\mid-\mathcal{E}_{A_{i,j}\text{-heavy}}\right]\] \[\leq 260\cdot k\log^{2}n+\sum_{i,j}100\log n\] \[\leq 360\cdot k\log^{2}n.\qquad\qquad\qquad\qquad\text{(there are only $k\log n$ possible $A_{i}^{j}$ sets)}\]
Finally, we analyze the concentration of the number of sampled points in \(S\). Note that using Fact 31, we can verify that the \(X_{x}\) indicator random variables are negatively correlated, i.e. \(\Pr(X_{x}=1\mid X_{u}=1)\leq\Pr(X_{x}=1)\) for \(u\neq x\). If \(\sum_{i,j}\sum_{x\in A_{j}^{i}}X_{x}<100k\log^{2}n\), we deterministically obtain the desired bound; on the other hand, assuming \(\sum_{i,j}\sum_{x\in A_{j}^{i}}X_{x}\geq 100k\log^{2}n\), we have
\[\Pr\left(\sum_{i,j}\sum_{x\in A_{j}^{i}}X_{x}\geq 5\cdot \mathbb{E}\left[\sum_{i,j}\sum_{x\in A_{j}^{i}}X_{x}\right]\right) \leq\exp\left(-\frac{16}{6}\cdot\mathbb{E}\left[\sum_{i,j}\sum_{ x\in A_{j}^{i}}X_{x}\right]\right)\] \[\qquad\qquad\text{(Chernoff bound for negatively correlated random variables, Proposition \ref{eq:cond
**Proposition 13** ([27]).: _Let \(\{c_{1},\cdots,c_{m}\}\) be a set of centers that achieves \(\alpha\)-approximation of the k-median (resp. \(k\)-means) objective, and let \(w_{1},\cdots,w_{m}\) be the number of points contained in each cluster induced by \(\{c_{i}\}_{i=1}^{m}\). Furthermore, let \(\{\tilde{c}_{1},\tilde{c}_{2},\cdots,\tilde{c}_{k}\}\) be a \(\beta\)-approximate \(k\)-median (resp. \(k\)-means) on the weighted points \(\{c_{1},\cdots,c_{m}\}\). Then, the clustering induced by \(\{\tilde{c}_{1},\tilde{c}_{2},\cdots,\tilde{c}_{k}\}\) gives an \(O(\alpha+\beta)\)-approximation of \(\mathsf{OPT}_{\text{k-median}}\) (resp. \(\mathsf{OPT}_{\text{k-means}}\))._
Proof.: For any point \(x\in\mathcal{X}\), let \(c(x)\) be its clustering center in \(\{c_{i}\}_{i=1}^{m}\) and \(\tilde{c}(x)\) be its clustering center in \(\{\tilde{c}_{j}\}_{j=1}^{k}\). If \(c(x)=\tilde{c}(x)\), the cost induced by \(x\) remains \(d(x,c(x))\). On the other hand, if \(c(x)\neq\tilde{c}(x)\), the cost induced by \(x\) is at most \(d(c(x),\tilde{c}(x))+d(x,c(x))\). Furthermore, note that if we let \(\mathsf{OPT}_{\text{ws}}\) be optimal cost of the weighted cost of clustering on \(\{\tilde{c}_{j}\}_{j=1}^{k}\), and let \(\tilde{c}(\cdot)\) and \(c^{*}(\cdot)\) be the functions that maps 1). points in \(\{c_{i}\}_{i=1}^{m}\) to the center in the weighted clustering and 2). points in \(\mathcal{X}\) to the optimal \(k\)-median clustering, respectively. As such, we have
\[\mathsf{OPT}_{\text{ws}} =\sum_{i=1}^{m}w_{i}\cdot d(c_{i},\tilde{c}(c_{i}))\] \[=\sum_{i=1}^{m}d(c_{i},\tilde{c}(c_{i}))\] (duplicating each \[c_{i}\] for \[w_{i}\] times and map all of them to the corresponding center) \[\leq\sum_{i=1}^{n}d(c_{i},c^{*}(c_{i}))+d(\tilde{c}(c_{i}),c^{* }(\tilde{c}(c_{i})))\] \[\leq 2\mathsf{OPT}_{\text{k-median}}.\]
Therefore, we can bound the total cost with
\[\sum_{x\in\mathcal{X}}d(x,\tilde{c}(x)) \leq\sum_{x\in\mathcal{X}}d(x,c(x))+\sum_{x\in\mathcal{X}}d(c(x),\tilde{c}(x))\] \[\leq\alpha\cdot\mathsf{OPT}_{\text{k-median}}+\beta\cdot \mathsf{OPT}_{\text{ws}}\] \[\leq(\alpha+2\beta)\cdot\mathsf{OPT}_{\text{k-median}},\]
as desired.
Finally, for the \(k\)-means case, we need to replace \(d(\cdot,\cdot)\) with \(d^{2}(\cdot,\cdot)\); although this is no longer a metric, we can use the approximate triangle inequality that
\[d^{2}(x,z)\leq 2(d^{2}(x,y)+d^{2}(y,z)).\]
As such, by replacing the \(k\)-median clustering centers with the \(k\)-means ones and use \(d^{2}(\cdot,\cdot)\), we can get
\[\sum_{x\in\mathcal{X}}d^{2}(x,\tilde{c}(x)) \leq 2\cdot\sum_{x\in\mathcal{X}}d^{2}(x,c(x))+2\cdot\sum_{x\in \mathcal{X}}d^{2}(c(x),\tilde{c}(x))\] \[\leq(2\alpha+8\beta)\cdot\mathsf{OPT}_{\text{k-means}},\]
as desired.
Proof of Theorem 8.: Conditioning on the right guess of \(\mathsf{OPT}_{\text{k-median}}\leq\widetilde{\mathsf{OPT}}\leq 2\mathsf{OPT}_{ \text{k-median}}\), by Lemma 9 and Lemma 12, in the construction of set \(S\) we sample at most \(O(k\log^{2}n)\) points and produce an \(O(1)\)-approximation. This also implies we make at most \(O(k\log^{2}n)\) strong oracle \(\mathsf{SO}\) queries. To bound the number of weak oracle queries, note that in each iteration of \(x_{i}\), we only need
to query the distances between the \(x_{i}\) and the points in \(S\), which is at most \(O(nk\,\log^{2}n)\), and our post-processing of the points in \(S\) does not involve any additional oracle queries.
To see the time efficiency, observe that we can maintain a heap for each point in \(S\) to represent the balls of size \(O(\log n)\). As such, when \(x_{i}\) is sampled and a strong oracle query \(\mathsf{SO}\) is added, it takes \(O(|S|\log n)\) time to insert the value and remove the largest one from the heap. Therefore, conditioning on the high probability event of Lemma12, the updates of the heavy balls takes at most \(O(\sum_{|S|=1}^{k\,\log^{2}n}|S|\log n)=O(k^{2}\,\mathrm{polylog}\,n)\) time. On the other hand, if \(x_{i}\) is not sampled, we need to estimate the median from \(x_{i}\) to every ball in \(S\), and it take \(O(|S|\log n)\) time by the heap structure. Therefore, the estimation of heavy ball nearest distance takes \(O(n\,|S|\log n)\leq O(nk\,\mathrm{polylog}\,n)\) time across the process. Finally, we can run \(O(1)\)-approximate \(k\)-median algorithms in \(O(|S|k)\) time by [38], which gives us \(O(k^{2}\mathrm{polylog}\,n)\) time by the size bound of \(|S|\). In total, this gives us the desired \(O(nk\,\mathrm{polylog}\,n)\) time.
For the approximation guarantee, by Lemma9 and Proposition13, we can run an \(O(1)\)-approximate \(k\)-median (resp. \(k\)-means) algorithm on the \(O(1)\)-approximate coreset of \(S\)5. The correctness is guaranteed since we know the _exact_ distance between the points in \(S\) (by the \(\mathsf{SO}\) queries). Therefore, we get an \(O(1)\) approximation of the optimal clustering cost.
Footnote 5: If we do not care about the time efficiency, we can run the state-of-the-art polynomial-time algorithm by [4]; we can even run a brute-force algorithm to search for the minimum-cost clustering
Finally, when running with unknown \(\widetilde{OPT}\), we break a run whenever it samples more than \(1800k\log^{2}n\) points in the construction of \(S\), and output the run with the \(\widetilde{OPT}\) value that \((i)\). is not break and \((ii)\). is next to a run with \(\widetilde{OPT}/2\) that breaks. Thus, we can binary search for such a value of \(\widetilde{OPT}\) (as discussed in AppendixA.2). Using that \(\Delta\leq\mathrm{poly}(n)\), this gives us \(O(\log\log n)\) overhead for the queries and running time. As such, it results in at most \(O(k\log^{2}n\log\log n)\) strong oracle \(\mathsf{SO}\) queries, at most \(O(nk\log^{2}n\log\log n)\) weak oracle \(\mathsf{WO}\) queries, and \(O(nk\,\mathrm{polylog}\,n)\) time.
Theorem8\(\Box\)
**Remark 14**.: _We now describe the changes needed to generalize Algorithm4 from the \(k\)-medians objective to \(k\)-means. Specifically, since the cost measure in \(k\)-means is \(\sum_{x\in\mathcal{X}}d^{2}(x,\mathcal{C}(x))\) instead of \(\sum_{x\in\mathcal{X}}d(x,\mathcal{C}(x))\), it will suffice to change our "distance" measure to \(d^{2}\). This can be dome by making the modification to the sampling probability of \(x_{i}\): instead of using \(\min\{1,Q(x_{i},S)/f\}\), we will use the sampling probability of \(\min\{1,Q^{2}(x_{i},S)/f\}\). The only downstream change that occurs is that we no longer can apply triangle inequality, since \(d^{2}(\cdot,\cdot)\) is no longer a metric. However, we can always employ the approximate triangle inequality of \(d^{2}(x,z)\leq 2\cdot(d^{2}(x,y)+d^{2}(y,z))\) (see Proposition13 for the usage). Note that the triangle inequality is only used in Claim11 and Proposition13, and the rest of the algorithm and the analysis proceeds exactly as in the \(k\)-median case (Lemma12 is affected by the high probability event in Claim11, but the analysis itself does not use triangle inequality). Substituting approximate triangle inequality for the triangle inequality induces an additional constant factor into the objective, which does not effect our overall \(O(1)\)-approximation._
## 5 Metric Minimum Spanning Tree
In Section3 and 4 we showed that \(\tilde{\Theta}(k)\) strong oracle queries are necessary and sufficient for \(k\)-clustering tasks. In light of this, a natural question is whether the strong oracle is necessary for all geometric optimization problems. In this section, we demonstrate that, surprisingly, this is not the case for the classic metric minimum spanning tree (MST) problem, so long as the weak distances \(\tilde{d}:\mathcal{X}^{2}\to\mathbb{R}\) are a metric. We refer to this as the _metric-weak oracle model_. Formally, we show:
**Theorem 15**.: _There is an algorithm that, given only access to the corrupted metric \(\hat{d}\) produced by a metric weak oracle (i.e., \((\mathcal{X},\tilde{d})\) is a metric), with corruption probability \(\delta\) such that \(1-\delta\geq c\) for some constant \(c>0\), in \(O(n^{2})\) time produces a tree \(\hat{T}\) such that \(\mathbb{E}[w(\hat{T})]\leq O(\sqrt{\log n})\cdot\min_{T}w(T)\)._
Our algorithm itself is simple: it computes the optimal MST \(T_{\hat{d}}^{*}\) of the corrupted metric \((\mathcal{X},\tilde{d})\), transforms \(T_{\hat{d}}^{*}\) into a bounded degree tree \(\hat{T}\) (Algorithm 5) such that \(\tilde{w}(\hat{T})\leq 2\tilde{w}(T_{\hat{d}}^{*})\), and then outputs \(\hat{T}\). The analysis of this algorithm, however, is fairly involved. To prove that \(\hat{T}\) is a good approximation, it suffices to show that with good probability we have both
1. \(w(\hat{T})\leq\tilde{w}(\hat{T})+O(\sqrt{\log n})w(T_{\hat{d}}^{*})\), where \(T_{\hat{d}}^{*}\) is the optimal MST for \((\mathcal{X},d)\).
2. \(\tilde{w}(T_{\hat{d}}^{*})\leq O(w(T_{\hat{d}}^{*}))\).
Then, using \(\tilde{w}(\hat{T})\leq 2\tilde{w}(T_{\hat{d}}^{*})\leq 2\tilde{w}(T_{\hat{d}}^{*})\), we can obtain
\[w(\hat{T})\leq\tilde{w}(\hat{T})+O(\sqrt{\log n})w(T_{\hat{d}}^{*})\leq 2 \tilde{w}(T_{\hat{d}}^{*})+O(\sqrt{\log n})w(T_{\hat{d}}^{*})\leq O(\sqrt{ \log n})w(T_{\hat{d}}^{*}).\]
We now give the follow description and analysis for our algorithm. To begin with, we first show the bounded-degree tree transformation as prescribed by Algorithm 5.
```
Input: Rooted tree \(T\) over a metric space \((X,d)\). Output: Rooted tree \(\hat{T}\) with degree at most \(5\) with \(w_{d}(\hat{T})\leq 2w_{d}(T)\). \(\hat{T}=(V,\hat{E})\), where \(\hat{E}=\{\}\) is an empty edge set. for every \(u\in T\)do Let \(C_{u}\) be the set of children of \(u\) in \(T\), and set \(k=|C_{u}|\). if\(k\leq 2\)then Add directed edge \((u,v)\) to \(\hat{E}\) for every \(v\in C_{u}\) else Order the children \(C_{u}=\{x_{1},\ldots,x_{k}\}\) so that \(d(u,x_{1})\leq d(u,x_{2})\leq\cdots\leq d(u,x_{k})\), and define \(x_{0}=u\). For each \(i=1,2,\ldots,k\), add the directed edge \((x_{\varphi(i)},x_{i})\) to \(\hat{E}\). Remove all covered points form \(V\).
```
**Algorithm 5**Bounded-Degree Tree Transformation
The notation of \(\varphi(i)\) in Algorithm 5 is defined as the parent index in the _binary complete tree_. Concretely, for any integer \(k\), we define \(H_{k}\) to be the unique complete binary labeled tree on \(k+1\) vertices such that the level-order traversal of \(H_{k}\) is \(0,1,2,\ldots,k\). In other words, \(H_{k}\) is a complete binary tree where the zero-th level contains just the root, labeled \(0\), the first level contains vertices labeled \(1,2\) (left to right ordered), and second the labels \(3,4,5,6\), and so on. Define the mapping \(\varphi:\mathbb{Z}_{\geq 1}\rightarrow\mathbb{Z}_{\geq 0}\) by \(\varphi(i)=j\) where \(j\) is the label of the parent of \(i\) in the graph \(H_{k}\) (for any \(k\geq i\)): namely, \(\varphi(i)=\lceil i/2\rceil-1\).
We show that the transformation in Algorithm 5 is always possible in Proposition 16.
**Proposition 16**.: _Fix any \(n\)-point metric space \((\mathcal{X},d)\) and any spanning tree \(T\) of \(\mathcal{X}\). Then the spanning tree \(\hat{T}\) produced by Algorithm 5 has degree at most \(5\), and satisfies \(w(\hat{T})\leq 2w(T)\). Moreover, the algorithm can be run in \(O(n)\) time._
Proof.: The runtime of the algorithm is straightforward, so we analyze the other two claims. For any vertex \(u\), let \(\pi(u)\) be its parent in \(T\). We first observe that the tree \(\hat{T}\) produced has degree at most \(5\). To see this, fix any node \(u\), and note that edges are added adjacent to \(u\) only when the for loop is called on \(u\) and on the parent \(\pi(u)\). In the case it is called on \(u\), at most two children (out-edges) are added to \(u\)., and when called on the parent, again at most two children and one parent (in-edge) are added to \(u\). Interpreting the in-edge as a parent and out-edges as children, we have that \(\hat{T}\) is a rooted tree with the same root as \(T\), where each node has at most \(4\) children. We can then define \(\hat{\pi}(u)\) to be the parent of \(u\) in \(\hat{T}\).
Next, we analyze the cost of the tree. Observe that \(w_{d}(T)=\sum_{u\in X}d(u,\pi(u))\). Thus it suffices to show that
\[d(u,\hat{\pi}(u))\leq 2d(u,\pi(u)) \tag{2}\]
for any non-root node \(u\). To see this, note that the parent \(\hat{\pi}(u)\) is set when the for loop is called on the parent \(v=\pi(u)\) of \(u\). In this case, we order the children \(C_{v}=\{x_{1},\ldots,x_{k}\}\) so that \(d(v,x_{1})\leq d(v,x_{2})\leq\cdots\leq d(v,x_{k})\), where \(u=x_{i}\) for some \(i\in\{1,\ldots,k\}\). First note that if \(k\leq 2\), then we have \(\hat{\pi}(u)=v=\pi(u)\), so (2) holds trivially. Otherwise, we have \(\hat{\pi}(u)=x_{j}\) for some \(j<i\) (interpreting \(x_{0}=v\)). By the ordering, and employing the triangle inequality, we have
\[\begin{split} d(u,\hat{\pi}(u))&=d(x_{i},x_{j})\\ &\leq d(x_{i},v)+d(x_{j},v)\\ &\leq 2d(x_{i},v)\\ &=2d(u,\pi(u)),\end{split} \tag{3}\]
which completes the proof.
Before proceeding with the analysis of our algorithm, for simplicity, we assume that all distances are unique, and we justify this assumption as follows.
**Fact 17**.: _Let \(\mathcal{A}\) be any algorithm that satisfies the correctness guarantees of Theorem15 under the assumption that the set of distance \(\{d(x,y)\}_{(x,y)\in\binom{\mathcal{X}}{2}}\) are unique. Then there is an algorithm \(\mathcal{A}^{\prime}\) that satisfies the correctness guarantees of Theorem15 without this assumption._
Proof.: The proof is by modifying the input to \(\mathcal{A}\) to satisfy this. Specifically, we for every \((x,y)\), we replace the input \(\tilde{d}(x,y)\) with \(\tilde{d}(x,y)+\varepsilon_{x,y}\), where \(\varepsilon_{x,y}\sim[\varepsilon/2,\varepsilon]\) is i.i.d. and uniform for an arbitrarily small value of \(\epsilon\). It is easy to verify that the result is still a metric, as we always have \(r_{x,y}\leq\varepsilon\leq r_{x,z}+r_{z,y}\) for any \((x,y,z)\). We run \(\mathcal{A}\) on the modified distances \(\hat{\tilde{d}}(x,y)+r_{x,y}\), which we can interpret as coming from the modified original metric with distances \(d(x,y)+r_{x,y}\). Since \(\{d(x,y)\}_{(x,y)\in\binom{\mathcal{X}}{2}}\) are now unique, the correctness of \(\mathcal{A}\) follows. Moreover, since we have changed each distance in \(d(x,y)\) by at most \(\varepsilon<<1/\mathrm{poly}(n)\cdot\min_{x,y}d(x,y)\), it follows that the cost of any spanning tree is changed by at most a \((1-1/\mathrm{poly}(n))\) factor, which completes the proof.
Given Fact17, in what follows we will always assume that all distances are unique. We can now introduce the notion of a \(\ell\)-heavy ball, whose definition relies on this fact.
**Definition 5** (\(\ell\)-heavy ball).: _Fix any \(\ell\geq 0\). For any point \(v\in\mathcal{X}\), we define the level-\(\ell\) heavy radius at the point \(v\) to be the smallest radius \(r=r_{v}^{\ell}\) such that the metric ball \(B_{d}^{\ell}(v,r)\) under the distance measure \(d\) contains exactly \(2^{\ell}\) points in \(\mathcal{X}\). We define the level-\(\ell\) heavy ball at \(v\) to be the metric-ball \(B_{d}^{\ell}(v,r_{v}^{\ell})\) under the metric \(d\)._
Note that the existence of a radius \(r_{v}^{\ell}\) such that \(B_{d}^{\ell}(v,r)\) contains exactly \(2^{\ell}\) points is guaranteed by the uniqueness of distances. We use the notion of \(\ell\)-th level heavy balls in Definition5 to show the following probabilistic guarantee for distance corruptions under the metric constraints.
**Lemma 18** (Probabilistic metric violation guarantee).: _Let \(\widetilde{d}\) be the corrupted distance of \(d\) that satisfies the metric property. Fix any three points \(x,y,u\in V\) and level \(\ell\geq 2\), such that \(x\in B_{d}^{\ell}(u,r_{u}^{\ell})\). Then with probability at least \(1-2^{-c_{\delta}\cdot 2^{\ell}}\), where \(c_{\delta}\) is a constant depending only on the corruption probability \(\delta\), the following inequalities hold:_
\[\widetilde{d}(x,y) \leq d(x,y)+4\cdot r_{u}^{\ell};\] \[d(x,y) \leq\widetilde{d}(x,y)+4\cdot r_{u}^{\ell}.\]
Proof.: For any point \(z\in B^{\ell}(u,r_{u}^{\ell})\setminus\{x,y\}\), let \(\mathcal{E}_{z}\) be the event that both \(\widetilde{d}(y,z)=d(y,z)\) and \(\widetilde{d}(x,z)=d(x,z)\); i.e., neither pair is corrupted. Then the probability that no such \(\mathcal{E}_{z}\) holds for any \(z\) is at most
\[\Pr\left(\bigcap_{z\in B^{\ell}(u,r_{u}^{\ell})\setminus\{x,y\}} \neg\mathcal{E}_{z}\right)\leq(1-(1-\delta)^{2})^{2^{\ell-1}}\leq 2^{-c_{ \delta}\cdot 2^{\ell}},\]
for some \(c_{\delta}\) which is a constant depending only on \(\delta\). Thus, with probability at least \(1-2^{-c_{\delta}\cdot 2^{\ell}}\), there exists at least one such \(z\in B^{\ell}(u,r_{u}^{\ell})\setminus\{x,y\}\) with \(\widetilde{d}(y,z)=d(y,z)\) and \(\widetilde{d}(x,z)=d(x,z)\). Condition on this event, and fix it \(z\) now. We have
\[\widetilde{d}(x,y) \leq\widetilde{d}(x,z)+\widetilde{d}(z,y) \text{(triangle inequaliy for $\widetilde{d}$)}\] \[=d(x,z)+d(z,y) \text{(the event $\mathcal{E}_{z}$)}\] \[\leq 2d(x,z)+d(x,y) \text{(triangle inequality for $d$)}\] \[\leq d(x,y)+4\cdot r_{u}^{\ell}. \text{($x,z\in B^{\ell}(u,r_{u}^{\ell})$)}\]
Similarly, for the second inequality, we have
\[d(x,y) \leq d(x,z)+d(z,y) \text{(triangle inequaliy for $d$)}\] \[=d(x,z)+\widetilde{d}(z,y) \text{(the event $\mathcal{E}_{z}$)}\] \[\leq 2r_{u}^{\ell}+\widetilde{d}(x,z)+\widetilde{d}(x,y) \text{($d(x,z)\leq r_{v}$ and triangle inequality for $\widetilde{d}$)}\] \[=2r_{u}^{\ell}+d(x,z)+\widetilde{d}(x,y) \text{(the event $\mathcal{E}_{z}$)}\] \[\leq\widetilde{d}(x,y)+4\cdot r_{u}^{\ell}, \text{($d(x,z)\leq r_{v}$)}\]
as desired.
We now use Lemma18 to prove the approximation guarantee for the MST. We first define a partition of the metric space \(\mathcal{X}\) via a ball-carving. Note that the following procedure is not algorithmic, and is only used in the analysis. Also, in what follows, recall that we define \(T_{d}^{*}\) and \(T_{\widetilde{d}}^{*}\) be the MST under the metric \(d\) and \(\widetilde{d}\) respectively.
**Claim 19**.: _Fix any \(1\leq\ell\leq\log n\), and let \(T_{d}^{*}\) be the minimum spanning tree of \(G=(V,E)\) under distance \(d\). Then we have_
\[w_{d}(T_{d}^{*})\geq\frac{1}{2}\cdot\sum_{i}r_{i}^{\ell},\]
_where \(\{r_{i}^{\ell}\}\) are the radii of the level-\(\ell\) ball-carving from Algorithm6._
Proof.: To avoid redundancy of notation, we drop the superscript of \(\ell\) since the proofs on every \(\ell\) are the same. We claim that the balls of radius \(B(x_{i},\frac{r_{i}}{2})\) are disjoint (where \(x_{i}\) is the selected center of the ball of the \(i\)-th iteration). To see this, note that for a fixed \(i\), if \(B(x_{i},\frac{r_{i}}{2})\) contains a point of \(z\in B(x_{j},r_{j})\) for \(j>i\), then the ball \(B(x_{i},r_{i})\) should have contained \(x_{j}\) (since \(r_{j}<r_{i}\)), and \(x_{j}\) should not have been selected during the ball carving process, which forms a contradiction. Similarly, if \(B(x_{i},\frac{r_{i}}{2})\) contains a point of \(z\in B(x_{k},r_{k})\) for \(k<i\), then the ball \(B(x_{k},r_{k})\) should have contained \(x_{i}\), and \(x_{i}\) should not have been selected during the ball carving process. Finally, since the points in each \(B(x_{i},\frac{r_{i}}{2})\) are disjoint, the minimum spanning tree must pay a cost to travel from the boundary to the center of each disjoint ball, which pays at least \(\frac{1}{2}\cdot\sum_{i}r_{i}\) in cost.
Besides lower-bounding the MST cost as in Claim 19, we present two technical steps toward the proof of Theorem 15.
**Lemma 20**.: _Let \(T\) be any spanning tree of the set of points \(\mathcal{X}\). Then we have_
\[\mathbb{E}\left[w_{\vec{d}}(T)\right]\leq w_{d}(T)+O(1)\cdot w_{d}(T_{d}^{*}),\]
_where the expectation is over the randomness over which distances are corrupted._
Proof.: We root the tree \(T\) arbitrarily and define the charging scheme as follows. First note, by Lemma Lemma 18, for any pair \((x,y)\), the event in Lemma Lemma 18 holds with probability \(1-1/\mathrm{poly}(n)\) for at least one value of \(\ell\) with \(\ell=O(\log n)\). It follows by a union bound over \(O(n^{2})\) pairs that, with high probability, all pairs \((x,y)\) satisfy the random event in Lemma Lemma 18 for some \(\ell=O(\log n)\).
A Charging Scheme.
For each tree edge \((u,v)\in T\):
1. Suppose w.l.o.g. that \(v\) is the child node. Let \(\ell\) be the smallest integer such that the ball \(B^{\ell}(x_{i},r_{i}^{\ell})\) satisfies the following properties: * \(v\) is included in \(B^{\ell}(x_{i},r_{i}^{\ell})\); and * \(\widetilde{d}(u,v)\leq d(u,v)+4\cdot r_{i}^{\ell}\).
2. Distribute a charge of \(4\cdot r_{i}^{\ell}\) to \(B^{\ell}(x_{i},r_{i}^{\ell})\)
It is straightforward that \(\sum_{(u,v)\in T}\tilde{d}(u,v)\leq\sum_{(u,v)\in T}\tilde{d}(u,v)+C\), where \(C\) is the sum of all charges distributed to balls in the above process. Thus it suffices to upper bound \(C\). To this end, we bound the expected number of times for a given ball at level \(\ell\) to be charged. Note that such a ball \(B^{\ell}(x_{i},r_{i}^{\ell})\) can be charged at most once for each of the at most \(2^{\ell}\) points \(x\) contained in that ball. Moreover, to be charged by the point \(v\), it must be that we did not have \(\widetilde{d}(u,v)\leq d(u,v)+4\cdot r_{i}^{\ell-1}\), which occurs with probaiblity at most \(2^{-c_{\delta}2^{\ell-1}}\) by Lemma18. Define \(X_{i}^{\ell}\) to be the number of times that \(B^{\ell}(x_{i},r_{i}^{\ell})\) is charged. Then, letting \(X^{\ell}(p)\) be the event that \(\ell\) is the level at which the point \(p\) is charged in the above scheme, we have
\[\mathbb{E}\left[X_{i}^{\ell}\right] \leq\sum_{v\in B^{\ell}(x_{i},r_{i}^{\ell})}\mathbb{E}\left[X^{ \ell}(v)\right]\] \[\leq\sum_{v\in B^{\ell}(x_{i},r_{i}^{\ell})}2^{-c_{\delta}2^{ \ell-1}}\] \[\leq 2^{\ell}2^{-c_{\delta}2^{\ell-1}}\leq\frac{c_{\delta}^{\prime }}{2^{\ell}}\]
for some \(c_{\delta}^{\prime}\geq 0\) which is another constant. Thus, we can bound the total cost of the charging scheme by
\[\sum_{\ell=1}^{\log n}\sum_{i}\mathbb{E}\left[4r_{i}^{\ell}X_{i}^{\ell}\right] \leq\sum_{\ell=1}^{\log n}\sum_{i}4\frac{c_{\delta}^{\prime}}{2^{\ell}}\cdot r _{i}^{\ell}\leq\sum_{\ell=1}^{\log n}\frac{8}{2^{\ell}}c_{\delta}^{\prime}w_{ d}(T_{d}^{*})=16c_{\delta}^{\prime}w_{d}(T_{d}^{*}),\]
where the last inequality follows from Claim19 and the fact that the sum is geometric. Putting these bounds together, we have
\[\mathbb{E}\left[w_{\widetilde{d}}(T)\right] \leq\sum_{(u,v)\in T}d(u,v)+\sum_{\ell=1}^{\log n}\sum_{i}\mathbb{ E}\left[4r_{i}^{\ell}X_{i}^{\ell}\right]\] \[\leq w_{d}(T)+O(1)\cdot w_{d}(T_{d}^{*}).\]
as desired.
Lower Bounding \(\tilde{w}(T)\) for any tree \(T\)We now prove an inequality in the reverse direction, demonstrating that, with high probability over the choice of corrupted distances, for any spanning tree \(T\) the cost of \(\tilde{w}(T)\) is not too small. We begin with the following definition. In what follows, we fix \(\alpha=\Theta(\sqrt{\log n})\) with a sufficiently large constant. For any \(\ell\), we will write \(B_{1}^{\ell},B_{2}^{\ell},\ldots,B_{k}^{\ell}\) to denote the set of balls produced by the Level-\(\ell\) Heavy Ball Carving ( Algorithm6), and let \(r_{i}^{\ell}\) denote the radius of \(B_{i}^{\ell}\). Note that by construction, each ball \(B_{i}^{\ell}\) contains exactly \(2^{\ell}\) points. In what follows, set \(\beta=1-\delta\), and note that \(\beta=\Omega(1)\) is at least a constant.
**Definition 6**.: _Fix \(\alpha\) as above, and fix any \(x\in\mathcal{X}\). Let \(B_{i}^{\ell}\) be the ball in the level-\(\ell\) heavy ball carving containing \(x\), where \(\ell=\log(\alpha)\). Then we say that \(x\) is good if at least \(\beta 2^{\ell-1}\) distances in the set \(\{(x,y)\mid y\in B_{i}^{\ell}\}\) are not corrupted. Call a point bad if it is not good._
**Proposition 21**.: _With probability \(1-1/\mathrm{poly}(n)\), the following holds: for every pair of two good points \((x,y)\), we have_
\[d(x,y)\leq\tilde{d}(x,y)+4(r_{i}^{\ell}+r_{j}^{\ell})\]
_where \((i,j)\) is such that \(x\in B_{i}^{\ell},y\in B_{j}^{\ell}\), and \(\ell=\log\alpha\)._
Proof.: We prove that for any two balls \(B_{i}^{\ell},B_{j}^{\ell}\) (possibly with \(i=j\)), and any subsets \(S_{i}\subset B_{i}^{\ell},S_{j}\subset B_{j}^{\ell}\) with size \(|S_{i}|,|S_{j}|\geq\beta 2^{\ell-1}\), there exists at least one \(w\in S_{i},z\in S_{j}\) such that \((w,z)\notin\textsc{Corrupt}\). Let \(\mathcal{E}\) denote this event, we prove that \(\Pr[\mathcal{E}]\geq 1-1/\mathrm{poly}(n)\). Given any two sets \(S_{i},S_{j}\), there are at least \(\binom{\beta 2^{\ell-1}}{2}=s=\Theta(\alpha^{2})>100\log n/\beta\) distinct pairs \((w,z)\in S_{i}\times S_{j}\) (where we used that \(\alpha\) is set with a sufficiently large constant depending on \(1/\beta\)). Since each pair is corrupted independently with probability \(\delta\), the probability that all pairs \((w,z)\in S_{i}\times S_{j}\) are contained in Corrupt is at most
\[(\delta)^{\frac{100\log n}{(1-\delta)}}=(1-(1-\delta))^{\frac{100\log n}{(1- \delta)}}\leq\left(\frac{1}{2}\right)^{100\log n}=n^{-100},\]
where the inequality follows using the fact that \((1-x)^{r}\leq\frac{1}{1+rx}\) for all \(x\in[-\frac{1}{r},1]\) and \(r\geq 0\). We can then union bound over all \(\binom{n}{2}2^{\alpha}=O(n^{2})\cdot 2^{O(\sqrt{logn})}<O(n^{3})\) such choices of \(S_{i},S_{j}\) to obtain the desired result with probability at least \(1-n^{-95}\).
Now conditioned on \(\mathcal{E}\), we can fix any two points \(x,y\) as in the statement of the proposition, where \(x\in B_{i}^{\ell},y\in B_{j}^{\ell}\). Since \(x,y\) are both good, there exist the desired sets \(S_{i}\subset B_{i}^{\ell},S_{j}\subset B_{j}^{\ell}\) with size \(|S_{i}|,|S_{j}|\geq\beta 2^{\ell-1}\), such that all pairs \((x,u)\in S_{i}\) and \((y,v)\in S_{j}\) are not corrupted. By \(\mathcal{E}\), we can then fix \(w\in S_{i}\) and \(z\in S_{j}\) such that \((w,z)\notin\textsc{Corrupt}\) is also not corrupted. We then have
\[\begin{split} d(x,y)&\leq d(x,w)+d(w,z)+d(z,y)\\ &\leq 2r_{i}^{\ell}+\tilde{d}(w,z)+2r_{j}^{\ell}\\ &\leq 2(r_{i}^{\ell}+r_{j}^{\ell})+\tilde{d}(w,x)+\tilde{d}(x,y)+ \tilde{d}(y,z)\\ &=2(r_{i}^{\ell}+r_{j}^{\ell})+d(w,x)+\tilde{d}(x,y)+d(y,z)\\ &=4(r_{i}^{\ell}+r_{j}^{\ell})+\tilde{d}(x,y),\end{split} \tag{4}\]
which completes the proof.
We now consider how to bound the cost of edges \((x,y)\) where at least one of \(x,y\) is bad. To do so, set \(\ell^{*}\) such that \(2^{\ell^{*}}=c^{*}\log n\) with a large enough constant \(c^{*}\), and consider the level-\(\ell^{*}\) heavy ball carving \(B_{1}^{\ell^{*}},B_{2}^{\ell^{*}},\ldots,B_{t}^{\ell^{*}}\). Recall that each such ball has exactly \(c^{*}\log n\) points. We have the following.
**Fact 22**.: _With probability at least \(1-1/\mathrm{poly}(n)\), for every \(i\in[t]\), the ball \(B_{i}^{\ell^{*}}\) contains at most \(\sqrt{\log n}\) bad points._
Proof.: Note that for any point \(x\in B_{i}^{\ell^{*}}\), we have
\[\mathbb{E}\left[\left|\{(x,y)\mid y\in B_{i}^{\ell^{*}}\}\cap\textsc{Corrupt} \right|\right]=\delta 2^{\ell^{*}}\]
And recall that \(x\) is bad if \(\left|\{(x,y)\mid y\in B_{i}^{\ell^{*}}\}\cap\textsc{Corrupt}\right|<\delta 2^{\ell^{*}-1}\). Thus, by Chernoff bounds, a point is bad with probability at most \(2^{-\Theta(\alpha)}<2^{-100\sqrt{\log n}}\). Thus, the probability that any fixed set \(S\) of \(\sqrt{\log n}\) points is simultaniously bad is at most \((2^{-100\sqrt{\log n}})^{\sqrt{\log n}}=1/n^{100}\). It follows that the probability that any set of more than \(\sqrt{\log n}\) points in \(B_{i}^{\ell^{*}}\) is bad is at most
\[\begin{split}\Pr\left[B_{i}^{\ell^{*}}\text{ contains at least }\sqrt{\log n}\text{ bad points }\right]&\leq\binom{c^{*}\log n}{\sqrt{\log n}}\cdot n^{-100}\\ &\leq(c^{*}\log n)^{\sqrt{\log n}}n^{-100}\\ &\leq n^{-99}\end{split} \tag{5}\]
Union bounding over \(t\leq n\) possible balls yields the desired result.
**Fact 23**.: _With probability at least \(1-1/\mathrm{poly}(n)\), the following holds: for every pair \((x,y)\in\mathcal{X}\), where \(x\in B_{i}^{\ell^{*}}\), we have_
\[d(x,y)\leq\tilde{d}(x,y)+4r_{i}^{\ell^{*}}.\]
Proof.: Note that for any \(x\), with probability at least \((1-(1-\delta))^{c^{*}\log n}>1-n^{-100}\), there exists at least one \(z\in B_{i}^{\ell^{*}}\) with \((x,z)\notin\textsc{Corrupt}\) and \((z,y)\notin\textsc{Corrupt}\). Conditioned on this, we have:
\[d(x,y) \leq d(x,z)+d(z,y) \tag{6}\] \[\leq 2r_{i}^{\ell^{*}}+\tilde{d}(z,y)\] \[\leq 2r_{i}^{\ell^{*}}+\tilde{d}(z,x)+\tilde{d}(x,y)\] \[\leq 4r_{i}^{\ell^{*}}+\tilde{d}(x,y).\]
The fact follows after union bounding over \(O(n^{2})\) pairs \((x,y)\).
**Proposition 24**.: _With probability at least \(1-1/\mathrm{poly}(n)\), the following holds: for every spanning tree \(T\) of \(\mathcal{X}\) with degree at most \(\Delta\), we have_
\[w(T)\leq\tilde{w}(T)+O(\Delta\sqrt{\log n})\min_{T^{\prime}}w(T^{\prime}).\]
Proof.: We first condition on the events in Proposition 21, and Fact 22 and Fact 23, which all occur with probability \(1-1/\mathrm{poly}(n)\). For any edge \((x,y)\in T\), if both \(x,y\) are good, we have
\[d(x,y)\leq\tilde{d}(x,y)+4(r_{i}^{\ell}+r_{j}^{\ell}), \tag{7}\]
where \(\ell=\log(\alpha)\), and \(x\in B_{i}^{\ell},y\ \in B_{j}^{\ell}\). Otherwise, if at least one of \(x,y\) is bad, then fix one of the points which is bad, w.l.o.g. we fix \(x\) which is bad. Then by Fact 23, we have
\[d(x,y)\leq\tilde{d}(x,y)+4r_{\tau}^{\ell^{*}}), \tag{8}\]
where \(x\in B_{\tau}^{\ell^{*}}\). Now to bound the cost \(\sum_{(x,y)\in T}d(x,y)\), we will bound each summation by either Eq (7) or Eq (8), depending on whether both \(x,y\) are good or if at least one is bad. We now bound the total cost of doing so.
Using Fact 22, we know that each ball \(B_{\tau}^{\ell^{*}}\) has at most \(\sqrt{\log n}\) bad points points. Moreover, this ball can only contribute a cost of \(4r_{\tau}^{\ell^{*}}\) at most \(O(\Delta)\) times for each bad point in \(B_{\tau}^{\ell^{*}}\). Thus, over all edges \((x,y)\in T\), the term \(4r_{\tau}^{\ell^{*}}\) appears on the RHS of the above equation at most \(O(\Delta\sqrt{\log n})\) times. Similarly, for ball \(B_{j}^{\ell}\) at level \(\ell=\log\alpha\), the term \(r_{i}^{\ell}\) can only appear in the RHS of Eq (7) when considering an edge with at least one endpoint in \(B_{j}^{\ell}\). Since \(|B_{j}^{\ell}|<O(\sqrt{\log n})\), again this radius is counted at most \(O(\Delta\sqrt{\log n})\) times. It follows that
\[\sum_{(x,y)\in T}d(x,y) \leq\sum_{(x,y)\in T}\tilde{d}(x,y)+O(\Delta\sqrt{\log n})\left( \sum_{i}r_{i}^{\ell}+\sum_{j}r_{j}^{\ell^{*}}\right) \tag{9}\] \[\leq\sum_{(x,y)\in T}\tilde{d}(x,y)+O(\Delta\sqrt{\log n})\min_{T ^{\prime}}w(T^{\prime}),\]
as needed, where we used Claim 19 in the last inequality.
We are now ready to prove Theorem 15.
Proof of Theorem 15.: Let \(T^{*}=\arg\min_{T}w(T)\). Letting \(\tilde{T}=\arg\min_{T}\tilde{w}(T)\), we then set the output of our algorithm to be the result \(\hat{T}\) of running Algorithm 5 on \(\tilde{T}\) in the corrupted metric \(\tilde{d}\). By Proposition 16, the tree \(\hat{T}\) has degree at most 5 and \(\tilde{w}(\hat{T})\leq 2\tilde{w}(\hat{T})\). Then by Proposition 24, we have
\[\mathbb{E}\left[w(\hat{T})\right]\leq \mathbb{E}\left[\tilde{w}(\hat{T})\right]+O(\sqrt{\log n})w(T^{*}) \tag{10}\] \[\leq 2\mathbb{E}\left[\tilde{w}(\hat{T})\right]+O(\sqrt{\log n})w(T^{ *})\] \[\leq 2\mathbb{E}\left[\tilde{w}(T^{*})\right]+O(\sqrt{\log n})w(T^{*})\] \[\leq 2w(T^{*})+O(1)\cdot w(T^{*})+O(\sqrt{\log n})w(T^{*})\] \[= O(\sqrt{\log n})w(T^{*}),\]
where in the first line we applied Proposition 24, the second line used that \(\hat{T}\) was a 2-approximation of the optimal MST in the corrupted space \(\tilde{d}\), the third line used that \(\tilde{T}\) is optimal for \(\tilde{d}\), and in the fourth line we applied Lemma 20.
## 6 Lower Bounds
We give lower bounds for \(k\)-clustering and MST in this section. In particular, we show that
* For \(k\)-clustering, we show that any \(k\)-center, \(k\)-means, or \(k\)-median algorithm that provides _bounded_ approximation under the weak-strong oracle model requires \(\Omega(k)\) strong (point) oracle queries.
* For metric MST, we show that if we want to go below the approximation factor of \(\sqrt{\log n}\), we have to make \(\tilde{\Omega}(n)\) strong (point) oracle queries.
* Finally, for non-metric MST, we show that with \(o(n)\) strong (point) oracle queries, we cannot break an approximation lower bound of \(\Omega(\log n)\).
Our lower bound demonstrates that the algorithms we designed in Sections 3 to 5 are nearly tight up to \(\operatorname{polylog}n\) factors, and one could not hope for algorithms that are significantly more efficient. Furthermore, by our lower bound on non-metric MST, we separate the complexity between the metric and non-metric cases.
### Lower Bounds for \(k\)-Clustering
In the prior sections, we provided \(O(1)\)-approximation algorithms for \(k\)-center, \(k\)-means and \(k\)-median clustering, each using \(\tilde{O}(k)\) queries to the strong oracle. A natural question is whether the strong location oracle is even necessary for these tasks: namely, is it possible to obtain a good approximation with the weak oracle alone? We demonstrate that this is not possible in a strong sense. Namely, we prove that \(\Omega(k)\)-strong oracle queries are necessary for any algorithm that achieves _any_ bounded approximation for \(k\)-clustering tasks. Our main result is as follows:
**Theorem 25**.: _Fix any positive real number \(c\in\mathbb{R}^{+}\), and positive integer \(k\) larger than some constant, and fix the corruption probability to be \(\delta=1/3\). Then any algorithm alg which produces a solution for either \(k\)-centers, \(k\)-means, or \(k\)-medians that, with probability at least \(1/2\), has cost at most \(c\cdot\textsf{OPT}\), (where \(\textsf{OPT}(d)\) is the optimal solution to the clustering task in question) must make at least \(\Omega(k)\) queries to a strong (point) oracle, or at least \(\Omega(k^{2})\) queries to a strong distance oracle._
Proof.: We focus on the proof of the \(k\)-center case, and the lower bounds for \(k\)-means and \(k\)-median clustering follow the same construction. The construction of the hard distribution over inputs is as follows. The distribution will be over distances \(d\) for a fixed set of \(n\) points \(\mathcal{X}\). We first assume that \(k\) is odd, and later generalize to the case of even \(k\). Moreover, since any \(s\)-query point strong oracle algorithm implies a \(s^{2}\)-query edge strong oracle algorithm, it suffices to prove a \(\Omega(k^{2})\)-query lower bound against edge strong oracles, since this will imply a \(\Omega(k)\) lower bound for point strong oracle queries.
**Construction of the ground-truth metric \((\mathcal{X},d)\).**
1. Partition \(\mathcal{X}\) into sets \(S,O\), so that \(S\) has the first \(|S|=\frac{3}{2}(k-1)\) points (under some fixed ordering), and \(O\) has all remaining points.
2. Select a uniformly random subset \(N\subset S\) of exactly \(k-1\) points, and define \(U=S\setminus N\).
3. Fix a uniformly random perfect matching \(M\) over \(N\), so that \(|M|=\frac{k-1}{2}\).
4. Define the metric \(d\) as follows: \[d(x,y)=\begin{cases}1&\text{ if }(x,y)\in M\\ 1&\text{ if }(x,y)\in O\times O\\ c&\text{ otherwise}\end{cases}\]
It is straightforward to verify that the construction of \(d\) is a metric. We now describe how to generate the corrupted distances \(\tilde{d}\) for a given draw of \(d\). Specifically, the weak oracle will corrupt _at most one_ distance \(d(x,y)\).
Observe that the optimal \(k\)-centers clustering of the original metric \((\mathcal{X},d)\) has a cost of \(1\), and must have exactly one center in \(O\), one center chosen from each of the matched pairs \((x,y)\in M\), and one center placed at every unmatched point \(y\in U\). Note that if a single one of these clusters does not have a center placed in it, the cost of the solution is at least \(c\).
**Construction of the weak-oracle metric \(\tilde{d}\)**
1. Fix an arbitrary pair \((x^{*},y^{*})\in M\) such that \((x^{*},y^{*})\in\textsc{Corrupt}\) is corrupted. If no such pair exists, set \(\tilde{d}=d\), otherwise:
2. Set \(\tilde{d}(x^{*},y^{*})=c\), and for all other pairs \((x,y)\in\binom{n}{2}\setminus\{(x^{*},y^{*})\}\), set \(\tilde{d}(x,y)=d(x,y)\).
Let \(\mathcal{E}_{1}\) be the event that at least one pair \((x,y)\in M\) exists such that \((x,y)\in\textsc{Corrupt}\). Note that \(\Pr\left(\mathcal{E}_{1}\right)>1-(2/3)^{(k-1)/2}=1-2^{-\frac{k-1}{4}}\). We now condition on this holding. Now consider the metric \(\tilde{d}\) produced by the weak oracle conditioned on \(\mathcal{E}_{1}\). Now consider the distances \(\tilde{d}\) produced by the weak oracle. They consists of the cluster \(O\) of points pairwise distance \(1\) apart within \(O\), and distance \(c\) away from all points not in \(O\). It also consists of the matching \(M^{\prime}=M\setminus(x^{*},y^{*})\), where \(\tilde{d}(x,y)=1\) for all \((x,y)\in M^{\prime}\), and then it consists of the \(k/2\) points \(S\cup\{x^{*},y^{*}\}\) which are each distance \(c\) from all other points in \(\mathcal{X}\). Notice, however, that the pair \(x^{*},y^{*}\) that was corrupted was not known to the algorithm. Moreover, since \(N\) was chosen uniformly at random, and the matching \(M\) was uniformly random, if we let \(T\) be the set of \(|T|=k/2\) points in \(\tilde{d}\) that are distance \(c\) from all other points, it follows that the identity of the corrupted distance \((x^{*},y^{*})\) is a uniformly random pair chosen from \(T\).
Now consider any sequence \(s_{1},s_{2},\ldots,\in\binom{X}{2}\) of adaptive, possible randomized edge strong oracle queries made by an algorithm. Since \(d(x,y)=\tilde{d}(x,y)\) for all pairs \(x,y\) such that at least one of \(x,y\notin T\), we can assume WLOG that each \(s_{i}\in\binom{T}{2}\) (otherwise it reveals no information to the algorithm). Now for any prefix \(s_{1},\ldots,s_{i}\), condition on the event \(\mathcal{Q}_{i}\) that \(s_{1}\neq(x^{*},y^{*}),s_{2}\neq(x^{*},y^{*}),\ldots s_{i}\neq(x^{*},y^{*})\). Conditioned on \(\mathcal{Q}_{i}\), it still holds that the single corrupted pair \((x^{*},y^{*})\) is still uniformly distributed over the set \(\binom{T}{2}\setminus\{s_{1},\ldots,s_{i}\}\). Thus, for any \(s_{i+1}\) with \(i+1<k^{2}/100\), we have
\[\Pr\left(s_{i+1}\neq(x^{*},y^{*})\mid\mathcal{Q}_{i}\right) =\Pr\left(\mathcal{Q}_{i+1}\mid\mathcal{Q}_{i}\right) \tag{11}\] \[=1-\frac{1}{\binom{[T]}{2}-i}\] \[>1-\frac{16}{k^{2}}\]
Thus, if the algorithm makes a total of \(\ell<k^{2}/1600\) strong edge oracle queries, we have
\[\Pr\left(\mathcal{Q}_{\ell}\right)>\left(1-\frac{16}{k^{2}}\right)^{\ell}>24/25\]
It follows that, conditioned on \(\mathcal{E}_{1}\), with probability at least \(24/25\), the algorithm does not find the corrupted pair. Condition on this event \(\mathcal{Q}_{\ell}\) now, and condition on any output clustering \(\mathcal{C}\) of the algorithm given the observations \(s_{1},\ldots,s_{\ell}\). First, suppose that \(\mathcal{C}\) does not contain exactly \(k/2-1\) clusters in the set \(T\). If it contains more, then either it does not contain a center in \(O\), or it does not contain a center in a matching \((x,y)\in M^{\prime}\), in either case it pays a cost of \(c\). Thus, we can assume it has exactly \(k/2-1\) clusters in \(T\). Let \(z\in T\) be the one point in \(T\) not opened as a center. If \(z\notin\{x^{*},y^{*}\}\), then clear alg pays a \(k\)-centers cost of \(c\). We show this happens with good probability.
To see this, note that since the oracle queried at most \(\ell<k^{2}/1600\) points, it follows that the corrupted distance \((x^{*},y^{*})\) is still uniformly distributed over the \(\binom{k/2}{2}-\ell>k^{2}/32\) distances not queried within \(T\times T\). Since at most \(k/2\) of those distances can include \(z\), the probability that \(z\in\{x^{*},y^{*}\}\) is at most \(\frac{16}{k}\), in which case the algorithm pays a \(c\) approximation. Thus, conditioned on \(\mathcal{E}_{1},\mathcal{Q}_{\ell}\), the algorithm alg still pays a \(c\) approximation with probaiblity at least \(\frac{16}{k}\). Thus, by a union bound, alg pays a \(c\)-approximation with probability at least \(1-(\frac{1}{25}+2^{-\frac{k-1}{4}}+\frac{16}{k})>1/2\), which completes the proof for odd \(k\).
Lastly, to handle the case when \(k\) is even, we can use the same instance, except take a final point \(w^{*}\) from \(O\) and make it distance \(c^{2}\) from all other points in both \(d\) and \(\tilde{d}\) - a center must be placed at \(w^{*}\), and the remaining problem is reduced to the above instance with \(k-1\) centers (which is now odd). Finally, while the above lower bound was for \(k\)-centers, note that the same instance implies a \(\Omega(c/n)\) approximation lower bound against algorithms with the same query complexity for either \(k\)-means or \(k\)-median for algorithms. Since \(c\) can be made arbitrarily large, the result for \(k\)-means and \(k\)-medians follows. Theorem 25\(\Box\)
### Lower Bounds for Minimum Spanning Trees
In this section, we prove lower bounds for both the metric and non-metric MST problems.
#### 6.2.1 Lower bound for Metric Minimum Spanning Tree.
We now prove a matching lower bound for the metric MST problem. Our construction is based on a instance with \(O(n/\sqrt{\log n})\) well-seperated clusters \(\{C_{i}\}_{i}\). We show that, with good probability, we
can match nearly all clusters into pairs \((C_{i},C_{j})\) such that _all_ distances between \(C_{i},C_{j}\) are corrupted. By corrupting these distances it will be impossible to recover the original clusters, which we show implies a \(\Omega(\sqrt{\log n})\) approximation.
**Theorem 26**.: _There exists a constant \(c\) such that any algorithm which outputs a spanning tree \(T\) of \((\mathcal{X},d)\) such that \(\mathbb{E}\left[w(T)\right]\leq c\sqrt{\log n}\cdot\min_{T^{\prime}}w(T^{ \prime})\) in the weak-strong oracle model, must make at least \(\Omega(n/\sqrt{\log n})\) queries to the strong oracle. Moreover, this holds even when the weak-oracle distances \(\tilde{d}:\mathcal{X}^{2}\to\mathbb{R}\) is restricted to being a metric, and when the corruption probability is \(\delta=1/3\)._
To prove Theorem26, we use the following standard result on large size matching in random graphs. We also provide a proof for completeness.
**Fact 27**.: _Let \(G=(V,E)\) be a random graph where each edge \((i,j)\) exists independently with probability at least \(\rho>c\log n/n\), for a sufficiently large constant \(c\). Then with probability \(1-1/\mathrm{poly}(n)\), there exists a matching \(M\subset\binom{n}{2}\) in \(G\) with size at least \(|M|>n/4\)._
Proof.: The proof is a simple application of the principle of deferred decisions. Order the vertices arbitrarily \(x_{1},x_{2},,\ldots,x_{n}\). Let \(Z_{i,j}\) be an indicator random variable for the event that \((x_{i},x_{j})\in E\). We build a set of matched points \(M\subset[n]\). Initially, \(M\) is empty. We will walk through the points \(x_{i}\) for \(i=1,2,\ldots,(3/4)n\), and show that each can be matched to a vertex if it was not previously matched already.
We first condition on the event \(\mathcal{E}_{\infty}\) that \(Z_{1,j}\) exists for at least one \(x_{j}\notin M\), which occurs with high probability by a Chernoff bound. Fix that \(x_{j}\) to match to \(x_{1}\), and add both \(x_{j},x_{1}\) to \(M\). Now for \(i=2,\ldots,3n/4\), either \(x_{i}\) is matched by step \(i\), or we have that \(\sum_{j>i,x_{j}\notin M}Z_{i,j}\) is a sum of i.i.d. indiacator variables with expectation at least \(\rho(3n/4-M)\). So if \(|M|>n/2\), then we are done, otherwise \(\mathbb{E}\left[\sum_{j>i,x_{j}\notin M}Z_{i,j}\right]>(c/4)\log n/n\). Thus, again by Chernoff bounds, with high probability there exists at least one \(j>i\) with \(x_{j}\)_fn\(M\)_ such that \((x_{i},x_{j})\) is an edge, and we can match \((x_{i},x_{j})\) and continue. Since each vertex \(x_{i}\) is matched with high probability for \(i=1,2,\ldots,(3/4)n\), the fact follows from a union bound.
\(d^{\prime}(x,y)=\ell\) otherwise, for some \(\ell>1\), is a metric. Finally, we note that that both \(d,\tilde{d}\) are of this form.
We first prove the lower bound against algorithms that make _no_ strong oracle queries. First, note that for any pair of blocks \(B_{i},B_{j}\), there are at most \(k^{2}<\log(n)/200\) pairs of distances \((x,y)\in B_{i}\times B_{j}\). The probability that all such pairs are corrupted is at most \((1/3)^{\log(n)/200}>n^{-1/100}\). Thus, by Fact 27, with probability \(1-1/\mathrm{poly}(n)\) the matching \(M\) satisfies \(|M|>\frac{n}{4k}\). Let \(\mathcal{E}_{1}\) be the event that the matching is at least this large - we will now condition on \(\mathcal{E}_{1}\) holding.
Now fix any draw of the corrupted distances \(\tilde{d}\) observed by the algorithm. Also condition on the set of corrupted distances Corrupt, and the matching \(M\) -- we will prove the lower bound even against an algorithm that is told the matching \(M\) over \(B\times B\). Note that conditioning on \(\tilde{d},\textsc{Corrupt},M\) does not determine the original metric \(d\) -- specifically, the function \(f\) is not fully determined by \(\tilde{d},\textsc{Corrupt},M\). Since an algorithm that makes no strong oracle queries sees only \(\tilde{d},M\), by Yao's min-max principle we can assume the algorithm is deterministic, and thus produces a tree \(T\) deterministically based on \(\tilde{d},M\) which, for the sake of contradiction, we suppose satisfies \(\mathbb{E}\left[w(T)\right]\leq c^{\prime}\sqrt{\log n}\cdot\min_{T^{\prime}} w(T^{\prime})\), where the expectation is taken over the remaining randomness in \(d\) after conditioning on \(\tilde{d},\textsc{Corrupt},M\).
Now fix any arbitrary rooting of \(T\), and let \(\pi(u)\) be the parent of any vertex \(u\in\mathcal{X}\) under this rooting. We will charge to each vertex \(u\in\mathcal{X}\) the cost \(d(u,\pi(u))\). We now condition on any set of identities of \(B_{j}=f^{-1}(j)\subset[n]\) for every block \(B_{j}\) that is not matched under \(M\). Additionally, for every matched pair of blocks \(B_{i},B_{j}\), we condition on the set of identities in the union \(B_{i}\cup B_{j}\), but we _do not_ condition on the individual sets \(B_{i}\) and \(B_{j}\). Specifically, note that after conditioning on \(B_{i}\cup B_{j}\) for any \(x\in B_{i}\cup B_{j}\), we claim that \(\Pr\left(f(x)=i\right)=\Pr\left(f(x)=j\right)=1/2\). This holds because even conditioned on \(\tilde{d}\), we have \(\tilde{d}(x,y)=1\) for ally \(x,y\in B_{i}\cup B_{j}\), so shuffling the values of the identities in \(B_{i}\cup B_{j}\) does not effect the observations of the algorithm.
Now consider any \(u\in B_{i}\) such that \((B_{i},B_{j})\in M\) is matched. First, suppose that \(\pi(u)\notin B_{i}\cup B_{j}\) - then \(d(u,\pi(u))=k\) for all possible realizations of the remaining randomness. If \(\pi(u)\in B_{i}\cup B_{j}\), we claim that \(d(u,\pi(u))=k\) with probability \(1/2\) over the remaining randomness in \(f\). To see this, note that because \(f_{i}\) maps each point in \(B_{i}\cup B_{j}\) to \(B_{i}\) or \(B_{j}\) uniformly at random (subject to the constraint that \(|f^{-1}(j)|=|f^{-1}(j)|=k\)). The constraint only makes it less likely that any pair \((u,\pi(u))\) are mapped to the same side, so:
\[\Pr\left(f(u,\pi(u))=(i,j)\right)+\Pr\left(f(u,\pi(u))=(j,i)\right)\geq\Pr \left(f(u,\pi(u))=(i,i)\right)+\Pr\left(f(u,\pi(u))=(j,j)\right)\]
Moreover, whenever \(f(u,\pi(u))=(i,j)\), we have that \(d(u,\pi(u))=k\), which completes the claim. Given this, it follows that the expected value of \(d(u,\pi(u))\) is at least \(k/2\) for any \(u\) in a matched block \(B_{i}\). Since \(|M|>n/(4k)\), it follows that at least \(n/2\) points are matched, and each has an edge to its parent in \(T\) with expected cost \(k/2\), from which it follows that the expected cost of \(T\) is \(\Omega(nk)=\Omega(n\sqrt{\log n})\). Since the true MST cost of \((\mathcal{X},d)\) is always at most \(O(n)\), resulting by creating a star on the set of points within each \(B_{i}\), and then adding the edges for an arbitrary spanning tree with \(n/k-1\) vertices over the vertices \(\{p_{1},\ldots,p_{n/k}\}\), where \(p_{i}\in B_{i}\) is an arbitrary representative vertex in \(B_{i}\). This completes the proof of the lower bound against algorithms which make no strong oracle queries.
We now show how to generalize the above argument to algorithms that make at most \(\frac{n}{100k}\) strong oracle queries. Similar to the above, we condition on the weak oracle mapping \(\tilde{d}\) as well as the matching \(M\). We now consider any set of \(\frac{n}{100k}\) strong oracle queries made by the algorithm - let \(S\subset[n]\) be the set of vertices queried. Since \(|S|<\frac{n}{100k}\) and \(|M|>\frac{n}{4k}\), it follows that there is a matching \(M^{\prime}\) with \(M^{\prime}>\frac{n}{8k}\) such that for every \((B_{i},B_{j})\in M^{\prime}\), we have \(S\cap(B_{i}\cup B_{j})=\emptyset\). It follows that, even after revealing the values of \(d(x,y)\) for all \(x,y\in S\), for every
where \((B_{i},B_{j})\in M^{\prime}\), the function \(f(x)\) is still uniformly distributed in \(\{i,j\}\). The remainder of the arguement follows as above, with a loss of \(2\) in the expected cost of the algorithm attributed to the fact that we only have a matching of size \(\frac{n}{8k}\) rather than \(\frac{n}{4k}\).
**Remark 28**.: _One may wonder whether we can extend the metric MST lower bound to strong distance oracle in the same manner of Theorem 25. Alas, with our analysis, we cannot get a lower bound as strong as \(\tilde{\Omega}(n^{2})\). By a simple black-box reduction, Theorem 26 implies a \(\Omega(n/\sqrt{\log n})\) lower bound for strong distance oracle queries for any algorithm with \(o(\sqrt{\log n})\) approximation. For estimating the value of the MST, this turns out to be (nearly) tight as there exists a \(O(1)\) approximation with \(\tilde{O}(n)\) queries by [15]. Exploring whether this is the case for constructing the actual MST with strong distance queries is an interesting direction to pursue._
#### 6.2.2 Lower Bounds for Non-Metric Minimum Spanning Tree.
We now consider the problem of computing an approximate MST in the general Weak-Strong Oracle model, where the corrupted weak-oracle distances \(\tilde{d}\) is not necessarily a metric (i.e., \(\tilde{d}\) can violate the triangle inequality). Whereas Theorem 15 demonstrates that a \(O(\sqrt{\log n})\) approximation is possible in the metric-weak oracle case with _no_ strong oracle queries. We now prove a \(\Omega(\log n)\) approximation lower bound for any algorithm in the non-metric case, even if it makes \(o(n/\log n)\) strong oracle queries, thereby strongly separating the two models.
**Theorem 29**.: _There exists a constant \(c\) such that any algorithm which outputs a spanning tree \(T\) of \((\mathcal{X},d)\) such that \(\mathbb{E}\left[w(T)\right]\leq c\log n\cdot\min_{T^{\prime}}w(T^{\prime})\) in the weak-strong oracle model (with corruption probability \(\delta=1/3\)), must make at least \(\Omega(n)\) queries to the strong oracle._
Proof.: The construction of the hard distribution over inputs is as follows. The distribution will be over distances \(d\) for a fixed set of \(n\) points \(\mathcal{X}\).
**Construction of the ground-truth metric \((\mathcal{X},d)\).**
1. Set \(k=\frac{\log n}{100}\), and draw a uniformly random mapping \(f:[n]\to[n/k]\) conditioned on \(|f^{-1}(j)|=k\) for all \(i\in[n/k]\). Define the \(i\)-th block \(B_{i}=f^{-1}(i)\), and \(B=\{B_{1},\ldots,B_{n/k}\}\).
2. Define the metric \(d\) as follows: \[d(x,y)=\begin{cases}1&\text{if $(x,y)\in B_{i}$ for some $i\in[n/k]$}\\ k&\text{otherwise}\end{cases}\] It is straightforward to verify that the construction of \(d\) is a metric. We now describe how to generate the corrupted distances \(\tilde{d}\) for a given draw of \(d\).
Note that the optimal MST first conencts together all points within the same block (each of the \(\Theta(n)\) edges paying a cost of \(1\) for each such edge), and then connects together the remaining \(n/k\) blocks, each with a cost of \(k\). Thus \(\min_{T}w(T)=\Theta(n)\).
**Construction of the weak-oracle metric \(\tilde{d}\)**
1. Define the random graph \(H=(\mathcal{X},\hat{E})\) as follow. We have \((x,y)\in E\), if and only if \(x\in B_{i},y\in B_{j}\) with \(i\neq j\), and such that \((x,u)\in\textsc{Corrupt}\) and \((v,y)\in\textsc{Corrupt}\) for all \(u\in B_{j}\) and \(v\in B_{i}\).
2. Let \(M\subset\mathcal{X}\times\mathcal{X}\) be a maximum matching in the graph \(H\).
3. Define the weak-oracle output \(\tilde{d}\) as follows: for every \((x,y)\in M\), where \(x\in B_{i},y\in B_{j}\), we set \(\tilde{d}(x,u)=1\) and \(\tilde{d}(v,y)\in\textsc{Corrupt}\) for all \(u\in B_{j}\) and \(v\in B_{i}\). For all other pairs \((x,y)\), we set \(\tilde{d}(x,y)=d(x,y)\).
First note, by the same argument in the proof of Fact 27, we will have that \(|M|>n/4\) with high probability. Note that even though the setting is slightly different, because \((x,y)\) can never be an edge if \((x,y)\) are in the same block \(B_{i}\), there are still at least \(n-n/k\) possible edges which can be adjacent to any individual point \(x\), and each exists with probability at least \((1/3)^{k}>1/\sqrt{n}\) as needed for the proof of Fact 27. Call the event that \(|M|>n/4\)\(\mathcal{E}_{1}\), and condition on it now.
We first prove the lower bound against algorithms that make no strong oracle queries. Now fix any draw of the observed distances \(\tilde{d}\), and condition on the identities of the matching \(M\in\mathcal{X}\times\mathcal{X}\). By a simple averaging argument (Yao's Min-max principle), if there was a randomized algorithm correct with probability at least \(1/\mathrm{poly}(n)\) against any given input, then there would be a deterministic algorithm correct against this input distribution with probability at least \(1/\mathrm{poly}(n)\). So given such an algorithm, after fixing \(\tilde{d}\) we can fix the tree \(T\) output by the algorithm. Like in the proof of Theorem 26, we orient \(T\) arbitrarily, let \(\pi(x)\) be the parent of \(x\) in \(T\), and charge each vertex \(x\) with the cost \(d(x,\pi(x))\).
Now note that for every pair \((x,y)\in M\), conditioned on the observations \(\tilde{d}\) and matching \(M\), for this match pair we have \(\tilde{d}(x,u)=\tilde{d}(y,u)\) for all \(u\in\mathcal{X}\). Thus, by the symmetry of the identities, for any fixed \(B_{i},B_{j}\) such that \(x\in B_{i},y\in B_{j}\) occurs with non-zero probability over the remaining randomness, we have that \(x\in B_{i},y\in B_{j}\) occurs with the same probability as \(x\in B_{j},y\in B_{i}\). To see this formally, note that conditioned on any matching \(M\), we can consturct a bijection between the remaining realizations of the randomness where \(x\in B_{i},y\in B_{j}\) and where \(x\in B_{j},y\in B_{i}\), simply by swapping the values of \(f(x),f(y)\) - this is possible because, after conditioning on any set of values \(\{f(z)\}_{z\in\mathcal{X}\setminus\{x,y\}}\), the marginals of \(f(x)\) and \(f(y)\) are identically distributed. Thus, for any fixed parent \(\pi(x)\) of \(x\), the probability that \(\pi(x)\) is in the same block as \(x\) is at most \(1/2\). Thus the expected cost \(d(x,\pi(x))\) for any matched point \(x\) is at least \(k/2\), since for any fixed block \(B_{i}\) containing \(\pi(x)\), we have \(x\notin\pi(x)\) with probaiblity at least \(1/2\). Since there are \(\Omega(n)\) matched points, it follows that the expected cost of the algorithm is at least \(\Omega(nk)=\Omega(n\log n)\), which completes the proof for the case of algorithms which do not query the strong oracle.
Finally, for any algorithm that makes at most \(n/100\) strong (point) oracle queries, notice that there are still \(n/20\) pairs of matched \((x,y)\) points such that neither were queried. For such pairs, the above claim still holds, namely that for any fixed \(B_{i},B_{j}\) such that \(x\in B_{i},y\in B_{j}\) occurs with non-zero probability, both (\(x\in B_{i},y\in B_{j}\)) and (\(x\in B_{j},y\in B_{i}\)) occur with equal probability. Thus, for any parent \(\pi(x)\), the point \(x\) will be in a different block from \(\pi(x)\) with probability at least \(1/2\), even conditioned on the strong oracle observations, the matching \(M\), and \(\tilde{d}\), and the rest of the proof proceeds as above.
Theorem 29\(\Box\)
## 7 Experiments
In this section, we experimentally validate the performances of our clustering algorithms. We compare our algorithms with benchmarks on two extremes: the "weak baseline", where the benchmark algorithm has access to only the WO queries, and the "strong baseline", where the benchmark algorithm has access to SO queries on the entire dataset. We demonstrate that:
1. The weak baseline algorithms with only WO access produce very poor-quality solutions;
2. Our algorithm achieve costs that are competitive with the strong baseline that queries SO on the entire dataset, while only using SO queries on a very small fraction (\(<1\%\)) of the points.
Datasets.As discussed in Section 1, our experiments use both synthetic data generated from the extensively-studied Stochastic Block Model (SBM) [30, 19, 16, 1, 2, 28, 42] and the embeddings generated from the MNIST dataset with t-SNE and SVD [17, 48]. We construct the SBM model with \(k=7\) clusters: in the \(i\)-th cluster, we sample points from a Gaussian distribution \(\mathcal{N}(\mu,I)\) with \(\mu_{i}=10^{5}\) and \(\mu_{j}=0\) for all \(j\neq i\), and we use the \(\ell_{2}\) metric. As points sampled from the Gaussian distribution are concentrated, ground truth clusters are well-separated and the cost of misclustering even a single point is large. For the MNIST dataset, we run t-SNE and SVD embeddings with \(60k\) training data, and embed into \(d=2\) dimensions for t-SNE and \(d=50\) for SVD.
In both scenarios, there are clear "ground truth" clusters for each point. As such, there is a natural weak-oracle corruption policy: for a pair of points \((x_{i},x_{j})\), if \(x_{i}\) and \(x_{j}\) are in the same ground truth cluster, flip the distance to an arbitrary inter-cluster distance; otherwise, flip the distance to an arbitrary intra-cluster distance. For the synthetic dataset, this results in an SBM model.
Algorithm Implementations.We implement the weak and strong benchmarks with the farthest traversal algorithm [25] for the \(k\)-center task and the celebrated \(k\)-means++ algorithm for the k-means task [6], and both are de facto choices in practice. To perform the Lloyds iteration for \(k\)-means, we reveal the embedding vectors on the points with the SO query to the algorithm. We also use \(k\)-means++ as the post-processing algorithm of the sampled set \(S\) in Algorithm 46. The basic version of the experiments are carried out Macbook Pro with M1 chip and 16GB RAM. An optimized version for larger-scale datasets was run on an virtual compute cluster with 360GB RAM.
Footnote 6: For the weak benchmark employing \(k\)-means++, we reveal all embeddings for the Lloyd iterations, which only helps that baseline. However, for our algorithms, we only reveal embeddings of points queried in \(S\).
Figures and tables.We vary the parameters for sampling in our algorithms and obtain the curves for the clustering cost vs. number of strong oracle queries for different values of \(\delta\) (the weak oracle corruption probability). For tables, in each setting of \(\delta\) for different sampling parameters, we pick the run with the best query-cost trade-off by selecting the run that minimizes the value \(|\textsf{SO}|\cdot\textsf{cost}^{10}\), where \(|\textsf{SO}|\) is the number of queries to the strong oracle. We do this in order to prevent selecting runs that make very few queries but have poor cost.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \(n\) & \multicolumn{3}{|c|}{\% of data queried for SO} & \multicolumn{3}{|c|}{Competitive ratio} \\ \cline{3-8} & & \(\delta=0.1\) & \(\delta=0.2\) & \(\delta=0.3\) & \(\delta=0.1\) & \(\delta=0.2\) & \(\delta=0.3\) \\ \hline \multirow{4}{*}{k-center} & 10k & 6.51 & 7.42 & 7.42 & 0.828 & 0.707 & 0.880 \\ \cline{2-8} & 20k & 1.26 & 1.96 & 5.46 & 0.802 & 0.842 & 0.795 \\ \cline{2-8} & 50k & 0.798 & 1.484 & 2.184 & 0.809 & 0.779 & 0.832 \\ \cline{2-8} & 100k & 0.252 & 0.917 & 0.917 & 0.804 & 0.718 & 0.762 \\ \hline \multirow{4}{*}{k-means} & 10k & 5.55 & 3.51 & 13.19 & 1.089 & 1.053 & 1.175 \\ \cline{2-8} & 20k & 1.975 & 1.78 & 8.57 & 1.216 & 1.086 & 1.191 \\ \cline{1-1} \cline{2-8} & 50k & 1.038 & 0.82 & 2.342 & 1.142 & 1.062 & 1.125 \\ \cline{1-1} \cline{2-8} & 100k & 0.555 & 0.44 & 1.31 & 1.141 & 1.218 & 1.25 \\ \hline \end{tabular}
\end{table}
Table 1: The best query-cost trade-off point for the \(k\)-center and \(k\)-means algorithms on the SBM. Competitive ratio’ means the ratio between the costs of our algorithms and the strong benchmark. The left column indicates the percentages of SO queries used.
SBM ExperimentsWe test with corruption rate of \(\delta=0.1,0.2\) and \(0.3\) with the scales of \(n=10k\), \(20k\), \(50k\), and \(100k\). The cost (log scale) vs. strong oracle query curves and trade-off points for the \(k\)-center and \(k\)-means algorithms can be observed in Figures 1 and 2 and Table 1. In the plots of Figure 1, weak baseline and strong baseline are farthest traversal with access to only WO and SO on entire dataset respectively. In comparison, for the plots of Figure 2, the weak and strong baselines use \(k\)-means++ with zero SO queries and the entire set of SO queries, respectively.
As one would expect, in Figure 1, the \(k\)-center cost decreases drastically at some thhreshold (from \(>125k\) to \(\sim 7\)) since thereafter no point gets misclustered. Moreover, this threshold is quite small -- the algorithm converges as early as the point where it queries SO for only \(\sim 0.5\%\) of the total points. In contrast, the drop of cost for \(k\)-means as in Figure 2 demonstrate a more "smooth" manner. Nevertheless, both Figure 1 and Figure 2 show that the costs of \(k\)-clustering algorithms drop sharply and approach the optimal cost with a very low percentage of SO queries.
We then show in Table 1 the best query-cost trade-off points for the \(k\)-center and \(k\)-means algorithms. It can be observed that our algorithm consistently outperforms even the farthest traversal with SO queries on the entire dataset, while using queries only an extremely small fraction of the points. For the \(k\)-means experiments, our algorithm can provide a solution that is within a factor of \(<1.25\times\) of strong benchmark with SO queries on \(0.5\%\sim 1.31\%\) of the points in the dataset. We can also observe that trend from Table 1 that the percentage of SO queries decreases as \(n\) becomes large, while the competitive ratio remains in the same range. When \(n\) is large (e.g., in the \(100k\) case), outperforming the benchmark takes SO queries on only \(<1\%\) of the points.
MNIST Experiments:We now discuss the results on MNIST with t-SNE and SVD embeddings. It is well-known that the MNIST t-SNE embedding with \(d=2\) forms well-seperated clusters; however, the dichotomy between the distances of inter- and intra-cluster points are not as stark as the SBM model. Furthermore, separations between clusters is notably worse for the SVD embedding. Thus, the \(t\)-SNE and SVD datasets are "less clustered" and "not clustered" instances, respectively. Figure 3 snd Table 2 show the query-cost curve for MNIST with t-SNE and SVD embeddings. Compared to the SBM model, the curves for these embeddings decrease less rapidly, a consequence of the clusters not being as well-separated. Nonetheless, our \(k\)-means algorithm still outperforms the weak benchmark by a significant margin using a small fraction of \(\mathsf{SO}\) queries. For the t-SNE embeddings, as it is better-clustered, we observe a significant drop in cost after making less than \(5\%\) of the \(\mathsf{SO}\) queries. On the other hand, for the not-well-clustered SVD embedding, although there is only a factor of \(\sim 1.2\) between the weak and strong benchmark costs, our algorithm still manages to achieve non-trivial improvements in cost beyond the weak benchmark with fewer than \(5\%\) of the \(\mathsf{SO}\) queries.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{\% of data queried for \(\mathsf{SO}\)} & \multicolumn{3}{c|}{Competitive ratio} \\ \cline{2-7} & \(\delta=0.1\) & \(\delta=0.2\) & \(\delta=0.3\) & \(\delta=0.1\) & \(\delta=0.2\) & \(\delta=0.3\) \\ \hline t-SNE & 4.58 & 4.57 & 6.62 & 1.169 & 1.286 & 1.367 \\ \hline SVD & 0.25 & 0.311 & 0.253 & 1.121 & 1.109 & 1.105 \\ \hline \end{tabular}
\end{table}
Table 2: The best query-cost trade-off point for the \(k\)-means algorithm on the MNIST embeddings.
Figure 2: Number of \(\mathsf{SO}\) queries vs. clustering cost under the SBM model for \(\mathbf{k}\)**-means** with different values of \(\delta\) and \(n\). Weak baseline - \(k\)-means++ algorithm with \(\mathsf{WO}\) queries only and Strong baseline - \(k\)-means++ algorithm with \(\mathsf{SO}\) queries on full dataset. |
2307.09891 | Amortised Design Optimization for Item Response Theory | Item Response Theory (IRT) is a well known method for assessing responses
from humans in education and psychology. In education, IRT is used to infer
student abilities and characteristics of test items from student responses.
Interactions with students are expensive, calling for methods that efficiently
gather information for inferring student abilities. Methods based on Optimal
Experimental Design (OED) are computationally costly, making them inapplicable
for interactive applications. In response, we propose incorporating amortised
experimental design into IRT. Here, the computational cost is shifted to a
precomputing phase by training a Deep Reinforcement Learning (DRL) agent with
synthetic data. The agent is trained to select optimally informative test items
for the distribution of students, and to conduct amortised inference
conditioned on the experiment outcomes. During deployment the agent estimates
parameters from data, and suggests the next test item for the student, in close
to real-time, by taking into account the history of experiments and outcomes. | Antti Keurulainen, Isak Westerlund, Oskar Keurulainen, Andrew Howes | 2023-07-19T10:42:56Z | http://arxiv.org/abs/2307.09891v1 | # Amortised Design Optimization for Item Response Theory
###### Abstract
Item Response Theory (IRT) is a well known method for assessing responses from humans in education and psychology. In education, IRT is used to infer student abilities and characteristics of test items from student responses. Interactions with students are expensive, calling for methods that efficiently gather information for inferring student abilities. Methods based on Optimal Experimental Design (OED) are computationally costly, making them inapplicable for interactive applications. In response, we propose incorporating amortised experimental design into IRT. Here, the computational cost is shifted to a precomputing phase by training a Deep Reinforcement Learning (DRL) agent with synthetic data. The agent is trained to select optimally informative test items for the distribution of students, and to conduct amortised inference conditioned on the experiment outcomes. During deployment the agent estimates parameters from data, and suggests the next test item for the student, in close to real-time, by taking into account the history of experiments and outcomes.
Keywords:Item Response Theory (IRT) Experimental Design Deep Reinforcement Learning (DRL).
## 1 Introduction
Item Response Theory (IRT) is a method for inferring student abilities and test item characteristics by observing test outcomes conducted with students [5]. In IRT, the relationship between an individual's ability and their response to a test item is typically modeled with logistic regression, describing the probability that an individual with a certain ability will give a correct response to a test item with a given difficulty. Inferring student abilities and test item characteristics offers benefits such as user skill assessment, test item calibration, and optimization of learning experiences for students by Intelligent Tutoring Systems (ITS) or human tutors.
To fully exploit the benefits of IRT in real-time interactions with students, each interaction should provide as much information as possible. Optimal Experimental Design (OED) aims to select an experiment that maximizes an information criterion [11, 13]. Combining OED with IRT enables designing experiments that are maximally informative about student abilities.
We incorporate amortised experimental design into IRT and construct a system in which the next test for the student, conditioned on the previous test outcomes, can be given in near-real time. Our approach involves training a Deep Reinforcement Learning (DRL) agent, which allows for amortising both the OED and parameter estimation tasks simultaneously.
## 2 Background and related work
The most simple IRT variant is the 1PL model, also known as the Rasch model [10]. The probability that a student \(i\) gives a correct answer for the item \(j\) is defined as:
\[P(y_{i,j}=1|\theta_{i},b_{j})=\frac{1}{1+e^{-(\theta_{i}-b_{j})}} \tag{1}\]
where \(y_{i,j}\) is the Bernoulli-distributed outcome, \(\theta_{i}\) is the ability of student \(i\), and \(b_{j}\) is the difficulty of the item \(j\). The task is to infer the parameters \(\theta\) and \(b\) from the observations, i.e., from the test outcomes. We consider a setting where only the student ability \(\theta\) is estimated, while the item parameter \(b\) is treated as a design parameter. This formulation models an experimental design setting where items can be selected for students to obtain information about their latent abilities.
Recent work has focused on inferring IRT parameters using deep neural network variants. Wu et. al. [14] suggest IRT inference using Variational Inference (VI). Paassen et. al [7] combine sparse factor analysis with Variational Autoencoders (VA). Deep-IRT [15] combines key-value networks with IRT to improve the explainability of deep learning-based knowledge tracing. Closely related to IRT is Knowledge Tracing (KT), which tracks and models students' knowledge over time based on their responses during a set of interactions [2]. Li et al. [6] suggest a value-based RL method for an adaptive learning system for optimal teaching. Ghosh et al. [4] suggest BOBCAT, casting experimental design for IRT as a metalearning problem. However, their question selection algorithm assumes questions to be discrete variables.
Advances in Deep Learning have opened up new possibilities for OED. It allows for amortising the OED process by pretraining a deep neural network before deployment using existing or synthetic data. During execution, a pre-trained neural network can potentially generalize to unseen models within the prior distribution used in the amortisation stage. A deep neural network can also be conditioned on the outcomes of previous interactions for learning an adaptive policy where all previous interactions affect the selection of the next design. However, optimizing the exact mutual information directly might lead to an intractable solution because of the need to calculate nested expectations [9]. Recently, various suggestions have been made to solve these challenges. Foster et. al. suggest Deep Adaptive Design (DAD) [3], which provides non-myopic, amortised experimental design by maximising an approximate lower bound of the mutual information instead of an exact mutual information objective. Blau et. al. [1] introduce an RL-based method, highlighting the strong exploration and exploitation tradeoff capability.
## 3 Amortised Design Optimisation for IRT (ADOIRT)
In this section, we propose an amortised design optimisation approach to IRT, which we name Amortised Design Optimisation for IRT (ADOIRT).
### POMDP formulation for ADOIRT
We formulate design selection and parameter estimation as a Partially Observable Markov Decision Process (POMDP) given by a tuple \(\langle S,A,T,R,O,f,\rho_{0},\gamma\rangle\), and find an approximately optimal policy using Reinforcement Learning. The agent selects actions \(a_{t}=(d_{t},\hat{\theta}_{t})\in A\), where \(A\) is the action space consisting of all possible actions, including design requests and estimated student abilities.
At each time step \(t\), the environment is in state \(s_{t}\in S\), where \(S\) is the state space consisting of all possible states. The state \(s_{t}=\theta_{t}\) is the true student ability. The transition function \(T:S\times A\times S\rightarrow[0,1]\) specifies deterministic dynamics: \(p(\theta_{t+1}|a_{t},\theta_{t})=1\) if \(\theta_{t+1}=\theta_{t}\) and 0 otherwise. The initial state \(s_{0}\) is sampled from the prior distribution \(\rho_{0}=p(\theta)\), and \(\gamma\) is the discount factor.
Observations are sampled from the observation function \(f:S\times A\to O\) such that \(o_{t}=(\hat{d}_{t},y_{t})\sim f(s_{t},a_{t})\). The observation function consists of two parts: mapping the requested design \(d_{t}\) to a corrupted item \(\hat{d}_{t}\), and obtaining the experiment outcome \(y_{t}\). The corrupted item results from selecting the best item during deployment or adding Gaussian noise during training. The outcome is given by \(y_{t}\sim p(y_{t}|\theta_{t},\hat{d}_{t})\).
The reward function \(R:S\times A\rightarrow\mathcal{R}\) specifies the reward received by the agent for taking a particular action in a particular state. The reward is the squared error between the true and estimated student ability: \(R(s_{t},a_{t})=(\theta_{t}-\hat{\theta_{t}})^{2}\). Consequently, the agent is rewarded for selecting experiments that lead to accurate parameter estimations.
### ADOIRT Architecture and training
The training architecture is illustrated in the left panel of Figure 1. At the start of each episode, the simulator is initialised by sampling new model parameters from the prior. ADOIRT makes requests for the item difficulties, and the simulator produces an outcome using the obtained design and sampled student ability. The outcome and obtained design are concatenated to the history of observations.
In the deployment phase, the trained ADOIRT can be used with existing item response data. First, a MLE estimate for the item difficulties is produced using stochastic gradient descent (SGD). Once the estimates of the item difficulties are available, the item requests by the ADOIRT are mapped to the closest items from the existing dataset. Based on the outcomes of the student, the goal of the agent is to select informative items from the existing collection of items.
The policy network implementation is a multilayer perceptron (MLP) network with four layers followed by mean pooling across experiment trials, an output layer, and separate heads for action distribution and value estimation,
which are two-layer MLP networks. ReLU activation functions are used after each MLP layer. We train the policy function with the Proximal Policy Optimisation (PPO) algorithm [12] using the Stable Baselines 3 (SB3) library [8].
## 4 Results
We assess the performance of ADOIRT in estimating student abilities in a scenario where the unknown item difficulties are inferred using maximum likelihood estimation. We simulate synthetic datasets with 200 students and 50 items for training and evaluation. The item request by the agent is then mapped to the closest estimated difficulty in this collection.
We compare the performance with a situation where design values are randomly chosen. To gain further insight into the performance, we also trained the agent by concealing experiment outcomes from the observation until the final time step of the episode. In this case, the agent learned a well performing, non-adaptive design strategy.
Figure 2 illustrates the key results. It shows that experiments chosen by ADOIRT result in lower error in student ability estimation compared to the baselines (panels a-c). In the case of non-adaptive designs (panel c), the inferred student abilities are clustered in ten clusters. This is a reasonable non-adaptive design strategy, as it effectively covers the design space with a limited number of points. Furthermore, the clusters are denser closer to zero, as incorrect pre
Figure 1: ADOIRT architecture. Left panel (training): During training, synthetic data is generated by sampling the student abilities and item difficulties from a prior distribution. The simulator uses the obtained item parameters and the IRT model to produce outcomes. Right panel (deployment): In the deployment phase, the item parameters can be estimated beforehand with a standard method, such as MLE, using any existing data. The item requests by ADOIRT are mapped to the closest estimated item difficulty in the collection of available items.
dictions for student abilities far from the mean lead to higher penalties. The quantitative results are presented in Table 1.
\begin{table}
\begin{tabular}{l|c|c} \hline Method & MSE mean & MSE standard error \\ \hline
**Inference with ADOIRT** & **1.91** & **0.03** \\ Inference with non-adaptive designs & 3.05 & 0.07 \\ Inference with random designs & 3.39 & 0.06 \\ \hline \end{tabular}
\end{table}
Table 1: ADOIRT-based inference surpasses non-adaptive and random designs in inferring student abilities. This is evaluated using 50 datasets, MLE estimates, and averaging performance over 1000 episodes per dataset. Mean and standard error are computed by repeating the training with five random seeds.
Figure 2: Panel (a): True vs inferred ability for ADOIRT. Panel (b): True vs inferred ability for random designs. Panel (c): True vs inferred ability for non-adaptive optimized designs. Panel (d): An example case where 10 items are presented to the student and the agent successfully converges on designs close to the midpoint of the sigmoid function. Panel (e): ADOIRT’s design values converging to sigmoid midpoint (1000 models). Panel (f): Agent training learning curves with 1 std shaded area. The shaded area represents 1 std over 5 training runs. Panels (a)-(e) use the lowest training error seed.
## 5 Discussion and further work
In this article, we introduced a novel method for amortised experimental design and parameter estimation for the IRT setting, named ADOIRT. Our studies showed that ADOIRT outperforms non-adaptive and random design strategies, thus being capable of inferring student abilities from a small number of interactions in near real-time. Interesting future work would involve testing the system in a real-world setting with human participants.
|
2304.10890 | Magnetic properties of a spin-orbit entangled Jeff = 1/2 honeycomb
lattice | The interplay between spin-orbit coupling, anisotropic magnetic interaction,
frustration-induced quantum fluctuations and spin correlations can lead to
novel quantum states with exotic excitations in rare-earth-based quantum
magnets. Herein, we present the crystal structure, magnetization, electron spin
resonance (ESR), specific heat, and nuclear magnetic resonance (NMR)
experiments on the polycrystalline samples of Ba9Yb2Si6O24, in which Yb3+ ions
form a perfect honeycomb lattice without detectable anti-site disorder. The
magnetization data reveal antiferromagnetically coupled spin-orbit entangled
Jeff = 1/2 degrees of freedom of Yb3+ ions in the Kramers doublet state. The
ESR measurements reveal that the first excited Kramers doublet is 32.3(7) meV
above the ground state. The specific heat results suggest the absence of any
long-range magnetic order in the measured temperature range. Furthermore, the
29Si NMR results do not indicate any signature of magnetic ordering down to 1.6
K, and the spin-lattice relaxation rate reveals the presence of a field-induced
gap that is attributed to the Zeeman splitting of Kramers doublet state in this
quantum material. Our experiments detect neither spin freezing nor long-range
magnetic ordering down to 1.6 K. The current results suggest the presence of
short-range spin correlations in this spin-orbit entangled Jeff =1/2 rare-earth
magnet on a honeycomb lattice. | J. Khatua, Q. P. Ding, M. S. Ramachandra Rao, K. Y. Choi, A. Zorko, Y. Furukawa, P. Khuntia | 2023-04-21T11:13:26Z | http://arxiv.org/abs/2304.10890v2 | # Magnetism and field-induced effect in a spin-orbit entangled \(J_{\rm eff}=1/2\) honeycomb lattice
###### Abstract
The interplay between spin-orbit coupling, frustration induced anisotropic magnetic interaction, and spin correlations can lead to novel states with exotic excitations in rare-earth based quantum magnets. Herein, we present the crystal structure, magnetization, electron spin resonance (ESR), specific heat, and nuclear magnetic resonance (NMR) experiments on the polycrystalline samples of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) in which Yb\({}^{3+}\) ions form a perfect honeycomb lattice without detectable anti-site disorder. Magnetization data reveal antiferromagnetically coupled spin-orbit entangled \(J_{\rm eff}=1/2\) degrees of freedom of Yb\({}^{3+}\) ions in the Kramers doublet state where the Curie-Weiss temperature is - 2.97 K, as obtained from the low-temperature magnetic susceptibility data. The ESR measurements reveal that the first excited Kramers doublet is 32.3(7) meV above the ground state. The specific heat results suggest the presence of an antiferromagnetic phase transition at 2.26 K. The long-range antiferromagnetic order is completely suppressed upon the application of magnetic field and a field-induced disordered state is observed in an applied magnetic field of \(\mu_{0}H=2.5\) T, which is also confirmed by NMR measurements. Furthermore, the NMR spin-lattice relaxation rate reveals the presence of a field-induced gap that is attributed to the Zeeman splitting of Kramers doublet state in this quantum material. Our experiments suggest the presence of a phase transition and short-range spin correlations appearing well above the antiferromagnetic phase transition temperature and a field-induced disordered state in this spin-orbit entangled \(J_{\rm eff}=\)1/2 rare-earth magnet on a honeycomb lattice.
## I Introduction
Quantum fluctuations induced by frustration, spin correlations, and spin-orbit entanglement can stabilize exotic states in quantum materials [1; 2; 3]. Understanding emergent quantum phenomena and associated elementary excitations is one of the attractive tracks in quantum condensed matter [1; 2; 3; 4]. The two-dimensional geometrically frustrated triangular and kagome lattices have been studied extensively to realize exotic many-body quantum phenomena in condensed matter [3; 5; 6; 7]. Quantum spin liquid (QSL) is a highly entangled quantum state of matter wherein spins do not exhibit long-range magnetic order even at \(T\to 0\) owing to strong quantum fluctuations and intertwining of competing degrees of freedom [3]. QSL is often characterized by exotic quasi-particle excitations such as spinon or Majorana fermions with fractional spin quantum number [8; 9], which are different from the conventional magnon excitations with integer spin quantum numbers, as usually observed in magnetically ordered systems [10]. The quest for such unconventional state of matter was triggered by two theoretical proposals. The first is resonating valence bond state proposed by P. W. Anderson in 1973 [8]. The second one is Kitaev QSL on the honeycomb lattice [11; 9]. Here, \(S=1/2\) spin is predicted to fractionalize into emergent Majorana fermions and localized \(Z_{2}\) fluxes [9; 12]. The realization of Majorana fermions has been proposed in several quantum states [13; 14; 15] and recently it has drawn significant attention in magnetic insulators where fractionalization of spins can host such excitations that are useful for fault-tolerant topological quantum computations [16; 11; 17; 18; 19]. In this context, it is highly relevant to look for honeycomb materials potential to realize correlated quantum states and understand the effect of external perturbations on the host ground state.
The Kitaev model on honeycomb lattice with spin-\(1/2\) degrees of freedom demonstrated that the bond dependent Ising interactions provide an alternative route to realize frustration driven Kitaev spin-liquid state with Majorana fermions [9]. This attracts significant research interest in the strong spin-orbit coupled \(4d\) and \(5d\) honeycomb magnets such as iridates A\({}_{2}\)IrO\({}_{3}\) (A = Li\({}^{+}\), Na\({}^{+}\)) [20; 21] and ruthenate \(\alpha\)-RuCl\({}_{3}\)[22; 23]. However, most of the \(4d\)-\(5d\) honeycomb magnets show magnetic ordering at low-temperature due to the presence of inevitable defects and additional exchange interactions that destabilize Kitaev quantum-spin-liquid ground state. In par
ticular, the Kitaev magnet \(\alpha\)-RuCl\({}_{3}\) is characterized by the dominant ferromagnetic Kitaev interaction term. In addition, it offers several unconventional features appearing in finite magnetic field, such as chiral spin liquid [24; 25; 26], approximate half-integer quantized thermal Hall conductivity [27; 28], and field-induced QSL with anoxic-excitations [29; 30]. Quite recently, it was observed that the Ir\({}^{4+}\) ion based honeycomb magnet H\({}_{3}\)LiIr\({}_{2}\)O\({}_{6}\) neither show spin-freezing nor magnetic order down to 0.02 K despite inter-layer disorder [31]. The magnetic specific heat and susceptibility data follow a universal scaling behavior that is attributed to unavoidable defects which modify low-energy density of states [31; 32; 33]. Such honeycomb magnets are interesting to explore the impact of defects or magnetic impurities that are coupled with the Kitaev spin liquid state [34].
Beyond \(4d/5d\) ion based honeycomb magnets, the search for Kitaev materials has been extended to \(3d\) transition metal Co\({}^{2+}\) ion based honeycomb compounds such as Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) and Na\({}_{3}\)Co\({}_{2}\)SbO\({}_{6}\)[35; 36]. Recent, theoretical and experimental efforts demonstrate that the combination of spin-orbit coupling and strong Hund's couplings can host pseudospin-1/2 degrees of freedom with Kitaev interactions in the aforementioned cobaltates [37; 38]. Furthermore, it has been proposed that not only cobaltates, \(3d\) transition metal based honeycomb lattice in which oxygen ligand of \(3d\) ions form a nearly regular octahedron with small trigonal distortions as in \(A_{3}M_{2}X\)O\({}_{6}\) (\(A\) = Li, Na, \(X\) = Bi, Sb and \(M\) = Ni, Co) can also host Kitaev interaction and myriads of interesting quantum phenomena [39; 40; 35].
The experimental realization of ideal Kitaev spin-liquid is quite challenging due to the presence of extra exchange couplings, site disorder, and defects in real materials [41]. The essential ingredients to realize Kitaev spin liquid are spin-orbit entangled \(J_{\rm eff}=1/2\) moments, electron correlations with bond-directional Ising exchange interactions, and low dimensionality [42]. It is interesting in this respect that, many recent studies have found signatures of collective quantum phenomena in low dimensional rare-earth magnets with anisotropic interaction between pseudospin-1/2 moments. This includes Tomonaga-Luttinger liquid in YbAlO\({}_{3}\)[43], quantum disorder state in YbMgGaO\({}_{4}\)[44; 45], Ising-like spin-liquid state in NdTa\({}_{7}\)O\({}_{19}\)[46], Kosterlitz-Thouless phase in TmMgGaO\({}_{4}\)[47; 48], field tunable quantum disorder state in NaYb\(X_{2}\) (\(X\) = O, S, Se), and spin-orbit coupled quantum dimer phase in Yb\({}_{2}\)Si\({}_{2}\)O\({}_{7}\)[49]. Theoretically, it has been proposed that Yb-based honeycomb magnets may offer a more faithful realization of Kitaev physics due to strong localization and spin-orbit coupling of \(4f\) electrons compared to its \(4d\) or \(5d\) counterparts [50; 51; 52]. In this context, the \(4f\) system YbCl\({}_{3}\) wherein Yb\({}^{3+}\) ions decorate on a honeycomb structure similar to \(\alpha\)-RuCl\({}_{3}\) was proposed as a novel rare-earth based Kitaev spin-liquid candidate [53]. However, it turns out that YbCl\({}_{3}\) exhibits magnetic short-range and long-range order at low-temperature with easy-plane anisotropic magnetic interaction [53; 54]. Furthermore, similar to \(\alpha\)-RuCl\({}_{3}\), the signature of strong quantum fluctuations, the presence of continuum excitations and suppression of magnetic order with the application of magnetic filed have been observed in YbCl\({}_{3}\)[53; 54; 55]. Interestingly, inelastic neutron scattering experiment on the single crystals of YbCl\({}_{3}\) reveals the presence of van Hove singularity within the two magnon continuum associated with quantum fluctuations and the nearest-neighbor Heisenberg interaction leads to collective quantum behavior in this quantum magnet [56]. Moreover, the presence of Bose-Einstein condensation state has been proposed in the long-range ordered state of YbCl\({}_{3}\) owing to extremely weak inter-layer coupling [57]. It suggests that the rare-earth magnets are ideal to host a plethora of field-induced quantum phenomena in view of weak exchange interaction between rare-earth moments. In sharp contrast to long-range Neel order state in an isotropic nearest-neighbor exchange model on the honeycomb lattice [58], the strong quantum fluctuations induced by further nearest-neighbor frustrated exchange interaction can destabilize long-range Neel order even in bipartite honeycomb lattice [59; 60; 61]. One such promising candidate material is YbBr\({}_{3}\) wherein the Yb\({}^{3+}\) ions form a two-dimensional honeycomb lattice perpendicular to \(c\)-axis with nearest and next-nearest exchange interactions [62]. The magnetic susceptibility data on the single crystal of YbBr\({}_{3}\) show the absence of magnetic ordering down to 100 mK followed by a broad maximum around 3 K. The neutron scattering experiment reveals the presence of continuum excitations with honeycomb spin plaquette fluctuations in YbBr\({}_{3}\) suggesting a QSL state [62].
The strong quantum fluctuations of effective-1/2 spins, in combination with anisotropic interactions arising from strong spin-orbit coupling in rare-earth based magnets can lead to a plethora of quantum phases compared to traditional transition metal based magnets [63; 64; 46; 49]. Besides the promising platform to realize of Kitaev spin liquid ground state in rare-earth based honeycomb magnets [65], such bipartite spin-lattice offers a promising venue to host spiral spin-liquid state with fracton quadrupole excitations [66; 67], multiple-\(q\) states in presence of magnetic field [68], lattice nematic phase [69], and Berezinskii-Kosterlitz-Thouless phase [70; 71]. Furthermore, the realization of quantum phase transition [72; 73; 74] and multiple exotic phases associated with a fully frustrated transverse-field Ising model in a honeycomb lattice are yet to be explored [75; 76]. The spin-orbit coupling, electron correlation, low dimensionality, and a low spin value provide an ideal ground to realize novel quantum states in frustrated rare-earth based antiferromagnets [77; 78; 79; 80; 81]. In this context, structurally perfect novel rare-earth \(4f\) based honeycomb magnets wherein the combination of spin-orbit coupling and sizable crystal electric field allows for the realization of an effective spin 1/2 system with large anisotropy offer an alternate route for the experimental search for exotic quantum many phenomena including Kitaev QSL [50; 52; 52]. Herein, we present our results on a promising rare-earth
-based quantum magnet Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\), which crystallizes in the trigonal crystal structure with space group R\(\bar{3}\), where the magnetic Yb\({}^{3+}\) ions form a perfect honeycomb lattice in the \(ab\)-plane [83]. Magnetic susceptibility data suggest the realization of a spin-orbit entangled \(J_{\rm eff}\) = 1/2 moments of Yb\({}^{3+}\) ions that is consistent with a low-energy Kramers doublet state at low temperature. As revealed by ESR, the ground state is well isolated from the first excited Kramers doublet, which is 32.3(7) meV above the ground state. The Curie-Weiss fit of low-temperature susceptibility data reveals the presence of antiferromagnetic interactions in the ground state. A \(\lambda\)-type anomaly has been observed in zero-field specific heat data at \(T_{\rm N}\) = 2.26 K, which suggests the presence of an antiferromagnetic phase transition in this material. Notably, it is observed that upon increasing the magnetic field, the transition temperature shifts towards lower temperatures and completely suppressed in a magnetic field of 2.5 T. \({}^{29}\)Si NMR measurements in weak magnetic fields confirm the presence of long-range magnetic order in BYSO. On the other hand, the NMR relaxation rate in high-magnetic field suggests the presence of a field-induced gap due to the Zeeman splitting of the Kramers doublet state. Our investigation also reveals the presence of short-range spin correlations at temperatures above the long-range ordered state suggesting a field-induced quantum disordered liquid like state in this rare-earth honeycomb magnet.
## II Experimental details
Polycrystalline samples of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) (henceforth; BYSO) were prepared by a conventional solid state reaction of appropriate stoichiometry amounts of BaCO\({}_{3}\) (99.997 %, Alfa Aesar), SiO\({}_{2}\) (99.999 %, Alfa Aesar), and Yb\({}_{2}\)O\({}_{3}\) (99.998 %, Alfa Aesar). Prior to use, we preheated BaCO\({}_{3}\), and Yb\({}_{2}\)O\({}_{3}\) at 100\({}^{\circ}\)C and 800\({}^{\circ}\)C, respectively, to remove moisture and carbonates. All the reagents were thoroughly ground to obtain homogeneous mixtures. The powder mixtures were pelletized and sintered at 900\({}^{\circ}\)C for 24 hours in air to decompose carbon dioxide. In order to obtain the desired phase, the pelletized sample was fired at 1350\({}^{\circ}\)C for 72 hours with several intermittent grindings. The powder X-ray diffraction (XRD) patterns were measured by employing a smartphone Rigaku X-ray diffractometer with Cu K\({}_{\alpha}\) radiation (\(\lambda\) = 1.54 A). Magnetization measurements were carried out using the VSM option of Quantum Design, Physical Properties Measurement System (QD, PPMS) in the temperature range 1.9 K \(\leq\)\(T\)\(\leq\) 340 K in magnetic fields 0 T \(\leq\)\(\mu_{0}\)\(H\)\(\leq\) 7 T. Specific heat measurements were performed using QD, PPMS by thermal relaxation method, in the temperature range 1.9 K \(\leq\)\(T\)\(\leq\) 250 K in magnetic fields 0 T \(\leq\)\(\mu_{0}\)\(H\)\(\leq\) 7 T. The electron spin resonance (ESR) spectrum was measured at 9.40 GHz K on a commercial X-band Bruker E500 spectrometer working at 9.40 GHz in the temperature range 4 K \(\leq\)\(T\)\(\leq\) 140 K. The microwave power was varied between 0.01 mW at low temperatures and 1 mW at high temperatures in order to avoid signal saturation at low temperatures and diminishing of the signal at high temperatures. Modulation of the magnetic field with 100 kHz frequency and 0.5 mT amplitude was used to enhance the signal-to-noise ration, leading to derivative ESR spectra. Field-swept \({}^{29}\)Si (\(I\) = 1/2, and gyromagnetic ratio 8.457 MHz/T) NMR measurements down to 1.6 K at several frequencies were carried out on a home-made phase-coherent spin-echo pulse
Figure 1: (a) Rietveld refinement profile of the room-temperature x-ray diffraction data of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\). The black circle represents the experimentally observed data points and the orange solid line is the calculated data. The rows of vertical bars are the Bragg reflection positions for Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) (olive bars) and Ba\({}_{2}\)SiO\({}_{4}\) (violet bars). The blue line is the difference between observed and calculated intensities. (b) A single unit cell of the trigonal crystal structure of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\). The nearest-neighbor oxygen atom of Yb\({}^{3+}\) ions form a YbO\({}_{6}\) octahedra. The possible in-plane nearest-neighbor and inter-plane exchange interactions through the bridges Yb-O-Si-O-Yb, and Yb-O-Si-O-Si-O-Yb are shown, respectively. (c) Structure depicting nearest-neighbor (5.79 Å) Yb\({}^{3+}\) ions which form honeycomb planes. There are three such consecutive honeycomb planes in the unit cell of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\).
spectrometer equipped with a 9 T Oxford magnet. NMR spectra measurements were carried out using a standard Hahn-echo sequence while \({}^{29}\)Si NMR spin-lattice relaxation time was extracted from the recovery of longitudinal nuclear magnetization \(M(t)\) after time delay \(t\) following saturation pulse sequence.
## III Results
### XRD and structural details
In order to confirm the phase purity and obtain structural atomic parameters, the Rietveld refinement of the X-ray diffraction data was performed using the FULLPROF suite [84]. The XRD results reveal that the polycrystalline samples of BYSO contain a few percents of non-magnetic secondary phase Ba\({}_{2}\)SiO\({}_{4}\), which should not affect the overall magnetic properties of the material studied here. In literature, it is observed that in barium and silicon based magnetic materials, Ba\({}_{2}\)SiO\({}_{4}\) (BSO) is an unavoidable common secondary phase [85]. To quantify the percentage of dominant and secondary phases, we performed double phases Rietveld refinement. The initial atomic coordinates were taken from the refs. [83] and [86] for dominant BYSO phase and the secondary BSO phase, respectively. Fig. 1 (a) depicts the Rietveld refinement of the XRD data that suggests our polycrystalline samples contains \(\approx\) 94 % of the dominant BYSO phase and \(\approx\) 6 % non-magnetic secondary BSO phase. The Rietveld refinement reveals that the present material BYSO crystallizes in trigonal crystal structure with space group R\(\bar{3}\). Table 1 lists the atomic parameters obtained from the Rietveld refinement. The Yb atoms occupy only one Wyckoff atomic site 6\(c\) and form a six-coordinated YbO\({}_{6}\) octahedron with neighboring O atoms. The possible in-plane exchange interactions via Yb-O-Si-O-Yb super-exchange pathways is shown in Fig. 1 (b). More interestingly, first nearest Yb\({}^{3+}\) neighbors (5.79 A) constitutes two-dimensional honeycomb layers perpendicular to the crystallographic \(c\)-axis and there are three such well separated honeycomb layers in one unit cell of BYSO (Fig. 1 (c)). From the structural point of view, BYSO is a bit different from the honeycomb lattice YbCl\({}_{3}\) which crystallizes in the monoclinic crystal structure (space group \(C2/m\) with lattice parameters \(a\) = 6.732 A, \(b\) = 11.620 A and \(c\) = 6.328 A, \(\alpha\) = \(\gamma\) = 90.00\({}^{\circ}\) and \(\beta\) = 110.551\({}^{\circ}\)) [54]. However, BYSO is structurally similar to the honeycomb lattice of YbBr\({}_{3}\) which crystallizes in the trigonal crystal structure (space group R\(\bar{3}\)) with the lattice parameters \(a\) = \(b\) = 6.971 A, \(c\) = 19.103 A, \(\alpha\) = \(\beta\) = 90\({}^{\circ}\) and \(\gamma\) = 120\({}^{\circ}\) [62]. Although both systems belong to the same crystal class, the ground state properties of BYSO are expected to be different due to significant differences in the ratio of \(c/a\) and the exchange paths than that found in YbBr\({}_{3}\)[62].
### Magnetic susceptibility
The temperature dependence of the magnetic susceptibility of BYSO in magnetic field \(\mu_{0}H=0.5\) T is shown in Fig. 2 (a). The magnetic susceptibility shows a weak anomaly at about 2.2 K as presented in the inset of Fig. 2 (a). The susceptibility shows a change of slope (albeit weak) below 2.3 K in weak magnetic fields, which is possibly related to a phase transition [47; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99]. Above 110 K, the inverse magnetic susceptibility data (see Fig. 2 (b)) were fitted with Curie-Weiss law, \(\chi=\chi_{0}+C/(T-\theta_{\rm CW})\), where \(C\) is the Curie-constant, \(\theta_{\rm CW}\) is the Curie-Weiss temperature and \(\chi_{0}\) is the sum of temperature independent Van-Vleck susceptibility and core diamagnetic susceptibility [90]. The high-temperature Curie-Weiss fit (red line in Fig. 2 (b)) yields \(\theta_{\rm CW}\) = \(-\) 111 K and effective moment \(\mu_{\rm eff}\) = 4.53 \(\mu_{B}\). The estimated large negative \(\theta_{\rm CW}\) is attributed to the energy scale of crystal field excitations. The obtained effective moment \(\mu_{\rm eff}\) = 4.53 \(\mu_{\rm B}\) is close to that expected for free Yb\({}^{3+}\) ions (4\(f\)13; \(L\) = 3, \(S\)= 1/2). As Yb\({}^{3+}\) is a Kramers ion, generally the strong crystal electric field splits the eight-fold degenerate \(J\) = 7/2 multiplet into four Kramers doublets. At low-temperature Yb\({}^{3+}\) ions acquire spin-orbit entangled \(J_{\rm eff}\) = 1/2 moment. In such a scenario, the low-temperature magnetic properties are mainly governed by the exchange interactions between \(J_{\rm eff}\) = 1/2 moment of Yb\({}^{3+}\) ions in the lowest Kramers doublet state while the higher doublet states are important to understand the magnetic properties at high-temperature and high-magnetic fields [91]. In order to obtain information concerning the Kramers doublet ground state and the nature of magnetic interactions, the low-temperature inverse susceptibility data (8 K \(\leq\) 20 K) were fitted (see orange line in Fig. 2 (b)) with the Curie-Weiss law, which yields \(\theta_{\rm CW}\) = \(-\) 2.97 K and \(\mu_{\rm eff}\) = 2.78 \(\mu_{B}\). The obtained value of effective moment,
\begin{table}
\begin{tabular}{l c c c c c} Atom & Wyckoff position & \(x\) & \(y\) & \(z\) & Occ. \\ \hline Ba\({}_{1}\) & 3\(a\) & 0 & 0 & 0 & 1 \\ Ba\({}_{2}\) & 18\(f\) & 0.333 & 0.666 & 0.004 & 1 \\ Ba\({}_{3}\) & 18\(f\) & 0.029 & 0.668 & 0.109 & 1 \\ Yb & 6\(c\) & 0 & 0 & 0.164 & 1 \\ Si & 18\(f\) & 0.336 & 0.012 & 0.073 & 1 \\ O\({}_{1}\) & 18\(f\) & 0.347 & 0.065 & 0.006 & 1 \\ O\({}_{2}\) & 18\(f\) & 0.480 & 0.158 & 0.102 & 1 \\ O\({}_{3}\) & 18\(f\) & -0.002 & 0.173 & 0.107 & 1 \\ O\({}_{4}\) & 18\(f\) & 0.137 & 0.468 & 0.094 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: The Rietveld refinement parameters obtained from the analysis of the XRD data taken at room temperature. The Rietveld refinements were carried out with space group R\(\bar{3}\) and yields unit cell parameters \(a\) = \(b\) = 10.002 Å, \(c\) = 22.127 Å and \(\alpha\) = 90\({}^{\circ}\), \(\beta\) = 90\({}^{\circ}\), \(\gamma\) = 120\({}^{\circ}\). The goodness of Rietveld refinement was confirmed by the following factors: \(\chi^{2}\) = 4.8; R\({}_{\rm wp}\) = 6 %; R\({}_{\rm exp}\)= 2.72 % and R\({}_{p}\) = 4 %.
\(\mu_{\rm eff}=2.78\)\(\mu_{\rm B}\) is considerably smaller than \(\mu_{\rm eff}=4.53\)\(\mu_{B}\) expected for free Yb\({}^{3+}\) ions. It implies the presence of a Kramer doublet state with \(J_{\rm eff}=1/2\) low-energy state of Yb\({}^{3+}\) ions [53]. The negative Curie-Weiss temperature, \(\theta_{\rm CW}=-\) 2.97 K, suggests the presence of an antiferromagnetic interaction between Yb\({}^{3+}\) spins. Top inset of Fig. 2 (b) depicts the zero-field cooled (ZFC) and field-cooled (FC) susceptibility and reveals no bifurcation, which suggests the absence of spin-freezing at least above 1.9 K. Fig. 2 (c) depicts the isothermal magnetization as a function of the magnetic field up to 7 T at several temperatures. Due to van-Vleck magnetism, at 2 K (\(<\theta_{\rm CW}\)), the almost linear behavior of isotherm magnetization in the magnetic field range 5 T \(\leq\mu_{0}H\leq 7\) T was extrapolated to \(y\)-axis, which gives the saturation magnetic moment \(M_{s}=1.2\)\(\mu_{B}/\)Yb\({}^{3+}\) that corresponds to an average value \(g=2.4\)[44]. At temperature (\(T\geq\) 5 K) well above \(T_{N}\), one can model the observed isothermal magnetization following Brillouin function, \(M/M_{s}=B_{1/2}\) (\(y\)), where \(B_{J}(y)=[\frac{2J+1}{2J}coth[\frac{y(2J+1)}{2J}]-\frac{1}{2J}coth\frac{y}{2J}]\) is the Brillouin function, \(M_{s}\) (= g\(J\mu_{B}\)) is the saturation magnetization and \(y=g\mu_{B}J\mu_{0}H/k_{B}T\) is the ratio of Zeeman energy of magnetic moment to the thermal energy, \(\mu_{B}\) is the Bohr magneton, and \(g\) is the Lande's g-factor. The solid lines in Fig. 2 (c) for 5 K, 10 K and 15 K are the Brillouin function fit which yields an average Lande's g-factor, \(g=2.45\), while \(J\) was fixed to 1/2 consistent with the lowest Kramers doublet state of Yb\({}^{3+}\) ions in this temperature regime.
### Electron spin resonance
Electron spin resonance (ESR) measurements were performed on the powder sample of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) at \(T\geq\) 4 K. The ESR spectra exhibit pronounced broadening with increasing temperature, making the signal too broad for reliable analysis above 140 K (Fig. 3 (a)). Below 50 K, the spectra become structured, however, their line shape does not correspond to a single powder pattern due to anisotropic \(g\) factors. Above 70 K, the spectra can be nicely fitted with a single Lorentzian curve, revealing motional narrowing effects on the line shape [92]. The temperature dependence of the ESR line width (Fig. 3 (b)) is due to crystal-electric-field (CEF) fluctuations, as often encountered in rare-earth magnets [46], where excited CEF levels are relatively low in energy. The broadening is due to the Orbach process, describing two-phonon scattering via excited CEF levels [93]. Indeed, the experimental line width \(\Delta B\) agrees very well with the expression
\[\Delta B(T)=\Delta B_{0}+\frac{f}{\exp(\Delta/k_{B}T)-1}, \tag{1}\]
where the constant term \(\Delta B_{0}\) arises from the magnetic anisotropy in the CEF ground state, while the exponential term describes the Orbach relaxation with \(f\) being the scaling factor between the ESR line width and the spin fluctuation frequency, and \(\Delta\) being the energy gap between the CEF ground state and the lowest excited state. The fit of the model to the experimental data yields \(\Delta B_{0}=51(3)\) mT, \(f=13(1)\) T, and
Figure 2: (a) Temperature dependence of magnetic susceptibility in 0.5 T; the inset depicts the temperature dependence in several magnetic fields in the temperature range 1.9 K \(\leq T\leq\) 3.2 K. The position of slope change near 2.3 K in magnetic susceptibility is marked by an arrow. In weak magnetic fields, the values of \(T_{N}\) from magnetic susceptibility were obtained by the derivative of susceptibility with respect to temperature (not shown here). (b) Temperature dependence of inverse magnetic susceptibility in magnetic field \(\mu_{0}H\)= 0.5 T with Curie-Weiss fit in the high-temperature (red line) and the low-temperature (orange line) regions. The inset (left top corner) shows the comparison of the zero-field cooled (ZFC) and field cooled (FC) magnetic susceptibility as a function of temperature in a magnetic field \(\mu_{0}H=0.01\) T. (c) Isotherm magnetization as a function of magnetic field where the solid line represents the Brillouin function fit for paramagnetic Yb\({}^{3+}\) spins with \(J_{\rm eff}=1/2\) moment at \(T\geq\) 5 K. The blue line is the expected linear field dependent behavior of isotherm magnetization at 2 K in high fields and it is extrapolated on the \(y\)-axis to determine the value of saturation magnetization.
\(\Delta=32.3(7)\) meV. This gap is very similar to the gap of 39.4 meV observed in YbMgGaO\({}_{4}\)[94] or 34.8 meV observed in NaYbO\({}_{2}\)[95]. Both materials possess YbO\({}_{6}\) octahedra with frustrated two-dimensional arrangements, just like Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\).
### Specific heat
In order to gain further insights into ground state properties, we performed the temperature dependence of specific heat (\(C_{\rm p}(T)\)) of BYSO down to 1.9 K in several magnetic fields up to 7 T. The temperature dependence of zero-field specific heat data in the temperature range 1.9 K \(\leq\)\(T\)\(\leq\) 250 K is shown in Fig. 4 (a). The zero-field specific heat data exhibit a lambda-type anomaly at \(T_{\rm N}\) = 2.26 K, which indicates the presence of long-range antiferromagnetic magnetic ordering (see inset of Fig. 4 (a)). In order to extract the associated magnetic specific heat due to Yb\({}^{3+}\) spins from the total specific heat, we model the lattice contribution due to phonons following Debye function in the \(C_{\rm p}(T)\) as
\[C_{\rm lat.}(T)=\sum_{n=1}^{2}C_{Dn}[9k_{B}\left(\frac{T}{\theta_{Dn}}\right)^ {3}\int_{0}^{\theta_{Dn}/T}\frac{x^{4}e^{x}}{(e^{x}-1)^{2}}dx] \tag{2}\]
where \(\theta_{D}\) is the Debye temperature, while \(R\) and \(k_{B}\) are the molar gas constant and Boltzmann constant, respectively. The solid red line in Fig. 4 (a) is the fitted lattice contributions that was obtained with \(\theta_{D1}\) = 234 K and \(\theta_{D2}\) = 456 K. In the fits, two coefficients were fixed in the ratio C\({}_{D1}\):C\({}_{D2}\) = 17 : 24 that corresponds to the ratio of the number of heavy and light atoms in BYSO [96; 97; 98]. The associated magnetic contribution \(C_{\rm mag}(T)\) was extracted after subtracting lattice contribution and is shown in Fig. 4 (b) as a function of temperature. It is observed that the lattice fit curve starts deviating from the experimental data below 8 K, thereby indicating the development of antiferromagnetic correlations below this temperature. The magnetic entropy change (\(\Delta S(T)\)) was calculated by integrating \(C_{\rm mag}(T)/T\) in the temperature range 1.9 K \(\leq\)\(T\)\(\leq\) 8 K (see Fig. 4 (b)). The magnetic entropy release at the transition temperature is 0.3 J mol\({}^{-1}\)K\({}^{-1}\) i.e., 5.2 % of the total entropy expected for effective-1/2 system \(\Delta S\) = \(R\) ln (2\(S\)+1) = 5.76 J mol\({}^{-1}\)K\({}^{-1}\). This partial entropy release can be ascribed to the presence of spin fluctuations with a small ordered moment below the transition temperature [43; 99]. Furthermore, we noticed that the saturation entropy at 8 K is 60 % of 5.76 J mol\({}^{-1}\)K\({}^{-1}\). The missing entropy is also likely due to the presence of short-range spin correlations well above the antiferromagnetic transition temperature [43]. However, to confirm the possibility of unaccounted residual entropy below \(T_{N}\), specific heat measurements down to mK temperatures would be required.
The temperature dependence \(C_{\rm p}(T)\) in magnetic field up to 2.4 T is shown in Fig. 4 (c). Interestingly, we observed that upon increasing magnetic field the transition temperature shifts toward lower temperatures. Surprisingly, the long-range antiferromagnetic order is completely suppressed (see Fig. 4 (d)) in magnetic fields \(\mu_{0}H\)\(\geq\) 3 T.
A similar shift of the transition temperature with magnetic field is also reported in the specific heat study of several quantum materials [25; 43; 100; 101]. In BYSO, \(C_{\rm p}(T)\) gets enhanced on increasing magnetic field until the magnetic field around 3 T. Such a behavior of \(C_{\rm p}(T)\) can be ascribed to modification of magnetic structure in the presence of magnetic field similar to that observed
Figure 3: (a) The ESR spectra of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) at various temperatures (circles), with corresponding best fits with isotropic Lorentzian line shape (solid lines) above 70 K. The spectra are shifted vertically for clarity. (b) The temperature dependence of the ESR line width (circles), with the solid line representing the fit to the Orbach model [Eq. (1)], yielding the energy gap of \(\Delta=32.3(7)\) meV (see text for details).
in triangular lattice antiferromagnets \(X\)YbSe\({}_{2}\)(\(X\) = Cs, Na) [102; 103] and TmMgGaO\({}_{4}\)[89].
In a magnetic field of 3 T, only a broad maximum is observed around 2.3 K in specific heat while the sharp anomaly at 2.26 K is fully suppressed. Above 3 T, this broad maximum progressively shifts toward higher temperatures and it appears like Schottky anomaly, suggesting the presence of a field polarized state with a field-induced gap in high magnetic fields [52]. A similar scenario has been observed in the honeycomb lattice YbCl\({}_{3}\)[55] and other low-dimensional frustrated magnets [43; 104]. In BYSO, the field-induced gap is attributed to the Zeeman splitting of the lowest Kramers doublet state effectively surpassing the intrinsic exchange interactions present in the ground-state Kramers doublet. To estimate the gap value, the high-field magnetic specific heat data were fitted using two-levels Schottky expression of the specific heat
\[C_{\text{sch.}}=fR\left(\frac{\Delta_{\text{s}}}{k_{B}T}\right)^{2}\frac{\exp( \Delta_{\text{s}}/k_{B}T)}{(1+\exp(\Delta_{\text{s}}/k_{B}T))^{2}}, \tag{3}\]
where \(k_{B}\) is the Boltzmann constant, \(R\) is the universal gas constant, \(\Delta_{\text{s}}\) is the gap induced by the Zeeman splitting of the ground state Kramers doublet of Yb\({}^{3+}\) ion, and \(f\) measures the fraction of Yb\({}^{3+}\) spins which contribute to the splitting of the ground state doublet. The fitted solid lines in Fig. 4 (d) were obtained using Eq.3. The inset of Fig. 4 (d) displays the corresponding estimated field-induced gap. The estimated gap value is consistent with that obtained from nuclear magnetic resonance spin lattice relaxation experiments (see Fig. 5 (d)). The estimated fraction of Yb\({}^{3+}\) spins was close to one suggesting that almost all the Yb\({}^{3+}\) spins contribute to the observed effect [97; 105].
Figure 4: Temperature dependence of zero-field specific heat where the red line is the fitting curve by the Debye model (Eq. (2); see text) which accounts for phonon contributions. Inset highlights the low-temperature region. (b) Temperature dependence of magnetic specific heat divided by temperature, \(C_{\text{mag}}/T\), in zero-field and the inset shows the estimated entropy release \(\Delta S\) (\(=\int\)\((C_{\text{mag}}/T)\) d\(T\)) as a function of temperature. (c) Temperature dependence of total specific heat in various magnetic fields. (d) Temperature dependence of magnetic specific heat in high magnetic fields where the solid line depicts the fit using Eq. (3). The evolution of the field-induced gap as a function of the external magnetic field is depicted in the inset.
### Nuclear magnetic resonance
In order to track the intrinsic static magnetic susceptibility and spin dynamics of BYSO, we performed also nuclear magnetic resonance measurements on \({}^{29}\)Si (nuclear spin \(I=1/2\), gyromagnetic ratio \(\gamma_{\rm N}/2\pi=8.458\) MHz/T). Fig. 5 (a) shows the field swept \({}^{29}\)Si NMR spectra at a few selective temperatures at 61 MHz. The observed spectral shape is a single symmetric line as expected for \(I=1/2\) and a single crystallographic Si site [106]. The smooth evolution of the field swept NMR spectra taken at frequency \(\nu=61\) MHz in the entire temperature range without rectangular shape or splitting of the \({}^{29}\)Si line suggests absence of long-range magnetic ordering in a field of approximately 7.2 T [107; 108; 109].
At high temperatures, the NMR spectra are relatively narrow, but as the temperature drops, they start to widen and exhibit anisotropic behavior (see Fig. 5 (a)).
Figure 5: (a) The field swept \({}^{29}\)Si NMR spectra measured at constant frequency \(\nu=61\) MHz at various temperature. The orange vertical line corresponds to zero-shift reference line at 7.206 T. (b) The temperature dependence of NMR line-width at different frequencies. The temperature at which NMR line-width and specific heat start to grow is indicated by the shaded region. Inset depicts the temperature dependence of \({}^{29}\)Si NMR shifts (\(K\)) that is scaled with bulk susceptibility data on a semi-logarithmic scale at \(\nu=61\) MHz. (c) The temperature dependence of the \({}^{29}\)Si NMR spin-lattice relaxation rate (\(T_{1}^{-1}\)) for three different fields on a log-log scale. Inset shows the \(T_{1}^{-1}\) as a function of inverse temperature (\(T^{-1}\)) for different magnetic fields in semi-logarithmic scale. The black lines represent fits to a phenomenological model valid for thermally activated behavior of \(T_{1}^{-1}\) as discussed in the text. (d) Phase diagram depicting the suppression of antiferromagnetic transition temperature (left \(y\)-axis) and the evolution of field-induced gap (right \(y\)-axis) with external magnetic field. The dotted line indicates the expected boundary of antiferromagnetic (AFM) ordered state and a field-polarized (FP) state.
This is attributed to the presence of anisotropic hyperfine coupling [110]. Inset of Fig. 5 (b) depicts the estimated temperature dependence of NMR shift (\(K\)). The shift directly measures the intrinsic magnetic susceptibility relevant to the magnetic lattice. Upon decreasing temperature, the NMR shift increases similar to bulk magnetic susceptibility data as shown in inset of Fig. 5 (b). Below 10 K, NMR shift saturates to a non-zero value which is ascribed to the strong polarization of Yb\({}^{3+}\) moments in high-magnetic fields [111]. The obtained NMR shift can be expressed as \(K\) (\(T\)) = \(K_{0}\) + (\(A_{\rm hf}/N_{A}\mu_{B}\))\(\chi(T)\), where the first term (\(K_{0}\)) represents the sum of temperature independent contribution arising from the orbital susceptibility and chemical shift, the second term accounts for the temperature dependent intrinsic spin susceptibility of Yb\({}^{3+}\) spins, and \(A_{\rm hf}\) is the hyperfine coupling constant, \(N_{A}\) the Avogadro number, and \(\mu_{B}\) the Bohr magneton. To extract the hyperfine coupling constant, the NMR shift (\(K_{\rm 61MHz}\)) versus magnetic susceptibility (\(\chi_{\rm 1T}\)) was plotted with temperature as an implicit parameter (also known as Clogston-Jaccarino plot) [112]. The linear behavior of Clogston-Jaccarino plot (not shown here), and \(A_{\rm hf}=35\pm 1\) Oe/\(\mu_{B}\) in the paramagnetic region 7 K \(\leq T\)\(\leq\) 130 K suggests the measured magnetic susceptibility is intrinsic for the spin-lattice of BYSO.
The substantial broadening of NMR spectra below 5 K (see Fig. 5 (a)) is ascribed to the development of field polarized state consistent with the specific heat results [108]. To observe the characteristic feature of magnetic order state, the NMR line width i.e., full width at half maxima (\(\Delta\nu\)), were estimated from the NMR spectra as shown in Fig. 5 (b) at different frequencies. It is observed that, above 10 K, \(\Delta\nu\) is almost constant at low-frequencies which is attributed to the paramagnetic behavior of Yb\({}^{3+}\) moments consistent with the thermodynamic results. However, below 8 K, the NMR line width was found to increase rapidly with decreasing temperature at frequencies \(\nu\leq 21.5\) MHz (see Fig. 5 (b)). Such an upturn of the line width below the temperature at which specific heat starts increasing indicates the presence of magnetic ordering in BYSO in weak magnetic fields. The weak frequency dependency of the line width is consistent with the observation of field-dependent specific heat at low-temperature. Similar behavior of NMR line width is also observed around the transition temperature in Ir honeycomb magnets Ag\({}_{3}\)LiIr\({}_{2}\)O\({}_{6}\)[108] and Na\({}_{3}\)Ni\({}_{2}\)SbO\({}_{6}\)[113]. A significantly enhanced NMR line width at a frequency \(\nu\) = 61 MHz is attributed to the presence of field-polarized state at low-temperature and high fields in BYSO.
The NMR spin-lattice relaxation rate (\(T_{1}^{-1}\)) probes the low energy spin excitations related to the dynamic spin susceptibility governed by fluctuations of electron spin at the nuclear sites through hyperfine interactions. Fig. 5 (c) depicts the temperature dependence of \(T_{1}^{-1}\) with three distinct regions of different spin-lattice relaxation rates. In the entire temperature range of investigation, the relaxation rates were estimated from the fitting of the recovery of longitudinal nuclear magnetization \(M(t)\) by the single exponential function, \(M_{z}(t)\) = \((M_{0}-M(t))/M_{0}\) = \(A\)\(\exp(-t/T_{1})\), where \(M_{0}\) is the equilibrium magnetization, \(M_{z}(t)\) is the magnetization at time \(t\) after the saturation pulse and \(A\) is a constant. This implies a homogeneous distribution of spin-lattice relaxation rates in this 4\(f\) honeycomb lattice antiferromagnet. Upon cooling, the \(T_{1}^{-1}\) increases till the plateau around 50 K in magnetic fields \(\mu_{0}H\) = 2.5 and 7.2 T. This behavior is often observed in rare-earth magnets due to depopulation of the crystal electric field and is due to the Orbach mechanism that is responsible or the ESR line broadening in BYSO [111]. Below 50 K, the \(T_{1}^{-1}\) first remains temperature independent down to a characteristic temperature \(T^{*}\) which is field dependent, as shown by dotted curve in Fig. 5 (c). The constant value of \(T_{1}^{-1}\) in the intermediate temperature range suggests that the relaxation rate is dominated by paramagnetic spin fluctuation of Yb\({}^{3+}\) spins in the crystal field ground state. One can expect the correlated magnetism of Yb\({}^{3+}\) spins and field polarized phase at low-temperature owing to the weak exchange interaction between Yb\({}^{3+}\) spins typical for rare-earth based quantum magnets [97]. The relaxation rate \(T_{1}^{-1}\) in 2.5 T decreases rapidly below the characteristic temperature \(T^{*}\), which suggests that the applied magnetic field opens a gap which normally occurs when the Zeeman energy overcomes the interaction energy between Yb\({}^{3+}\) ions. To estimate the value of the gap in the presence of magnetic field, we present \(T_{1}^{-1}\) as a function of inverse temperature in a semi-log plot in \(\mu_{0}H\geq\) 2.5 T as shown in the inset of Fig. 5 (c). The solid line there represents the fit to the experimental data using the phenomenological model relevant for thermally activated behavior of magnetic moments i.e., \(T_{1}^{-1}\propto\exp(-\Delta_{s}/k_{B}T)\), where \(k_{B}\) is the Boltzmann constant and \(\Delta_{s}\) is the gap value to the Zeeman splitting of the ground state Kramers doublet in the presence of the magnetic field. We find a linear variation of the gap with the applied magnetic field (Fig. 5 (d) right-\(y\) axis), as expected for the case when the Zeeman energy overcomes the exchange energy. The spin-lattice relaxation rate taken at 6.81 MHz (\(\sim\) 0.8 T; see Fig. 5 (c)) shows an anomaly around 2.34 K, which is consistent with long-range magnetic ordering, i.e., a peak in the specific heat (see Fig. 4 (c)) [108]. This anomaly in the spin-lattice relaxation rate is suppressed upon increasing magnetic field. In order to understand the effect of the magnetic field on the underlying magnetic ordering and spin dynamics, we constructed a phase diagram based on the magnetization, specific heat and spin-lattice relaxation data which is shown in Fig. 5 (d). The phase diagram suggests that below the critical field \(\mu_{0}H\) = 2.5 T, the honeycomb spin-lattice of BYSO exhibits an antiferromagnetically ordered ground state and a field polarized state above this critical field. The absence of rectangular NMR spectra and sharp anomaly in NMR relaxation rate, line width, and shift rule out a conventional long-range ordered state in this 2D honeycomb material above
2.5 T.
## IV Discussion
In this work, we investigated crystal structure and ground state properties of an unexplored rare-earth based honeycomb spin-lattice Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) through the combination of thermodynamic and microscopic measurements. Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) crystallizes in the trigonal space group R\(\bar{3}\) without anti-site disorder between constituent atoms and the singly occupied Yb\({}^{3+}\) ions constitute a perfect honeycomb lattice perpendicular to the crystallographic _c_-axis. Magnetic susceptibility data suggest the presence of spin-orbit entangled \(J_{\rm eff}\) = 1/2 moments of Yb\({}^{3+}\) ions that is consistent with a Kramers doublet ground state at low-temperature. The estimated negative Curie-Weiss temperature (\(\theta_{\rm CW}\) = \(-\) 2.97 K) from the low-temperature susceptibility data indicates the presence of antiferromagnetic interaction between \(J_{\rm eff}\) = 1/2 moments of Yb\({}^{3+}\) ion. By employing the nearest-neighbor \(z\) = 3 and \(S\) = \(J_{\rm eff}\) = 1/2 for BYSO, the mean field formula \(\theta_{\rm CW}\) = (\(-zJS(S+1)\))/3\(k_{\rm B}\) offers an approximate estimation of the exchange interaction \(J/k_{\rm B}\) between \(J_{\rm eff}\) = 1/2 moments of Yb\({}^{3+}\) ions in the \(ab\)-plane [114]. The nearest-neighbor exchange interaction in the \(ab\)-plane is found to be roughly 4 K while the dipolar interaction energy E\({}_{\rm dip}\)\(\approx\)\(\mu_{0}9^{2}\)\(\mu_{B}^{2}/4\pi a^{3}\)\(\approx\) 2 % of the nearest-neighbor exchange interactions, where \(g_{\rm avg}\) is the powder average Lande \(g\) factor and \(a\) is the nearest-neighbor Yb-Yb bond length in BYSO. This suggests the presence of dominant super-exchange interaction via the most likely in-plane Yb-O-Si-O-Yb exchange pathways. Furthermore, the nature of exchange interactions between Yb\({}^{3+}\) moments is expected to be Heisenberg-type as the YbO\({}_{6}\) octahedra are connected through intermediate Si\({}^{4+}\) ions which prevent one of the essential requirements i.e., Yb-O-Yb bond angles of 90\({}^{\circ}\) for stabilizing Kitaev interactions (see Fig. 1 (b)) [115]. Our ESR results, indicate the presence of anisotropic magnetic interaction between \(J_{\rm eff}\) = 1/2 moments of Yb\({}^{3+}\) ions. In the absence of magnetic field, the specific heat data exhibit an anomaly around 2.26 K that suggests the presence of a phase transition in this antiferromagnet indicating a very weak spin frustration in BYSO. The absence of strong spin frustration is also observed in another promising honeycomb based Heisenberg antiferromagnet YbCl\({}_{3}\) that orders at 0.6 K [56] while the exchange interaction is 4 K [53]. The bipartite geometry of Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) excludes geometric frustration, so the moderate spin frustration can be due to exchange frustration arising from competing interactions between nearest-neighbor and next-nearest-neighbor spin of Yb\({}^{3+}\) moments similar to that found in the honeycomb lattice YbCl\({}_{3}\)[56; 116]. Theoretically, it has been suggested that further neighbor exchange interactions within the plane [58; 117] could destabilize an antiferromagnetic ordered state in two-dimensional honeycomb lattices [118; 119]. In BYSO, the inter-layer distance between the honeycomb planes is 7.43 A which is shorter than the second nearest-neighbor distance (10.02 A) within the honeycomb plane. Therefore, the long-range magnetic order is most likely due to the combination of in-plane nearest-neighbor exchange interaction and the presence of significant inter-layer exchange interactions through the exchange pathway Yb-O-Si-O-Si-O-Yb (1 (b)) in this material. In external magnetic field, the long-range antiferromagnetic order is suppressed and the system enters to a paramagnetic (spin-polarized) state just above a critical magnetic field of \(\mu_{0}H\) = 2.5 T [52]. This scenario is often observed in rare-earth magnets wherein the Zeeman energy due to external magnetic field overcomes the antiferromagnetic exchange interaction energy [43; 57; 89]. Normally, in the conventional antiferromagnetically ordered state, the crystallographic site of the probing nucleus in an NMR experiment become inequivalent and sense different magnetic fields which leads to splitting the NMR spectra. Despite long-range magnetic order, the absence of NMR line splitting or rectangular spectra in weak magnetic field is most likely due to incommensurate magnetic order in BYSO similar to long-range ordered magnets such as honeycomb magnet Ag\({}_{3}\)LiIr\({}_{2}\)O\({}_{6}\)[108; 109] and kagome material Sm\({}_{3}\)BWO\({}_{3}\)[120]. The enhancement of the NMR line-width at low temperature further confirms the magnetic ordering in BYSO. The spin-lattice relaxation rate in weak magnetic field shows an anomaly (albeit weak) around 2.34 K suggesting a phase transition, which is consistent with that observed in specific heat. In high-magnetic field (\(\mu_{0}H\)\(\geq\) 2.5 T), the exponential decay of the spin-lattice relaxation rate at low-temperature is attributed to a field-induced gap due to the Zeeman splitting of the low-energy Kramers doublet state.
## V Summary
To summarize, we synthesized and performed magnetization, specific heat, ESR, and NMR experiments on a 4\(f\) electron-based material Ba\({}_{9}\)Yb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\), which crystallizes in a trigonal crystal structure with the space group R\(\bar{3}\). In this material, the Yb\({}^{3+}\) ions decorate on a perfect honeycomb lattice in the \(ab\)-plane. Magnetization data suggest the pseudospin-1/2 degrees of freedom of Yb\({}^{3+}\) ions in the Kramers doublet state and these \(J_{\rm eff}\) = 1/2 spins interact antiferromagnetically. The lowest CEF excited Kramers doubled is far above (at 32.3(7) meV) the ground state, as estimated from ESR. Specific heat measurements confirm the presence of a phase transition at \(T_{N}\) = 2.26 K in zero-field. The shift of the transition temperature towards low-temperatures has been observed in weak external magnetic field and the magnetic order is completely suppressed in a field of 2.5 T above which a field polarized state is observed. The spin-lattice relaxation rate measurements in weak magnetic fields show the presence of a phase transition which is completely sup
pressed in high-magnetic fields that open a field-induced gap due to Zeeman splitting within the Kramers doublet, consistent with thermodynamic results. This enables us to draw a phase diagram that indicates the presence of an antiferromagnetic ordered state below a magnetic field of 2.5 T and a field-induced disordered ground state above this critical field. Further studies on single crystals of Ba\({}_{9}\)Pb\({}_{2}\)Si\({}_{6}\)O\({}_{24}\) are highly desired to shed more insight into the anisotropic magnetic interactions, local order parameters and low-energy excitations. The present family of rare-earth based honeycomb spin-lattice Ba\({}_{9}R_{2}\)Si\({}_{6}\)O\({}_{24}\) (\(R\) = rare-earth ions) with distinct rare-earth elements, spin-orbit driven anisotropy, and spin correlations provide an ideal ground to realize exotic quantum phenomena.
## VI Acknowledgments
P.K. acknowledges funding by the Science and Engineering Research Board and Department of Science and Technology, India through research grants. This research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames National Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DEAC02-07CH11358. A.Z. acknowledges the financial support of the Slovenian Research Agency through the Program No. P1-0125 and Projects No. N1-0148 and J1-2461.
|
2303.05584 | SOCIALGYM 2.0: Simulator for Multi-Agent Social Robot Navigation in
Shared Human Spaces | We present SocialGym 2, a multi-agent navigation simulator for social robot
research. Our simulator models multiple autonomous agents, replicating
real-world dynamics in complex environments, including doorways, hallways,
intersections, and roundabouts. Unlike traditional simulators that concentrate
on single robots with basic kinematic constraints in open spaces, SocialGym 2
employs multi-agent reinforcement learning (MARL) to develop optimal navigation
policies for multiple robots with diverse, dynamic constraints in complex
environments. Built on the PettingZoo MARL library and Stable Baselines3 API,
SocialGym 2 offers an accessible python interface that integrates with a
navigation stack through ROS messaging. SocialGym 2 can be easily installed and
is packaged in a docker container, and it provides the capability to swap and
evaluate different MARL algorithms, as well as customize observation and reward
functions. We also provide scripts to allow users to create their own
environments and have conducted benchmarks using various social navigation
algorithms, reporting a broad range of social navigation metrics. Projected
hosted at: https://amrl.cs.utexas.edu/social_gym/index.html | Zayne Sprague, Rohan Chandra, Jarrett Holtz, Joydeep Biswas | 2023-03-09T21:21:05Z | http://arxiv.org/abs/2303.05584v1 | # SocialGym 2.0: Simulator for Multi-Agent Social Robot Navigation in Shared Human Spaces
###### Abstract
We present SocialGym 2.0, a multi-agent navigation simulator for social robot research. Our simulator models multiple autonomous agents, replicating real-world dynamics in complex environments, including doorways, hallways, intersections, and roundabouts. Unlike traditional simulators that concentrate on single robots with basic kinematic constraints in open spaces, SocialGym 2.0 employs multi-agent reinforcement learning (MARL) to develop optimal navigation policies for multiple robots with diverse, dynamic constraints in complex environments. Built on the PettingZoo MARL library and Stable Baselines3 API, SocialGym 2.0 offers an accessible python interface that integrates with a navigation stack through ROS messaging. SocialGym 2.0 can be easily installed and is packaged in a docker container, and it provides the capability to swap and evaluate different MARL algorithms, as well as customize observation and reward functions. We also provide scripts to allow users to create their own environments and have conducted benchmarks using various social navigation algorithms, reporting a broad range of social navigation metrics.
## I Introduction
For autonomous agents to be successfully deployed in environments with human populations, it is essential to incorporate principles of collaboration and social compliance in those agents. These principles are particularly relevant in applications such as warehouse management [1], delivery of medication, provision of companionship, and navigation assistance in airports [2]. The challenge in developing socially compliant behavior for these scenarios lies in the diversity of environments and unpredictable human traffic patterns, which require extensive training for the agents to operate safely and effectively. Deploying untrained agents in social environments is not feasible, highlighting the need for realistic simulated environments for training and testing. Such simulations should ideally mimic human navigational patterns to enhance the training and testing process for social agents.
Several simulators, listed in Table I, focus on emulating specific challenges in social navigation for agents to train on. PedsimROS [4] is provided as a native Robot Operating System (ROS) package that can be easily integrated into any higher-level navigation interface. SEAN 2.0 [5] and CrowBot [6] use Unity [7] to provide a photo-realistic 3D physics engine, allowing for robot dynamics to be included in the agents physics (helping to close the sim-to-real gap during deployment). SocNavBench [8] captures realistic human traffic patterns by replaying trajectories from popular pedestrian datasets. CrowdNav [9] focuses on dense crowd simulations for agents to navigate through. MengeROS [10] offers several collision avoidance modules, including ORCA and social forces [4] for simulating human pedestrian motion and can simulate up to \(1000\) pedestrians and \(20\) robots in the order of milliseconds. These simulators provide various metrics to evaluate socially compliant trajectories, including successful navigation, path smoothness, stopping time, and jerky movement.
However, simulators can often lack in specific ways that limit their ability to model challenging unstructured social scenarios in the real world. For example, multiple agents act autonomously, achieving their own objectives rather than following a fixed crowd simulation model such as Pedsim or Social Forces [4]. Furthermore, often such multi-agent interactions are non-cooperative or competitive, resulting in deadlocks or near-collisions [13, 14]. Lastly, agents in the real world obey complex kinodynamic constraints.
In Table I, we summarize the state-of-the-art social navigation simulators. An immediate observation is that _all_ simulators are currently single-agent navigation in simple open environments. To simulate crowds or other agents, the simulators will often model human crowds using reciprocal policies [12] or replay stored trajectories from a dataset [8], or both. SEAN
Fig. 1: **SocialGym 2.0** is a multi-agent navigation simulator for social robot navigation. SocialGym 2.0 builds on top of the PettingZoo [3] multi-agent reinforcement learning library and interfaces with a low-level planner capable of global path planning and trajectory optimization for multiple agents with varying dynamic constraints. In this figure, blue boxes represent agents (robots or humans).
2.0 [5] defines social navigation scenarios via social maneuvers such as crossing, following, and overtaking, but these only apply in open environments, excluding geometrically constrained scenarios. Crossing or passing may be impossible and lead to sub-optimal trajectories like colliding with walls when navigating through a narrow doorway, for example. Furthermore, while several simulators [5, 8, 6, 11] model real-world robot dynamics and kinematics realistically, only two simulators (CrowdBOT and our previous work, SocialGym) allow configurability and extensibility to experiment and benchmark different robot kinodynamic configurations. In fact, we find that this configurability and extensibility are desirable properties of all features in a simulator. Current simulators, however, offer users very little control over the simulator through the use of the convention-over-configurability [15] design philosophy.
To overcome these challenges we introduce SocialGym 2.0, an open-source simulator for multi-agent social autonomous navigation in challenging environments. SocialGym 2.0 features multiple autonomous agents, each optimizing its own objective function. Each agent is a robot with realistic kinodynamic constraints, including limits on linear and angular velocity and acceleration and physical parameters like shape and size. SocialGym 2.0 simulates both open, building floorplans, as well as social encounters like hallway passing. Finally, we offer users complete control over each part of the simulator, enabling users to conduct research in agent modeling, trajectory planning and collision avoidance, policy learning, and navigation in different social contexts. The unique novelty of SocialGym 2.0 is that it _simultaneously_ goes-
1. **beyond motion replay and reciprocity:** Our multi-agent reinforcement learning paradigm trains multiple autonomous agents each with their own policy. Using the PettingZoo [3] (multi-agent Gym) and Stable Baselines3 [16] APIs, SocialGym 2.0 supports multi-agent reinforcement learning, configurable observation and reward functions, and variable number of agents across training episodes.
2. **beyond simple kinematics:** SocialGym 2.0 implements global path planning and local trajectory optimization conditioned on real robot dynamics. In SocialGym 2.0, robot dynamics can be configured to simulate multiple different robots with varying dynamics including differential drive and omni-directional robots dynamics.
3. **beyond open spaces:** SocialGym 2.0 simulates challenging environments including university campus buildings and geometrically constrained social mini-game scenarios.
4. **beyond convention-over-configuration paradigms:** SocialGym 2.0 uses the _configuration_-over-_convention_ paradigm providing users control over every module of the stack, while _simultaneously_ keeping the stack simple to use.
## II Background
Having robots navigate in shared humans spaces is a central goal in robotics. The core challenges in solving this problem stem from a single fact-robots have to interact with humans in shared constrained environments, which often means attempting to optimize individual conflicting objectives, such as trying to pass through a narrow hallway or doorway together. In particular, the first challenge to a robot would be to account for the hidden objectives of humans; more specifically, the optimal actions of a robot would be dependent on the unknown goals of the humans in the scenario. MARL has shown great promise to address the first challenge in many fields of robotics and engineering. The next challenge is to plan trajectories that are not only safe and efficient but more crucially are also socially compliant. Ensuring all these qualities in the resulting motion planning requires modeling precisely the underlying kinodynamics of the robots. Finally, humans are different and move in different ways according to culture, situation, and behavioral disposition. Modeling human-robot interaction is necessary to capture this range of behaviors. In what follows, we expand on recent work along MARL, local trajectory planning and robot dynamics, and human-robot interaction that motivated the design choices in SocialGym 2.0.
\begin{table}
\begin{tabular}{c c c c c c c c} \multicolumn{1}{c}{\multirow{2}{*}{Simulator}} & \multicolumn{2}{c}{**CONFIGABILITY \& EXTENSibility**} \\ \cline{3-8} & **MULTI-AGENT\({}^{\dagger\dagger}\)** & **CONTRAINED\({}^{\dagger}\)** & **ROBOT** & \multirow{2}{*}{Agent} & \multirow{2}{*}{Local Navigation} & \multirow{2}{*}{Policy} & \multirow{2}{*}{Environment} \\ & **PLANNING** & **ENVIRONMENT** & & & & & & \\ \hline SEAN [5] & ✗ & ✗ & ✗ & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } & Obstacle avoidance, Trajectory planning & Training, Evaluation & \multirow{2}{*}{Senarios} \\ \hline
**CrowdBuet [6]** & ✗ & ✗ & ✗ & ✗ & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \hline
**SocNavBench [8]** & ✗ & ✗ & ✓1 & Sensors & ✗ & ✗ & 2d maps, Scenarios \\
**CrowdNet [9]** & ✗ & ✗ & ✗ & ✗ & ✗ & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \hline
**MengeROS [10]** & ✗ & ✗ & ✗ & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \hline
**PodiumROS [14]** & ✗ & ✗ & ✗ & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } \\ \hline SocialGym [11] & ✗ & ✗ & ✗ & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \hline
**SocialGym 2.0** & ✓ & ✓ & ✓* & & & \\ \hline \end{tabular}
* \({}^{\dagger\dagger}\) Each agent follows a policy designed to maximize _their individual_ reward. This excludes crowd simulation models such as ORCA [12] and Social forces [4].
* \({}^{\ddagger}\) Constrained environments refer to social mini-games _e.g._ Doorway, Hallway.
* \({}^{\ddagger}\) Different robots can simulate varying configurable dynamics.
* \({}^{\ddagger}\) Limited to unicycle kinematic model.
\end{table} TABLE I: Comparison of current simulators for social robot navigation.
### _Multi-agent Reinforcement Learning_
MARL is a field of study with a focus on computing an optimal policy for multiple agents using reinforcement learning. Deep learning-based MARL1 has achieved remarkable success in cooperative, competitive, and mixed games such as Go [17], chess [18], poker [19], Dota2 [20], and StarCraft [21], the latter two also serving as benchmarks for fostering MARL research. In all the above, the best MARL policies even beat professional players in all these games. Recently, MARL was also applied to competitive racing and was able to defeat professional human racers [22]. We refer interested readers to [23] for a survey on MARL in games.
Footnote 1: henceforth simply referred to as MARL.
MARL has also been successfully applied to robot navigation in both indoor and outdoor scenarios. In the outdoor scenarios, several works use neural networks to either directly learn a navigation policy [24, 25, 26, 27] or learn the underlying dynamics [28]. For indoor navigation, Frozone [29] and DenseCAvoid [30] addresses the freezing robot problem in dense crowds, CoMet [31] attempts to learn group cohesion to navigate among groups of pedestrians, and CADRL [32], or Collision Avoidance with Deep Reinforcement Learning, is a state-of-the-art motion planning algorithm for social robot navigation using a sparse reward signal on reaching the goal and penalises agents for venturing close to other agents. A variant of CADRL uses LSTMs to select actions based on observations of a varying number of nearby agents [33]. CADRL, however, uses a unicycle kinematic model and does not account for robot dynamics. The progress being made on social navigation through MARL algorithms inspired us to include it in SocialGym 2.0. Furthermore, SocialGym 2.0 currently supports indoor navigation but can be extended to outdoor scenarios via global vector maps.
### _Robot Dynamics and Trajectory Optimization_
The goal of autonomous navigation is to move from place A to place B with little to no human input. However, solving the navigation problem requires that the resulting trajectories be not only feasible but also smooth and admissible to the low-level motion controller. Planning algorithms that ignore robot dynamics or assume simplified kinematics require non-trivial and often expensive post processing to make the path smooth and admissible to the controller [34].
Simplified dynamic models assume that robots only operate in a limited subspace of their entire state space, such as low acceleration and speed, minimum wheel slip, negligible tire deformation, and perfect non-holonomic constraints [35]. In social situations, humans execute a wide range of dynamic behaviors like slowing down to let others pass or speeding up to overtake a group of pedestrians on a sidewalk. Realistically simulating and deploying social maneuvers on robots in shared humans spaces requires that the motion planning satisfy the dynamic constraints of the robot such as linear velocity, acceleration, and steering angle [35, 28, 36]. In addition to feasibility and smoothness, resulting motion controllers also need to generate optimal, efficient, and socially compliant paths. In SocialGym 2.0, users can configure every kinodynamic variable and directly test the changes on the local planner.
### _Human-Robot Interaction_
Humans navigate differently in varying social contexts, such as doorways, hallways, intersections, roundabouts, and so on. The differences being the way people interact in shared spaces which engenders a range of maneuvers like passing, overtaking, following, and cutting off individuals [37, 38, 39, 40, 41]. By simulating different scenarios such as the ones mentioned above, researchers can understand the complexities of human behavior and design robots that can navigate these scenarios effectively and safely [42, 43]. SocialGym 2.0 supports open spaces and building floorplans, enabling the modeling of macroscopic crowd patterns as well as microscopic social interactions like social mini-games (doorways, hallways, intersections, and roundabouts).
## III The SocialGym 2.0 Design & Architecture
In this section, we overview the different components of SocialGym 2.0. We will begin by summarizing the overall design and how the different components interface with one another. In the remainder of the section, we will dive into each component in detail.
We developed SocialGym 2.0 keeping configurability, extensibility, and modularity in mind, using a configuration-over-convention style. In order to allow easy development and research on various aspects of social navigation, we stratified SocialGym 2.0's stack into different layers shown in Figure 2. At the top of the stack is the PettingZoo [3] and Stable Baselines3 [16] interface. This interface uses ROS to send
Fig. 2: **SocialGym 2.0 Architecture Overview:** The top-level interface consists of a Pettingzoo [3] environment and acts as the primary interface between the learning agents (policy) and simulator. This interface updates the agents’ action selection policy based on the current observation in the state space and sends new actions to the local simulator, which returns a new state in the state space by coordinating with the Human and Navigation modules to simulate the transition function based on the selected action and robot dynamics at each time step. The new state observations are returned to the top-level interface for computing the rewards and updating the policy.
actions from a policy to UTMRS2, a lightweight simulation engine that acts as an intermediate between the interface and the local navigation and the human crowd simulation modules. The local navigation planner is responsible converting high-level actions from the PettingZoo interface into continuous motion commands that satisfy the underlying robot dynamics and sends back the next state to the simulation engine. Each layer of the stack has a modular API that allows researchers and developers to focus on a single part of the stack at a time without having to refactor or access other parts of the stack. In the sections that follow, we describe each part of the stack in detail.
Footnote 2: University of Texas Multi-Robot Simulator
### _The Multi-Agent Gym Interface_
The top level interface follows from the familiar OpenAI Gym API extended for multi-agent scenarios using the Petting-Zoo and Stable Baselines3. We construct an environment that follows the standard lifecycle of a Gym environment (reset, step, etc.). Our environment takes as input a 2D map which consists of a vector map file containing vectors that represent walls (or otherwise impassable and stationary objects) as well as a navigation graph which defines the possible paths through the vector map. We provide a useful program for creating 2D maps [44]. Once a 2D map is selected, the user can choose a scenario that consists of unique starting and ending positions of agents or simulated pedestrians. To better understand the difference between a 2D map and a scenario, consider a doorway as a 2D maps with two scenarios-one in which all agents are entering and exiting the doorway in the same direction, and the other has agents entering and exiting from both sides. Once a 2D map and corresponding scenario have been selected, they are passed into configuration files and given to UTMRS for tracking state as well as initializing other submodules with the same information. Although we mention only one 2D Map and scenario here, we have wrappers that allow an environment to sample multiple 2D maps and scenarios during training and evaluation.
### _Utms_
UTMRS is a C++ simulation _engine_ that receives high-level actions from the multi-agent Gym interface and is responsible for updating the state, receiving new state observations from the local navigation and human motion submodules, and sending them back to the interface via ROS messages. UTMRS additionally creates visualizations and maintains several internal states necessary for the simulator, including walls, current positions, goal states, and previous actions. Beyond serving as an engine that controls message passing and centrally interfacing all the different components, UTMRS itself does not actively impact policy learning. Although ROS is essential to our system, we designed SocialGym 2.0 such that users who may be unfamiliar with ROS do not have to work with ROS in their development process-SocialGym 2.0 handles ROS messaging internally.
### _Local navigation_
The local navigation planner serves two purposes in SocialGym 2.0. The first is to facilitate continuous navigation on the navigation graph selected in the multi-agent Gym interface. This navigation is achieved by sampling a set of trajectories at every given state and then selecting a trajectory that is in the direction of the intermediate goal (a node on the navigation graph) which is not blocked by some obstacle (robot or wall). The second purpose of the local planner is to emulate robot dynamics for continuous state changes on the selected trajectory. This emulation ensures the continuous navigation is admissible to any specific local motion controller. Once the trajectory has been selected and the continuous action sampled, the local planner updates the state of the agent and returns the newly updated location of the agent to UTMRS, at which point the UTMRS layer will update its internal state and pass the state as a message to other submodules as well as the Gym interface. UTMRS then awaits a new command from the multi-agent Gym interface. This loop of high-level actions from the Gym interface being passed into UTMRS and then to the local planner for state updates, which is then passed back to UTMRS is the main simulation loop for SocialGym 2.0. The human motion module follows a similar process as the local navigation module. In SocialGym 2.0, the human motion is simulated using social forces [4], similar to the current simulators listed in Table I. Although we support the human motion model, it is optional. SocialGym 2.0 supports both single and multi-agent training as well as training with simulated humans or not.
## IV Training and Evaluating a Multi-Agent Navigation Policy in SocialGym 2.0
Having described the individual components of SocialGym 2.0 in the previous section, we now walk through the process of training and evaluating a multi-agent navigation policy. The life-cycle of training a multi-agent policy in the interface follows almost exactly from the standard process in OpenAI Gym or Stable-Baselines. We extend these loops with customizations for the UTMRS layer as well as with PettingZoo to enable multi-agent training (MARL). We show an example of the required code in Listing 1. Although using SocialGym 2.0 as you would Stable Baselines v3 or PettingZoo is supported, we also implemented a program that can run the training from a configuration file. See an example of a configuration file in Listing 3. To illustrate SocialGym 2.0's features, we briefly discuss each line of code in Listing 1.
The training life-cycle begins with selecting 2D maps and scenarios (Section IV-A) to be played out on those maps. Then, two class definitions, the Observer and Rewarder, are used for tracking the observations and rewards during each step (Section IV-B). The environment is then defined, which instantiates the Gym Environment and initializes the ROS submodules with all the information needed to load the first 2D map and scenario (Section IV-C). Next, a Stable Baselines-v3 Policy is chosen (both SB3 and SB3-Contrib are currently
supported), and the learning method can be called to train the policy. Finally, the policy is evaluated on metrics designed to measure socially compliant navigation (Section IV-D). We dive into each of these steps in more detail below.
```
#Creatingacenariogiventhe2Dmapfolder scenario=Graphhavscenario('envs/scenario/hallway')
#CreatingtheObserverthroughmodularObservations thatarecustomizable observations- { AgentsPose(ignore_theta-True), OtherAgentObservables(ignore_theta-True), CollisionObservation(), SuccessObservation() } observer=Observer(observations)
#CreatingtheRewarderwithasparsegoalreward andapenaltytermthatscalesoverthecourse oftraining. rewards- { Success(weight-100), LinearWeightScheduler(Collisions(),duration=10 _000) } rewarder=Rewarder(rewards) }
#Createthebaseclass env=RosSocialEnv(observer,reward,scenario,num_agents-7)
#...Wrappersasneeded...
#StandardGymInterfacingforTrainingandSteppingmodel-PPO("MLPpolicy",env) model.learn(total_timesteps=10_000) obs=env.reset() whileenv.agents: action,_states=model.predict(obs) obs,rewards,terminations,infos=env.step( actions)
```
Listing 1: Example usage of SocialGym 2.0.
### _2D Maps, Navigation Graphs, and scenarios_
We posit that a big part of being socially compliant is derived from experience in various geometrically constrained and dense environments where spatial and temporal reasoning are required to avoid collisions while respecting others. To enable this in our simulator, we created a program (deployed in Docker to ease its installation and use) to create 2D maps with two components. Each 2D map contains a list of vectors that represent impassible obstacles (walls, for example) - denoted in blue in Figure 3. These vectors allow us to create various "social mini-games" enabling rapid training and evaluation of agents under challenging situations. However, we can also use 2D floor plans of buildings to test agents in larger and more realistic situations.
The second component of the 2D maps is the navigation graph. The navigation graph provides all possible paths through the environment where agents must navigate through a set of nodes by following edges that connect them. The navigation graph allows for agents to have a high-level discrete action space, i.e., GO or STOP actions, which we currently used for our evaluations (although continuous actions are soon to be supported, which could ignore the navigation graph entirely). Having a navigation graph also enables the easy creation of constrained paths where the agent must traverse an edge shared with another agent ensuring a conflict will happen if the agents ignore each other.
Finally, a scenario is defined as a selected list of global paths (list of nodes on the navigation graph) for each agent in the episode. A 2D map and navigation graph could have many scenarios defined on them (for example, unidirectional traffic or bidirectional traffic on the navigation graph). We allow for the easy creation of scenarios through a python helper class that allows the user to define global paths for the agents in each episode.
### _Observations and Rewards_
A complexity of social navigation is the vast number of definitions and ways of representing social navigation; for example, there is no ubiquitous metric for social navigation [42, 43]. We attempt to address the ambiguities in the setup, evaluation, and definition of social navigation by making the state space and reward functions completely customizable, in addition to making the simulator open-source. This customization allows researchers to create their own definitions of social navigation (what is observable, what is hidden, what is punished, and what is rewarded) in as few line changes as possible. We also give an intuitive class structure that allows researchers and developers to create their own observations and reward functions, extending the features of SocialGym 2.0. To ease the complexity of these customizations and to parse the state representations returned from UTMRS, we create helper classes called Observer and Rewarder.
The Observer parses the raw state vectors given from the UTMRS layer and produces an observation vector (numpy array) as well as an observation dictionary (a map between the name of a given observation with its value). The Observer class can be thought of as a lightweight wrapper around the numpy array traditionally given to a Stable Baseline Policy; however, the construction is entirely customizable. Allowing researchers and developers to construct different state spaces for different mathematical models of social navigation with ease. We give an example of such a definition for our evaluations in Section V. This construction also allows for building custom layers for representation learning. In our evaluation, we build a custom LSTM layer to collapse the variable number of agent or human observations into a fixed-sized observation vector (a method used in state-of-the-art social navigation models) [33].
The Rewarder class functions similarly to the Observer. The construction takes a list of Reward base classes where each Reward class takes observations (both the vector and dictionary) from the Observer class at each step. The Rewarder can then use the observations to derive a reward or penalty for that step. All Reward classes are summed in the Rewarder class at the end of each step; however, a dictionary of rewards is kept for logging purposes.
### _Wrapping the Environment_
Following the Gym API, SocialGym 2.0 supports environment wrappers for extensibility and customization of environments as well as callbacks that tap into the life cycles of a training procedure. SocialGym 2.0 has many wrappers already created to ease training including some custom MARL wrappers that end the episode when agents do not move, when agents collide, or when a step limit is reached. Other wrappers that plot and monitor training and evaluation metrics as well as generating new scenarios that vary the number of agents, or 2D maps are also available in SocialGym 2.0. We also created checkpoint and evaluation callbacks that allow the policy to be saved and tested during training.
### _Evaluation Metrics_
In addition to the standard evaluation metrics available in Stable Baselines3 and PettingZoo, we extend these functions and implement custom evaluation metrics in the same style of SocNavBench and SEAN2.0. This stems from the noted ambiguous definition of social navigation. Often, in lieu of a single metric that best defines Social Compliance, multiple metrics are used as a proxy. We implemented the most common metrics used to measure social compliance, including partial and full success rates, velocity changes, average stopping time, collision rates, and more.
### _Social Mini-game Benchmarks_
In this work, we include five mini-game scenarios to benchmark social navigation. These are Open, Hallway, Doorway, Roundabout, and Intersection, depicted in Figure 3. Social mini-games may be described as a scenario with multiple agents accomplishing a shared goal in spatially constrained environments. Such scenarios frequently arise indoors at schools, hospitals, airports, etc. as well as outdoors on sidewalks and traffic intersections. We provide a point-click/drag interface that allows for the easy construction of both vector maps and navigation graphs. A navigation graph is a collection of nodes connected by straight-line edges. An agent is then given a set of nodes to reach, where the first in the list is the starting position, and each node after should lead the agent to the last, defining a trajectory. The new custom environment can then be called easily within the top-level script, as shown on line 3 of Listing 1.
## V Experiments and Discussion
In SocialGym 2.0, users can represent social navigation through different formulations depending upon the application. In our evaluation, we decided to formulate social navigation as a partially observable stochastic game (POSG) [45] with \(k\) agents using the tuple, \(\left\langle k,\mathcal{X},\{\mathcal{U}^{i}\},\mathcal{T},\{\mathcal{O}^{i}\}, \{\Omega^{i}\},\mathcal{R}^{i}\right\rangle\). Each agent is randomly initialized with a start position (\(p^{i}_{I}\)) and a goal position (\(p^{i}_{G}\)). The continuous state space \(\mathcal{X}_{t}\) is an array comprising the agents' state vectors \(x^{i}_{t}\). This vector is typically in SE(2) indicating that the robot has a 2D translation and an orientation. In SocialGym 2.0, users can configure the state vector by adding, removing, or toggling variables. Out of the box by default, SocialGym 2.0 currently sets \(x^{i}_{t}=[d^{i}_{G},p^{i}_{x},p^{i}_{y},\dot{p}^{i}_{x},\dot{p}^{i}_{y},\psi^ {i},v^{i}]^{\top}\) for \(i=1,2,\ldots,k\), where \(d^{i}_{G}\) is the distance from the goal, \(p,\dot{p}\) represent position and velocity, \(\psi^{i}\) represents the heading, and \(v^{i}\) represents the preferred velocity.
The state space is partially observable in our formulation because agents have different goals they are trying to reach that are only known to them. This definition can be easily changed with a single line, in our case uncommenting line 3 in Listing 2
```
observations-[ AgentGoalDist(); #OtherAgentGoalDist(),...
```
Listing 2: Configuring State-Spaces with SocialGym 2.0.
Each agent has a discrete action space \(\mathcal{U}^{i}\)3, and observation function \(\mathcal{O}^{i}\) that takes in \(x^{i}_{t}\in\mathcal{X}_{t}\) to output a local observation vector \(o^{i}_{t}\in\Omega^{i}\) where \(o^{i}_{t}=[x^{i}_{t},\tilde{x}^{o}_{t}]^{\top}\), and a reward function \(\mathcal{R}^{i}:(\mathcal{X}_{t},\mathcal{U}^{i})\longrightarrow\mathbb{R}\). The transition function is given by \(\mathcal{T}:\mathcal{X}\times U\longrightarrow\mathcal{X}\), where \(U:=\mathcal{U}_{1}\times\mathcal{U}_{2}\times\ldots\times\mathcal{U}_{k}\). Each agent has a policy distribution \(\pi^{i}:\Omega^{i}\longrightarrow\Delta(\mathcal{U}^{i})\) that takes in the local observation \(o^{i}_{t}\in\Omega^{i}\) and stochastically performs action \(u^{i}_{t}\in\mathcal{U}^{i}\) to produce a trajectory \(\Gamma^{i}=\left(x^{i}_{I},x^{i}_{2},\ldots,x^{i}_{G}\right)\), where \(x^{i}_{t+1}=g(x^{i}_{t},o^{i}_{t},u^{i}_{t}|u^{i}_{t}\sim\pi^{i}(o^{i}_{t}))\) and \(g(\cdot)\) is a local planner. The environment is geometrically constrained if there exists at least one point in the navigation graph, \(p_{\text{common}}=(p_{x},p_{y})\), such that \(p_{\text{common}}\in\Gamma^{i}\ \forall\ i=1,2,\ldots,k\).
Footnote 3: Future versions will include extending to continuous actions spaces
Fig. 3: **Social Navigation Environments:**SocialGym 2.0 provides the flexibility for users to create new maps and scenarios for social navigation. By default, we include five types of social environments–Open, Doorway, Hallway, Intersection, and Roundabout, in addition to the UT Austin campus buildings from SocialGym 1.0.
The MARL objective is to find the optimal joint policy \(\Pi^{*}=(\pi^{*}_{1},\pi^{*}_{2},\ldots,\pi^{*}_{k})\) such that,
\[\Pi^{*}=\arg\max_{(\pi_{1},\pi_{2},\ldots,\pi_{k})}\sum_{i=1}^{k} \mathbb{E}_{\pi^{*}}\Big{[}\sum_{t\geq 0}\gamma^{t}\mathcal{R}^{i}(x^{i}_{t},u^{i}_{ t})|u^{i}_{t}\sim\pi^{i}(o^{i}_{t})\Big{]} \tag{1}\]
### _Hyperparameters_
Unless otherwise stated, we use Stable-Baselines3 PPO with a step size of \(4096\) and the MLP architecture for training (the rest of the policies are hyperparameters of the default). We train for a total of \(1.25\) million steps, where the first \(35\) episodes have \(3\) agents, the next \(35\) have \(4\), and the remaining episodes have \(5\) agents. After training, we evaluate each policy on \(25\) trials for \(3,4,5,7\), and \(10\) agent settings. Agents may observe other agents' positions (\(x\) and \(y\) local to their coordinate frame), other agents' velocities, their own distance to the goal, if they are in a collision, and if they have succeeded. Our reward function penalizes each agent for every step they are not at the goal (\(-1\)), a reward is given when the goal is reached (\(100\)), a penalty when an agent collides (\(-10\)), and a variable reward for making progress towards the goal (delta from the previous location to the current). If no agents have moved over a significant delta (total magnitude of \(0.5\) meters in \(100\) steps) the episode ends and all agents not at their goal are given a penalty (\(-100,000\)). Finally, we use wrappers to sample new trajectories through the map at the end of each episode and use Stable Baselines3 VecNormalize wrapper to normalize the observation and reward space (we found this to be very important to achieve stable results). An example of a configuration file for one of our experiments can be found in Listing 3.
### _Benchmarking Social Navigation Algorithms_
We benchmark \(5\) baseline social navigation policies for each of the social mini-games described in Section IV-E. They are CADRL, CADRL(L), Enforced Order, Any Order, and Only Local. CADRL [32] and its LSTM-based variant, which we denote as CADRL(L), are state-of-the-art multi-agent social navigation methods. CADRL and CADRL(L) use a reward function where an agent is rewarded upon reaching the goal and penalized for getting too close or colliding with other agents as well as taking too long to reach the goal. We train these baselines using PettingZoo and Stable Baselines3 [16] and report results across a range of social navigation metrics in Table II.
Enforced Order and Any Order are baselines that encourage agents to engage in social behaviors such as queue formations. Finally, Only Local is an ablated baseline where we remove the high-level policy from the MARL interface, reducing it to a purely local multi-agent navigation baseline. We compare these baselines across several social navigation metrics, following the standard literature [5, 8, 42, 43], and present results in Table II. The experimental observations suggest that there is no straightforward "optimal social navigation" algorithm in social mini-games. SocialGym 2.0 can be used to benchmark a range of policies to find the best one for a specific mini-game.
Advantage of sub-goal rewards in constrained social navigationThe social mini-games in the benchmark are subject to geometric constraints that can cause conflicts between
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Scenario & Baseline & Avg. Length & Coll. Rate & Stop Time & Max \\ \hline \multirow{5}{*}{OPEN} & CADRL [3] & 526 & 0.24 & 287 & 71 \\ & CADRL(L) [3] & 585 & 0.52 & 325 & 117 \\ & Enforced Order & 407 & 0.32 & 253 & 46 \\ & All-Defined & 302 & 0.12 & 234 & 46 \\ \cline{2-5} & Only Local & 435 & 0.88 & 187 & 72 \\ \cline{2-5} & CADRL [3] & 347 & 0.00 & 222 & 13 \\ \cline{2-5} & CADRL(L) [3] & 0.02 & 0.12 & 436 & 36 \\ \cline{2-5} & Enforced Order & 960 & 0.16 & 617 & 117 \\ & Any Order & 667 & 0.32 & 324 & 51 \\ & Only Local & 411 & 2.32 & 166 & 30 \\ \cline{2-5} & CADRL(L) [32] & 564 & 0.84 & 340 & 47 \\ \cline{2-5} & CADRL(L) [3] & 875 & 0.84 & 441 & 96 \\ \cline{2-5} & Enforced Order & 917 & 0.68 & 538 & 43 \\ \cline{2-5} & Any Order & 621 & 1.00 & 197 & 69 \\ \cline{2-5} & Only Local & 482 & 2.80 & 283 & 24 \\ \cline{2-5} & CADRL(L) [32] & 691 & 1.20 & 389 & 28 \\ \cline{2-5} & An Antil-Defined & 678 & 0.32 & 267 & 127 \\ \cline{2-5} & Enforced Order & 697 & 0.80 & 233 & 104 \\ \cline{2-5} & Any Order & 999 & 0.68 & 952 & 47 \\ \cline{2-5} & Only Local & 1245 & 2.48 & 640 & 97 \\ \cline{2-5} & CADRL(L) [32] & 733 & 0.64 & 395 & 99 \\ \cline{2-5} & CADRL(L) [33] & 730 & 0.24 & 440 & 56 \\ \cline{2-5} & Enforced Order & 352 & 0.00 & 21 & 273 \\ \cline{2-5} & Any Order & 804 & 0.32 & 462 & 27 \\ \cline{2-5} & Only Local & 2112 & 2.16 & 1914 & 163 \\ \hline \hline \end{tabular}
\end{table} TABLE II: **Benchmarking various MARL baselines:** We compare six baselines. CADRL and its LSTM variants are state-of-the-art RL-based navigation algorithms, Enforced Order and Any Order are sub-goal reward policies that encourage queue formation, and Only Local is an ablation method where only the local motion planner is used. Green indicates best. Dark green indicates the overall best performing baseline for that scenario. **Conclusion**:**: There is no clear “optimal social navigation” algorithm in social mini-games. SocialGym 2.0 can be used to benchmark a range of policies to find the best one for a specific mini-game.
robots' paths. To evaluate the compatibility of standard social navigation metrics with human social behavior, we introduced a reward function based on the concept of queue formation. This reward function rewards robots for following a specified order to enter and exit conflict zones. Two new baselines were introduced to evaluate the efficacy of the reward function: "Any Order" and "Enforced Order". Any Order assigns a reward if a robot successfully passes through a conflict zone, regardless of the order, while Enforced Order assigns a random but specific order for robots to enter and exit. The results of the baselines are presented in Table II and Table III.
The results show a mismatch between the standard social navigation metrics and success rates for different environments. In the Open and Roundabout scenarios, the sub-goals (Any Order and Enforced Order) have the greatest impact (Table II), but their success rates are low (Table III). While in the Intersection, Doorway, and Hallway environments, the success rates of the sub-goals are high but the metrics are poor. Although the policies may have learned a very social trait that is useful in environments requiring line formation, the standard metrics would not reflect this skill. This suggests that the current metrics used to evaluate social navigation are insufficient and need to be improved. Thus, a goal for future work is to provide better extensibility of the evaluation metrics and to enhance the sub-goal models, Any Order and Enforced Order, for better results.
More Complex PoliciesIn our benchmarks, we evaluated the effectiveness of the Long Short-Term Memory with Proximal Policy Optimization (LSTM-PPO) from the SB3-Contrib library. The objective was to determine if collecting previous timesteps to form intermediate representations of state could improve performance in challenging scenarios. Although LSTM-PPO did not consistently outperform Proximal Policy Optimization (PPO) alone, it did demonstrate improved generalization to larger numbers of agents, sometimes achieving success in settings with up to ten agents. Our analysis of the performance of LSTM-PPO versus PPO, detailed in Table V, specifically investigated the impact of the size of observations on policy updates. Our results suggest that policies that incorporate more complex representations of the environment, particularly those that encode temporal information, may be more effective. Consequently, as future work, we plan to incorporate state-of-the-art Multi-Agent Reinforcement Learning policies not currently supported by the native Stable Baselines library.
Role of the local planner in MARL-based navigationWe also include another ablation study in which only the low-level planner and Ackermann steering are used. In this baseline, the multi-agent interface sends a "GO" command at all time steps, resulting in reactionary collision avoidance for each agent. Although this baseline can perform well in open environments with few agents, its limitations become evident as the number of agents increases or the environment becomes more complex. The results, as shown in Table III, indicates that the "Only Local" baseline cannot successfully navigate environments with five or more agents, except for the open environment. Moreover, Table II reveals that "Only Local" has the highest average collision rate in an episode across all environments. This baseline demonstrates the importance of high-level planning in solving Social Navigation challenges in these mini-game environments and highlights the effectiveness of the discrete action space of "GO" and "STOP", despite its simplicity.
Experiments with different observation and reward functionsOur final benchmark highlights the configurability of observation and reward functions in SocialGym 2.0 as a major advantage, enabling users to quickly and easily run multiple experiments to determine the optimal set of parameters. In Table IV, we present results for various configurations that we tested. The table indicates inclusion (\(\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\)))))),,,,,,, \
from the Intersection scenario with 4 agents.
The results demonstrate the significance of rapid testing and the ease with which different configurations can be explored in SocialGym 2.0. The variability in success rates as a result of changes in the observation space and reward functions is notable. Surprisingly, removing the collision penalty appears to perform best in these experiments, although further investigation is required to fully understand this discrepancy. Our evidence suggests that agents with the collision penalty scale better to larger numbers of agents. Although we did not exhaustively test all possible configurations, these results demonstrate the importance of being able to quickly and easily explore different combinations of parameters.
## VI Conclusion, Limitations and Future Development Plans
In conclusion, this paper presents SocialGym 2.0, a multi-agent navigation simulator designed to address the need for a realistic and challenging environment for social robot navigation research. The simulator provides a comprehensive solution to support research in this field, including a user-friendly interface, pre-packaged docker container and wrapper around PettingZoo's MARL library, as well as customizable observation and reward functions. Finally, we hope that the benchmarking of various social navigation algorithms demonstrates the potential of SocialGym 2.0 to advance the state of the art in this field.
However, SocialGym 2.0 has certain limitations that are currently being addressed, including constraints on CPU resources and lack of optimization for multi-threading and parallel processing. Additionally, parallel environments are not yet supported. Beyond addressing these limitations, we also plan to enhance the practicality of our platform by introducing continuous actions and state-of-the-art MARL algorithms in the multi-agent Gym interface, bringing it in line with cutting-edge advancements such as the work described in [46]. Furthermore, we aim to provide more flexibility and control through variable observation vectors and streamline the configuration process through a unified file.
|
2309.01961 | NICE: CVPR 2023 Challenge on Zero-shot Image Captioning | In this report, we introduce NICE (New frontiers for zero-shot Image
Captioning Evaluation) project and share the results and outcomes of 2023
challenge. This project is designed to challenge the computer vision community
to develop robust image captioning models that advance the state-of-the-art
both in terms of accuracy and fairness. Through the challenge, the image
captioning models were tested using a new evaluation dataset that includes a
large variety of visual concepts from many domains. There was no specific
training data provided for the challenge, and therefore the challenge entries
were required to adapt to new types of image descriptions that had not been
seen during training. This report includes information on the newly proposed
NICE dataset, evaluation methods, challenge results, and technical details of
top-ranking entries. We expect that the outcomes of the challenge will
contribute to the improvement of AI models on various vision-language tasks. | Taehoon Kim, Pyunghwan Ahn, Sangyun Kim, Sihaeng Lee, Mark Marsden, Alessandra Sala, Seung Hwan Kim, Bohyung Han, Kyoung Mu Lee, Honglak Lee, Kyounghoon Bae, Xiangyu Wu, Yi Gao, Hailiang Zhang, Yang Yang, Weili Guo, Jianfeng Lu, Youngtaek Oh, Jae Won Cho, Dong-jin Kim, In So Kweon, Junmo Kim, Wooyoung Kang, Won Young Jhoo, Byungseok Roh, Jonghwan Mun, Solgil Oh, Kenan Emir Ak, Gwang-Gook Lee, Yan Xu, Mingwei Shen, Kyomin Hwang, Wonsik Shin, Kamin Lee, Wonhark Park, Dongkwan Lee, Nojun Kwak, Yujin Wang, Yimu Wang, Tiancheng Gu, Xingchang Lv, Mingmao Sun | 2023-09-05T05:32:19Z | http://arxiv.org/abs/2309.01961v3 | # NICE: CVPR 2023 Challenge on Zero-shot Image Captioning
###### Abstract
In this report, we introduce NICE (New frontiers for zero-shot Image Captioning Evaluation) project1 and share the results and outcomes of 2023 challenge. This project is designed to challenge the computer vision community to develop robust image captioning models that advance the state-of-the-art both in terms of accuracy and fairness. Through the challenge, the image captioning models were tested using a new evaluation dataset that includes a large variety of visual concepts from many domains. There was no specific training data provided for the challenge, and therefore the challenge entries were required to adapt to new types of image descriptions that had not been seen during training. This report includes information on the newly proposed NICE dataset, evaluation methods, challenge results, and technical details of top-ranking entries. We expect that the outcomes of the challenge will contribute to the improvement of AI models on various vision-language tasks.
Footnote 1: [https://nice.lgresearch.ai/](https://nice.lgresearch.ai/)
## 1 Introduction
Zero-shot image captioning is a task that general purpose vision-language models must perform well, as it requires both the visual understanding of the scene and the ability to describe it in natural language. Automatic generation of image descriptions makes a variety of applications possible, such as improved image search with the help of natural language, better detection of inappropriate contents on the web, and explanation of visual contents to visually impaired people. In most real-world scenarios, images from unseen environments are frequently given to perform these actions, which makes zero-shot image captioning essential.
In the earlier days, image captioning models were trained on curated datasets which consist of training and testing data from the same domain or categories [8, 30]. These models inevitably have limited capability to recognize a whole new concept and describe it. For better utility of image captioning models, several datasets [1] were suggested in order to test the model with images from unseen categories. This evaluation process has put the existing models
on harder testing conditions, leading to development of image captioning models that can understand general scenes.
Although there are a few benchmarks which target for zero-shot image captioning evaluation, each of them lacks in one or more requirements, which includes large size of the dataset, variety of categories, and quality of language descriptions. The size of the dataset is essential to test the models with adequate number of images to guarantee reliability, and variety of categories is required so that the tested models are not fitted to perform well on a few certain concepts. Also, as the prediction shall be usually compared with the ground truth captions using text comparison metrics, the quality of the ground truth captions must be guaranteed to be accurate and of high quality.
Through NICE 2023 challenge, we made a newly curated NICE dataset, which consists of 26k images and corresponding high-quality captions, publicly available. Furthermore, we did not provide a specific training data for the challenge, which forces the models to be trained to generalize well, thus achieving zero-shot capability. Even though this challenge was a newly organized one, 51 and 31 teams competed in the validation and test phase, respectively, and the top-scoring entries showed very small differences in the final score. The following sections of the report includes the information on the challenge, evaluation methods, results, and proposed approaches by top-scoring entries.
## 2 Challenge
In this challenge, a newly curated dataset was made publicly available, named NICE dataset, and then the challenge was organized to evaluate image captioning capability of AI models. This section introduces the dataset, evaluation method, challenge phases, and results.
### Dataset
The images and corresponding captions used in this challenge were provided by Shutterstock. This new large-scale evaluation dataset consists of approximately 26k high quality images, along with associated metadata. It includes a wide breadth of concepts from various categories. With this dataset, the participants of the challenge were expected to take a longitudinal evaluation across a variety of metrics to comparatively assess the performances of different zero-shot image captioning models. Some example images in NICE dataset are shown in Figure 1. We did not provide specific set of training data for the challenge, as we aim for zero-shot image captioning, in which AI models can perform image captioning on the new data that was never seen in the training stage.
### Evaluation metrics
There were several evaluation metrics used for this challenge. The evaluation metric with the first priority was CIDEr [33] score, which is short for Consensus-based Image Description Evaluation. CIDEr score calculates the similarity score between two sentences by weighting each n-gram with the TF-IDF (Term Frequency Inverse Document Frequency) values. This scheme sets the weight of an n-gram higher if it appears in the sentence but not frequently in the whole set of captions, thus estimating the importance of that n-gram higher. CIDEr score is currently one of the most popular metrics for text comparison, and we chose it as the top priority evaluation metric for the challenge. In case of a tie, we used SPICE [2], METEOR [4], ROUGE [18], Bleu [24] in the order of priority.
Figure 1: Example images and corresponding captions in NICE dataset.
### Challenge phases
**Validation phase**: From February to April 2023, validation server was provided so that the participants could upload their prediction results and the server would calculate the scores by comparing them to the ground truth captions. In this phase, participants were able to access the validation data, which includes the image and the ground truth captions. As this is the first challenge on the task, we provided ground truth captions in order to let participants get acquainted with the data format and build strategies for the preparation of the final challenge entry.
**Test phase**: In April 2023, test server was open and allowed participants to upload prediction results up to 5 times during the phase. The score was calculated and the entry with the best CIDEr score was uploaded on the open leaderboard2. In the test phase, the ground truth captions were not accessible and only the test scores were shown to the participants.
Footnote 2: [https://codalab.lison.upsaclay.fr/competitions/10248#results](https://codalab.lison.upsaclay.fr/competitions/10248#results)
### Challenge results
The results of the challenge is presented in Table 1. There were 31 teams that participated in the challenge, and the final decision of the ranking was based on CIDEr score. The top-ranking entry scored 325.72, and the following entries scored 324.93, 316.23, and so on. Furthermore, the first ranking entry did not score the best in the other metrics, which shows that each entry had strong and weak points.
## 3 Proposed Approaches
### 1st rank : no
For the model level, they used OFA [34] as their base model. As shown in Figure 2, the overall architecture consists of three parts, namely Pre-training, Coarse-tuning and Fine-tuning. Pre-training stage aims to align a wide range of extensively visual concepts and store sufficient vision-language knowledge through contrastive learning, image captioning pre-train objectives. Coarse-tuning stage utilizes a small-scale external dataset similar to the competition domain, which can learn a large variety of novel concepts. Fine-tuning stage further compresses the dataset in the last
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline Rank & Username & Bleu\_1 & Bleu\_2 & Bleu\_3 & Bleu\_4 & ROUGE\_1 & CIDEr & METEOR & SPICE \\ \hline
1 & jink & 56.0839 (3) & 46.5881 (2) & 40.0586 (2) & 35.1730 (2) & 55.5685 (2) & 325.7216 (1) & 29.1455 (2) & 44.4351 (2) \\ \hline
2 & stack-top & 58.0129 (1) & 47.8769 (1) & 40.9018 (1) & 35.7796 (1) & 56.3780 (1) & 324.9277 (2) & 30.0329 (1) & 54.5456 (1) \\ \hline
3 & PEJI & 56.4908 (2) & 46.5371 (3) & 39.8659 (3) & 34.5996 (3) & 54.9832 (3) & 316.2290 (3) & 28.9407 (3) & 43.8281 (3) \\ \hline
4 & calisolo & 55.7849 (4) & 45.4753 (4) & 38.468 (4) & 33.1970 (4) & 52.9229 (4) & 287.6926 (4) & 27.9402 (4) & 41.3584 (4) \\ \hline
5 & img\_capt & 53.6418 (7) & 42.8248 (7) & 35.9663 (6) & 30.5766 (5) & 51.6134 (7) & 278.1607 (5) & 27.08087 (7) & 40.9747 (6) \\ \hline
6 & kyominhwang & 54.1975 (6) & 43.0211 (6) & 35.4753 (7) & 30.0185 (7) & 52.3251 (5) & 274.6941 (6) & 27.3464 (5) & 41.0430 (5) \\ \hline
7 & Mtop & 54.4547 (5) & 43.3951 (5) & 35.8492 (5) & 30.2877 (6) & 52.1572 (6) & 270.5980 (7) & 27.2156 (6) & 40.7732 (7) \\ \hline
8 & Yongsik & 51.3670 (8) & 40.2552 (8) & 32.7835 (8) & 27.4779 (8) & 50.5170 (8) & 255.9013 (8) & 26.1637 (8) & 39.4600 (8) \\ \hline
9 & TXHTercury & 50.7199 (10) & 39.9091 (11) & 31.3438 (12) & 25.4707 (13) & 50.0612 (9) & 239.0106 (9) & 239.0106 (9) & 23.3949 (9) & 85.3570 (9) \\ \hline
10 & zero\_score & 50.1327 (12) & 39.4831 (9) & 32.4496 (9) & 27.4089 (9) & 48.0671 (12) & 238.3091 (10) & 24.8445 (12) & 37.0757 (10) \\ \hline
11 & sungbin.son & 48.2634 (14) & 37.8367 (13) & 31.0821 (13) & 26.3040 (10) & 46.5434 (14) & 229.5613 (11) & 23.8719 (15) & 35.3885 (14) \\ \hline
12 & ss501 & 50.4477 (11) & 38.8727 (12) & 31.1372 (11) & 25.6216 (12) & 48.1135 (11) & 228.8920 (12) & 24.7030 (13) & 36.5725 (12) \\ \hline
13 & danielchoi & 50.7946 (9) & 39.1847 (10) & 31.4011 (10) & 25.8194 (11) & 48.1828 (10) & 226.1749 (13) & 24.8638 (11) & 36.8950 (11) \\ \hline
14 & Hi1988 & 48.9767 (13) & 37.1830 (14) & 29.2179 (14) & 23.4479 (14) & 47.8742 (12) & 217.7758 (14) & 24.0982 (14) & 36.4749 (13) \\ \hline
15 & BraveGirls & 46.49135 (15) & 34.2065 (15) & 25.9219 (15) & 19.9941 (15) & 46.1757 (15) & 180.6635 (15) & 24.90944 (13) & 32.2205 (15) \\ \hline
16 & mobled37 & 39.8284 (16) & 27.7706 (16) & 20.0732 (16) & 14.7538 (16) & 39.5366 (16) & 134.2035 (16) & 18.9348 (16) & 27.7624 (16) \\ \hline
17 & rjshn & 35.2991 (17) & 23.4908 (17) & 15.9875 (17) & 10.9201 (17) & 37.0862 (17) & 113.8025 (17) & 16.9700 (17) & 25.3106 (17) \\ \hline
18 & snow0 & 31.7156 (20) & 19.2009 (19) & 12.2571 (18) & 7.9731 (18) & 31.8328 (18) & 92.8288 (18) & 14.5823 (19) & 22.6518 (18) \\ \hline
19 & sooun & 27.3832 (25) & 16.1707 (23) & 10.1247 (22) & 6.4948 (21) & 29.3049 (21) & 78.9487 (19) & 12.9487 (22) & 20.3599 (21) \\ \hline
20 & doofori & 32.1687 (19) & 18.3737 (20) & 11.0992 (20) & 6.8435 (20) & 30.1717 (20) & 77.8300 (20) & 13.9382 (20) & 26.0390 (20) \\ \hline
21 & rhemagmoong & 33.7145 (18) & 19.5054 (18) & 17.0519 (19) & 30.7696 (19) & 30.5224 (17) & 75.3119 (21) & 15.09248 (18) & 20.8646 (19) \\ \hline
22 & yonguz32 & 26.9424 (27) & 15.3106 (25) & 9.3935 (24) & 5.8594 (23) & 28.2218 (23) & 73.1277 (22) & 12.2745 (24) & 19.4324 (22) \\ \hline
23 & challang & 30.8059 (21) & 17.0075 (22) & 9.8781 (23) & 5.8459 (24) & 27.7463 (23) & 65.9372 (23) & 13.0716 (21) & 18.923
stage and adds it to the competition validation dataset.
For the data level, they collected external training data from LAION-5B [28], a large-scale CLIP-filtered image-text dataset. In the Pre-training stage, 1M image-text pairs are collected from the specific url(thumbx.shutterstock.com, editorial01.shutterstock.com, etc.). In the Coarse-tuning and Fine-tuning stage, they used all the competition images to retrieve external data from LAION-5B through Clip-retrieval3 library based on similarity. For each image query, they retrieved top-30 and top-10 image-text pairs, retain 120k samples for Coarse-tuning and 12k for Fine-tuning with the 5k validation dataset.
Footnote 3: [https://github.com/rom1504/clip-retrieval](https://github.com/rom1504/clip-retrieval)
In addition, they introduced contrastive learning [17], similarity-bucket, retrieval-augmented [37] strategies to the model. Contrastive learning aims to learn better single-modal representations before fusion and align visual and text concepts in Pre-training stage. Similarity-bucket strategy provides different similarity-prompts to the vision-language model in the all stages, which can control the model to generate the most matched and high-quality caption for a given image. Retrieval-augmented strategy provides a mini knowledge-base for each image-text pair during the training part. The model can not only extract visual features such as objects, attributes, and relationships of the image, but also explicitly align the information of the image with the knowledge in the knowledge-base.
### 2nd rank : Retriever
Captions in NICE dataset often contain new concepts like camera angle descriptions and proper nouns like place names, which are difficult to predict under zero-shot settings. Motivated from retrieval-augmented models [5, 20], the Retriever framework aims to complement such lim
Figure 3: Overview of Retriever framework. It improves image captioning under zero-shot settings in two stages: **(a)** retrieval-based dataset discovery for training and **(b)** retrieval-based fusion conditioned on examples similar to input queries.
Figure 2: Overall Architecture of team ”no”. Their solution consists of four main stages, which includes Pre-training, Coarse-tuning, Fine-tuning and Model-ensemble. The training data for the first three stages are all collected from the large-scale Laion-5B dataset.
ited data condition by efficiently utilizing external knowledge in model training and inference. As shown in 3, it enhances typical captioning model in two stages. Firstly, to construct a dataset for training, an explicit retrieval module samples a set of image-text pairs from the memory, which can closely mimic the target distribution of captions. Secondly, the knowledge associated with the input query is explicitly combined into the model via retrieval-based fusion. These two processes are introduced as below, while the technical report [23] includes more details.
**Dataset Discovery** The base captioning model BLIP-2 [16] exhibits poor performance when evaluated on NICE dataset. This necessitates to discover data samples for training that are relevant to the target distribution. To that end, given a query image, a retrieval module is employed to retrieve a set of related examples from a dataset saved in the external memory via a \(k\)-nearest neighbor (kNN) search. This module is applied to the NICE dataset of size \(N\), yielding total \(kN\) images with corresponding captions from the external memory. After a deduplication step, the unique image-text pairs are utilized for further finetuning the BLIP-2.
**Retrieval-based fusion** Given a query image either during training or inference, a set of value embeddings is produced by encoding the caption of the \(k\) retrieved image-text pairs. These embeddings can provide rich contextual information complementary to the knowledge in the original model. They are aggregated and then concatenated with the query feature. After the fusion, the new feature is further passed to the remaining captioning pipeline; Q-Former followed by the LLM decoder in BLIP-2 [16]. This results in more accurate captions, and the entire Retriever framework achieves CIDEr 324.9 on NICE test split.
### 3.3rd rank : Kakaobrain-MMU
The main components of their approach are three-fold: 1) NoC framework [14], 2) three-stage training pipeline, and 3) consensus-based model ensemble. Detailed explanations for each method will be described in the following sections.
**Training Algorithm** Since they trained their models using large-scale web-crawled image-text paired datasets that contain inherent noises, _i.e_., misaligned pairs, they utilized their prior work, **N**oise-aware **C**aptioning (NoC) framework [14], as a primary training algorithm, illustrated in Figure 4. In a nutshell, NoC incorporates conditional modeling into a captioning model to model alignment levels of the input image-text pairs. During the training phase, NoC utilizes the CLIP similarity between an image-text pair as an additional input of a control signal indicating the alignment level. Then, at the inference phase, they feed a control signal of a desired alignment level into the captioning model to produce semantically accurate captions that are closely aligned with input images.
**Three-stage Training** Given web-crawled image-text data and NICE validation set, they trained their model with a three-stage pipeline: 1) pre-training on large-scale data, 2) fine-tuning on retrieved data, and 3) fine-tuning on NICE validation set. The model trained in each stage is used to initialize the model in the next stage. At the first stage, they pre-trained captioning models with the following datasets4: CC15M [7, 29], COYO-700M [6], COYO-100M (a subset of COYO-700M having CLIP similarity higher than 0.3), LAION-45M [28] (filtered from LAION-Aesthetics-en V1 52M), and 5) LAION-120M [28] (filtered from LAION-Aesthetics Predictor V2 600M). The filtering strategy for LAION-45M and -120M was based on aspect ratio, image size, CLIP similarity, text length, etc. Then they fine-tuned the pre-trained model using retrieved data. Inspired by [19], they retrieved the most NICE-relevant image-text pairs from the pre-training data. For each query sample (captions in NICE validation split), they retrieved 1,000 T2T and 1,000 T2I samples from the union of COYO and LAION datasets using CLIP ViT-B/32 [26] and FAISS [13]. This makes the captioning models likely to generate captions more similar to captions in NICE validation set. Finally, they further fine-tuned their captioning models from stage 2 with the NICE validation set to more tightly align the caption style with the ones in the NICE challenge.
Footnote 4: They trained a captioning model for each dataset.
**Consensus-based Ranking for Ensemble** Given \(N\) captioning models, they generated a set of captions \(\mathcal{C}_{N}=\{c_{1},c_{2},...,c_{N}\}\) for an input image where individual models generate a caption with a beam search strategy. Then, for a caption \(c_{n}\) from \(n^{\text{th}}\) captioning model, they computed
Figure 4: The architecture of captioning model. First, the control signal is computed by bucketing CLIP similarity during the training phase, and set to a constant one during the test phase. Then, it is concatenated to the image embeddings and is fed into a cross-attention-based caption decoder model as key-value features. This figure is inspired and slightly modified from [14].
a consensus score [9, 21]\(s_{n}\) that is defined as an averaged similarity to all the other captions (\(c_{m}\in\mathcal{C}_{N}\backslash\{c_{n}\}\)) as follows:
\[s_{n}=\frac{1}{|\mathcal{C}_{N}|-1}\sum_{c_{m}\in\mathcal{C}_{N}\backslash\{c_{ n}\}}\text{sim}(c_{n},c_{m}), \tag{1}\]
where \(\text{sim}(c_{n},c_{m})\) is the CIDEr score between two captions \(c_{n}\) and \(c_{m}\). Finally, the caption of the highest consensus score is chosen as their final output for the input image.
### 4th rank : Otsuka AI
In the NICE dataset, named entities such as geographical locations (e.g., Germany, Italy) pose a challenging task for inferring through images. For instance, deducing that the habitat of highland cattle is Germany based on pictures requires substantial foundational knowledge. In this work, problem is approached from the perspective of generating dialogue specialized for the NICE dataset, resembling persona conversations. The distinguished image captioning model OFA [34] is fine-tuned to be specialized for NICE.
The term "Levels" represents the entirety of the methodology, where soft prompts [25] are effectively utilized to categorize the strength of hints present in example captions accessible by the model. Based on the output layer values of the OFA encoder, the captions of the four most similar images to the input image are retrieved from the pool of 5,000 validation images. Each caption serves as a powerful hint for generating predictions. To replicate the linguistic style of the NICE dataset, the four captions are employed akin to few-shot prompts.
As a method to specify the level of hint in the prompt and facilitate the model's judgment, soft prompts are introduced. The values of cosine similarity between the input image and the four example images were normalized to provide their corresponding levels. These soft prompts are structured into four levels based on cosine similarity and three levels based on public id difference. Soft prompts form part of the broader utilization of various soft prompt techniques, as seen in visual prompting [12], symbol tuning [36], and others.
### 5th rank : CLAS
Their approach revolves around the BLIP-2 architecture [16], that combines several strategies to improve performance. The BLIP-2 architecture effectively utilizes image features by incorporating the Querying Transformer (Q-Former) along with state-of-the-art language models, such as OPT [39]. Q-Former is pretrained in two stages: the representation and generative learning stages, allowing the extraction of a fixed number of output features from the image encoder, regardless of the input image resolution. Their image encoder is based on ViT-G/14 [10], while the language model is based on OPT-2.7b.
For fine-tuning, they used the validation set and the "a picture of" prompt, which helps the model converge faster and generate better captions. The network is fine
Figure 5: The overview of the method proposed by team Otsuka AI.
tuned on the validation set for 200 epochs. They employed FP16 mixed precision training and the Low-Rank Adaptation (LoRA) technique [11] to allow for adaptation without updating all model parameters. After fine-tuning the network with cross-entropy loss, they utilized CIDEr optimization based on self-critical sequence training [27]. This approach leverages the output of its own test-time inference algorithm to normalize the rewards it experiences.
Considering the high computational requirements of their model, conventional ensemble methods that average or rank sequences were not practical. Instead, they adopted an ensemble of 10 models, each trained with varying learning rates and epochs, using both cross-entropy and CIDEr optimization methods. To rank the models, they employed a model ranking system based on the confidence score of each generated caption. This confidence score is calculated by considering the probability of each word in the caption obtained through greedy sampling. The caption with the highest probability is selected for the output.
### 6th rank : MKC
They finetuned a large Vision-Language Pre-trained model, BLIP-2 ViT-g OPT 6.7B to cope with various concepts in the NICE dataset. To avoid catastrophic forgetting and reduce the number of parameters to train, they added adapters to BLIP-2 image encoder and only update the attached adapter, layernorm and Q-former using EMA(Exponential Moving Average) method. For finetuning, they divided the training process into two stages: 1) finetuning with the CC3M dataset and 2) finetuning with the NICE validation and the CC3M dataset mixed together.
They plugged an adapter [15] with zero convolution [38] at each layer in ViT [10]. Figure 7 shows the structure of the adapter. The weights in zero convolutions would progressively grow from zero to optimal parameters, which helped the model slowly adapt to the CC3M dataset. They also optimized the layernorm [3] inspired by the context of domain adaptation [22, 31]. Due to the various domains in NICE dataset, the method in domain adaptation could positively affect the performance. Lastly, Q-former in the original BLIP-2 was also optimized. The main role of the Q-former is to reduce the modality gap between image and natural language. Since the image feature extractor was decided to be trained, it was reasonable to optimize the Q-former as well. They also applied the EMA method to prevent catastrophic forgetting. They built a teacher model with parameter \(\theta_{T}\), and regularized a student model, \(\theta_{S}\), with consistency loss [32]. Consistency loss(\(Loss_{con}(\theta)\)) between a teacher model(\(f(x;\theta_{T})\)) and a student model(\(f(x;\theta_{S})\)) was defined as follows:
\[\mathcal{L}_{con}(\theta)=\beta KL(f(x;\theta_{T}),f(x;\theta_{S})) \tag{2}\]
where \(\beta\) is a hyperparameter for controlling the strength of consistency between a teacher and a student model, \(x\) denotes an input image, and \(f\) denotes the model architecture. The overall loss function for the whole finetuning process
Figure 6: The proposed architecture from CLAS team.
Figure 7: Proposed Adapter structure
was defined as follows:
\[\mathcal{L}_{total}=\mathcal{L}_{ce}+\mathcal{L}_{con} \tag{3}\]
where \(\mathcal{L}_{ce}\) is a standard cross-entropy loss. At the first finetuning stage, only the CC3M dataset was used. Then in the second stage, the mixture of NICE validation dataset and the CC3M dataset was used.
In this work, they presented the method of light-weighted finetuning in the field of Vision-Language, especially aimed at the image captioning task. Choosing BLIP-2 as the baseline model, adapters with zero convolution and optimization of layer normalization were applied to perform efficient domain-robust finetuning. Their model achieved up to 274.69 CIDEr score on the test case, demonstrating the effectiveness of the proposed finetuning methodology.
### 7th rank : Mtop
Team Mtop's solution for the NICE 2023 challenge revolves around data augmentation and finetuning of the BEiT3 [35] model with the augmented data. Due to the limitations in computational resources, they employed a minimal-effort approach. Firstly, they generated additional caption data by learning different styles. Next, they performed finetuning of the BEiT3 model using this augmented data without resorting to external datasets. To prevent catastrophic forgetting, a novel caption correction method was adopted. Their submission secured the 7th position in the challenge, achieving a notable score of 270 points.
**Details of Approach** BEiT3, pretrained on image-text pairs from datasets like CC12M, CC3M, SBU, COCO, and VG, served as the foundation for their approach. However, they recognized the significant differences in caption styles between these datasets and NICE, resulting in distributional discrepancies. To address this, they utilized BLIP2 to measure the similarity with NICE and selected image-text pairs that closely resembled the NICE dataset. These pairs enabled zero-shot learning during finetuning. One of the key challenges encountered during finetuning was catastrophic forgetting and being misled by the training data. To mitigate this issue, they employed instructed captions generated by large language models to guide the caption generation process. This approach led to improved performance and better results.
**Key Contributions** Their submission not only advances Image Captioning and Zero-shot Learning but also introduces a novel caption correction method to alleviate the impact of catastrophic forgetting. By finetuning the BEiT3 model with carefully selected augmented image-text pairs, they demonstrated the importance of considering target styles of captions for this task. Experimental results highlight the significance of caption correction and finetuning with relevant styles in achieving superior performance.
## 4 Conclusion
Through NICE challenge 2023, a new dataset for zero-shot image captioning evaluation was proposed, and various approaches were attempted to appropriately adjust the AI models that had been trained on other datasets to the new evaluation dataset. We hope to continue this line of research and contribute to more challenging tasks to be performed by vision-language models. The proposed methods presented a wide range of insight on adapting the pretrained model to a specific domain of data, without sufficient training data from the target domain. This research field is expected to dive deeper into the real-world vision-language problems where the input visual data must be described in language of various styles.
|
2304.09319 | The conditional DPP approach to random matrix distributions | We present the conditional determinantal point process (DPP) approach to
obtain new (mostly Fredholm determinantal) expressions for various eigenvalue
statistics in random matrix theory. It is well-known that many (especially
$\beta=2$) eigenvalue $n$-point correlation functions are given in terms of
$n\times n$ determinants, i.e., they are continuous DPPs. We exploit a derived
kernel of the conditional DPP which gives the $n$-point correlation function
conditioned on the event of some eigenvalues already existing at fixed
locations.
Using such kernels we obtain new determinantal expressions for the joint
densities of the $k$ largest eigenvalues, probability density functions of the
$k^\text{th}$ largest eigenvalue, density of the first eigenvalue spacing, and
more. Our formulae are highly amenable to numerical computations and we provide
various numerical experiments. Several numerical values that required hours of
computing time could now be computed in seconds with our expressions, which
proves the effectiveness of our approach.
We also demonstrate that our technique can be applied to an efficient
sampling of DR paths of the Aztec diamond domino tiling. Further extending the
conditional DPP sampling technique, we sample Airy processes from the extended
Airy kernel. Additionally we propose a sampling method for non-Hermitian
projection DPPs. | Alan Edelman, Sungwoo Jeong | 2023-04-18T22:08:17Z | http://arxiv.org/abs/2304.09319v3 | # The Conditional DPP approach to random matrix distributions
###### Abstract.
We present the conditional determinantal point process (DPP) approach to obtain new (mostly Fredholm determinantal) expressions for various eigenvalue statistics in random matrix theory. It is well-known that many (especially \(\beta=2\)) eigenvalue \(n\)-point correlation functions are given in terms of \(n\times n\) determinants, i.e., they are continuous DPPs. We exploit a derived kernel of the conditional DPP which gives the \(n\)-point correlation function conditioned on the event of some eigenvalues already existing at fixed locations.
Using such kernels we obtain new determinantal expressions for the joint densities of the \(k\) largest eigenvalues, probability density functions of the \(k^{\text{th}}\) largest eigenvalue, density of the first eigenvalue spacing, and more. Our formulae are highly amenable to numerical computations and we provide various numerical experiments. Several numerical values that required hours of computing time could now be computed in seconds with our expressions, which proves the effectiveness of our approach.
We also demonstrate that our technique can be applied to an efficient sampling of DR paths of the Aztec diamond domino tiling. Further extending the conditional DPP sampling technique, we sample Airy processes from the extended Airy kernel. Additionally we propose a sampling method for non-Hermitian projection DPPs.
## 1. Introduction
### The Conditional DPP Method
This paper shows that conditional determinantal point processes (DPP) can be exploited in novel ways to create interesting expressions and yield highly efficient algorithms for computations in random matrix theory and beyond. We call this the _conditional DPP approach_. Figure 1 presents a gallery of examples created by these new expressions.
### Technical Background
Determinantal representations frequently arise in random matrix theory, especially in the study of eigenvalues of \(\beta=2\) (complex) random matrices. A number of random matrix \(n\)-point eigenvalue correlation functions [8] are given in terms of the following determinantal formula
\[p(x_{1},\ldots,x_{n})=\det\left(\left[K(x_{i},x_{j})\right]_{i,j=1\ldots,n} \right), \tag{1.1}\]
for some corresponding kernel \(K\). A basic example is the \(N\times N\) Gaussian unitary ensemble (GUE) with the Hermite kernel \(K=K_{\text{Herm}}^{(N)}\) defined as
\[K_{\text{Herm}}^{(N)}(x,y)=\sum_{i=0}^{N-1}\phi_{i}(x)\phi_{i}(y)=\sqrt{\frac{ N}{2}}\frac{\phi_{N}(x)\phi_{N-1}(y)-\phi_{N-1}(x)\phi_{N}(y)}{x-y}, \tag{1.2}\]
with the Hermite functions \(\phi_{j}(x)=\exp(-x^{2}/2)H_{j}(x)/(2^{j}\sqrt{\pi}j!)^{1/2}\), where \(H_{j}\)'s are the Hermite polynomials.
examples of the continuous DPP [13, 25]. In this paper the kernel \(K\) is not restricted to symmetric or Hermitian kernels (matrices).
In the discrete (finite) case, a standard exact sampling algorithm is introduced in [16] for DPPs with Hermitian kernels. This algorithm uses the fact that any Hermitian DPP is in fact a mixture of projection DPPs (elementary DPPs), and also that projection DPPs have a simple exact sampling algorithm. Other sampling algorithms were also studied, for example see [7, 22], but mostly for DPPs with Hermitian kernels.
However recently a new "greedy" type algorithm was introduced [24, 27], based on the successive computations of conditional probabilities1 through the block LU decomposition. In each step of this algorithm we determine whether a given index is included in the sample or not by a Bernoulli trial, which we refer to as the _observation_. Depending on the observation at each step, we modify (or keep) a diagonal entry (the pivot of the LU decomposition), then perform a single step of the LU decomposition. This can be less efficient than the standard Hermitian sampler but this algorithm allows one to sample from non-Hermitian DPPs.
Footnote 1: Such conditional approaches were already considered earlier, e.g., [5].
This paper is inspired by this greedy type algorithm. We make the following three important remarks:
* After each observation we obtain a kernel corresponding to the **new DPP of unobserved points** conditioned on the result of observed points.
* Theoretically, we could **force specific points** to be included in the sample by just multiplying Bernoulli parameters, instead of random Bernoulli trials. We still obtain the above (conditional) kernel under such an event.
* The observations could be done in an **arbitrary order** (of point indices).
Based on these points, we can use the conditional probability approach to derive several new determinantal expressions of various probability density functions (PDF) and cumulative distribution functions (CDF) in Section 3.
Not to be understated is the role of algorithms from numerical linear algebra in this work as an inspiration and algorithmic enhancement. One way or another, the kernels of the DPP may undergo a matrix factorization; A potential key to effective algorithms is the recognition of which choice to use when.
### Main tool
From the third remark above, let us imagine observing a specific point \(s\) first. Then from the second remark, force an eigenvalue at (an infinitesimal interval around) \(s\). Finally using the first remark, we introduce the following Proposition which is the key to our results.
**Proposition 1.1**.: _Let \(K\) be a kernel of an integral operator2 on \(J\) that defines a continuous DPP of the \(n\)-point correlation function (1.1) of some random matrix eigenvalues. Fix a point \(s\in J\) and define a derived kernel_
Footnote 2: The kernel \(K:J\times J\to\mathbb{C}\) is the kernel of some integral operator \(\tilde{K}\) on \(L^{2}(J)\) as follows:
\[\tilde{K}f(x)=\int K(x,y)f(y)dy.\]
However we will simply denote by \(K\) both the kernel and the integral operator since there is no confusion throughout this work.
_Then, the \(n\)-point correlation function \(p^{(s)}(x_{1},\ldots,x_{n})\) of the rest of eigenvalues given that an eigenvalue already exists in an infinitesimal interval around \(s\) is_
\[p^{(s)}(x_{1},\ldots,x_{n})=\det\left(\left[K^{(s)}(x_{i},x_{j})\right]_{i,j=1, \ldots,n}\right).\]
_In other words, the kernel \(K^{(s)}\) defines a continuous DPP (1.1) of the eigenvalues conditioned on the event of an eigenvalue existing at \(s\)._
The concept of the conditional DPP in Proposition 1.1 could be found (in terms of \(L\)-ensemble) and justified as a DPP in Borodin and Rains [5], where it is proven to be useful when proving the Eynard-Mehta theorem. It has also been discussed in the context of machine learning [22, 23].
One might notice that the kernel (1.4) is in fact the result of a single step of the LU decomposition with the pivot \(K(s,s)\), as discussed in the second remark. Note that we could condition on the selection of more than one points (Proposition 3.1). We emphasize that this kernel is easy-to-use since it is explicit and does not include any infinite summation or differentiation.
Proposition 1.1 leads to some new expressions on eigenvalue statistics of random matrices. In Section 3 we provide several eigenvalue statistics in terms of Fredholm determinants, which are known to be amenable to numerical computation through the method proposed in [1, 2]. These results include but are not limited to:
* PDF, CDF of the two extreme eigenvalues (Sections 3.2, 3.3)
* Joint PDF, CDF of the \(k\) extreme eigenvalues (Section 3.4)
* PDF, CDF of the first eigenvalue spacing (Section 3.4)
In these results, a random matrix could be chosen to be any random matrices with determinantal \(n\)-point correlation function (1.1), such as the GUE, LUE, JUE, soft-edge scaling, hard-edge scaling, etc.
### Preview #1: Joint PDF of the two largest eigenvalues
A good example of our result is the joint PDF \(f^{(\lambda_{1},\lambda_{2})}\) of the \(k=2\) largest eigenvalues3\(\lambda_{1}\geq\lambda_{2}\). Let us use the soft-edge scaling limit of the GUE as an example. It is expressible in terms of a Fredholm determinant4 using Proposition 1.1,
Footnote 3: The choice of \(k=2\) is arbitrary and for illustrative purpose. We could obtain a joint PDF of any \(k\) largest eigenvalues which we discuss in Section 3.4.
\[f^{(\lambda_{1},\lambda_{2})}(x_{1},x_{2})=\det\left(\begin{bmatrix}K(x_{1}, x_{1})&K(x_{1},x_{2})\\ K(x_{2},x_{1})&K(x_{2},x_{2})\end{bmatrix}\right)\cdot\det(I-K^{(x_{1},x_{2}) }\upharpoonright_{(x_{2},\infty)}), \tag{1.5}\]
for \(x_{1}>x_{2}\) and vanishes otherwise, where \(K=K_{\text{Ai}}\) is the Airy kernel (3.2) and the kernel \(K^{(x_{1},x_{2})}\) is defined in terms of \(K\),
\[K^{(x_{1},x_{2})}(x,y):=K(x,y)-\begin{bmatrix}K(x,x_{1})\\ K(x,x_{2})\end{bmatrix}^{T}\!\!\begin{bmatrix}K(x_{1},x_{1})&K(x_{1},x_{2})\\ K(x_{2},x_{1})&K(x_{2},x_{2})\end{bmatrix}^{-1}\!\begin{bmatrix}K(x_{1},y)\\ K(x_{2},y)\end{bmatrix}.\]
Using the above formula (1.5) we were able to compute the correlation coefficient of the two largest eigenvalues at the soft-edge scaling limit
\[\rho(\lambda_{1},\lambda_{2})=0.505\,647\,231\,59...,\]
up to 11 digits in less than 2 minutes. This correlation coefficient has a previously reported computing time of 16 hours for 11 digits in 2010 [1]. The formula could also be generalized to the \(k\) largest or, similarly, smallest eigenvalues of any random matrices when the \(n\)-point correlation function of eigenvalues is given in determinantal manner (1.1). See Section 3.4 for details.
### Preview #2: Determinantal expression for the PDF of the Tracy-Widom distribution
The famous Tracy-Widom distribution PDF is often plotted. Interestingly, as far as we know5, the last step of the computation involves taking the derivative of the CDF. In this preview we give a direct determinantal expression of the Tracy-Widom PDF that give the plot in the bottom left part of Figure 1. The CDF expression that is typically used is \(F_{2}(s)=\det(I-K_{\operatorname{Ai}}\!\restriction_{(s,\infty)})\). Our idea is that we first fix a level at \(s\) and then compute the probability that nothing lies above \(s\) with the conditional DPP kernel \(K_{\operatorname{Ai}}^{(s)}\).
Footnote 5: Except for a recent approach suggested in [3]. See Section 3.2 for details.
**Proposition 1.2** (PDF of the Tracy-Widom distribution).: _The probability density function \(f_{2}\) of the largest eigenvalue at the soft-edge is_
\[\frac{d}{ds}F_{2}(s)=f_{2}(s)=K_{\operatorname{Ai}}(s,s)\cdot\det\left(I-K_{ \operatorname{Ai}}^{(s)}\!\restriction_{(s,\infty)}\right), \tag{1.6}\]
_where \(F_{2}\) is the Tracy-Widom distribution (CDF), \(K_{\operatorname{Ai}}\) is the Airy kernel and_
\[K_{\operatorname{Ai}}^{(s)}(x,y)=K_{\operatorname{Ai}}(x,y)-\frac{\left( \operatorname{Ai}(x)\operatorname{Ai}^{\prime}(s)-\operatorname{Ai}(s) \operatorname{Ai}^{\prime}(x)\right)(\operatorname{Ai}(s)\operatorname{Ai} ^{\prime}(y)-\operatorname{Ai}(y)\operatorname{Ai}^{\prime}(s))}{(x-s)(y-s) (s\operatorname{Ai}(s)^{2}-\operatorname{Ai}^{\prime}(s)^{2})}\]
_is the derived kernel of the conditional DPP as proposed in (1.4)._
This again is not limited to the soft-edge scaling limit, but also applicable to other random matrices such as the finite GUE, LUE, hard-edge and more. See Section 3.2 for further details. We also provide numerical experiments.
### Outline of the paper
In Section 2, we review the theory of discrete and continuous DPPs and their sampling algorithms. We propose a hybrid sampling method, Algorithm 4, for DPPs with non-Hermitian projection kernels, for example the DPP of the Aztec diamond [20]. We then demonstrate an efficient sampling of the DR paths (see Section 2.3) without sampling the whole Aztec diamond.
Random matrix applications are discussed in Section 3. In Section 3.1 we review some basic random matrix eigenvalues and the conditional DPP approach. Then we derive several new determinantal representations which are efficiently implemented for numerical computations in later sections. In Section 3.2 we obtain a Fredholm determinant expression for the PDF of extreme eigenvalues, such as the Tracy-Widom distribution. In Section 3.3 we specialize on the second largest eigenvalue and provide new formulae for distribution functions. Section 3.4 discusses the joint PDF of the \(k\) largest (extreme) eigenvalues. Applications of these joint PDFs include the first eigenvalue spacing, correlation coefficient of the two largest eigenvalues, and many more. Finally in Section 3.5 we demonstrate the sampling of Airy processes from the DPP. Throughout Section 3 we provide extensive numerical experiments, and all codes could be found online.
## 2. Determinantal point processes
### Discrete and Projection DPPs
Discrete DPPs have a fairly straightforward definition as we saw from (1.3). In particular if we have a finite sized ground set \(G\), the kernel is a (finite) \(|G|\times|G|\) matrix \(K\), often called the _marginal kernel_. For a given DPP \(\mathcal{J}\) the following identities are important:
\[\operatorname{tr}(K)=\mathbb{E}(|\mathcal{J}|), \tag{2.2}\] \[\det(I-K)=\mathbb{P}(|\mathcal{J}|=0). \tag{2.1}\]
It is known that if the kernel matrix \(K\) is a _projection matrix_, a DPP has the following special property: _a DPP with a rank \(r\) projection marginal kernel draws a sample with the size exactly \(r\), i.e. \(|J|=r\) (almost surely when continuous)._
This property is a cornerstone of the sampling algorithm introduced in [16]. Imagine an algorithm that draws samples from a given DPP, where sample points are selected in an unsorted (uniformly permuted) order. The probability \(P_{i}\) that a given index \(i\in G\) is picked 'first' is the following.
\[P_{i}=\mathbb{P}(\mathcal{J}=\{i\})+\sum_{\begin{subarray}{c}i\in S\\ |S|=2\end{subarray}}\frac{1}{2}\mathbb{P}(\mathcal{J}=S)+\sum_{ \begin{subarray}{c}i\in S\\ |S|=3\end{subarray}}\frac{1}{3}\mathbb{P}(\mathcal{J}=S)+\cdots=\sum_{i\in S} \frac{1}{|S|}\mathbb{P}(\mathcal{J}=S).\]
Thinking the other way around, if we know \(\{P_{i}\}\), we could sample the first point (without worrying about additional sample points) according to the discrete random variable \(X\) defined by \(\mathbb{P}(X=i)=P_{i}\). However the probabilities \(\{P_{i}\}\) in general do not have a simpler expression in terms of the entries of the marginal kernel.
Nonetheless, for projection DPPs, (normalized) diagonal entries of \(K\) equals the probabilities \(\{P_{i}\}\) as \(P_{i}=K_{ii}/r\) is deduced from (2.1) together with \(\operatorname{tr}(K)=r\). Thus, sampling from a projection DPP begins by drawing a single index point from a categorical random variable with normalized diagonal entries as its distribution. After drawing a first point, one could modify the kernel so we could sample points iteratively as we describe in the following paragraphs.
```
functionOrtoProJDPP(Y) % \(Y\in\mathbb{R}^{n\times r}\) is orthogonal and \(K=YY^{T}\) sample \(\leftarrow\) empty vector for\(i=1:r\)do Draw \(j\) from \(\{1,\ldots,n\}\) with \(\mathbb{P}(X=j)=\texttt{norm}(Y[j,:])^{2}\) Add \(j\) to sample \(Q\leftarrow\texttt{Householder}(Y[j,:])\) \(Y\leftarrow(YQ)[:,2:\text{end}]\) endfor return sample endfunction
```
**Algorithm 1**OrthoProJDPP: Sample from an orthogonal projection DPP
Let us for a moment restrict our projection matrix \(K\in\mathbb{R}^{n\times n}\) to be an orthogonal projection matrix, so that we have \(K=YY^{T}\) for some orthogonal (unitary, if complex) matrix \(Y\in\mathbb{R}^{n\times r}\). The probability \(P_{i}\) above is then equivalent to the squared norm of the \(i^{\text{th}}\) row of \(Y\), \(\sum_{j=1}^{r}Y_{ij}^{2}\), divided by \(r\). We multiply a Householder reflector \(Q\)[9] of the \(i^{\text{th}}\) row of \(Y\) on the right side of \(Y\), so that \(YQ\) has the
\(i^{\text{th}}\) row \((P_{i},0,\ldots,0)\). If we let \(\tilde{Y}=YQ\) we have
\[\mathbb{P}(j\in\mathcal{J}|i\in\mathcal{J})=\sum_{k=2}^{r}\tilde{Y}_{jk}^{2},\]
which means that the matrix \(Z\in\mathbb{R}^{n\times(r-1)}\) obtained by deleting the first column of \(\tilde{Y}\) (which is again orthogonal) plays the same role as \(Y\) when drawing the first index point \(i\). In other words, \(ZZ^{T}\) is the rank \(r-1\) marginal kernel of the DPP conditioned on the first sample index point \(i\). Recursively applying this procedure \(r\) times, we obtain Algorithm 1 for orthogonal projection DPPs.
The algorithm introduced by Hough et al. in [16] samples from a Hermitian DPP using the fact that it is a mixture of projection DPPs via the eigendecomposition of \(K\). Algorithm 2 outlines this sampling algorithm.
```
functionHermDPP(\(X,\Lambda\)) % Eigendecomposition \(K=X\Lambda X^{T}\), \(K\in\mathbb{R}^{n\times n}\) mask \(\leftarrow\) empty vector for\(i=1:n\)do ifBernoulli(\(\Lambda[i]\))==1then Add \(i\) to mask endif endfor \(Y\gets X[:,\text{mask}]\) returnOrthoProjDPP(\(YY^{T}\)) endfunction
```
**Algorithm 2**HermDPP: Sample from a Hermitian DPP
### Sampling with conditional probabilites
Algorithm 2 and the algorithms followed thereafter selects a single sample point of \(\mathcal{J}\) at each time step. Thus, the set of points that are 'not selected' is determined at the final step of the algorithm. On the other hand, some recent work [24, 27] uses a different approach based on conditional probabilities. At each step, rather than sampling an index point, we 'observe' a single index point and determine whether it is going to be drawn or not.
A central idea comes from the following block LU decomposition where the rows and columns are partitioned by \((m,n-m)\),
\[K=\begin{bmatrix}I_{m}&0\\ K_{21}K_{11}^{-1}&K_{22}-K_{21}K_{11}^{-1}K_{12}\end{bmatrix}\begin{bmatrix}K_{ 11}&K_{12}\\ 0&I_{n-m}\end{bmatrix}.\]
This is equivalent to the result of \(m\) steps of the (unpivoted) LU decomposition. We have the following conditional probability for any subset \(S\) of \(\{m+1,\ldots,n\}\),
\[\det\left(\left[(K_{22}-K_{21}K_{11}^{-1}K_{12})_{i,j}\right]_{i,j\in S} \right)=\mathbb{P}(S\subset\mathcal{J}\ |\ \{1,\ldots,m\}\subset\mathcal{J}).\]
In other words, the matrix \(K_{22}-K_{21}K_{11}^{-1}K_{12}\) (i.e., the Schur complement) serves as a new kernel for the DPP conditioned on all of \(1,\ldots,m\) already being drawn. Recall from the remark in the introduction that indices \(1,\ldots,m\) could be in fact any \(m\) indices, since the order of observation could be arbitrarily chosen by some row/column permutation. Furthermore, a DPP with the condition that some index points are not selected could also be analyzed in a similar manner. The following Proposition summarizes this idea.
**Proposition 2.1** ([27]).: _Let \(K\) be the kernel of the DPP \(\mathcal{J}\). Given disjoint subsets \(A,B\) of the ground set \(\{1,\ldots,n\}\), we have the following probabilities._
\[\mathbb{P}(B\subset\mathcal{J}\ |\ A\subset\mathcal{J}) =\det(K_{B,B}-K_{B,A}K_{A,A}^{-1}K_{A,B}),\] \[\mathbb{P}(B\subset\mathcal{J}\ |\ a\notin\mathcal{J},\forall a\in A) =\det(K_{B,B}-K_{B,A}(K_{A,A}-I)^{-1}K_{A,B}),\]
_where \(K_{X,Y}\) is the submatrix of \(K\) with row indices \(X\) and column indices \(Y\)._
Performing Bernoulli trials on pivots and using Proposition 2.1 we have the following Algorithm 3 from [24, 27].
```
functionGDPP(\(K\)) % \(K\in\mathbb{R}^{n\times n}\) sample \(\leftarrow\) empty vector for\(i=1:n\)do ifBernoulli(\(K[i,i]\))==1then Add \(i\) to sample else \(K[i,i]\gets K[i,i]-1\) endif \(K[i+1:n,i+1:n]\ -=\ K[i+1:n,i]*K[i,i+1:n]/K[i,i]\) endfor return sample endfunction
```
**Algorithm 3** genDPP: Sample from a general DPP
Although this "greedy" type approach may be inefficient, there is one significant advantage: _Algorithm 3 enables sampling of a non-Hermitian DPP_. For example the discretized DPP of Dyson Brownian motion, which we will discuss in Section 3.5, is a non-Hermitian DPP. Another example is the Aztec diamond domino tiling [17, 18, 20] which is used in [27] as an example of a non-Hermitian DPP.
```
functionnonOrthoProjDPP(\(K\)) % \(K\in\mathbb{R}^{n\times n}\) non-Hermitian rank \(r\) projection sample \(\leftarrow\) empty vector for\(i=1:r\)do Draw \(j\) from \(\{1,\ldots,n\}\) with \(\mathbb{P}(X=j)=K[j,j]/(r-i+1)\) Add \(j\) to sample \(K\ -=\ K[:,j]*K[j,:]/K[j,j]\) endfor return sample endfunction
```
**Algorithm 4** nonOrthoProjDPP: Sample from a non-Hermitian projection DPP
However one might notice that the DPP kernel of the Aztec diamond obtained from Kenyon's formula using the inverse Kasteleyn matrix [6, 21] is non-Hermitian but still a projection matrix. (A clue would be that any sample always has the fixed size \(n(n+1)=\) number of dominos.) When we have a non-Hermitian projection DPP with a small rank, Algorithm 3 could be improved by combining with Algorithm 1. At each step we draw indices as we did in Algorithm 1 from diagonal entries and then we modify the kernel as in Algorithm 3. This reduces the number
of steps in Algorithm 3 from the size of \(K\) to its rank. We briefly sketch this hybrid approach in Algorithm 4.
We additionally note that unfortunately, the Aztec diamond DPP has its rank and size of the same order, which only obtains a small or even no improvement in practice. Nonetheless, if a kernel \(K\) is non-Hermitian projection with \(\operatorname{rank}(K)\ll n\), theoretically Algorithm 4 should outperform Algorithm 3. Table 1 is the summary of appropriate choices of exact DPP samplers, depending on the kernel \(K\).
### Application to Aztec diamonds: efficiently sampling a DR path
Not only can Algorithm 3 sample non-Hermitian DPPs but also it enables partial sampling. One application that benefits from the partial sampling of Algorithm 3 is the sampling of _DR paths_[30] of the Aztec diamond. The _north polar region_ (NPR) boundary process is of a great interest due to its connection to corner growth, and eventually to the Airy process [20]. DR paths of the Aztec diamond are defined as follows: We label the vertical dominos east-going (E) and west-going (W), according to their checkerboard patterns. An easy way to remember is that the vertical domino that fits the westmost corner is a W-domino. Similarly we label horizontal dominos N-dominos and S-dominos. In the left of Figure 2, red, blue, green, yellow dominos are W, E, N, S-dominos, respectively. We draw horizontal lines in the middle of S-dominos, \(\pm 45\) degree lines passing through the center of W, E-dominos, respectively. The right of Figure 2 has such lines drawn in red. It is known that these lines form \(n\) continuous paths, which are called the DR paths.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Alg \# & Hermitian? & Projection? & Example \\ \hline
1 & ✓ & ✓ & Hermite kernel (finite GUE) \\
2 & ✓ & x & Airy kernel (soft-edge truncated) \\
3 & x & x & Airy process (Section 3.5) \\
4 & x & ✓ & Aztec diamond \\ \hline \end{tabular}
\end{table}
Table 1. Choice of an exact sampling algorithm depending on properties of the kernel of a DPP.
Figure 2. An Aztec diamond domino tiling with \(n=10\) sampled from a DPP (left) and corresponding DR paths in red (right)
An interesting observation is that we do not need the whole Aztec domino configuration to get the top DR path. Using Algorithm 3 partially we could efficiently sample the top DR path (and similarly other specific DR paths) by only _observing_ the possible dominos along the path. For example let us start by observing the westmost location. If we sample a W-domino there we then 'observe' the three next possible (W, E, S) dominos that share the upper half of the right side of the westmost domino, since they are the three possible connections to the current path. This is possible due to the last point of the remark in the introduction, that we could observe in any desired order. We recursively do this until we reach the east boundary. This partial sampling along the top DR path reduces the complexity of DPP sampling by \(O(n)\). Figure 3 illustrates this point by comparing the two DPP sampling times, where sampling only the top DR path is about 40 times faster.
### Continuous DPPs
Many concepts in discrete DPPs extend to continuous DPPs. The ground set \(G\) now becomes a continuous interval (or any set) and the marginal kernel matrix becomes the kernel function \(K:G\times G\to\mathbb{C}\). A continuous DPP defines the following \(n\)-point correlation function, which is the continuous analogue of \(\mathbb{P}(S\subset\mathcal{J})\) in (1.3),
\[p(x_{1},\ldots,x_{n})=\det\left([K(x_{i},x_{j})]_{i,j=1,\ldots,n}\right).\]
One way to describe the \(n\)-point correlation function \(p(x_{1},\ldots,x_{n})\) is the following:
\[\lim_{\delta x\to 0}\frac{1}{(\delta x)^{n}}\mathbb{P}(n\text{ points at length }\delta x\text{ intervals around }x_{1},\ldots,x_{n}). \tag{2.3}\]
Figure 3. An Aztec diamond domino tiling with \(n=30\) sampled from a DPP (left) and a top DR path sampled without the whole domino configuration (right). The sampling time for the whole domino tiling is 52.44 seconds and the sampling time for a top DR path is 1.28 second. Moreover, with 50 seconds one could sample a top DR path with \(n=50\). Nevertheless, we have not found this algorithm to be yet competitive with usual Aztec domino sampling algorithms, e.g., the shuffling algorithm [10].
Sampling algorithms for a continuous DPP could be generalized from the discrete case. For a continuous projection DPP with bounded trace, i.e.,
\[\int K(x,y)K(y,z)dy=K(x,z)\quad\text{ and }\quad\int K(x,x)dx<\infty,\]
Algorithm 4 could be generalized easily. The point selection at each timestep is now a univariate random variable with its PDF proportional to the diagonal \(K(x,x)\). More details could be found in [15].
Another method for sampling from a continuous DPP is by discretizing a continuous DPP to a discrete DPP. Let us use the Hermite kernel \(K_{\text{Herm}}^{(N)}\), (1.2) as an example. To create a finite matrix we truncate the ground set. For the Hermite kernel with \(N=5\), it is known that the largest eigenvalue lies around \(\sqrt{2N}\approx 3\), and the probability that an eigenvalue lies outside a wide interval of length \(L\), for example \((-10,10)\), is already much lower than the machine epsilon of the double precision. Then with \(M\approx L/\delta x\) length \(\delta x\) intervals in the truncated region, create an \(M\times M\) matrix \(K\) such that \(K_{ij}=K(x_{i},x_{j})\delta x\), where \(x_{j}\) is the midpoint of the \(j\)th interval.6 From (2.3) the determinants of principal submatrices approach probabilities of sample points lying around corresponding intervals, as \(\delta x\to 0\). On the other hand if one tries to compute integrals such as \(\operatorname{tr}K=\int K(x,x)dx\) or the Fredholm determinant \(\det(I-K)\), one may use weights and points from quadrature rules, see [2] for details. In Section 3.5 we discretize the extended Airy kernel to sample Airy processes using Algorithm 3.
Footnote 6: Of course, as \(\delta x\to 0\) this does not have to be the midpoint.
## 3. The conditional DPP method applied to random matrix theory
Let \(K\) be a kernel for an integral operator such that the \(n\)-point correlation function \(p\) of some random matrix (or its scaling limit) eigenvalues is given as an \(n\times n\) determinant (1.1). Examples of such random matrices are the GUE, LUE, soft-edge, hard-edge, etc. As mentioned above, \(p(x_{1},\dots,x_{n})\) is the continuous analogue of the set-level complementary cumulative distribution function (CCDF), \(\mathbb{P}(S\subset\mathcal{J})\) in (1.3).
The probability that no eigenvalue exists in a given interval \(J\) plays an important role. In the continuous case, it is given as the following Fredholm determinant
\[\mathbb{P}(\text{No eigenvalue in }J)=\det(I-K\!\upharpoonright_{J}), \tag{3.1}\]
which is a continuous generalization of (2.2).
### Kernel of the DPP from conditional probabilities
Let us first prove Proposition 1.1, which is the continuous analogue of Proposition 2.1.
Proof of Proposition 1.1.: One step of the LU decomposition (in reverse order) of the \((n+1)\)-point correlation function is
\[[K(x_{i},x_{j})]_{i,j=1,\dots,n+1}=\begin{bmatrix}\left[K^{(x_{n+1})}(x_{i},x_ {j})\right]_{i,j=1,\dots,n}&v\\ 0&1\end{bmatrix}\begin{bmatrix}I_{n}&0\\ w&K(x_{n+1},x_{n+1})\end{bmatrix},\]
where \(v=[K(x_{i},x_{n+1})]_{i=1,\dots,n}/K(x_{n+1},x_{n+1})\) and \(w^{T}=[K(x_{n+1},x_{i})]_{i=1,\dots,n}\). From the multiplicativity of determinants we get
\[\det\left(\left[K(x_{i},x_{j})\right]_{i,j=1,\dots,n+1}\right)=\det\left( \left[K^{(x_{n+1})}(x_{i},x_{j})\right]_{i,j=1,\dots,n}\right)\cdot K(x_{n+1},x_{n+1}).\]
Then, since \(p^{(x_{n+1})}(x_{1},\ldots,x_{n})=\det([K^{(x_{n+1})}(x_{i},x_{j})]_{i,j=1,\ldots,n})\) and using (2.3) for the left hand side and \(K(x_{n+1},x_{n+1})\) we have
\[p^{(x_{n+1})}(x_{1},\ldots,x_{n})\] \[\qquad=\lim_{\delta x\to 0}\frac{1}{(\delta x)^{n}}\frac{ \mathbb{P}(\text{Eigenvalues at $\delta x$ intervals around $x_{1},\ldots,x_{n+1}$})}{\mathbb{P}(\text{Eigenvalue at $\delta x$ interval around $x_{n+1}$})},\]
which is the desired \(n\)-point correlation function conditioned on the event that an eigenvalue existing around an infinitesimal interval containing \(x_{n+1}\).
Note that we could also generate a kernel for an \(n\)-point correlation function conditioned on the event that a fixed location _does not_ contain an eigenvalue, by replacing the denominator of the kernel (1.4) with \(K(s,s)-1\). However, we note that the condition that an eigenvalue does not exist at a specific location is less versatile for applications than the condition that an eigenvalue exists at a specific location, in the continuous setting.
We generalize Proposition 1.1 to a conditional DPP kernel with any number \((m)\) of forced eigenvalues.
**Proposition 3.1**.: _With the same assumptions as in Proposition 1.1, the \(n\)-point correlation function \(p^{(s_{1},\ldots,s_{m})}\) of the eigenvalues conditioned on the event that \(m\) eigenvalues already exist around infinitesimal intervals around \(s_{1},\ldots,s_{m}\) is_
\[p^{(s_{1},\ldots,s_{m})}(x_{1},\ldots,x_{n})=\det\left(\Big{[}K^{(s_{1}, \ldots,s_{m})}(x_{i},x_{j})\Big{]}_{i,j=1,\ldots,n}\right),\]
_where the kernel \(K^{(s_{1},\ldots,s_{m})}\) is given as_
\[K^{(s_{1},\ldots,s_{m})}(x,y)\] \[\qquad=K(x,y)-\begin{bmatrix}K(x,s_{1})\\ \vdots\\ K(x,s_{m})\end{bmatrix}^{T}\!\!\begin{bmatrix}K(s_{1},s_{1})&\cdots&K(s_{1},s_ {m})\\ \vdots&\ddots&\vdots\\ K(s_{m},s_{1})&\cdots&K(s_{m},s_{m})\end{bmatrix}^{-1}\!\!\begin{bmatrix}K(s_ {1},y)\\ \vdots\\ K(s_{m},y)\end{bmatrix}.\]
Proof.: The proof is analogous to the proof of Proposition 1.1 replacing the LU decomposition with the block LU decomposition of an \((n+m)\times(n+m)\) matrix, with row/column partitions \((n,m)\).
### The PDF of extreme (largest) eigenvalues
One can derive a new expression for the PDF of the largest eigenvalue of a random matrix. Let us first review some basic eigenvalue statistics and conventions. For the \(N\times N\) GUE we have
\[E_{2}^{(N)}(0;J):=\mathbb{P}(\text{No eigenvalue of the $N\times N$ GUE is in $J$})=\det(I-K^{(N)}_{\text{Herm}}\!\!\upharpoonright_{J}),\]
where the Hermite kernel is defined in (1.2). In particular we denote \(E_{2}^{(N)}(0;(0,s))\) simply by \(E^{(N)}(0;s)\).
As \(N\to\infty\) some scaling limits are defined. With the sine kernel \(K_{\sin}(x,y)=\frac{\sin\pi(x-y)}{\pi(x-y)}\) one obtains the bulk scaling limit7 defined with mean spacing \(1\),
Footnote 7: In fact, appropriate scaling of \(\beta=2\) Laguerre and Jacobi ensembles in the bulk also yield the same bulk limit [26].
\[E(0;s):=\lim_{N\to\infty}E^{(N)}(0;\frac{\pi}{\sqrt{2n}}s)=\det\left(I-K_{\sin }\!\!\upharpoonright_{(0,s)}\right).\]
Also with the Airy kernel
\[K_{\operatorname{Ai}}(x,y)=\frac{\operatorname{Ai}(x)\operatorname{Ai}^{\prime}(y )-\operatorname{Ai}^{\prime}(x)\operatorname{Ai}(y)}{x-y}, \tag{3.2}\]
we have the following Fredholm determinant representation of the soft-edge scaling limit of the GUE8 (the Tracy-Widom distribution \(F_{2}\))
Footnote 8: As in the bulk, this could also be the largest eigenvalue of the LUE in the appropriate soft-edge scaling limit.[12]
Depending on the position and restrictions imposed on eigenvalues, \(\det(I-K)\) turns into several different distribution functions. At the edge (either hard or soft) \(\det(I-K)\) becomes the CDF. In particular, at the \(+\infty\) side of the soft-edge, \(\det(I-K_{\operatorname{Ai}}\!\upharpoonright_{(s,\infty)})\) equals \(\mathbb{P}(\lambda_{\max}\leq s)\), the CDF of the largest eigenvalue. Similarly, at the LUE hard-edge9\(\det(I-K_{\operatorname{Bess},\alpha}\!\upharpoonright_{(0,s)})\) equals \(\mathbb{P}(\lambda_{\min}\geq s)\), the CCDF.
Footnote 9: Also Jacobi ensemble with \(\beta=2\) has the same hard-edge scaling limit.[4]
On the other hand in the bulk, \(\det(I-K)\) is not a CDF. Rather, its derivative is a CDF, i.e., \(E(0;s)\) in the bulk has the following derivatives [25, Chapter 6.1.2],
\[\tilde{F}(0;s):=-\frac{d}{ds}E(0;s),\qquad\quad p(0;s):=-\frac{d}{ds}\tilde{F }(0;s). \tag{3.3}\]
The first derivative \(\tilde{F}(0;s)\) is the probability (in the bulk) that, for a randomly chosen level around zero, the distance \(s\) to the right contains no eigenvalue. If we let a random variable \(D\) to be the distance (again, starting from any randomly chosen level) to the right until the next level, \(\tilde{F}(0;s)=\mathbb{P}(D>s)\) is a CCDF. It follows that \(p(0;s)\) is the PDF of \(D\).
Some of these probabilities can alternatively be described by conditional probabilities. For example, the PDF of the smallest eigenvalue at the hard-edge is the product of (1) the 1-point correlation function at \(s\) (\(=K_{\operatorname{Bess},\alpha}(s,s)\)) and (2) the probability that no eigenvalue lies in \((0,s)\), conditioned on an eigenvalue existing around \(s\). The latter could be obtained from Proposition 1.1 and becomes a new expression for the PDF of the smallest eigenvalue at the hard-edge,
\[f_{\operatorname{hard},\alpha}(s)=\underbrace{K_{\operatorname{Bess},\alpha}(s,s)}_{\text{(1) Level at $s$}}\cdot\underbrace{\det\left(I-K_{\operatorname{Bess},\alpha}^{(s)}\! \upharpoonright_{(0,s)}\right)}_{\text{No levels on $(0,s)$, given level at $s$}}. \tag{3.4}\]
In Proposition 1.2 we have already introduced the PDF of the Tracy-Widom distribution with the same idea. Generalizing these, we get the following Corollary.
**Corollary 3.1.1**.: _Given a kernel \(K\) as in Proposition 1.1 that it is a kernel of a continuous DPP of some random matrix eigenvalues. Let_
\[f(a,b)=\det(I-K\!\upharpoonright_{(a,b)}),\]
_the probability that no eigenvalue lies in \(J=(a,b)\). Then, the following holds:_
\[\frac{d}{da}f(a,b) =K(a,a)\det(I-K^{(a)}\!\upharpoonright_{J})=\mathbb{P}(\text{ eigenvalue at $a$ and none in $J$}), \tag{3.6}\] \[-\frac{d}{db}f(a,b) =K(b,b)\det(I-K^{(b)}\!\upharpoonright_{J})=\mathbb{P}(\text{ eigenvalue at $b$ and none in $J$}), \tag{3.5}\]
_with the kernels \(K^{(a)}\) and \(K^{(b)}\) defined as in (1.4)._
Proof.: We prove (3.5) and the proof for (3.6) is similar. Let \(a^{\prime}=a+\delta a\).
\[\frac{1}{\delta a}(f(a^{\prime},b)-f(a,b))=\frac{1}{\delta a}\left( \mathbb{P}(\text{Nothing in }(a^{\prime},b))-\mathbb{P}(\text{Nothing in }(a,b))\right)\] \[\quad=\delta a^{-1}\mathbb{P}(\text{Eigenvalue at }(a,a^{\prime}), \text{ none in }(a^{\prime},b))\] \[\quad=\delta a^{-1}\mathbb{P}(\text{None in }(a^{\prime},b)\,|\text{ Eigenvalue at }(a,a^{\prime}))\cdot\mathbb{P}(\text{Eigenvalue at }(a,a^{\prime}))\] \[\quad=\delta a^{-1}\det(I-K^{(a)}\!\!\upharpoonright_{(a^{\prime}, b)})\cdot\mathbb{P}(\text{Eigenvalue at }(a,a^{\prime}))\]
As we let \(\delta a\to 0\), \(\delta a^{-1}\mathbb{P}(\text{Eigenvalue at }(a,a^{\prime}))\) goes to \(K(a,a)\).
Let us also apply Corollary 3.1.1 to distribution functions (3.3) in the bulk. For \(\tilde{F}(0;s)\), since \(K_{\sin}(0,0)=1\) we have
\[\tilde{F}(0;s)=-\det(I-K_{\sin}^{(0)}\!\!\upharpoonright_{(0,s)}), \tag{3.7}\]
where (1.4) defines \(K_{\sin}^{(0)}=\frac{\sin\pi(x-y)}{\pi(x-y)}-\frac{\sin\pi x\sin\pi y}{\pi^{2} xy}\). Applying once more we get
\[p(0;s)=K_{\sin}^{(0)}(s,s)\cdot\det(I-K_{\sin}^{(0)*}\!\upharpoonright_{(0,s)}), \tag{3.8}\]
where the kernel \(K_{\sin}^{(0)*}(x,y):=K_{\sin}^{(0)}(x,y)-\frac{K_{\sin}^{(0)}(x,s)K_{\sin}^{( 0)}(s,y)}{K_{\sin}^{(0)}(s,s)}\) is a twice derived sine kernel. Figure 4 is the plot of \(\tilde{F}(0;s)\), \(p(0;s)\) using (3.7), (3.8), respectively.
Moreover, Table 2 is the first four moments of \(D\) obtained from the computation of \(p(0;D=s)\) values. See code FOpO for the implementation resulting in Figure 4 and Table 2.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Mean & Variance & Skewness & Excess Kurtosis \\ \hline
1.0 & 0.179 993 877 691 8 & 0.497 063 620 491 8 & 0.126 699 848 039 9 \\ \hline \end{tabular}
\end{table}
Table 2. The first four moments of the spacing \(D\) near zero in the bulk up to 13 digits. Total computation time is 0.052 second. One could compare this result to Table 8 of [1].
Figure 4. Plots of \(\tilde{F}(0;s)\) (left) and \(p(0;s)\) (right) defined in (3.3). Numerical computation was done by evaluating Fredholm determinants [2] in equations (3.7), (3.8).
One benefit of our formulae, such as Proposition 1.2, (3.4), (3.7), (3.8), is an efficient and accurate numerical computation. We provide numerical experiment that compares our computation of the Tracy-Widom PDF, (1.6), \(f_{2}(s)=K_{\mathrm{Ai}}(s,s)\det(I-K_{\mathrm{Ai}}^{(s)}\!\upharpoonright_{(s, \infty)})\) with some other numerical approaches. Numerical results show that our expressions can be as efficient and potentially more accurate.
One approach for computing Tracy-Widom PDF is the automatic differentiation on \(F_{2}(s)\) combined with the Fredholm determinant computation [2] of \(F_{2}(s)=\det(I-K_{\mathrm{Ai}}\!\upharpoonright_{(s,\infty)})\). Another approach has also been suggested recently in [3, Eq (37b)],
\[f_{2}(s)=-F_{2}(s)\cdot\mathrm{tr}\left((I-K)^{-1}K^{\prime}\right)\!\upharpoonright_ {L^{2}(s,\infty)}=F_{2}(s)\cdot\langle(I-K)^{-1}\,\mathrm{Ai},\mathrm{Ai} \rangle_{L^{2}(s,\infty)}\]
where \(K^{\prime}\) is defined as \(K^{\prime}(x,y)=\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial x }\right)K(x,y)\). See equation (34b) and nearby discussion in [3]. In Table 3, we compare the accuracy of the computation of \(f_{2}(s)\), for different number of quadrature points and \(s\) values, using two different methods: (1) Equation (37b) of [3] and (2) our approach, (1.6). Comparison for other random matrix statistics such as in the bulk (sine kernel) or hard-edge (Bessel kernel) for the values of the derivative of log determinant and comparison against the automatic differentiation could also be found in the code prime-computation.
### The PDF and CDF of the second largest eigenvalue
Some new formulae for the second largest (second extreme) eigenvalue could be obtained from Proposition 1.1. Let us again take the soft-edge and the Airy kernel (3.2) as an example. A standard formula for the CDF of the \(k^{\mathrm{th}}\) largest eigenvalue is
\[F_{2}(k;s)=\sum_{m=0}^{k-1}\frac{(-1)^{m}}{m!}\frac{d^{m}}{dz^{m}}\det\left(I- zK_{\mathrm{Ai}}\!\upharpoonright_{(s,\infty)})\,\big{|}_{z=1}. \tag{3.9}\]
When \(k=2\) we could derive a somewhat different formula for the CDF and PDF that does not involve differentiation or summation using the conditional DPP. For
\begin{table}
\begin{tabular}{c|c c|c c} & \multicolumn{2}{c|}{\(m=10\)} & \multicolumn{2}{c}{\(m=20\)} \\ \(s\) & Eq (37b) of [3] & Eq (1.6) & Eq (37b) of [3] & Eq (1.6) \\ \hline \hline -4.0 & \(3.79\times 10^{-1}\) & \(2.76\times 10^{-2}\) & \(3.72\times 10^{-7}\) & \(5.60\times 10^{-7}\) \\ -3.5 & \(2.94\times 10^{-2}\) & \(6.13\times 10^{-2}\) & \(5.14\times 10^{-8}\) & \(8.25\times 10^{-9}\) \\ -3.0 & \(2.48\times 10^{-2}\) & \(1.68\times 10^{-2}\) & \(1.07\times 10^{-8}\) & \(4.18\times 10^{-9}\) \\ -2.5 & \(8.76\times 10^{-3}\) & \(1.89\times 10^{-3}\) & \(9.41\times 10^{-11}\) & \(1.39\times 10^{-10}\) \\ -2.0 & \(1.41\times 10^{-3}\) & \(2.55\times 10^{-3}\) & \(1.28\times 10^{-10}\) & \(2.11\times 10^{-13}\) \\ -1.5 & \(2.27\times 10^{-3}\) & \(1.70\times 10^{-3}\) & \(4.70\times 10^{-11}\) & \(3.33\times 10^{-12}\) \\ -1.0 & \(5.40\times 10^{-4}\) & \(3.87\times 10^{-5}\) & \(3.80\times 10^{-11}\) & \(1.70\times 10^{-12}\) \\ -0.5 & \(5.61\times 10^{-4}\) & \(3.06\times 10^{-6}\) & \(7.12\times 10^{-12}\) & \(1.87\times 10^{-14}\) \\ 0.0 & \(5.92\times 10^{-4}\) & \(1.17\times 10^{-6}\) & \(1.15\times 10^{-11}\) & \(9.35\times 10^{-14}\) \\ \end{tabular}
\end{table}
Table 3. Absolute values of relative errors of two approaches for computing the PDF of the Tracy–Widom distribution \(f_{2}(s)\), with \(m\) point Gauss-Legendre quadratures. The expression \(f_{2}(s)=F_{2}(s)\cdot\langle(I-K)^{-1}\,\mathrm{Ai},\mathrm{Ai}\rangle_{L^ {2}(s,\infty)}\) is used from Equation (37b) of [3] and our approach uses (1.6), \(f_{2}(s)=K_{\mathrm{Ai}}(s,s)\cdot\det(I-K_{\mathrm{Ai}}^{(s)}\!\upharpoonright_{(s, \infty)})\). Our approach shows slightly better overall accuracy but the errors converge very quickly to the machine precision in both methods.
the CDF \(\mathbb{P}(\lambda_{2}<s)\), we need to compute the probability that there is only one level lying in \((s,\infty)\) when \(s\) is given. In the discrete DPP, the probability of having only one sample point is the trace of the \(L\) kernel, where \(L=(I-K)^{-1}K\), divided by \(\det(I+L)\), which also holds similarly in the continuous case. Thus we obtain
\[F_{2}(2;s)=\underbrace{\operatorname{tr}\big{(}{(I-K_{\operatorname{Ai}})}^{-1 }K_{\operatorname{Ai}}\big{)}\!\upharpoonright_{(s,\infty)}}_{\operatorname{tr }(L)}\cdot\underbrace{\det(I-K_{\operatorname{Ai}}\!\upharpoonright_{(s, \infty)})}_{\operatorname{det}(I+L)^{-1}}.\]
For the PDF of the second largest eigenvalue, we need to fix an eigenvalue at \(s\) and proceed similarly as in the CDF. More precisely, we multiply (1) the \(1\)-point correlation function at \(s\) and (2) the probability that there is only a single eigenvalue above \(s\), conditioned on (1). That is,
\[f_{2}(2;s) =\frac{d}{ds}F_{2}(2;s)\] \[=\underbrace{K_{\operatorname{Ai}}(s,s)}_{\text{level at }s}\cdot \underbrace{\operatorname{tr}\Big{(}{(I-K_{\operatorname{Ai}}^{(s)})}^{-1}K_ {\operatorname{Ai}}^{(s)}\Big{)}\!\upharpoonright_{(s,\infty)}\det(I-K_{ \operatorname{Ai}}^{(s)}\!\upharpoonright_{(s,\infty)})}_{\operatorname{Only one eigenvalue in }(s,\infty)\text{ given a level at }s}. \tag{3.10}\]
Generalizing this, we obtain the following proposition.
**Proposition 3.2**.: _For a random matrix that has its eigenvalue \(n\)-point correlation function given as in Proposition 1.1 with a kernel \(K\), the CDF and PDF of the second largest eigenvalue \(\lambda_{2}\) are_
\[F^{\lambda_{2}}(s)=\operatorname{tr}\big{(}{(I-K)^{-1}K}\big{)} \!\upharpoonright_{(s,\infty)}\cdot\det(I-K\!\upharpoonright_{(s,\infty)}), \tag{3.12}\] \[f^{\lambda_{2}}(s)=K(s,s)\cdot\operatorname{tr}\Big{(}{(I-K^{(s )})}^{-1}K^{(s)}\Big{)}\!\upharpoonright_{(s,\infty)}\cdot\det(I-K^{(s)}\! \upharpoonright_{(s,\infty)}), \tag{3.11}\]
_where the kernel \(K^{(s)}\) is given as (1.4)._
Proposition 3.2 could be used with the Airy kernel (soft-edge) as well as finite \(N\) kernels like the Hermite kernel. In the hard-edge scaling the interval \((0,s)\) is used instead of \((s,\infty)\) with the Bessel kernel for the second smallest eigenvalue. Again, expressions (3.11) and (3.12) yield highly accurate numerical values using Bornemann's approach [2]. Table 4 is the computed first four moments of the two smallest eigenvalues (\(\lambda_{1}<\lambda_{2}\)) in the hard-edge and soft-edge using PDF formulas of Section 3.2 and (3.12).
\begin{table}
\begin{tabular}{c c|c|c|c|c} & \multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Variance} & Skewness & Kurtosis \\ \hline \hline \multirow{2}{*}{Soft} & \(\lambda_{1}\) & -1.771 087 & 0.813 195 & 0.224 084 & 0.093 448 \\ & \(\lambda_{2}\) & -3.675 437 & 0.540 545 & 0.125 027 & 0.021 740 \\ \hline Hard & \(\lambda_{1}\) & 4.000 000 & 16.000 000 & 2.000 000 & 6.000 000 \\ \(\alpha=0\) & \(\lambda_{2}\) & 24.362 715 & 140.367 319 & 0.924 147 & 1.225 112 \\ \hline Hard & \(\lambda_{1}\) & 10.873 127 & 55.745 139 & 1.320 312 & 2.541 266 \\ \(\alpha=1\) & \(\lambda_{2}\) & 40.812 203 & 259.898 510 & 0.737 801 & 0.764 990 \\ \hline Hard & \(\lambda_{1}\) & 20.362 715 & 124.367 319 & 1.015 815 & 1.461 306 \\ \(\alpha=2\) & \(\lambda_{2}\) & 60.112 814 & 416.851 440 & 0.622 605 & 0.532 483 \\ \end{tabular}
\end{table}
Table 4. The first four moments (the last column is the excess kurtosis) of the two extreme eigenvalues, soft-edge and hard-edge. Computation time is less than a second for the whole table. See codes cor-coeff-softedge and cor-coeff-hardedge for the implementation.
### The joint distribution of the \(k\) largest eigenvalues
In this section we derive an expression for the joint distribution of the \(k\) largest (or smallest) eigenvalues in terms of a Fredholm determinant.
In the case of \(k=2\), the joint density of the two smallest eigenvalues of the Laguerre ensemble has been studied in [14]. In addition, the joint distribution of the two smallest eigenvalues at the hard-edge is obtained in terms of the solution of a differential equation that resembles Jimbo-Miwa-Okamoto \(\sigma\)-form of the Painleve III. Analogously [32], studies the joint density of the two largest eigenvalues at the soft-edge, obtaining an expression in terms of Painleve II transcendents and isomonodromic components using hard-to-soft edge transition [4].
With Proposition 3.1, we derive an expression for the joint PDF of the \(k\) largest eigenvalues in terms of the Fredholm determinant. Let us take for example \(k=2\) and consider the \(N\times N\) GUE. For simplicity let \(K\) be the Hermite kernel (1.2) and \(p\) the \(N\times N\) GUE eigenvalue \(n\)-point correlation function. The \(n\)-point correlation function after forcing (and conditioning on) two eigenvalues on \(x_{1},x_{2}\) is
\[p^{(x_{1},x_{2})}(y_{1},\ldots,y_{n})=\det\left(\left[K^{(x_{1},x_{2})}(y_{i},y_{j})\right]_{i,j=1,\ldots,n}\right).\]
where the kernel \(K^{(x_{1},x_{2})}\) is given as
\[K^{(x_{1},x_{2})}(x,y):=K(x,y)-\begin{bmatrix}K(x,x_{1})\\ K(x,x_{2})\end{bmatrix}^{T}\begin{bmatrix}K(x_{1},x_{1})&K(x_{1},x_{2})\\ K(x_{2},x_{1})&K(x_{2},x_{2})\end{bmatrix}^{-1}\begin{bmatrix}K(x_{1},y)\\ K(x_{2},y)\end{bmatrix}.\]
Then, the joint PDF \(f^{(\lambda_{1},\lambda_{2})}\) of the two largest eigenvalues \(\lambda_{1}\geq\lambda_{2}\) is obtained as the following conditional probability argument assuming \(x_{1}>x_{2}\),
\[f^{(\lambda_{1},\lambda_{2})}(x_{1},x_{2}) =p(x_{1},x_{2})\cdot\mathbb{P}(\text{No other eigenvalues in }(x_{2},\infty)\ |\ \lambda_{1}=x_{1},\lambda_{2}=x_{2})\] \[=\det\left(\begin{bmatrix}K(x_{1},x_{1})&K(x_{1},x_{2})\\ K(x_{2},x_{1})&K(x_{2},x_{2})\end{bmatrix}\right)\cdot\det(I-K^{(x_{1},x_{2}) }{\restriction}_{(x_{2},\infty)}),\]
and \(f^{(\lambda_{1},\lambda_{2})}(x_{1},x_{2})\) vanishes when \(x_{1}\leq x_{2}\).
Similarly for the two smallest eigenvalues of the LUE, one could replace the interval \((x_{2},\infty)\) of the above right hand side with \((0,x_{2})\), where \(0\leq x_{1}\leq x_{2}\) being the two smallest eigenvalues. Again this approach works for any random matrix levels with determinantal \(n\)-point correlation function.
Extending to general \(k\)'s we obtain an expression for the joint PDF of the \(k\) largest eigenvalues, \(f^{(\lambda_{1},\ldots,\lambda_{k})}\), as the following.
**Proposition 3.3**.: _For a random matrix whose eigenvalue \(n\)-point correlation function is given in determinantal representation (2.3) with a kernel \(K\), we have the following joint PDF of the \(k\) largest eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{k}\)_
\[f^{(\lambda_{1},\ldots,\lambda_{k})}(x_{1},\ldots,x_{k})=\det\left(\left[K(x_ {i},x_{j})\right]_{i,j=1,\ldots,k}\right)\cdot\det\left(I-K^{(x_{1},\ldots,x_ {k})}{\restriction}_{(x_{k},\infty)}\right),\]
_for \(x_{1}>\cdots>x_{k}\) and vanishes otherwise, where the kernel \(K^{(x_{1},\ldots,x_{k})}(x,y)\) is defined as,_
\[K^{(x_{1},\ldots,x_{k})}(x,y)\!=\!K(x,y)\!-\!\begin{bmatrix}K(x,x_{1})\\ \vdots\\ K(x,x_{n})\end{bmatrix}^{T}\!\!\begin{bmatrix}K(x_{1},x_{1})&\cdots&K(x_{1},x _{n})\\ \vdots&\ddots&\vdots\\ K(x_{n},x_{1})&\cdots&K(x_{n},x_{n})\end{bmatrix}^{-1}\!\!\begin{bmatrix}K(x_ {1},y)\\ \vdots\\ K(x_{n},y)\end{bmatrix}.\]
In the following sections we give some examples of applications of Proposition 3.3 with numerical experiments. Furthermore the first row of Figure 1 contains some visualizations of these eigenvalue statistics that could be obtained from the joint PDF formula of \(k=2\) and \(k=3\) extreme eigenvalues.
#### 3.4.1. Correlation of the two extreme eigenvalues
In [1] the correlation coefficient \(\rho(\lambda_{1},\lambda_{2})\) of the two largest eigenvalues at the soft-edge is computed, from matrix valued kernels and generating function approach similar to (3.9). On the other hand, we could compute \(\rho(\lambda_{1},\lambda_{2})\) with the following steps:
1. Compute \(\mathbb{E}\lambda_{1}\lambda_{2}\) using the joint PDF \(f^{(\lambda_{1},\lambda_{2})}(x_{1},x_{2})\) from Proposition 3.3 and 2-dimensional Gauss quadrature on a (truncated) triangular region.
2. Compute \(\mathbb{E}\lambda_{1}\), \(\mathbb{E}\lambda_{2}\), \(\sigma\lambda_{1}\), \(\sigma\lambda_{2}\) using the PDF expressions (1.6), (3.10) and the Gauss-Legendre quadrature. Infinite intervals such as \((s,\infty)\) are handled by the strategy explained in Section 7 of [2].
3. Compute \(\rho(\lambda_{1},\lambda_{2})=(\mathbb{E}\lambda_{1}\lambda_{2}-\mathbb{E} \lambda_{1}\mathbb{E}\lambda_{2})/(\sigma\lambda_{1}\sigma\lambda_{2})\).
The total computing time for 11 accurate digits, \(\rho(\lambda_{1},\lambda_{2})=0.50564723159\), is 118 seconds. Previously reported computing time is 16 hours [1]. In addition to the soft-edge, we compute correlation coefficients of the two smallest eigenvalues at the hard-edge for \(\alpha=0,1,2\). See Table 5 for the result.
#### 3.4.2. First eigenvalue spacing
We compute moments of the distance between the first two eigenvalues (the first eigenvalue spacing) by computing the PDF and CDF using Proposition 1.1. In the second row of Figure 1 we also plot some distributions of the first eigenvalue spacing, using the expressions we derive in this section.
The probability that the first spacing is at least \(d\) could be obtained by integrating over \(x\) the probability density of no further eigenvalues existing in \((x-d,\infty)\) given a level at \(x\). From Proposition 1.1 such a probability density is given as (for example, at the soft-edge)
\[K_{\mathrm{Ai}}(x,x)\cdot\det\left(I-K_{\mathrm{Ai}}^{(d)}\mathord{\uparrow}_ {(x-d,\infty)}\right),\]
thus we obtain the CDF \(G(s)\) of the first spacing,
\[G(d)=1-\int_{\mathbb{R}}K_{\mathrm{Ai}}(x,x)\det(I-K_{\mathrm{Ai}}^{(d)} \mathord{\uparrow}_{(x-d,\infty)})dx.\]
\begin{table}
\begin{tabular}{c|c} & \(\rho(\lambda_{1},\lambda_{2})\) \\ \hline \hline Soft-edge & 0.505 647 231 59 \\ \hline \(\alpha=0\) & 0.337 619 085 22 \\ \hline \(\alpha=1\) & 0.391 735 693 02 \\ \hline \(\alpha=2\) & 0.417 187 915 41 \\ \hline \end{tabular}
\end{table}
Table 5. Computed values of correlation coefficients of the two largest eigenvalues at the soft-edge and the two smallest eigenvalues (\(\lambda_{1}<\lambda_{2}\)) at the hard-edge scaling limit. Computation time is 117 seconds for the soft-edge and 139 seconds for the whole hard-edge correlation coefficients. See codes cor-coeff-softedge and cor-coeff-hardedge for the detailed implementation.
Moreover, simply using the joint PDF \(f^{(\lambda_{1},\lambda_{2})}\) of the two eigenvalues computed above we obtain
\[A(d)=\int_{\mathbb{R}}d\cdot f^{(\lambda_{1},\lambda_{2})}(x,x-d)dx, \tag{3.13}\]
which is the PDF of the first spacing. For the implementation we use the Gauss-Legendre quadrature for the integration.
Table 6 is the computed first four moments of the first eigenvalue spacing, up to 12 digits, with a total runtime of 236 seconds. These moments were already computed in [32] up to \(5\sim 9\) digits, with a reported computing time of 5 hours. With (3.13) computation of the moments up to 9 digits needs 29 seconds of a runtime. Values of \(A,G\) of the soft-edge scaling are verified up to 8 digits against Table 2 of [32], with a total computing time of 354 seconds. Computation of \(A,G\) values and comparison with previously known values could be found in first-spacing.
### Sampling Dyson Brownian motion and the Airy process using DPP
In this section we add an additional (time) parameter \(t\) as a random matrix changes through time according to some stochastic processes. A random matrix diffusion or Dyson process, for example Dyson Browian motion (GUE diffusion), is another example of DPPs in random matrix theory. A multitime correlation function for the \(N\times N\) GUE diffusion is given in terms of a block matrix determinant with the _extended Hermite kernel_\(K\)[31],
\[p(x_{t_{s},i_{s}}\ ;\ s=1,\ldots,n\ \text{and}\ i_{s}=1,\ldots,m_{s})=\det \left(\left[K_{j,k}\right]_{j,k=1,\ldots,n}\right), \tag{3.14}\]
where \(K_{j,k}\) is an \(m_{j}\times m_{k}\) matrix,
\[K_{j,k}=\left[K(x_{t_{j},i_{j}},x_{t_{k},i_{k}})\right]_{\begin{subarray}{c}i_ {j}=1,\ldots,m_{j}\\ i_{k}=1,\ldots,m_{k}\end{subarray}}.\]
This is essentially the density of eigenvalues of the random matrix stochastic process at times \(\{t_{s}\}_{s=1,\ldots,n}\) each existing on positions \(x_{t_{s},1},\ldots,x_{t_{s},m_{s}}\). The determinantal multitime correlation function (3.14) also holds for the soft-edge scaling, LUE, and other orthogonal polynomial ensembles with appropriately computed kernels as proved in [11]. Indeed the block matrix determinant could be discretized to a block matrix \(K\) kernel for a DPP as prescribed in Section 2.4.
In particular, with the _extended Airy kernel_
\[K^{\text{ext}}_{s,t}(x,y)=\left\{\begin{array}{rl}\int_{0}^{\infty}e^{- \lambda(s-t)}\operatorname{Ai}(x+\lambda)\operatorname{Ai}(y+\lambda)d\lambda &\text{if }s\geq t\\ -\int_{-\infty}^{0}e^{-\lambda(s-t)}\operatorname{Ai}(x+\lambda)\operatorname {Ai}(y+\lambda)d\lambda&\text{if }s<t\end{array}\right.,\]
one has the multitime correlation function of the soft-edge scaling limit by (3.14). A special interest lies in the largest eigenvalue of this process and is called the _Airy
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Mean & Variance & Skewness & Excess Kurtosis \\ \hline
1.904 350 489 721 & 0.683 252 055 105 & 0.562 291 976 040 & 0.270 091 960 715 \\ \hline \end{tabular}
\end{table}
Table 6. The first four moments of the distance between the first two eigenvalues of the soft-edge scaling, up to 12 digits. Equation (3.13) is used to compute these values.
process_, or just simply, the _Airy process_. Obviously extending (2.2), the largest eigenvalue process is described by the following Fredholm determinant,
\[\mathbb{P}(\mathcal{A}(t_{1})\leq s_{1},\ldots,\mathcal{A}(t_{n})\leq s_{n})= \det\left(I-K\!\!\upharpoonright_{L^{2}(s_{1},\infty)\oplus\cdots\oplus L^{2}( s_{n},\infty)}\right), \tag{3.15}\]
where \(K\) is the block kernel given as
\[K=\begin{bmatrix}K^{\mathrm{ext}}_{t_{1},t_{1}}&\cdots&K^{\mathrm{ext}}_{t_{1 },t_{n}}\\ \vdots&\ddots&\vdots\\ K^{\mathrm{ext}}_{t_{n},t_{1}}&\cdots&K^{\mathrm{ext}}_{t_{n},t_{n}}\end{bmatrix}.\]
The Airy process is related to a number of applications, including the polynuclear growth process [19, 28], the NPR boundary process of the Aztec diamond domino tiling [20] (i.e., the top DR path as \(n\to\infty\) in Section 2.3), totally asymmetric simple exclusion process (TASEP), corner growth process [19], and eventually related to the KPZ universality class with the narrow wedge initial condition.
An example of numerical experiments on the Airy process is [2], where Bornemann use the \(2\times 2\) block Fredholm determinant (3.15) to compute the two-point covariance \(\mathrm{cov}(\mathcal{A}(0),\mathcal{A}(t))\) and compare those values against its large \(t\) and small \(t\) expansions.
Here we demonstrate another numerical experiment on the Airy process: Sampling Airy processes through multitime correlation function (3.14) and the DPP sampler. We use \(201\times 201\) block matrix for sampling. The
Figure 5. Sketch of the sampling technique used to sample the Airy process. Each block represents discretized \(K^{\mathrm{ext}}_{t_{i},t_{j}}\). At each timestep \(t_{i}\) we perform Algorithm 3, starting from \(+\infty\), until we hit the first (largest) eigenvalue. Then we jump to the next timestep, disregarding (not observing) rest of the eigenvalues at the current timestep. Note that each diagonal block of the kernel is the usual Airy kernel (3.2).
non-Hermitian and non-Projection, which could only be sampled by Algorithm 3. A simple multi-step modification of Algorithm 3 is handy; In each timestep \(t_{i}\), we proceed from \(+\infty\) to \(-\infty\), and jump to next block (next timestep \(t_{i+1}\)) when we find a largest eigenvalue at the current timestep. See Figure 5 for the illustration. More details such as discretization and truncating the interval to get finite kernel could be found in Section 2.4.
In the left of Figure 6 is the plot of five samples of the Airy process, sampled from the DPP defined by multitime correlation function of the (soft-edge scaling) GUE diffusion, i.e., the extended Airy kernel. These are exact in a sense that the samples are not large \(N\) asypmtotics where \(N\to\infty\) is the Airy process. Such large \(N\) approximations are drawn in red in the right of Figure 6.
### Numerical experiment details
All codes mentioned in the paper could be found online: [https://github.com/sw2030/RMTexperiments/codes](https://github.com/sw2030/RMTexperiments/codes). For numerical experiments (except Section 3.5) discussed in this work, we used a single core of an Apple M1 Pro CPU. For the Airy process sampling discussed in Section 3.5, we used 64 cores from four Xeon P8 CPUs for computing the DPP kernel and a single core of Xeon P8 CPU for sampling, from the MIT Supercloud server [29].
**Acknowledgements.** This material is based upon work supported by the National Science Foundation under Grant No. DMS-1926686. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. This material is based upon work supported by the National Science Foundation under grant no. OAC-1835443, grant no. SII-2029670, grant no. ECCS-2029670, grant no. OAC-2103804, and grant no. PHY-2021825. We also gratefully acknowledge the U.S. Agency for International Development through Penn State for grant no.
Figure 6. Five samples of the Airy process from \(t=0\) to \(t=5\), sampled at interval points with \(dt=0.025\). We truncated eigenvalue space to \([-5.0,2.5]\), as probabilities that a sample from the Tracy–Widom distribution larger than \(2.5\) and smaller than -5 are both around \(2\times 10^{-5}\). Eigenvalue domain is then discretized into 150 points, finally yielding \(30150\times 30150\) kernel for the DPP. Sampling time for a single sample is around 4 hours. On the right we additionally draw five samples of the largest eigenvalue process \(\lambda_{\max}(t)\) of the Dyson Brownian motion with \(N=200\), recentered and rescaled according to
\[\sqrt{2}N^{\frac{1}{6}}\left(\lambda_{\max}(N^{-\frac{1}{3}}t)-\sqrt{2N} \right),\]
where it is known that as \(N\to\infty\) the above scaling converges to the Airy process.
S002283-USAID. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0001211 and DE-AR0001222. We also gratefully acknowledge the U.S. Agency for International Development through Penn State for grant no. S002283-USAID. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. This material was supported by The Research Council of Norway and Against ASA through Research Council project "308817 - Digital wells for optimal production and drainage". Research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
|
2305.06303 | Explicit Information-Debt-Optimal Streaming Codes With Small Memory | For a convolutional code in the presence of a symbol erasure channel, the
information debt $I(t)$ at time $t$ provides a measure of the number of
additional code symbols required to recover all message symbols up to time $t$.
Information-debt-optimal streaming ($i$DOS) codes are convolutional codes which
allow for the recovery of all message symbols up to $t$ whenever $I(t)$ turns
zero under the following conditions; (i) information debt can be non-zero for
at most $\tau$ consecutive time slots and (ii) information debt never increases
beyond a particular threshold. The existence of periodically-time-varying
$i$DOS codes are known for all parameters. In this paper, we address the
problem of constructing explicit, time-invariant $i$DOS codes. We present an
explicit time-invariant construction of $i$DOS codes for the unit memory
($m=1$) case. It is also shown that a construction method for convolutional
codes due to Almeida et al. leads to explicit time-invariant $i$DOS codes for
all parameters. However, this general construction requires a larger field size
than the first construction for the $m=1$ case. | M. Nikhil Krishnan, Myna Vajha, Vinayak Ramkumar, P. Vijay Kumar | 2023-05-10T16:48:05Z | http://arxiv.org/abs/2305.06303v1 | # Explicit Information-Debt-Optimal Streaming Codes With Small Memory
###### Abstract
For a convolutional code in the presence of a symbol erasure channel, the information debt \(I(t)\) at time \(t\) provides a measure of the number of additional code symbols required to recover all message symbols up to time \(t\). Information-debt-optimal streaming (iDOS) codes are convolutional codes which allow for the recovery of all message symbols up to \(t\) whenever \(I(t)\) turns zero under the following conditions; (i) information debt can be non-zero for at most \(\tau\) consecutive time slots and (ii) information debt never increases beyond a particular threshold. The existence of periodically-time-varying \(i\)DOS codes are known for all parameters. In this paper, we address the problem of constructing explicit, time-invariant \(i\)DOS codes. We present an explicit time-invariant construction of \(i\)DOS codes for the unit memory (\(m=1\)) case. It is also shown that a construction method for convolutional codes due to Almeida et al. leads to explicit time-invariant \(i\)DOS codes for all parameters. However, this general construction requires a larger field size than the first construction for the \(m=1\) case.
## I Introduction
Streaming codes are convolutional codes that ensure decoding within a worst-case delay. In the streaming code literature [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], packet-erasure channel models are often considered. In contrast, we focus on codes over a more general symbol-erasure channel model in this work. Let \((n,k,m)\) be the parameters of a convolutional code, where \(k\) is the number of message symbols per time slot, \(n>k\) is the number of code symbols per time slot, and \(m\) is the memory. The information debt is a measure of the number of additional coded symbols needed to decode the message symbols encoded thus far. This notion was first introduced by Martinian in [15]. The error probability of random linear streaming codes in the large field size regime over i.i.d. symbol erasure channels is characterized in [16] using an information-debt-based argument.
Consider symbol erasure patterns such that information debt stays positive for no more than \(\tau\) consecutive time slots. If the code is capable of recovering message symbols whenever information debt drops to zero, then \(\tau\) can be thought of as a worst-case decoding delay. It is argued in [17] that if information debt goes above \(mk\) in any time slot, then it is not possible to recover all message symbols. With these in mind, the authors of [17] defined \((n,k,m,\tau)\)\(i\)DOS codes as an \((n,k,m)\) convolutional code that is capable of decoding all previously unknown messages in any time slot where information debt becomes zero provided the symbol erasure pattern is such that i) information debt does not stay positive for more than \(\tau\) successive time slots and ii) information debt never crosses \(mk\).
For \(\tau\leq m\), the existence of periodically-time-varying \((n,k,m,\tau)\)\(i\)DOS codes over a sufficiently large field follows from the results in [15]. In [17], this result is extended to all valid parameters over \(\mathbb{F}_{q}\), with \(q>(\tau+1)(\binom{n(\tau+1)}{k(\tau+1)})\). These existence results are based on Combinatorial Nullstellensatz [18] and hence provide no insights that will lead to an explicit construction. The connection of \(i\)DOS codes with two well-known classes of (time-invariant) convolutional codes, namely \(m\)-MDS codes [19] and maximum distance profile (MDP) codes [20], is established in [17]. For \(\tau\leq m\), \(m\)-MDS codes are shown to be \(i\)DOS codes, and for \(\tau\leq m+\lceil\frac{mk}{n-k}\rceil\), MDP codes are shown to be \(i\)DOS codes. For parameters \(\{n,k=1,m=1,\tau\}\), a special case of the MDP code in [21] is shown to yield an explicit construction of \(i\)DOS codes over a field of size \(O(n)\). Apart from the results mentioned above, the paper [17] does not provide explicit \(i\)DOS codes for \(m<\tau\). Small memory is advantageous in scenarios where low complexity encoders are required, such as in sensor networks. Furthermore, having a larger \(\tau\) for a given \(m\) implies recoverability from a larger set of erasure patterns. The question of whether time-invariant \(i\)DOS codes always exist for \(m\ll\tau\) is also unanswered in [17].
### Our Contributions
* As the primary result, we provide an explicit, time-invariant construction of \(i\)DOS codes over \(\mathbb{F}_{2^{d}}\) with unit memory \((m=1)\), for all valid \(\{n,k,\tau\}\), where \(d>(n-1)k^{2}(\tau+1)\).
* We also show that an explicit, time-invariant construction of \(i\)DOS codes for all possible \(\{n,k,m,\tau\}\) follows from a convolutional code construction method due to Almeida et al. [22]. This construction is over a finite field of size \(2^{d}\), where \(d=O(2^{mn+k}(\tau+1)k)\). Notably, for the unit memory case, the former construction requires a smaller field size.
#### Organization of the Paper
In Section II, we first define \(i\)DOS codes. Then, we present some definitions and known results that are needed for the later sections. Our unit memory construction is presented in Section III. In Section IV, we present the general construction.
NotationWe use \(\mathbb{N}\) to denote \(\{1,2,3,\dots\}\). If \(r<s\), we will interpret the sum \(\sum_{i=s}^{r}x_{i}\) as being equal to \(0\). For integers \(x,y\), we define the set of integers \([x:y]=\{i\mid x\leq i\leq y\}\). Furthermore, we use the notation \([x]\) to denote \([1:x]\). We use \(\mathbb{F}_{q}\) to denote the finite field consisting of \(q\) elements, where \(q\) is a prime power. For an \(x\times y\) matrix \(A\) and \(\mathcal{S}\subseteq[x]\), let \(A(\mathcal{S},:)\) denote the submatrix obtained by restricting \(A\) to the rows in \(\mathcal{S}\). Similarly, for \(\mathcal{T}\subseteq[y]\), we use \(A(:,\mathcal{T})\) to denote the submatrix obtained by restricting \(A\) to the columns in \(\mathcal{T}\). Moreover, \(A(\mathcal{S},\mathcal{T})\) denotes the \(|\mathcal{S}|\times|\mathcal{T}|\) submatrix obtained by restricting \(A(\mathcal{S},:)\) to the columns in \(\mathcal{T}\). We use \(A(i,j)\) to denote the element in row-\(i\) and column-\(j\) of \(A\). Let \(M\) denote an \(x\times y\) matrix whose entries are drawn from \(\{-\infty,0\}\cup\mathbb{N}\). If \(\alpha\in\mathbb{F}_{q}\), then \(\alpha^{M}\) denotes an \(x\times y\) matrix over \(\mathbb{F}_{q}\) whose entries are given by \(\{\alpha^{M(i,j)}\mid 1\leq i\leq x,1\leq j\leq y\}\). Here, we set \(\alpha^{-\infty}\triangleq 0\). We use \(\deg(f(x))\) to denote the degree of a polynomial \(f(x)\). For integer \(x\geq 1\), let \(S_{x}\) denote the symmetric group consisting of all permutations of the set \([x]\). Let \(\mathcal{S}\subseteq[x]\) and \(\sigma\in S_{x}\). We define \(\sigma(\mathcal{S})\triangleq\{\sigma(i)\mid i\in\mathcal{S}\}\).
## II Preliminaries
The first three subsections of this section focus on providing background on the \(i\)DOS code setting, for which we follow the notation from [17]. The latter part of the section introduces some definitions and results that are needed for the proofs of explicit constructions.
### _Convolutional Codes_
An \((n,k,m)\) convolutional code over \(\mathbb{F}_{q}\) can be described as follows. The encoder gets \(k\) message symbols and outputs \(n>k\) coded symbols in each time slot \(t\in\mathbb{N}\). These message symbols are denoted by \(\underline{s}(t)=[s_{1}(t)\dots s_{k}(t)]^{T}\in\mathbb{F}_{q}^{k}\) and the code symbols are given by \(\underline{c}(t)=[c_{1}(t)\dots c_{n}(t)]^{T}\in\mathbb{F}_{q}^{n}\). The memory of the encoder is \(m\). There is an \(n\times(m+1)k\) matrix over \(\mathbb{F}_{q}\), denoted by \(G_{t}\), such that
\[\underline{c}(t)=G_{t}\begin{bmatrix}\underline{s}(t-m)\\ \underline{s}(t-m+1)\\ \dots\\ \underline{s}(t)\end{bmatrix}.\]
For all \(t\leq 0\), we set \(\underline{s}(t)=0\). The convolutional code is said to be time-invariant if \(G_{t}\) is the same for all \(t\in\mathbb{N}\). Let \(G=[G^{(m)}\ G^{(m-1)}\ \dots\ G^{(0)}],\) where each \(G^{(i)}\) is an \(n\times k\) matrix over \(\mathbb{F}_{q}\). The current paper focuses only on time-invariant constructions, and hence we set \(G_{t}=G\) for all \(t\in\mathbb{N}\). By abuse of notation, we will refer to \(G\) as the _generator matrix_. The \(n\) symbols belonging to \(\underline{c}(t)\) are sent to the receiver in time slot \(t\). We describe symbol erasure patterns using sets \(\mathcal{R}_{t}\subseteq[n]\) such that the receiver receives \(\{c_{j}(t)\mid j\in\mathcal{R}_{t}\}\) in time slot \(t\) and \(\{c_{j}(t)\mid j\in[n]\setminus\mathcal{R}_{t}\}\) are erased by the channel. The number of non-erased code symbols in time slot \(t\) is denoted by \(n_{t}=|\mathcal{R}_{t}|\).
### _Information Debt_
Information debt, introduced by Martinian [15], is a measure of the extra code symbols needed at the decoder to decode all unknown message symbols. The information debt at time slot \(0\) is set as zero. The information debt at any time slot \(t\in\mathbb{N}\) is given by \(I(t)=\max\{k-n_{t}+I(t-1),0\}\) for all \(t\in\mathbb{N}\). Let \(\theta_{0}=0\). For any symbol erasure pattern, one can identify time slots \(\{\theta_{i}\}_{i=1}^{\infty}\) such that
\[\theta_{i+1}=\inf\{t>\theta_{i}\mid I(t)=0\}.\]
### \(i\)DOS Codes
The goal of \(i\)DOS codes is to recover messages whenever information debt drops to zero. It is shown in [17] that if \(I(t)>mk\) for any time slot \(t\), then there exists no \((n,k,m)\) convolutional code capable of decoding all the message symbols. To ensure a worst-case decoding delay of \(\tau\) whenever recovery is possible, it is required that \(\theta_{i+1}-\theta_{i}\leq\tau+1\), for all \(i\). With this background, \(i\)DOS codes can be formally defined as follows.
**Definition 1** ([17]).: A symbol erasure pattern is said to be \((n,k,m,\tau)\)-acceptable if \(I(t)\leq mk\) for all \(t\in\mathbb{N}\), and \(\theta_{i+1}-\theta_{i}\leq\tau+1\) for all \(i\in\mathbb{N}\cup\{0\}\). An \((n,k,m,\tau)\)\(i\)DOS code is an \((n,k,m)\) convolutional code which is such that for all \(i\in\mathbb{N}\cup\{0\}\), \(\{\underline{s}(t)\mid t\in[\theta_{i}+1:\theta_{i+1}]\}\) are recoverable at time \(\theta_{i+1}\) over every \((n,k,m,\tau)\)-acceptable symbol erasure pattern.
The requirement that the messages need to be recovered whenever information debt drops to zero, is intuitively related to the non-singularity of certain matrices obtained from the generator matrix. In the rest of this section, we will identify a sufficient property to be possessed by an \(x\times x\) matrix \(M\) so that \(\alpha^{M}\) is non-singular. This idea forms the core of our unit-memory construction.
### _Dominant Permutation_
**Definition 2** (Dominant Permutation of a Matrix).: Consider an \(x\times x\) matrix \(M\) composed of elements drawn from \(\{-\infty,0\}\cup\mathbb{N}\). The permutation \(\sigma^{*}\in S_{x}\) (if exists) is referred to as the _dominant permutation_ if the following is true:
\[\sum_{i=1}^{x}M(\sigma^{*}(i),i)>\sum_{i=1}^{x}M(\sigma(i),i),\forall\sigma \in S_{x}\setminus\{\sigma^{*}\}.\]
Furthermore, if such a \(\sigma^{*}\) exists, we refer to the sum \(\sum_{i=1}^{x}M(\sigma^{*}(i),i)\) as the _dominant sum_ of \(M\). We make the following simple observation.
**Remark 1**.: Assume that for the matrix \(M\), we have \(M(i,j)=-\infty\). If \(\sigma\in S_{x}\) is such that \(\sigma(j)=i\), clearly, \(\sigma\) cannot be a dominant permutation of \(M\).
**Definition 3** (Dominant Submatrix).: Consider an \(x\times y\) matrix \(M\) composed of elements from \(\{-\infty,0\}\cup\mathbb{N}\), where \(y\leq x\). A \(y\times y\) submatrix \(\tilde{M}\) of \(M\) (if exists) is referred to as the _dominant submatrix_ of \(M\), if the following two conditions hold; (i) \(M\) possesses a dominant permutation, (ii) among all the \(y\times y\) submatrices which possess a dominant permutation, \(\tilde{M}\) yields the single largest dominant sum - i.e., dominant sums (if exist) of all other submatrices are _strictly_ smaller.
We make the following straightforward observation which relates dominant submatrices and the existence of dominant permutation.
**Remark 2** (Submatrix Decomposition Strategy).: Consider an \(x\times x\) matrix \(M\) composed of elements from \(\{-\infty,0\}\cup\mathbb{N}\). Let \(\mathcal{A}_{1},\ldots,\mathcal{A}_{l}\) denote a partition of the columns \([x]\) of \(M\). Assume that for each \(\mathcal{A}_{i}\), the submatrix \(M(:,\mathcal{A}_{i})\) possesses a dominant submatrix \(M_{\mathcal{A}_{i}}\). Let \(M_{\mathcal{A}_{i}}\) be occupying the rows \(\mathcal{B}_{i}\subseteq[x]\) and let \(s_{i}\) denote the dominant sum of \(M_{\mathcal{A}_{i}}\). If the rows \(\mathcal{B}_{1},\ldots,\mathcal{B}_{l}\) do not intersect (i.e., they form a partition of \([x]\)), it follows that \(M\) possesses a dominant permutation. Moreover, the dominant sum of \(M\) is given by \(\sum_{i=1}^{l}s_{i}\). We will refer to this simple strategy of showing the existence of the dominant permutation for a larger matrix by leveraging the existence of smaller dominant submatrices whose rows do not intersect, as the _submatrix decomposition strategy_.
For the matrix illustrated in Fig. 1, the dominant permutation is given by \(\sigma^{*}=(1,2,4,3)\) and the corresponding dominant sum is \(16\). In the following example, we discuss a slight variation of the submatrix decomposition strategy by making use of the observation made in Remark 1.
**Example 1**.: Consider the matrix \(M\) illustrated in Fig. 2. Let \(\mathcal{A}_{1}=\{1,2\}\) and \(\mathcal{A}_{2}=\{3,4\}\). We highlight the corresponding dominant submatrices using green and blue rectangles, respectively. As the rows of these submatrices intersect, we cannot employ the naive submatrix decomposition strategy. However, from Remark 1, it can be inferred that as \(M(1,3)=M(1,4)=-\infty\), if there is a dominant permutation \(\sigma^{*}\), it should be that \(1\notin\sigma^{*}(\mathcal{A}_{2})\). In other words, \(1\in\sigma^{*}(\mathcal{A}_{1})\). Essentially, the implication here is that \(\sigma^{*}\) should "pass through" row \(1\), when restricted to the columns in \(\mathcal{A}_{1}\). As a result, while searching for the dominant submatrix of \(M(:,\mathcal{A}_{1})\), we will consider only those submatrices which involve row \(1\). Given this constraint, it can be identified that the submatrix demarcated using the red dashed rectangle is the (constrained) dominant submatrix. Since the rows of the constrained dominant submatrix and the dominant submatrix (indicated in blue) are not intersecting, it follows
Fig. 1: In this figure, we illustrate how the submatrix decomposition strategy can be utilized to show that the given matrix \(M\) possesses a dominant permutation. We choose: \(\mathcal{A}_{1}=\{1,2\},\mathcal{A}_{2}=\{3,4\}\). The dominant submatrices associated with \(M(:,\mathcal{A}_{1})\) and \(M(:,\mathcal{A}_{2})\) are demarcated using red and blue rectangles, respectively. As the rows of these submatrices do not intersect, it follows that \(M\) possesses a dominant permutation.
that the matrix \(M\) possesses a dominant permutation. We will utilize this idea of constrained dominant submatrices later in our proofs.
The following lemma motivates our ongoing discussion on matrices that possess dominant permutations. We attribute this lemma to an earlier work by Almeida et al. [22], which explores similar ideas. The proof of this lemma is deferred to Appendix C.
**Lemma 1**.: _Consider an \(x\times x\) matrix \(M\) with elements drawn from \(\{-\infty,0\}\cup\)\(\mathbb{N}\). Assume \(M\) possesses the dominant permutation \(\sigma^{*}\) and the dominant sum \(s_{\sigma^{*}}\). Let \(\alpha\) be a primitive element of \(\mathbb{F}_{p^{d}}\), where \(d>s_{\sigma^{*}}\). Then, \(\alpha^{M}\) is non-singular._
## III Construction A: Explicit, Unit Memory Construction
In this section, we present an explicit construction of \(i\)DOS codes with unit memory (i.e., \(m=1\)), for all parameters \(\{n,k,\tau\}\). Without loss of generality, we will henceforth take the characteristic of the underlying finite field for our constructions to be two, i.e., \(p=2\). To highlight the key ideas, we first discuss an example for parameters \(\{n=4,k=2,\tau=2\}\) in Sec. III-A. The general construction for any \(\{n,k,\tau\}\) is described in Sec. III-B.
### _Example: \(\{n=4,k=2,m=1,\tau=2\}\)_
Let \(\alpha\) be a primitive element of \(\mathbb{F}_{2^{d}}\). For now, we will assume \(d\) to be sufficiently large and later in the section we will explicitly specify a value for \(d\). We set \(G^{(0)}\triangleq\alpha^{M^{(0)}}\) and \(G^{(1)}\triangleq\alpha^{M^{(1)}}\), where:
\[M^{(0)}\triangleq\begin{bmatrix}0&0\\ 1&2\\ 2&4\\ 3&6\end{bmatrix},\;\;M^{(1)}\triangleq\begin{bmatrix}6&3\\ 4&2\\ 2&1\\ 0&0\end{bmatrix}.\]
Recall the definitions of \(\mathcal{R}_{t}\), \(\{\theta_{i}\}\) presented in Sections II-A and II-B, respectively. Let \(\theta_{i+1}-\theta_{i}\triangleq\ell\). We will now show that all the message symbols in \(\{\underline{s}(t)\mid t\in[\theta_{i}+1:\theta_{i}+\ell]\}\) can be recovered by the receiver using the available non-erased code symbols \(\{c_{j}(t)\mid t\in[\theta_{i}+1:\theta_{i}+\ell],j\in\mathcal{R}_{t}\}\). This is under the assumption that the message symbols in \(\{\underline{s}(t^{\prime})\mid t^{\prime}\in[\theta_{i}]\}\) are already known to the receiver (the assumption is trivially true when \(i=0\)). After removing the contribution of these message symbols, it is as if the transmitter has sent \(\underline{\hat{c}}(\theta_{i}+1)=G^{(0)}\underline{s}(\theta_{i}+1)\) and \(\underline{\hat{c}}(t^{\prime})=G^{(1)}\underline{s}(t^{\prime}-1)+G^{(0)} \underline{s}(t^{\prime})\), where \(t^{\prime}\in[\theta_{i}+2:\theta_{i}+\ell]\). Thus, effectively, the received code symbols in time slots \([\theta_{i}+1:\theta_{i}+\ell]\) are given by \(\{\hat{c}_{j}(t)\mid t\in[\theta_{i}+1:\theta_{i}+\ell],j\in\mathcal{R}_{t}\}\), where \(\underline{\hat{c}}(t)=[\hat{c}_{1}(t)\;\;\cdots\;\hat{c}_{n}(t)]^{T}\).
By definition of \(\{\theta_{i}\}\), we have \(I(\theta_{i})=I(\theta_{i}+\ell)=0\). Hence, \(\sum_{t\in[\theta_{i}+1:\theta_{i}+\ell]}n_{t}\geq k\ell=2\ell\). Without loss of generality, we will consider here only the worst-case scenario \(\sum_{t\in[\theta_{i}+1:\theta_{i}+\ell]}n_{t}=2\ell\). In addition, by Definition 1, we have \(I(t)\leq mk=2\) and \(\ell\leq\tau+1=3\). As \(I(t)\leq 2\) and \(I(t)>0\) for \(t\in[\theta_{i}+1:\theta_{i}+\ell-1]\), it follows that: \(2(\ell^{\prime}-1)\leq\sum_{t\in[\theta_{i}+1:\theta_{i}+\ell^{\prime}]}n_{t}< 2\ell^{\prime}\), for all \(\ell^{\prime}\in[\ell-1]\). Thus, we restrict ourselves to \(\{n_{\theta_{i}+1},\ldots,n_{\theta_{i}+\ell}\}\) satisfying three conditions:
1. \(\ell\leq\tau+1=3\),
2. \(\sum_{t\in[\theta_{i}+1:\theta_{i}+\ell]}n_{t}=2\ell\),
3. \(2(\ell^{\prime}-1)\leq\sum_{t\in[\theta_{i}+1:\theta_{i}+\ell^{\prime}]}n_{t} <2\ell^{\prime},\;\ell^{\prime}\in[\ell-1]\).
Let \(\mathcal{R}_{t}=\{i_{1},i_{2},\ldots,i_{n_{t}}\}\subseteq[n]\) and \(\underline{\hat{c}}(t)\triangleq[\hat{c}_{i_{1}}(t)\;\;\cdots\;\hat{c}_{i_{n_ {t}}}(t)]^{T}\). At time \((\theta_{i}+\ell)\), thus the decoder essentially has the following matrix equation to solve:
\[\begin{bmatrix}\underline{\hat{c}}(\theta_{i}+1)\\ \vdots\\ \underline{\hat{c}}(\theta_{i}+\ell)\end{bmatrix}=G_{\text{dec}}\begin{bmatrix} \underline{s}(\theta_{i}+1)\\ \vdots\\ \underline{s}(\theta_{i}+\ell)\end{bmatrix},\]
where \(G_{\text{dec}}\) is a _decoding matrix_ of size \(k\ell\times k\ell\) (i.e., \(2\ell\times 2\ell\)). Note that \(G_{\text{dec}}\) is not a constant and is a function of the symbol erasure pattern. The code is an \(i\)DOS code if and only if \(G_{\text{dec}}\) is non-singular for any symbol erasure pattern such
Fig. 2: For the given matrix \(M\), the dominant permutation is \(\sigma^{*}=(2,1,3,4)\). We argue the existence of the dominant permutation in Example 1 using a variant of the submatrix decomposition strategy.
that \(\{n_{\theta_{i}+1},\ldots,n_{\theta_{i}+\ell}\}\) satisfy the conditions (1)-(3). The structure of decoding matrices for all the possible symbol erasure scenarios is illustrated in Fig. 3. We illustrate the corresponding exponents (with respect to \(\alpha\)) in Fig. 4. The high-level idea of the construction is the following. For each \(2\ell\times 2\ell\) decoding matrix \(G_{\text{dec}}\), we have a corresponding \(2\ell\times 2\ell\)_exponent matrix_\(M_{\text{dec}}\) in Fig. 4. We will show that all the exponent matrices have dominant permutations. This will prove that for a large enough degree of the field extension \(d\), the corresponding decoding matrices are non-singular matrices (by applying Lemma 1). We will specify an explicit value for \(d\) later in the section.
columns \([(i-1)k+1:ik]\).
For the \((0,4)\) scenario, by **P1** and **P3** (with \(r=2\)), the lowermost \(2\times 2\) submatrix (demarcated by a red rectangle in Fig. 5(a)) is the dominant submatrix of the \(2^{\text{nd}}\) thick column. Similarly, the topmost \(2\times 2\) submatrix (shown by a blue rectangle) is the dominant submatrix of the \(1^{\text{st}}\) thick column by **P2** and **P3** (with \(r=0\)). Because the rows of these submatrices do not overlap, any \(4\times 4\) exponent matrix \(M_{\text{dec}}\) of the form illustrated in Fig. 5(a) has a dominant permutation using the submatrix decomposition approach.
For the \((1,3)\) scenario, using similar arguments, the lowermost \(2\times 2\) submatrix (demarcated by a red rectangle in Fig. 5(b)) may be shown to be the dominant submatrix of the \(2^{\text{nd}}\) thick column. As \(M_{\text{dec}}(1,3)=M_{\text{dec}}(1,4)=-\infty\), if there is a dominant permutation \(\sigma^{*}\), it should be that \(1\in\sigma^{*}(\{1,2\})\) (similar to the scenario in Example 1). As a result, for the \(1^{\text{st}}\) thick column, we limit our focus to those \(2\times 2\) submatrices that involve row \(1\). It is worth noting that row \(1\) of the \(1^{\text{st}}\) thick column is a row of \(M^{(0)}\), whereas any other row of the thick column is a row of \(M^{(1)}\). If we augment one row each from \(M^{(0)}\) and \(M^{(1)}\), the property **P3** (with \(r=1\)) ensures the presence of a dominant permutation. Furthermore, based on **P2**, the topmost \(2\times 2\) submatrix (shown using a blue dashed rectangle) is the (constrained) dominant submatrix of the \(1^{\text{st}}\) thick column. Because the rows of the red and blue dominant submatrices do not intersect, using submatrix decomposition, it follows that any \(4\times 4\) exponent matrix \(M_{\text{dec}}\) of the form illustrated in Fig. 5(b) has a dominant permutation.
_Case \(\ell=3\)_: There are four possibilities for \((n_{\theta_{i}+1},n_{\theta_{i}+2},n_{\theta_{i}+3})\) here; \((0,2,4)\), \((0,3,3)\), \((1,1,4)\) and \((1,2,3)\). We illustrate these four cases in Fig. 6. The basic idea remains the same as in the \(\ell=2\) case, i.e., identification of dominant submatrices whose rows do not intersect. By **P1** and **P3** (with \(r=2\)), it follows that the submatrices demarcated by red rectangles within the \(3^{\text{nd}}\) thick column are dominant submatrices (for all the four scenarios). For the \((0,2,4)\) scenario illustrated in Fig. 6(a), due to the presence of \(-\infty\) elements within the \(1^{\text{st}}\) thick column of \(M_{\text{dec}}\), it may be noted that any dominant permutation \(\sigma^{*}\) (if exists) should satisfy \(\sigma^{*}([3:6])=[3:6]\). As a result, we will search for the (constrained) dominant submatrix within the \(2^{\text{nd}}\) thick column such that only rows \(\{3,4,5,6\}\) are permitted. It now follows from properties **P2** and **P3** (with \(r=0\)) that the blue dashed rectangle depicts the constrained dominant submatrix of the \(2^{\text{nd}}\) thick column. The submatrix highlighted in green is a dominant submatrix by **P3** (with \(r=0\)). Because there is no intersection of rows among these dominant submatrices, the \(6\times 6\) matrix has a dominant permutation.
For the \((1,2,3)\) scenario (illustrated in Fig. 6(d)), due to the \(-\infty\)'s in the \(1^{\text{st}}\) thick column, it should be that \(\sigma^{*}([3:6])\supseteq[4:6]\). Because of the \(-\infty\)'s in the \(3^{\text{rd}}\) thick column, it should be that \(\sigma^{*}([5:6])\subseteq[4:6]\). It follows that \(\sigma^{*}([3:4])\) should contain precisely one element from \([4:6]\). Moreover, due to the \(-\infty\)'s in the \(2^{\text{nd}}\) thick column, we have: \(\sigma^{*}([3:4])\subseteq[2:6]\). Thus, \(\sigma^{*}([3:4])\) should contain precisely one element from \([2:3]\). Pictorially, these constraints correspond to selecting one blue row and one red row within the \(2^{\text{nd}}\) thick column. Due to **P1**, **P2** and **P3** (with \(r=1\)), it fol
Fig. 5: Here, we consider \(\ell=2\). (a) \((n_{\theta_{i}+1},n_{\theta_{i}+2})=(0,4)\); (b) \((n_{\theta_{i}+1},n_{\theta_{i}+2})=(1,3)\). Vertical dashed lines demarcate thick columns. Submatrices demarcated using solid red and blue rectangles are dominant submatrices. The constrained dominant submatrix is indicated using a blue dashed rectangle.
Fig. 6: Here, we consider \(\ell=3\). There are four possibilities for \((n_{\theta_{i}+1},n_{\theta_{i}+2},n_{\theta_{i}+3})\); (a) \((0,2,4)\), (b) \((0,3,3)\), (c) \((1,1,4)\) and (d) \((1,2,3)\). Vertical dashed lines demarcate thick columns. Submatrices demarcated using solid red and green rectangles are dominant submatrices. Constrained dominant submatrices are delineated using blue and green dashed rectangles.
demarcated using a blue dashed rectangle is the constrained dominant submatrix. With regard to the \(1^{\text{st}}\) thick column, owing to the constraint that \(1\notin\sigma^{*}([3:6])\), we have that \(1\in\sigma^{*}([1:2])\) and \(|\sigma^{*}([1:2])\cap[2:3]|=1\). This corresponds to selecting the red row and one of the two blue rows in the \(1^{\text{st}}\) thick column. From **P2** and **P3** (with \(r=1\)), it follows that the submatrix indicated in the green dashed rectangle is the constrained dominant submatrix. As rows of these three submatrices do not intersect, it follows that the \(6\times 6\) matrix possesses a dominant permutation. Proofs for the scenarios \((0,3,3)\) and \((1,1,4)\) follow along similar lines. Since the largest term of matrices \(M^{(0)}\), \(M^{(1)}\) is \(6\), it can be noted that the dominant sum of any exponent matrix will be at most \(6*k*(\tau+1)=36\). Hence, choosing \(d>36\) guarantees that all the decoding matrices are non-singular (by Lemma 1).
### _Construction A for any \(\{n,k,m=1,\tau\}\)_
In this subsection, we describe our unit memory, explicit construction for any \(\{n,k,\tau\}\).
**Construction A**.: Let \(\alpha\) be a primitive element of \(\mathbb{F}_{2^{d}}\). We describe an \((n,k,m=1)\) convolutional code over \(\mathbb{F}_{2^{d}}\) by defining the \((n\times 2k)\) generator matrix \(G=[G^{(1)}\ G^{(0)}]\), where \(G^{(0)}=\alpha^{M^{(0)}},G^{(1)}=\alpha^{M^{(1)}}\). We choose the \(n\times k\) matrices \(M^{(0)},M^{(1)}\) as follows:
\[M^{(0)}(i,j)=(i-1)*j,\ \ \ M^{(1)}(i,j)=(n-i)*(k+1-j).\]
**Theorem 1**.: _The \((n,k,m=1)\) convolutional code defined by Construction A is an \((n,k,m=1,\tau)\) iDOS code over \(\mathbb{F}_{2^{d}}\) if \(d>(n-1)k^{2}(\tau+1)\)._
The proof of the theorem is in Appendix D.
## IV Construction B: Explicit Construction For All Parameters
The following convolutional code construction is a special case of the construction in [22]. We will show that this code is an \((n,k,m,\tau)\)\(i\)DOS code if the finite field size is sufficiently large.
**Construction B**.: Let \(\alpha\) be a primitive element of \(\mathbb{F}_{2^{d}}\). We describe an \((n,k,m)\) convolutional code over \(\mathbb{F}_{2^{d}}\) by defining the \((n\times(m+1)k)\) generator matrix \(G=[G^{(m)}\ \cdots\ G^{(0)}]\). For \(t\in[0:m]\), \(G^{(t)}\in\mathbb{F}_{2^{d}}^{n\times k}\) takes the form \(G^{(t)}=\alpha^{M^{(t)}}\). We choose the \(n\times k\) matrices \(\{M^{(t)}\}\) as follows:
\[M^{(t)}(i,j)=2^{t*n+i+k-1-j}.\]
**Example 2**.: The generator matrix as per Construction B for parameters \((n=4,k=2,m=2)\) is given by:
\[G=[\alpha^{M^{(2)}}\ \alpha^{M^{(1)}}\ \alpha^{M^{(0)}}],\]
where:
\[M^{(0)}=\left[\begin{array}{cc}2^{1}&2^{0}\\ 2^{2}&2^{1}\\ 2^{3}&2^{2}\\ 2^{4}&2^{3}\end{array}\right],M^{(1)}=\left[\begin{array}{cc}2^{5}&2^{4}\\ 2^{6}&2^{5}\\ 2^{7}&2^{6}\\ 2^{8}&2^{7}\end{array}\right]\]
\[\text{and}\ M^{(2)}=\left[\begin{array}{cc}2^{9}&2^{8}\\ 2^{10}&2^{9}\\ 2^{11}&2^{10}\\ 2^{12}&2^{11}\end{array}\right].\]
**Theorem 2**.: _The \((n,k,m)\) convolutional code defined by Construction B is an \((n,k,\tau,m)\) iDOS code over \(\mathbb{F}_{2^{d}}\) if \(d>2^{((m+1)n+k-2)}(\tau+1)k\)._
The proof can be found in Appendix F.
**Remark 3**.: When \(m=1\), Construction B has a much larger field extension degree requirement of \(d>2^{(2n+k-2)}(\tau+1)k\) compared to the \(d>(n-1)k^{2}(\tau+1)\) requirement of Construction A. |
2301.12859 | Lattice gauge theory and topological quantum error correction with
quantum deviations in the state preparation and error detection | Quantum deviations or coherent noise are a typical type of noise when
implementing gate operations in quantum computers, and their impact on the
performance of quantum error correction (QEC) is still elusive. Here, we
consider the topological surface code, with both stochastic noise and coherent
noise on the multi-qubit entanglement gates during stabilizer measurements in
both initial state preparation and error detection. We map a multi-round error
detection protocol to a three-dimensional statistical mechanical model
consisting of Z_2 gauge interactions and related the error threshold to its
phase transition point. Specifically, two error thresholds are identified
distinguishing different error correction performances. Below a finite error
threshold, in stark contrast to the case with only stochastic errors,
unidentifiable measurement errors can cause the failure of QEC in the large
code distance limit. This problem can only be fixed at the perfect initial
state preparation point. For a finite or small code with distance d, we find
that if the preparation error rate is below a crossover scale ~1/\log(d), the
logical errors can still be suppressed. We conclude that this type of
unavoidable coherent noise has a significant impact on QEC performance, and
becomes increasingly detrimental as the code distance increases. | Yuanchen Zhao, Dong E. Liu | 2023-01-30T13:12:41Z | http://arxiv.org/abs/2301.12859v3 | Lattice gauge theory and topological quantum error correction with quantum deviations in the state preparation and error detection
###### Abstract
Quantum deviations or coherent noise are a typical type of noise when implementing gate operations in quantum computers, and their impact on the performance of quantum error correction (QEC) is still elusive. Here, we consider the topological surface code, with both stochastic noise and coherent noise on the multi-qubit entanglement gates during stabilizer measurements in both initial state preparation and error detection. We map a multi-round error detection protocol to a three-dimensional statistical mechanical model consisting of \(\mathbb{Z}_{2}\) gauge interactions and related the error threshold to its phase transition point. Specifically, two error thresholds are identified distinguishing different error correction performances. Below a finite error threshold, in stark contrast to the case with only stochastic errors, unidentifiable measurement errors can cause the failure of QEC in the large code distance limit. This problem can only be fixed at the perfect initial state preparation point. For a finite or small code with distance \(d\), we find that if the preparation error rate is below a crossover scale \(\propto 1/\log d\), the logical errors can still be suppressed. We conclude that this type of unavoidable coherent noise has a significant impact on QEC performance, and becomes increasingly detrimental as the code distance increases.
+
Footnote †: Corresponding to: [email protected]
## I Introduction
Quantum supremacy was recently claimed in cutting-edge quantum processors [1; 2; 3], which is a major breakthrough in the field of quantum computation. Because to their noisy character, the current state-of-the-art quantum devices [1; 2; 3; 4; 5; 6; 7] are classified as noisy intermediate-scale quantum (NISQ) [8] computer, and the observed quantum supremacy is only a weakened version with few practical applications [8]. To date, the merely known examples with worthwhile quantum advantages are only expected in fault tolerant quantum computers with quantum error correction (QEC) [9; 10; 11]. Recently, QEC codes with small system size are being put to the test in experiments [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22].
A key concept of fault tolerance is the "error threshold theorem", which states that if the physical error rate is below an error threshold, quantum computation with arbitrary logical accuracy can be in principle implemented in the noisy quantum devices [23; 24; 25; 26]. The threshold theorem is well-established if the device noise can be captured by independent stochastic errors [23; 27; 28; 29]. However, actual quantum devices suffer from more general type of errors. With correlated errors, the threshold theorem is modified for a more conceptual infidelity measure, e.g. diamond norm [24], for correlations with weak amplitude [30] and weak length [31], and for the environment with critical behaviors [32; 33]. A more practical type of noise comes from the inefficient qubit calibration and imperfect control of gate operations, causing quantum deviation or coherent effect in errors. This problem motivated recent studies for the independent single-qubit coherent errors [34; 35; 36; 37; 38; 39; 40; 41] and the detection induced coherent errors from entanglement gate noise [42; 43]. We emphasize that the two-qubit entanglement gates are much harder to calibrate and more error-prone than single-qubit gate. Nevertheless, the fault-tolerance and error threshold theorem is not established with any kind of coherent errors. Understanding this issue is exceedingly challenging due to the lack of analytical and numerical tools. Although an efficient numerical strategy exists for a special independent coherent error model [36], the general coherent error problems, that goes beyond the Clifford algebra, cannot be simulated efficiently in classical computer. Of that situation, there is no solid foundation for fault-tolerant quantum computation with practical quantum devices. This motivates us to build a theoretical framework to study the coherent error problems in QEC and threshold theorem.
_Summary of the main results:_ In this work, we study the performance of toric code QEC [44] under imperfect measurement while applying the common multi-round syndrome measurements [28], together with the stochastic Pauli errors on physical qubits. We assume that the measurement circuit for both initial state preparation and error detection are suffering from coherent noise. We find that this QEC model can be mapped to a 3D quenched disordered SM model constituted by \(\mathbb{Z}_{2}\) gauge interaction terms. Remarkably, this model has a non-local correlation term at timelike direction originates from the imperfection during initial state preparation. We further find that the Wilson loops in this SM model have an anisotropic behavior: The timelike Wilson loops deconfine at low temperatures (small physical error rates) and confine at high temperatures (large physical error rates); but the space-like Wilson loops confine at any finite temperature, resulting from the non-local timelike correlation. Taking the results of this SM model, we predict that there are two thresholds in our QEC model. The confinement-deconfinement transition
point of timelike Wilson loops signifies a theoretical threshold located at finite measurement error rate and finite Pauli error rate, above which the QEC fails due to non-contractible logical errors. The confinement behavior of spacelike Wilson loops suggests that the measurement error threshold seats at the point where the initial state preparation is perfect. Above the measurement error threshold, if we only take finite error history while decoding, the measurement errors will no longer be distinguished from Pauli errors, which could result in the failure of QEC even in the limit of large code distance. With a finite code distance \(d\), the effectiveness of the pragmatic QEC approach remains viable when the error rate associated with state preparation falls within a region \(\sim 1/\log d\). Finally, we emphasize that a more realistic imperfect measurement model relating to Fig. 1(b) will in general has a worse performance, see the discussion of Eq. (27).
## II Toric code-a brief review
We follow the construction of topological surface code on a torus, i.e. toric code [27; 28; 29; 44]. It is a stabilizer code defined on a \(2\)-d periodic square lattice. There are two kinds of stabilizers associated with vertices and plaquettes respectively as shown in Fig.1(a),
\[A_{v}=\prod_{e_{0}\mid v_{0}\in\mathcal{O}e_{0}}X_{e_{0}},\quad B_{p_{0}}= \prod_{e_{0}\in\mathcal{O}p_{0}}Z_{e_{0}}. \tag{1}\]
Here we use the symbols \(v_{0}\) and \(p_{0}\) to label vertex and plaquette operators. \(X_{e_{0}}\) and \(Z_{e_{0}}\) represent Pauli operators acting on qubit \(e_{0}\) (we call \(e_{0}\) as "edge"). \(A_{v_{0}}\) is the product of four Pauli \(X\) operators around vertex \(v_{0}\) and \(B_{p_{0}}\) is the product of four Pauli \(Z\) operators around plaquette \(p_{0}\). We assume the lattice contains \(N\) vertices, \(N\) plaquettes and \(2N\) edges. \(2N\) physical qubits are put on each edge of the lattice. Its four-dimensional code subspace is stabilized by all \(A_{v_{0}}\)'s and \(B_{p_{0}}\)'s which is achieved through projective measurement of these stabilizers. Specifically, we start with the logical \(++\) state by project all the \(B_{p_{0}}\)'s to \(+1\) for a product state of physical qubits \(\bigotimes_{e_{0}}\left|+\right\rangle_{e_{0}}\):
\[\left|++\right\rangle=\prod_{p_{0}}\frac{I+B_{p_{0}}}{2}\bigotimes_{e_{0}} \left|+\right\rangle_{e_{0}} \tag{2}\]
and other logical bases are obtained by applying logical Pauli \(Z\) operators, \(\left|-+\right\rangle=Z_{l_{1}}\left|++\right\rangle\), \(\left|+-\right\rangle=Z_{l_{2}}\left|++\right\rangle\) and \(\left|--\right\rangle=Z_{l_{1}}Z_{l_{2}}\left|++\right\rangle\). Here \(l_{1}\) and \(l_{2}\) denote non-contractible loops on the periodic lattice. Logical Pauli \(Z\) operators \(Z_{l_{1}}\) and \(Z_{l_{2}}\) are product of \(Z\)'s along these non-contractible loops. Correspondingly there are also logical Pauli \(X\) operators
Figure 1: (a) Toric code defined on \(2\)-d periodic lattice. Physical qubits stay on the edges of the lattice. The two kinds of stabilizers are \(A_{v}=\prod_{e\in\mathcal{O}e}X_{e}\) defined on each vertex and \(B_{p}=\prod_{e\in\mathcal{O}p}Z_{e}\) defined on each plaquette as shown in the figure. Logical Pauli \(Z\) (\(X\)) operators correspond to products of single qubit \(Z\) (\(X\)) operators along non-contractible loops on the lattice (dual lattice). They satisfy the commutation relations \(X_{l_{1}^{*}}Z_{l_{1}}=-Z_{l_{1}}X_{l_{1}^{*}}\), \(X_{l_{1}^{*}}Z_{l_{1}}=-Z_{l_{1}}X_{l_{1}^{*}}\), \(X_{l_{1}^{*}}Z_{l_{2}}=Z_{l_{2}}X_{l_{1}^{*}}\), \(X_{l_{2}^{*}}Z_{l_{1}}=Z_{l_{1}}X_{l_{1}^{*}}\) and acts respectively on two different logical qubits of toric code. (b) \(3\)-d spacetime of error history. The black lattice is the spacetime lattice, which takes periodic boundary condition at space directions and infinity boundary condition at time direction. This boundary condition is also the one adopted in Ref. [27] when discussing SM mapping. The Pauli errors, measurement errors and error syndromes are represented as strings on the dual lattice (dashed lines that cross plaquettes). given a configuration of Pauli \(X\) error (horizontal red strings) for the entire history, the error syndrome (vertical gray strings) will be the configuration of \(-1\) ancilla measurement outcomes at different time steps. The error syndrome is supposed to match the endpoints of Pauli error strings, but due to imperfection of syndrome measurements, the syndrome outcomes might be flipped with certain probability. Those flipped syndromes will be referred to as measurement errors (vertical red strings). \(\Pi\) denotes the projection from the \(3\)-d spacetime lattice to the \(2\)-d physical lattice. Given a plaquette \(p_{*}\), \(\Pi(p_{*})\) yields a plaquette \(p_{0}\) at the same spatial location as \(p_{*}\). (c) A realistic circuit for \(B_{p_{0}}\) measurement [28]. The ancilla qubit is prepared in (0) state, then four \(CNOT\) gates are applied in order to couple data and ancilla qubits. Finally, the ancilla is projectively measured in \(Z\) basis. (d) A simplified \(B_{p_{0}}\) measurement circuit considered in our work. Note that a five-qubit unitary gate is used here instead of four two-qubit gates. To enable a theoretical analysis of the problem, we focus on the case (d) in our theory, and the relation between (c) and (d) will be discussed in Sec. VII.
and \(X_{l_{2}^{*}}\), which are defined as \(X\)'s along non-contractible loops of the dual lattice, as in Fig. 1(a). The code subspace \(\mathcal{C}\) is spanned by these four logical bases.
## III Model for imperfect measurement
Experimentally, the stabilizer measurements are implemented using a multi-qubit unitary operation on a combined qubit set consisting of four data qubits and an ancilla qubit, followed by an ancilla qubit measurement [28; 45], also refer to Fig.1(c). The correct projective measurements of stabilizers can only be achieved through ideal unitary operations. However, the multi-qubit operation in principle cannot avoid the miscalibration in the experimental setups and results in imperfect measurement [42]. Here we consider a simplified imperfect measurement model [43] [refer to Fig.1(d)]: (1) Prepare the ancilla qubit in \(|+\rangle\) state for each plaquette \(p_{0}\); (2) apply a joint time evolution involving each ancilla and its four neighboring data qubits \(\exp[-itZ_{p_{0}}\otimes B_{p_{0}}]\) where \(Z_{p_{0}}\) is the Pauli \(Z\) acting on ancilla at \(p_{0}\); (3) perform projective measurement on ancilla in \(Y\) basis. Then equivalently we get a non-unitary evolution acting on the data qubits
\[M_{\{s_{p_{0}}\}}=\frac{1}{(\sqrt{2\cosh\beta})^{N}}\exp\left[\frac{1}{2} \beta\sum_{p_{0}}s_{p_{0}}B_{p_{0}}\right]. \tag{3}\]
up to an irrelevant global phase factor. Here \(\tanh(\beta/2)=\tan t\), and \(s_{p_{0}}=\pm 1\) is the measurement outcome of ancilla qubit at \(p_{0}\). We use \(\{s_{p_{0}}\}\) to denote the configuration of ancilla measurement outcomes, which appears with probability \(\mathrm{tr}(M_{\{s_{p_{0}}\}}\rho M_{\{s_{p_{0}}\}}^{\dagger})\) for a given initial state \(\rho\) of data qubits. The error model [43] only considers the miscalibration of the evolution time \(t\). For convenience, we restrict \(t\) to the region \(0\leq t\leq\pi/4\). When \(t=\pi/4\), we have \(\beta\rightarrow+\infty\) in Eq. (3) and recover the correct projective measurement \(M_{\{s_{p_{0}}\}}\propto\prod_{p_{0}}(I+s_{p_{0}}B_{p_{0}})/2\). For \(t\leq\pi/4\) the parameter \(\beta\) is finite, and \(M_{\{s_{p_{0}}\}}\) will no longer be a stabilizer projection. This situation is referred to as weak measurement in Ref. [43]. Generally, \(\beta\) measures how'strong' the measurement is, or equivalently how close is our imperfect measurement to the ideal projective measurement. So we call it the measurement strength. Moreover, it is easy to verify that the \(E_{\{s_{p_{0}}\}}=M_{\{s_{p_{0}}\}}^{\dagger}M_{\{s_{p_{0}}\}}\) operators form a set of positive operator-valued measurement (POVM). A similar construction can also be applied to \(A_{v}\) stabilizers.
The above model is a rather simplified one which is easier to study analytically, but it can capture the fundamental influence of imperfect stabilizer measurement on QEC. We will show that even such a simplified imperfect measurement model will drastically affect QEC. A more realistic imperfect measurement model will in general has a worse performance, see the discussion of Eq. (27).
Experimentally while preparing the initial logical state through stabilizer measurement as in Eq. (2), it might suffer from imperfect measurement. In our model, the imperfect initial logical \(++\) state is considered as
\[\ket{\widetilde{+\!+}}=\frac{M_{\{+\}}\bigotimes_{e_{0}}\ket{+}_{e_{0}}}{ \sqrt{\bigotimes_{e_{0}}\bra{+}_{e_{0}}M_{\{+\}}^{\dagger}M_{\{+\}}\bigotimes _{e_{0}}\ket{+}_{e_{0}}}}, \tag{4}\]
where \(M_{\{+\}}\) denotes the imperfect measurement operator (3) when all the ancilla measurement outcomes \(s_{p_{0}}\) are set to \(+1\). In the real world if the measurement outcomes contain an even number of \(-1\)'s, we can redefine the corresponding stabilizers with a minus sign, and the following discussion still works. If there are an odd number of \(-1\)'s, we drop it by post-selection. Actually, the odd parity results arise with a small probability for large enough \(\beta\) (refer to Sec. SI of supplemental information (SI) [46] for more details). Following the method in Ref. [43; 47] one can argue that these states possess only short-range entanglement by mapping them to a 2-d \(\mathbb{Z}_{2}\) lattice gauge theory. But here in this work, we are mainly concerned about the influence on QEC and error threshold properties similar to Ref. [27]. We define the other three logical states by applying logical \(Z\) operators in analogy to experimental setups, \(\ket{\widetilde{-\!+}}=Z_{l_{1}}\ket{\widetilde{+\!+}}\), \(\ket{\widetilde{+\!-}}=Z_{l_{2}}\ket{\widetilde{+\!+}}\) and \(\ket{\widetilde{--}}=Z_{l_{1}}Z_{l_{2}}\ket{\widetilde{+\!+}}\). Unlike the projective measurement case, these logical states now depend on the choice of logical operators about where they locate on the physical lattice. However, due to the simpleness of our model, we can still verify that they are orthogonal to each other (refer to Sec. SI of SI [46]). We define the code subspace under imperfect measurement \(\widetilde{\mathcal{C}}(\beta)\) as the space spanned by these four logical states. Note that it depends on the measurement strength \(\beta\) during preparation. Here we mention that unlike projective measurement the image of \(M_{\{s_{p_{0}}\}}\) acting on the whole Hilbert space is not a four-dimensional subspace but again the whole Hilbert space. That is why we cannot simply define code subspace as the image of \(M_{\{s_{p_{0}}\}}\).
## IV Statistical mechanical mapping
Now we discuss the QEC property under imperfect measurement on the subspace \(\widetilde{\mathcal{C}}\) as the code space. Normally for toric code, the Pauli errors are detected by syndrome measurement. However, the syndrome measurement also suffers from imperfection resulting in faulty outcomes. Therefore, in order to distinguish measurement errors from Pauli errors, the standard procedure is to perform multi rounds of syndrome measurements and take into account the obtained entire error history while decoding [27]. Note that Ref. [27] only considered the stochastic (i.e. classical probabilistic) errors of the ancilla measurement, but we consider the coherent errors (i.e. quantum deviations) in the multi-qubit entanglement operations; and importantly, Ref. [27] assume a well-prepared initial state from the perfect code space, but we consider imperfect entanglement operations which affect both the initial state preparation and the error detection. We model the QEC procedure as follows (for convenience we consider only Pauli \(X\) errors):
1. Start with an arbitrary state \(\ket{\widetilde{\Psi}}\in\widetilde{\mathcal{C}}(\beta_{0})\), where we as
sume the imperfect measurement strength while preparing the initial state is \(\beta_{0}\).
2. Probabilistic Pauli \(X\) error acts at each integer valued time \(t\). The \(X\) error at each physical qubit on each time slice occurs independently with probability \(q\in[0,1/2]\).
3. Perform a round of syndrome measurement for each time interval between \(t\) and \(t+1\). The syndrome measurements are assumed to still suffer from imperfect measurement. So given a configuration of syndrome measurement outcomes \(\{s_{p_{0}}\}\) for a single round, it leads to the action of \(M_{\{s_{p_{0}}\}}\) operator on the current quantum state. Here we set the strength of syndrome measurements to be \(\beta\) in order to distinguish them from initial state preparation.
4. At the end of the QEC procedure, we decode and apply the Pauli \(X\) correction operator to the final state.
Before going any further, we mention that our relatively simple model ensures that the imperfection of measurement only affects stabilizer bits but does not disturb logical information, since \(M_{\{s_{p_{0}}\}}\) commutes with logical operators.
Specifically, Notice that we can take the product of the eigenstates of \(N-1\)\(B_{p_{0}}\) operators, \(N-1\)\(A_{v_{0}}\) operators, and \(2\) logical operators \(X_{l_{1}^{*}}\) and \(X_{l_{2}^{*}}\) to form a complete basis of the whole Hilbert space of physical qubits. Under this basis, any state \(|\widetilde{\Psi}\rangle\) in the code space \(\widetilde{\mathcal{C}}\) can be expanded under the stabilizer basis to achieve the form (refer to Sec. SI of SI [46]):
\[|\widetilde{\Psi}\rangle\propto\] \[\exp\left(\frac{\beta}{2}\prod\nolimits_{\mathbf{p}_{0}}^{\prime }B_{p_{0}}\right)\bigotimes\nolimits_{p_{0}}^{\prime}\left[\sum\limits_{b_{p_ {0}}=\pm}\exp\left(\frac{\beta}{2}b_{p_{0}}\right)|B_{p_{0}}=b_{p_{0}}\rangle\right]\] \[\bigotimes\nolimits_{v_{0}}^{\prime}|A_{v_{0}}=+\rangle\bigotimes \left|L\right\rangle. \tag{5}\]
Here \(\left|L\right\rangle\) stands for the logical information associated with the space defined by logical operators (Fig. 1(a)). Here we remark that the above tensor product at the r.h.s. is a mathematical structure using a non-local basis. Physically the logical information is still stored in a non-local manner for large \(\beta_{0}\). With this structure, it is clear that applying more rounds of imperfect measurement Eq. (3) on this state will not change the logical information \(\left|L\right\rangle\).
So, if the Pauli \(X\) errors and the final correction operator compose a contractible loop (that can be factorized into \(B_{p_{0}}\) stabilizers and causes only trivial effect), we can verify that the logical information, i.e. \(\left|L\right\rangle\) shown in Eq.(5), will still be preserved. We refer to this case as the success of QEC, in sharp contrast to the situation where we finally obtain a non-contractible loop and result in a logical error. Note that this condition for successful QEC is similar to the one in Ref. [27] while they only considered the probabilistic ancilla measurement error.
The above QEC procedure could be diagrammatically represented on a \(3\)-d cubic lattice in order to decode as in Fig. 1(b). For convenience, we assume that the QEC starts at \(t=-\infty\) and ends at \(t=+\infty\), such that the \(3\)-d corresponding spacetime lattice has infinite boundary condition at time direction. We use timelike and spacelike dual strings to represent measurement errors, i.e. faulty syndromes caused by imperfection of sequential measurements, and Pauli errors respectively. The error strings (including both measurement and Pauli parts) and syndrome strings (marked with \(-1\) ancilla outcomes) together compose closed strings which have no endpoints (or end at infinity). The task of the decoder is to identify both measurement and Pauli errors. In order to do so, the decoder should select a configuration of strings (decoding strings) connecting the endpoints of syndrome strings. The timelike (spacelike) parts of the decoding strings represent the measurement (Pauli) error identified by the decoder. QEC succeeds if and only if the decoding strings are topologically equivalent to the real error strings (form contractible loops). So given an error syndrome, the optimal decoder algorithm (maximum likelihood decoder [31]) should select the topological equivalent class of error strings with the largest probability.
Our main result is that we mapped this QEC scenario to an SM model. We denote the vertex, edge and plaquette of the \(3\)-d spacetime lattice as \(v\), \(e\) and \(p\). Specifically, the spacelike edges and plaquettes will be labeled with a subscript \(s\), such as \(e_{s}\) and \(p_{s}\). The timelike ones will be labeled with \(t\), such as \(e_{t}\) and \(p_{t}\). We assign a variable \(\eta_{p}=\pm 1\) to each plaquette \(p\) to represent the error configuration, e.g. \(\eta_{p}=-1\) for where the measurement or Pauli error presents and \(+1\) otherwise. Then the probability of a given error configuration \(\{\eta_{p}\}\) will be
\[P(\{\eta_{p}\})=\] \[\frac{\sum_{\{\sigma_{v_{0}}\}}\exp\left[\beta_{0}\sum_{p_{0}}b_{p _{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}} \right]}{4^{N}(\cosh^{N}\beta_{0}+\sinh^{N}\beta_{0})(2\cosh\beta)^{NT}(2\cosh K )^{2NT}}, \tag{6}\]
where
\[b_{p_{0}}=\prod_{e_{0}\in\partial p_{0}}\sigma_{e_{0}},\quad K=-\frac{1}{2} \log\frac{q}{1-q}. \tag{7}\]
Here \(\sigma_{e_{0}}\) is a classical \(\mathbb{Z}_{2}\) spin-like variable assigned to each edge \(e_{0}\) of the \(2\)-d physical lattice. \(b_{p_{0}}\) is the product of four neighbouring \(\sigma_{e_{0}}\)'s around the plaquette \(p_{0}\), which has the similar form to a \(\mathbb{Z}_{2}\) gauge interaction term [48]. The \(T\) in this expression representing the total number of time steps will eventually be taken to \(+\infty\). The summation \(\sum_{\{\sigma_{e_{0}}\}}\) runs over all \(\{\sigma_{e_{0}}\}\) configurations. The detail of derivation for this expression can be found in Sec. SII of SI [46]. Here we provide a brief explanation. First notice that given a quantum state \(\rho\), the probability of POVM outcome is \(\mathrm{tr}(E_{\{s_{p_{0}}\}}\rho)\). We may construct a probability of syndrome measurement outcomes of all time steps and space locations conditioned on a fixed Pauli error configuration, which is expressed as
\[P(\{s_{p_{0}}(t)\}|\{\eta_{e_{0}}(t)\})=||\prod_{t}M_{\{s_{p_{0}}(t)\}}X_{\{\eta_ {e_{0}}(t)\}}\left|\widetilde{\Psi}\right\rangle||^{2} \tag{8}\]
for an arbitrary initial state \(|\widetilde{\Psi}\rangle\) in the imperfect code space \(\widetilde{\mathcal{C}}(\beta_{0})\). Specifically, any initial state should be a superposition of imperfect logical states \(|\widetilde{\Psi}\rangle=\Psi_{++}|\widetilde{++}\rangle+\Psi_{+-}|\widetilde{+ -}\rangle+\Psi_{-+}|\widetilde{-+}\rangle+\Psi_{--}|\widetilde{--}\rangle\) and we assume it to be normalized. Here \(\{s_{p_{0}}(t)\}\) and \(\{\eta_{e_{0}}(t)\}\) denotes the syndrome configuration and Pauli error configuration at time \(t\). Note that a pair \((p_{0},t)\) yields a corresponding spacelike plaquette \(p_{s}\) and a pair \((e_{0},t)\) yields a corresponding timelike plaquette \(p_{t}\). \(X_{\{\eta_{e_{0}}(t)\}}\) is the total Pauli error operator at time \(t\), and has the form
\[X_{\{\eta_{e_{0}}(t)\}}=\prod_{e_{0}}(\delta_{\eta_{e_{0}}(t),+1}I+\delta_{\eta _{e_{0}}(t),-1}X_{e_{0}}). \tag{9}\]
We can check that Eq. (8) is a well-defined joint probability and it matches the physical POVM probability at each time step, which ensures the effectiveness of SM mapping (see Sec. SII of SI [46]). A explicit calculation of Eq. (8) yields that
\[P(\{s_{p_{0}}(t)\}|\{\eta_{e_{0}}(t)\})=\] \[\frac{1}{4^{N}(\cosh^{N}\beta_{0}+\sinh^{N}\beta_{0})(2\cosh\beta )^{N^{T}}}\times\] \[\sum_{\{\sigma_{e_{0}}\}}\exp\left[\sum_{p_{0}}b_{p_{0}}\left( \beta_{0}+\beta\sum_{t}s_{p_{0}}(t)\prod_{k\leq t}\prod_{e_{0}\in\partial p_{0 }}\eta_{e_{0}}(k)\right)\right]. \tag{10}\]
Note that Eq. (10) does not depend on the choice of \(|\widetilde{\Psi}\rangle\). The \(\sigma_{e_{0}}\) variables are obtained by expanding the quantum states under the computational basis (a technic developed in Ref. [43; 47] dealing with post-measurement states), and the term \(\prod_{e_{0}\in\partial p_{0}}\eta_{e_{0}}(k)\) marks the boundary of Pauli error strings at time \(k\). Since the measurement error configuration can be inferred from the boundary of Pauli error strings and syndrome outcomes, its corresponding probability can be obtained by substituting syndrome variables with combinations of Pauli and measurement error variables
\[\eta_{p_{0}}(t)=s_{p_{0}}(t)\prod_{k\leq t}\prod_{e_{0}\in\partial p_{0}}\eta _{e_{0}}(k). \tag{11}\]
After combining with Pauli error probability
\[P(\{\eta_{e_{0}}(t)\}) =\prod_{e_{0},t}q^{\delta_{\eta_{e_{0}}(t),-1}}(1-q)^{\delta_{\eta _{e_{0}}(t),+1}} \tag{12}\] \[=\prod_{e_{0},t}\frac{\exp(K\eta_{e_{0}}(t))}{2\cosh K},\]
we arrive at a joint probability of total error configurations,
\[P(\{\eta_{p_{0}}(t)\},\{\eta_{e_{0}}(t)\})=P(\{\eta_{p_{0}}(t)\}|\{\eta_{e_{0} }(t)\})P(\{\eta_{e_{0}}(t)\}) \tag{13}\] \[=\frac{1}{4^{N}(\cosh^{N}\beta_{0}+\sinh^{N}\beta_{0})(2\cosh \beta)^{NT}(2\cosh K)^{2NT}}\] \[\times\sum_{\{\sigma_{e_{0}}\}}\exp\left[K\sum_{e,t}\eta_{e_{0}}( t)+\sum_{p_{0}}b_{p_{0}}\left(\beta_{0}+\beta\sum_{t}\eta_{p_{0}}(t)\right) \right],\]
which is exactly Eq. (6) after converting the notations to those of the \(3\)-d lattice. Here, the fact \(P(\{\eta_{p}\})\) can be derived as a well-defined joint probability for each \(\eta_{p}\) is a consequence of the simpleness of our error model.
Then following the standard procedure described in Ref. [27; 29; 31], we computed the probability of the topological equivalent class of error configurations and it is proportional to the partition function of an SM model
\[P([\{\eta_{p}\}])\propto\mathcal{Z}(\{\eta_{p}\})=\sum_{\{\sigma_{e_{0}}\}, \{\gamma_{e}\}}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}\right. \tag{14}\] \[\left.+\beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_ {p_{t}}\eta_{p_{t}}U_{p_{t}}\right],\quad U_{p}=\prod_{e\in\partial p}\tau_{e}.\]
This SM model is a \(3\)-d \(\mathbb{Z}_{2}\) gauge theory defined on the spacetime lattice coupled to a 2-d \(\mathbb{Z}_{2}\) gauge theory defined on the physical lattice, see Fig. 2. Here \(\tau_{e}=\pm 1\) is a spin variable defined on each edge \(e\) of the spacetime lattice. \(U_{p}\) is an ordinary \(\mathbb{Z}_{2}\) gauge interaction term containing four \(\tau_{e}\) operators, \([\{\eta_{p}\}]\) denotes the topological equivalent class represented by \(\{\eta_{p}\}\), and \(\eta_{p}\) sets the sign of interaction term on each plaquette. The summation of \(\{\tau_{e}\}\) configurations is the same as a summation over topologically equivalent error configurations up to a constant factor. Physically, those \(\tau_{e}\) operators describe the fluctuation of error strings, since flipping \(\tau_{e}\) operator is equivalent to deforming the error strings represented by \(\{\eta_{p}\}\) (see detail discussions in Ref. [27; 29]). Therefore, the model acquires a local symmetry
\[\eta_{p}\rightarrow\eta_{p}\prod_{e\in\partial p}\nu_{e},\quad\tau_{e} \rightarrow\tau_{e}\nu_{e},\quad\nu_{e}=\pm 1, \tag{15}\]
which ensures that topologically equivalent error configurations yield the same partition function. In order to detect the error threshold, Eq. (6) should be considered as the quenched disorder probability of interaction configuration \(\{\eta_{p}\}\). Then the phase transition point of \(\tau_{e}\) spins in the disordered SM model should correspond to the error threshold of the QEC model [27; 29; 31]. The reason for this phase transition-error threshold correspondence is as follows. Recall that the optimal decoder algorithm will select the equivalent class (of error configuration) with the largest probability or the smallest free energy. Define the free energy cost of an arbitrary non-contractible loop configuration \(l=\{\eta_{p}^{l}\}\) as
\[\Delta_{l}=-\sum_{\{\eta_{p}\}}P(\{\eta_{p}\})\log\frac{\mathcal{Z}(\{\eta_{p} \cdot\eta_{p}^{l}\})}{\mathcal{Z}\{\eta_{p}\}}. \tag{16}\]
In the ordered phase, \(\Delta_{l}\) diverges if we take the thermodynamic limit along with a disorder average. This suggests that the probability of the correct equivalent class will be far larger than that of the wrong one differed by a non-contractible loop, since the probability of an equivalent class is proportional to the partition function. Thus the optimal decoder always succeeds. In the disordered phase, however, the finiteness of free energy cost signifies the failure of the optimal decoder. A detailed explanation can be found in Ref. [31] for the case with stochastic errors.
## V Phase structure of the statistical mechanical model
Here we provide some analytical results about the SM model and its phase structure. First, we notice that the SM model has a non-local correlation at time direction originates from imperfect initial state preparation, see Fig. 2. If we set \(\beta_{0}\) to \(+\infty\), then the initial state is well prepared and the code space \(\widehat{\mathcal{C}}(\beta_{0})\) becomes exactly the toric code subspace. The action of following syndrome measurement operators Eq. (3) on this space yields only a global phase factor and does not change the state itself. In this case, even though the measurement outcome can still be faulty, the probability of measurement error will now become uncorrelated. This reduces to a pure probabilistic measurement error model considered in Ref. [27]. At the SM model side, by taking \(\beta_{0}\) to \(+\infty\) we self-consistently arrive at the random plaquette gauge model (RPGM) derived also in Ref. [27].
In reality, the same faulty circuits that produce the imperfect syndrome measurements also provide the imperfect initial state preparations. In this situation, i.e. with finite \(\beta_{0}\), the non-local timelike correlation will lead to a different phase structure in stark contract to RPGM. In the following paragraphs, we will explore this phase structure in detail. In order to detect the phase transition of \(\tau_{e}\) spins, we consider the Wilson loop
\[W_{A}=\prod_{p\in A}U_{p}=\prod_{e\in\partial A}\tau_{e}, \tag{17}\]
which serves as the order parameter for \(\mathbb{Z}_{2}\) gauge theory. Different phases of the gauge theory can be distinguished based on the fact whether the Wilson loop confines or deconfines. Here \(A\) is a set of plaquettes representing a surface in spacetime. The product of \(U_{p}\)'s on surface \(A\) equals the product of \(\tau_{e}\)'s on \(\partial A\), which is the boundary of surface \(A\) and forms a closed loop. In the conventional \(\mathbb{Z}_{2}\) gauge theory [48] the scaling behavior of Wilson loop expectation values with respect to the loop size distinguishes between the confinement (disordered) phase and the deconfinement (ordered) phase. In the deconfinement phase, it decays exponentially with respect to the perimeter of the loop,
\[W_{A}\sim\exp(-const\times|\partial A|), \tag{18}\]
called perimeter law. Here we use \(|\cdot|\) to denote the cardinal of a set (i.e. the number of the elements of a set). For example \(|\partial A|\) is the number of edges contained in \(\partial A\). On the other hand, in the confinement phase, the scaling behavior of Wilson loops obeys area law,
\[W_{A}\sim\exp(-const\times|A_{min}|), \tag{19}\]
Figure 2: Illustration of the SM model we obtained in Eq. (14). The \(\tau_{e}\) spins are defined on the edges of 3-d spacetime lattice and the \(\sigma_{e_{0}}\) spins lie on \(2\)-d physical lattice. There are three types of interactions in this model as shown in this figure. \(\beta_{0}b_{p_{0}}\) is the gauge interaction term defined on the physical lattice. \(K\eta_{p_{t}}U_{p_{t}}\) is the timelike gauge interaction on spacetime lattice. The \(\beta b_{\Omega(p_{s})}\eta_{p_{e}}U_{p_{s}}\) term couples the spacelike gauge interaction term \(U_{p_{s}}\) to the gauge interaction term \(b_{\Omega(p_{s})}\) on physical lattice. The \(\eta_{p}\)’s set the signs of gauge interactions on spacetime lattice and they mark the position of error strings during the QEC procedure in Fig. 1(b). For example, the flipped interaction \(\eta_{p}=-1\) at plaquette \(p\) (red plaquette) corresponds to the presence of an error string at \(p\) (dashed red line). The \(\{\eta_{p}\}\) configuration follows a disorder probability (6) that comes from the randomness of Pauli errors and in syndrome measurement. The \(\tau_{e}\) spins are non-locally correlated at timelike direction since all spacelike plaquette interactions \(U_{p_{s}}\) along the same timelike arrow are all coupled to the same \(b_{\Omega(p_{s})}\). Meanwhile, the disorder probability (6) is also correlated at time direction. Physically this is due to the fact imperfection measurement operator will change the current quantum state, which in turn affects subsequent measurement results. It is evident from the expressions (14) and (6) that the non-local correlation results from finite \(\beta_{0}\), or in other words imperfect initial state preparation.
where \(A_{min}\) is the minimal surface enclosed by \(\partial A\). Here we will study the expectation value of Wilson loops in our SM model.
First, note that our SM model satisfies a generalized version of Nishimori condition [49, 50], which means that the error rate parameters \((\beta_{0},\beta,K)\) in the quenched disorder probability in Eq. (6) are the same as those in the partition function in Eq. (14), respectively. Under this condition, by taking advantage of a local symmetry of the model (15), we find that (see Sec. SIII of SI [46])
\[[\langle W_{A}\rangle]=[\langle W_{A}\rangle^{2}]. \tag{20}\]
Here \(\langle\cdot\rangle\) denotes the ensemble average with respect to the model Eq.(14) under a specific interaction configuration. \([\cdot]=\sum_{\{\eta_{p}\}}P(\{\eta_{p}\})(\cdot)\) represents the disorder average over interaction configurations with respect to the probability Eq.(6). The above equality suggests the absence of gauge glass phase [51] (\([\langle W_{A}\rangle]\) obeys area law Eq.(19) but \([\langle W_{A}\rangle^{2}]\) obeys perimeter law Eq.(18)) so that we only need to concern about the deconfinement-confinement phase transition of \(\tau_{e}\)'s under Nishimori condition.
We then perform a low-temperature expansion [48] for \([\langle W_{A}\rangle]\). Here low temperature means that the parameters \(\beta_{0}\), \(\beta\) and \(K\) are sufficiently large, corresponding to small enough physical error rates. We assume \(\mathrm{e}^{-\beta_{0}}\), \(\mathrm{e}^{-\beta}\) and \(\mathrm{e}^{-K}\) are of the same order and expand \(\log[\langle W_{A}\rangle]\) up to the first non-vanishing order \(\mathrm{e}^{-4\beta_{0}}\). We obtain the result (refer to Sec. SIV of SI [46] for more details)
\[\begin{split}&[\langle W_{A}\rangle]\simeq\exp[-4\mathrm{e}^{-4 \beta_{0}}|\Pi(A)|(N-|\Pi(A)|)\\ &-\left(\mathrm{e}^{-4\beta}+\mathrm{e}^{-4K}+4\mathrm{e}^{-2 \beta-2K}\right)|\partial A|_{s}-6\mathrm{e}^{-4K}|\partial A|_{t}].\end{split} \tag{21}\]
Here \(\Pi\) is defined as projection from \(3\)-d spacetime to \(2\)-d space mod \(\mathbb{Z}_{2}\), illustrated in Fig. 3(a). \(|\partial A|_{s}\) (\(|\partial A|_{t}\)) denotes the spacelike (timelike) edges \(e_{s}\) (\(e_{t}\)) contained in \(\partial A\). The low temperature expansion is done by first expanding \(P(\{\eta_{p}\})\) up to \(\mathrm{e}^{-4\beta_{0}}\). Then for each error configuration that appeared in the expansion, we compute the expectation value \(\langle W_{A}\rangle\) up to the order we need. The perturbative evaluation of \(\langle W_{A}\rangle\) is accomplished by identifying the ground state and then taking the lowest excited states into consideration. Each of these states yields a specific value of \(W_{A}\). Putting all these things together, we obtain Eq. (21).
From this expression, we see that the expectation value of Wilson loops has an anisotropic scaling behavior. It deconfines at the timelike direction under low temperature but confines at the spacelike direction for any finite \(\beta_{0}\) (Fig. 3). A pure timelike Wilson loop \(W_{A}\) which contains only timelike plaquettes is shown in Fig. 3(c). It deconfines and decays exponentially with respect to perimeter as in a conventional \(3\)-d \(\mathbb{Z}_{2}\) lattice gauge theory under low temperature. Meanwhile, a pure spacelike Wilson loop is shown in Fig. 3(b). For large enough system size \(N\), its areal decay is faster than the perimetric decay, so the first term in Eq. (21) dominates and \([\langle W_{A}\rangle]\) confines as long as the temperature is finite. We notice that no matter how low the non-zero temperature is, the confinement is always maintained. Since a sufficiently high temperature should drive the system into a completely disordered phase, it will confine all Wilson loops. Specifically, we do not expect the deconfinement of spacelike Wilson loops at higher temperatures. Thus we conclude that spacelike Wilson loop confines at any finite temperature (or error rate). At a sufficiently high temperature (or error rate), we expect a phase transition that confines the timelike Wilson loop. We also noticed from Eq. (21) that this areal decay is a consequence of imperfect measurement during initial state preparation. Consistently, we find in our derivation that the area term results
Figure 3: (a) Example of a Wilson loop. \(A\) is a surface in 3D spacetime (gray) and \(\partial A\) is its boundary (black), which is a closed loop. \(\Pi(A)\) is the projection of surface \(A\) from 3D spacetime to 2D space mod \(\mathbb{Z}_{2}\). Specifically, under projection the timelike plaquettes are dropped, and an even number of spacelike plaquettes at the same space position also vanishes. The only remaining plaquettes are at the spatial locations that originally have odd numbers of spacelike plaquettes. In (b) and (c), the black lines represent Wilson loops and the red dashed lines are examples of topologically trivial error strings created by \(\tau_{e}\) fluctuation. (b) A spacelike Wilson loop. In a large enough system it decays exponentially with respect to the area for any finite temperature. Note that Its scaling behavior written in the figure directly follows from Eq. (21) by constraining \(A\) to a pure spacelike region. For a large region \(A\), the areal decay will be much faster than the perimetric decay, so the first term in Eq. (21) dominates. This behavior is contributed by the fluctuation of infinite long timelike error strings which are able to appear at any space position (see Sec. SIV of SI [46]). (c) A timelike Wilson loop. Its scaling behavior is obtained by constraining \(A\) to a pure timelike region in Eq. (21). Under a low temperature, it decays exponentially with respect to the perimeter. Note that both timelike edges and spacelike edges are contained in the boundary of a timelike region, determining the \(|\partial A|_{t}|\) term and \(|\partial A|_{s}|\) term respectively. This perimetric decay behavior is mainly contributed by local error loops near the Wilson loop (see Sec. SIV of SI [46]).
from the non-local timelike correlation of disorder probability Eq. (6) and partition function Eq. (14) which also depends on \(\beta_{0}\) as we discussed. In comparison, if the initial state is ideally prepared (\(\beta_{0}=+\infty\), corresponds to RPGM), the Wilson loops will acquire an isotropic scaling behavior, i.e. both the spacelike and timelike Wilson loops will exhibit perimetric decay at the low-temperature phase and areal decay at the high-temperature phase [27].
Here we also remark that the low-temperature expansion result shown in Eq. (21) is valid for any finite system size \(N\) and region \(A\), as long as the temperature (physical error rate) is sufficiently low. The subtlety is that we cannot directly take the thermodynamic limit \(N\to+\infty\) in Eq. (21) because of the factor \(N\) contained in the leading order. Actually, the appearance of area term \(|\Pi(A)|(N-|\Pi(A)|)\) in Eq. (21) is a natural result because our space manifold is a closed surface (due to the periodic boundary condition), thus \(\Pi(A)\) and its complement on \(2\)-d space yield the same boundary, and should be symmetric in an expression containing \(|\Pi(A)|\). However under the thermodynamic limit, the phase structure of the SM model should not depend on this boundary condition, and we expect that the area law is still obeyed by spacelike Wilson loops in the low-temperature phase. A brief discussion about the analogy to the exactly solvable \(2\)-d \(\mathbb{Z}_{2}\) lattice gauge theory can be found in Sec. SIV of SI [46].
## VI Impact on quantum error correction
In the previous sections, we mapped our QEC model under imperfect measurement and Pauli error to an SM model and studied the phase structure of the SM model. The question is what these results imply about for QEC performance and threshold theorem. Here we provide an interpretation of these results.
First of all, knowing that there exists a confinement transition point of timelike Wilson loops in the limit \(T\to+\infty\) and \(N\to+\infty\), we want to ask how it relates to the logical error rate and error threshold. In fact, we find that the logical error rate is suppressed at the low-temperature phase. Note that the fluctuation of topologically trivial error strings is described by the fluctuation of \(\tau_{e}\)'s (refer to the discussion of Eq. (15)). Each fluctuated spin configuration will contribute to the expectation value of Wilson loop. So the behaviors of Wilson loops reflect features of error string fluctuations. In our derivation of low-temperature expansion (also see Sec. SIV of SI [46]), we find that: 1) the areally decaying behavior of spacelike Wilson loop results from non-local timelike error strings like in Fig. 3(b); 2) the non-local timelike strings and other local error loops appear in a relatively independent manner at sufficiently low temperatures. Therefore, those local error loops are compressed and are unlikely to stretch to arbitrarily large. Specifically, the fluctuating error strings cannot extend arbitrarily long in space direction. Since the non-contractible loops can only wind around spacelike directions, we still expect their free energy cost, i.e. \(\Delta_{l}\) shown in Eq. (16), to diverge, hence the logical error rate approaches zero. Increasing the temperature, there should be a transition point where spacelike error strings become extending along the whole system and \(\Delta_{l}\) becomes finite, and thus the probability of logical error acquires a finite value. This transition point of \(\Delta_{l}\) should be exactly the confinement point of timelike Wilson loops. Since the confinement transition point of timelike Wilson loops separates different behavior of logical error rate in the limit \(T\to+\infty\) and \(N\to+\infty\), it is appealing to identify this transition point as the threshold, which we refer to as theoretical threshold. However, this threshold, which we will find later, does not capture the correctability of measurement error.
We want to emphasize that, only for the infinite-time syndrome scenario, the theoretical threshold in our model can faithfully determines the success of QEC as we discussed in the last paragraph. However, due to the unidentifiable even below the theoretical threshold, the decoding procedure fails while considering a finite time syndrome information. In the infinite-time case, the decoder takes an infinite error history to enhance the power of QEC. But in reality, we can only store finite error history, where the areal decay of spacelike Wilson loops drastically affects the QEC. Roughly speaking, in the real world the decoder must be applied to a finite time interval of size \(T\). Then problems arise while trying to correct measurement errors in the finite-time scenario. Recall that the areal decay of spacelike Wilson loops signifies error strings can stretch infinitely along the timelike direction. This suggests that measurement errors can easily extend from the beginning to the end and become generally undistinguishable from Pauli errors. For example, the true syndrome of a single Pauli error string will be two non-local timelike strings starting from its boundary points. Meanwhile, this syndrome can also be created by non-local measurement errors, which could occur with a probability close to the one of the Pauli string and cannot be suppressed by large \(T\). The consequence is that the decoder might mix up these two situations with a finite probability and leave this Pauli error uncorrected (the correction operator determined by the decoder will not be containing this Pauli string). From this example, we infer that due to the presence of non-local measurement errors, there will be Pauli errors remaining uncorrected at the end of the QEC procedure, which means that the combined strings of the Pauli error operator and correction operator will still have an amount of open ends. If these open-ended strings anticommute with logical \(Z\) operators, it will damage the logical information \(|L\rangle\) (refer to the discussion of Eq. (5)). This situation does not have much difference from the case when only a single round of syndrome measurement is performed (\(T=1\)), since the probabilities of non-local measurement errors do not depend on \(T\).
We conclude that for a large system size \(N\), the ability to identify measurement errors cannot be enhanced by increasing the number of syndrome measurement rounds, even below the theoretical threshold. This is in stark contrast to the case with only stochastic errors [27], where the perimeter law for both the spacelike and timelike Wilson loops guarantees the achievement of an effective error correction with a finite error history whenever the error rate is below the theoretical threshold. In our case, the inability to correct measurement errors is
caused by the finite value of \(\beta_{0}\), i.e. the imperfection of initial state preparation; so we refer to \(\beta_{0}=+\infty\) as the measurement error threshold of our error model. Setting \(\beta_{0}=1/\mathcal{T}_{0}\) and \(\beta=K=1/\mathcal{T}\), a sketch of the phase diagram is shown in Fig. 4(a). In addition, an overall comparison of our error model and the model in Ref. [27] can be found in Tab. 1.
As we have mentioned in the above paragraphs, the measurement errors are unidentifiable even at low temperatures, in the sense that the multi-round syndrome measurement protocol will not be better than a single-round one, i.e. \(T=1\) or a 2D decoder. So one might ask how the QEC behaves when \(T=1\). Suppose the QEC initial state is the imperfect logical \(00\) state \(|\widetilde{\Psi}\rangle=|\widetilde{00}\rangle=(|\widetilde{++}\rangle+| \widetilde{+-}\rangle+|\widetilde{-+}\rangle+|\widetilde{--}\rangle)/2\), we estimate the impact of measurement errors on the logical fidelity, that is the fidelity between the final state and the initial state. Consider the scenario where the temperature is low and the system is of considerable size. In accordance with the preceding discourse, A Pauli error on a single physical qubit will be confounded with measurement errors having the same syndrome by the decoder. Consequently, the Pauli error will remain uncorrected. If the uncorrected Pauli error intersects with a logical Pauli \(Z\) operator, it acts as a logical error on the logical information \(|L\rangle\) (refer to Eq. (5)), leading to a logical fidelity \(\sim 0\). But if the Pauli error locates elsewhere on the lattice, one may check that the effects of the Pauli error and the measurement operator with the same syndrome complement each other and lead to a fidelity \(\sim 1\), since they both flip the same stabilizer bits in Eq. (5) but do not affect the logical information \(|L\rangle\). Average all error configurations, since the number of configurations that a Pauli error intersects with logical \(Z\) is proportional to \(d=\sqrt{N}\), we anticipate that the logical fidelity behaves as \(1-const\times d\). The logical fidelity is suppressed by a large distance, which is a signal that the QEC system is above the ture threshold (measurement error threshold in our work). Here the constant depends on the physical error rates \(\beta_{0}\), \(\beta\) and \(K\) but does not depend on the distance \(d\), and it should drop to \(0\) when the initial state is ideal, \(\beta_{0}\rightarrow+\infty\).
In contrast, we compare it with the \(T=1\) case of Ref. [27], the 2D decoder suffering from stochastic measurement
Figure 4: (a) Our estimation of the phase structure in the thermodynamic limit \(T\rightarrow+\infty\) and \(N\rightarrow+\infty\) while setting \(\mathcal{T}=1/\beta=1/K\) and \(\mathcal{T}_{0}=1/\beta_{0}\). Above the theoretical threshold (red line) QEC fails due to non-contractible logical Pauli errors. Below the theoretical threshold and above the measurement error threshold (blue line), non-contractible logical Pauli errors are suppressed. However, measurement errors are still unidentifiable through decoding a finite error history and will be confounded with Pauli errors. Note that the \(\mathcal{T}\) axis where \(\mathcal{T}_{0}=0\) represents the RPGM. While the theoretical threshold intersect with the \(\mathcal{T}\) axis at the common RPGM phase transition point [27], we are not sure yet whether it intersect with the \(\mathcal{T}_{0}\) axis. Some details at higher temperatures still requires further investigation. For completeness, we note that if \(K\rightarrow+\infty\), the QEC protocol trivially succeeds as there will be no Pauli errors. On the other hand, in the scenario where \(\beta\rightarrow+\infty\) but \(K\) and \(\beta_{0}\) are finite, the non-local measurement errors are still present and cannot be decoded when confounded with Pauli errors. (b) The phase diagram while fixing a finite code distance \(d\). The phase transitions (thresholds) in Fig. 4(a) are smoothed into crossovers due to finite-size effect. Especially, the measurement error threshold becomes a finite-temperature crossover (blue dashed line), leading to a parameter region with a finite area (light blue region) that effectively suppresses logical errors. The parameters in this region should satisfy either Eq. (22) or Eq. (23) such that the non-local measurement errors will not be a problem. Specifically, the crossover condition \(\mathcal{T}_{0}\sim 1/\log d\) near the \(\mathcal{T}_{0}\) axis is derived from Eq. (22) (However, Eq. (22) or Eq. (23) are only approximate expressions valid in the low-temperature limit. The precise value of this crossover still requires further investigation.). Increasing \(d\), this region becomes smaller and smaller and eventually sticks to the \(\mathcal{T}\) axis. Above the crossover regime of measurement threshold in the light red region, the effect of non-local measurement errors on the QEC becomes non-negligible. As for the SM model side, the blue crossover detects the confinement of spacelike Wilson loops while the red crossover detects the confinement of timelike Wilson loops.
errors. The decoder in this case also mixes up a Pauli error with probabilistic measurement errors, but their effects do not complement each other, since the probabilistic measurement error is just noise on the classical readouts and does not affect the quantum state. Consequently the logical fidelity scales as \(1-const\times d^{2}\), where the constant depends on the probability of Pauli and measurement errors. This logical fidelity is also above the threshold. Surprisingly, it is worse than the one of our model. However, the logical fidelity under stochastic errors can be improved by increasing \(T\) and arrives at an effective QEC when \(T\gg d\)[27], which is not possible in our model.
One might argue that although the non-local measurement errors suppress logical fidelity, the correction of other local errors might lead to other terms that increase with \(d\) and compete with non-local measurement errors. However, the non-local measurement error suppression should be the leading order contribution in the low-temperature limit. Moreover, if the system lies above the blue crossover in the light red region in Fig. 4(b), the effect of non-local measurement errors overweights other local errors. Thus we believe that the non-local measurement error suppression could overweight other terms in that region. Nonetheless, these statements are not rigorous proofs and require further studies.
For a small code with limited system size \(N\), we infer from Eq. (21) that the above problem might be circumvented with the limit
\[d\ll\mathrm{e}^{\beta_{0}}, \tag{22}\]
or
\[d\ll[\mathrm{e}^{4\beta_{0}}(\mathrm{e}^{-4\beta}+\mathrm{e}^{-4K}+4\mathrm{e }^{-2\beta-2K})]^{1/3}. \tag{23}\]
Here \(d\) is the code distance and \(N=d^{2}\). The first bound is derived by assuming the areal term in Eq. (21) is negligible
\[e^{-4\beta_{0}}|\Pi(A)|(d^{2}-|\Pi(A)|)\ll 1, \tag{24}\]
for all spacelike Wilson loops \(A\). The l.h.s. maximizes when \(A\) is half the size of the spatial lattice \(|\Pi(A)|=d^{2}/2\). Substitute \(|\Pi(A)|=d^{2}/2\) into the above expression, we obtain Eq. (22) ignoring a constant factor. Physically, Eq. (22) is interpreted as the impact of measurement error itself on the system is ignorable. The second bound is derived by considering when the perimetric decay will be faster than the areal decay in Eq. (21) for a spacelike Wilson loop,
\[\begin{split}& e^{-4\beta_{0}}|\Pi(A)|(d^{2}-|\Pi(A)|)\\ &\ll\left(\mathrm{e}^{-4\beta}+\mathrm{e}^{-4K}+4\mathrm{e}^{-2 \beta-2K}\right)|\partial A|_{s}.\end{split} \tag{25}\]
Still we require \(A\) to be half of the spatial lattice, \(|\Pi(A)|\sim d^{2}\) and \(|\partial A|_{s}\sim d\). Thus we obtain Eq. (23). Physically, Eq.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Noise properties** & 1) coherent errors on entanglement gates of stabilizer measurement circuit (imperfect measurement), 2) stochastic Pauli errors on physical qubits & 1) stochastic measurement errors of stabilizer measurement outcomes, 2) stochastic Pauli errors on physical qubits \\ \hline
**Initial state** & Affected by imperfect measurement during preparation (characterized by \(\beta_{0}\)) & Ideal toric code state \\ \hline
**Error correction** & Multi-round syndrome measurement (number of rounds: T) and maximum likelihood dedecoder \\ \hline
**SM mapping** & 3-dimensional \(\mathbb{Z}_{2}\) gauge model coupled to a 2-dimensional \(\mathbb{Z}_{2}\) gauge model (Eq. (7)) with quenched disorder (Eq. (5)) & 3-dimensional RPGM under Nishimori condition \\ \hline
**Phase structure of SM model** & When \(\beta_{0}\rightarrow+\infty\) (ideal initial state), the SM model reduces to RPGM. For finite \(\beta_{0}\) (imperfect initial state), in the low-temperature phase (below theoretical threshold) timelike Wilson loops deconfine; in the high-temperature phase (above theoretical threshold) all Wilson loops confine. \\ \hline
**Error correction performance** & When \(\beta_{0}\rightarrow+\infty\) (ideal initial state), it is equivalent to the model in Ref. [27]. For finite \(\beta_{0}\) (imperfect initial state), in the low-temperature phase (below theoretical threshold) there are still unidentifiable measurement errors that damage QEC performance, no matter how large \(T\) is; in the high-temperature phase (above theoretical threshold) QEC fails due to logical errors (non-contractible error loops). \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between our case and the case with the stochastic measurement error model [27]. Note that although the measurement noises of the two models are different at the physical level, their error correction properties and corresponding SM models will become equivalent when we set the initial state in our model to be ideal (\(\beta_{0}\rightarrow+\infty\)).
(23) is interpreted as the influence of non-local error strings is not significant compared to other local error strings. If either of these two bounds is satisfied, we anticipate that the ability of our QEC procedure to detect measurement errors will be similar to that of Ref. [27]. To satisfy either Eq. (22) or Eq. (23), the imperfection of initial state preparation must be negligible or much smaller than the syndrome measurement imperfection and Pauli error rate. Even then the code distance is still upper bounded if we fix the error parameters \(\beta_{0}\), \(\beta\) and \(K\). Usually, when performing QEC, we anticipate increasing code distance to suppress the logical error rate [28]. However, for the important error problem considered here, Eq. (22) and Eq. (23) form bounds that prevent the code from scaling up. Equivalently if we fix \(d\) and vary the error parameters, we obtain the phase diagram in Fig. 4(b). It is noteworthy that the region enabling pragmatic error correction only experiences a gradual reduction as increasing the code distance, i.e. \(\sim 1/\log d\), which is not excessively frustrating.
## VII Discussion
In this work, we discuss the imperfect measurement problem based on the circuit shown in Fig. 1(d) (e.g. for \(B_{p_{0}}\)), which allows us to conduct an analytic study. In fact, the circuit shown in Fig. 1(c), which contains only two-qubit gates rather than a five-qubit evolution in our simple model, is more realistic. Ref. [42] discussed an imperfect measurement model which mimics the behavior of superconducting quantum computation systems. The \(CNOT\) gate is divided into a \(CZ\) gate and two Hadamard gates, \(CNOT=H(CZ)H\) where \(H\) is the Hadamard gate acting on the target qubit (ancilla qubit in our setup). Each \(CZ\) gate is implemented by a time evolution
\[U=\exp\left[-\mathrm{i}\frac{t}{4}\left(s^{z}\otimes\sigma_{i}^{z}-s^{z} \otimes I-I\otimes\sigma_{i}^{z}+I\otimes I\right)\right] \tag{26}\]
Here \(s\) labels the ancilla qubit and \(\sigma_{i}\), \(i=1,2,3,4\) labels the four data qubits. It recovers the \(CZ\) gate when \(t=\pi\). Assume the final ancilla measurement has an outcome \(s=\pm 1\), the corresponding action on data qubits is
\[\begin{split}& M_{s}=\left\langle s\right|H\exp\left[-\mathrm{i} \frac{t}{4}\sum_{i}\left(s^{z}\otimes\sigma_{i}^{z}-s^{z}-\sigma_{i}^{z}+I \right)\right]H\left|0\right\rangle=\frac{1}{2}\left(1+\mathrm{se}^{-\mathrm{ i}2t}\cos^{4}\frac{t}{2}+s\mathrm{e}^{-\mathrm{i}2t}\sin^{4}\frac{t}{2}B_{p_{0}} \right)\\ &\times\left(1-\frac{\mathrm{i}\mathrm{se}^{\mathrm{i}2t}\sin \frac{t}{2}\cos^{3}\frac{t}{2}+\mathrm{i}\sin\frac{t}{2}\cos^{7}\frac{t}{2}+ \mathrm{i}\sin^{7}\frac{t}{2}\cos\frac{t}{2}}{\left(\mathrm{se}^{\mathrm{i}2t} +\cos^{4}\frac{t}{2}\frac{t}{2}\right)^{2}-\sin^{8}\frac{t}{2}}\sum_{i} \sigma_{i}^{z}-\frac{\sin^{2}\frac{t}{2}\cos^{2}\frac{t}{2}}{\mathrm{se}^{ \mathrm{i}2t}+\cos^{4}\frac{t}{2}+\sin^{4}\frac{t}{2}}\sum_{i<j}\sigma_{i}^{z} \sigma_{j}^{z}\\ &+\left.\frac{\mathrm{ise}^{\mathrm{i}2t}\sin^{3}\frac{t}{2}\cos \frac{t}{2}+\mathrm{i}\sin^{3}\frac{t}{2}\cos^{5}\frac{t}{2}+\mathrm{i}\sin^{5 }\frac{t}{2}\cos^{3}\frac{t}{2}}{\left(\mathrm{se}^{\mathrm{i}2t}+\cos^{4} \frac{t}{2}\frac{t}{2}\right)^{2}-\sin^{8}\frac{t}{2}}\sum_{i<j<k}\sigma_{i}^{z }\sigma_{j}^{z}\sigma_{k}^{z}\right).\end{split} \tag{27}\]
When \(t=\pi\), one may check that the above expression reduces to the correct projection \((I+sB_{p_{0}})/2\). When \(t\neq\pi\), \(M_{s}\) stands for an imperfect measurement operator. We notice that in its expression the first factor \(1+s\mathrm{e}^{-\mathrm{i}2t}\cos^{4}(t/2)+s\mathrm{e}^{-\mathrm{i}2t}\sin^{4 }(t/2)B_{p_{0}}\) is similar to the imperfect measurement operator discussed in Eq. (3) \(\exp(\beta s_{p_{0}}B_{p_{0}}/2)=\cosh(\beta/2)+s_{p_{0}}\sinh(\beta/2)B_{p_{0}}\). However, there is an additional factor, which can be viewed as coherent errors appearing on data qubits. So while discussing error correction, aside from the consequence talked about before, coherent errors will do more damage to the error correction procedure and lead to worse performance. We also notice that under this realistic measurement model, if we define logical states by applying logical operators to the imperfect initial state, those states will not be orthogonal to each other, which makes it much hard to analyze theoretically.
In all, by mapping the standard QEC procedure under imperfect measurement to an SM model, we find two finite temperature phases that have different QEC performances. The high temperature (high physical error rate) phase signifies the failure of QEC caused by non-contractible error strings. In the low temperature (low physical error rate) phase, the measurement errors cannot be identified through decoding syndrome outcomes of finite rounds due to imperfect initial state preparation, which could result in the failure of QEC in the large code distance limit. For finite \(d\) there will be a parameter region \(\sim\log d\) that logical errors remain suppressed. In addition, we remark that studying a different measure of logical error rate, such as average gate fidelity [52] or diamond norm [24], might provide a better knowledge of how the results in this article affect error threshold in the current problem, which is very different since imperfect measurement of stabilizers not only causes faulty syndrome outcomes but also changes the quantum state as we discussed. Therefore, further work concerning these issues still needs to be developed. In addition, we notice that as shown in Ref. [43; 47], the imperfect initial state preparation leads to the absence of long-range entanglement. Meanwhile, the imperfect initial state is also the source of the ill performance in our QEC model. These phenomena are both related to the confinement of certain Wilson loop observables. Therefore, another intriguing question is that what role the long-range entanglement could play in the
threshold theorem of the general topological QEC codes?
###### Acknowledgements.
Authors thank Guo-Yi Zhu for the discussions on the finite-size effect of the SM model. We thank Jing-Yuan Chen, Li Rao, and Qinghong Yang for helpful discussions. This work is partially supported by the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302400) and Beijing Natural Science Foundation (Grant No. Z220002).
## References
* Arute _et al._ (2019)F. Arute, K. Arya, R. Babbush, and et al., Nature **574** (2019), 10.1038/s41586-019-1666-5.
* Zhong _et al._ (2020)H.-S. Zhong, H. Wang, Y.-H. Deng, and et al., Science **370**, 1460 (2020).
* Wu _et al._ (2021)Y. Wu, W.-S. Bao, S. Cao, and et al., Phys. Rev. Lett. **127**, 180501 (2021).
* Arute _et al._ (2020)F. Arute, K. Arya, R. Babbush, and et al., Science **369**, 1084 (2020).
* Gong _et al._ (2021)M. Gong, S. Wang, C. Zha, and et al., Science **372**, 948 (2021).
* Pino _et al._ (2021)J. M. Pino, J. M. Dreiling, C. Figgatt, and et al., Nature **592**, 209 (2021).
* Ryan-Anderson and et al. (2021)C. Ryan-Anderson and et al., (2021), arXiv:2107.07505 [quant-ph].
* Preskill (2018)J. Preskill, Quantum **2**, 79 (2018).
* Shor (1995)P. W. Shor, Phys. Rev. A **52**, R2493 (1995).
* Steane (1996)A. Steane, Proc. R. Soc. Lond. A. **452**, 2551 (1996).
* Calderbank and Shor (1996)A. R. Calderbank and P. W. Shor, Phys. Rev. A **54**, 1098 (1996).
* Nigg _et al._ (2014)D. Nigg, M. Muller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz, M. A. Martin-Delgado, and R. Blatt, Science **345**, 302 (2014).
* Ofek _et al._ (2016)N. Ofek, A. Petrenko, R. Heeres, P. Reinhold, Z. Leghtas, B. Vlastakis, Y. Liu, L. Frunzio, S. M. Girvin, L. Jiang, M. Mirrahimi, M. H. Devoret, and R. J. Schoelkopf, Nature **536**, 441 (2016).
* Hu _et al._ (2020)L. Hu, Y. Ma, W. Cai, X. Mu, Y. Xu, W. Wang, Y. Wu, H. Wang, Y. P. Song, C. L. Zou, S. M. Girvin, L.-M. Duan, and L. Sun, Nature Physics **15**, 503 (2019).
* Andersen _et al._ (2020)C. K. Andersen, A. Remm, S. Lazar, S. Krinner, N. Lacroix, G. J. Norris, M. Gabureac, C. Eichler, and A. Wallraff, Nature Physics **16**, 875 (2020).
* Erhard _et al._ (2021)A. Erhard, H. Poulsen Nautrup, M. Meth, L. Postler, R. Stricker, M. Stadler, V. Negnevitsky, M. Ringbauer, P. Schindler, H. J. Briegel, R. Blatt, N. Friis, and T. Monz, Nature **589**, 220 (2021).
* AI (2021)G. Q. AI, Nature **595**, 383 (2021).
* Luo _et al._ (2021)Y.-H. Luo, M.-C. Chen, M. Erhard, H.-S. Zhong, D. Wu, H.-Y. Tang, Q. Zhao, X.-L. Wang, K. Fujii, L. Li, N.-L. Liu, K. Nemoto, W. J. Munro, C.-Y. Lu, A. Zeilinger, and J.-W. Pan, Proceedings of the National Academy of Sciences **118**, e2026250118 (2021).
* Marques _et al._ (2022)J. F. Marques, B. M. Varbanov, M. S. Moreira, H. Ali, N. Muthusubramanian, C. Zachariadis, F. Battistel, M. Beekman, N. Haider, W. Vlothuizen, A. Bruno, B. M. Terhal, and L. DiCarlo, Nature Physics **18**, 80 (2022).
* Zhao and et al. (2022)Y. Zhao and et al., Phys. Rev. Lett. **129**, 030501 (2022).
* Ryan-Anderson _et al._ (2021)C. Ryan-Anderson, J. G. Bohnet, K. Lee, D. Gresh, A. Hankin, J. P. Gaebler, D. Francois, A. Chernoguzov, D. Lucchetti, N. C. Brown, T. M. Gatterman, S. K. Halit, K. Gilmore, J. A. Gerber, B. Neyenhuis, D. Hayes, and R. P. Stutz, Phys. Rev. X **11**, 041058 (2021).
* Egan _et al._ (2020)L. Egan, D. M. Debroy, C. Noel, A. Risinger, D. Zhu, D. Biswas, M. Newman, M. Li, K. R. Brown, M. Cetina, and C. Monroe, arXiv e-prints, arXiv:2009.11482 (2020), arXiv:2009.11482 [quant-ph].
* Knill _et al._ (1998)E. Knill, R. Laflamme, and W. Zurek, Proc. R. Soc. Lond. A **454** (1998).
* Aharonov and Ben-Or (1999)D. Aharonov and M. Ben-Or, (1999), arXiv:9906129 [quant-ph].
* Aliferis _et al._ (2006)P. Aliferis, D. Gottesman, and J. Preskill, Quant. Inf. Comput. **6**, 97 (2006).
* Nielsen and Chuang (2004)M. A. Nielsen and I. L. Chuang, _Quantum computation and quantum information_, 1st ed. (Cambridge University Press, 2004).
* Dennis _et al._ (2002)E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, Journal of Mathematical Physics **43**, 4452 (2002).
* Fowler _et al._ (2012)A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, Phys. Rev. A **86**, 032324 (2012).
* Bombin (2013)H. Bombin, (2013), arXiv:1311.0277 [quant-ph].
* Aharonov _et al._ (2006)D. Aharonov, A. Kitaev, and J. Preskill, Phys. Rev. Lett. **96**, 050504 (2006).
* Chubb and Flammia (2021)C. T. Chubb and S. T. Flammia, Ann. Inst. Henri Poincare Comb. Phys. Interact. **8** (2021), 10.4171/AIHPD/105.
* Novais _et al._ (2007)E. Novais, E. R. Mucciolo, and H. U. Baranger, Phys. Rev. Lett. **98**, 040501 (2007).
* Novais _et al._ (2008)E. Novais, E. R. Mucciolo, and H. U. Baranger, Phys. Rev. A **78**, 012314 (2008).
* Barnes _et al._ (2017)J. P. Barnes, C. J. Trout, D. Lucarelli, and B. D. Clader, Phys. Rev. A **95**, 062338 (2017).
* Beale _et al._ (2018)S. J. Beale, J. J. Wallman, M. Gutierrez, K. R. Brown, and R. Laflamme, Phys. Rev. Lett. **121**, 190501 (2018).
* Bravyi _et al._ (2018)S. Bravyi, M. Englbrecht, R. Konig, and N. Peard, npj Quantum Information **4** (2018), 10.1038/s41534-018-0106-y.
* Huang _et al._ (2019)E. Huang, A. C. Doherty, and S. Flammia, Phys. Rev. A **99**, 022313 (2019).
* Cai _et al._ (2020)Z. Cai, X. Xu, and S. C. Benjamin, npj Quantum Information **6** (2020), 10.1038/s41534-019-0233-0.
* Ouyang (2021)Y. Ouyang, npj Quantum Information **7** (2021), 10.1038/s41534-021-00429-8.
* Zhao and Liu (2021)Y. Zhao and D. E. Liu, arXiv e-prints, arXiv:2112.00473 (2021), arXiv:2112.00473 [quant-ph].
* Venn _et al._ (2022)F. Venn, J. Behrends, and B. Beri, (2022), arXiv:2211.00655 [quant-ph].
* Yang and Liu (2022)Q. Yang and D. E. Liu, Physical Review A **105**, 022434 (2022).
* Zhu _et al._ (2022)G.-Y. Zhu, N. Tantivasadakarn, A. Vishwanath, S. Trebst, and R. Verresen, (2022), arXiv:2208.11136 [quant-ph].
* Kitaev (2003)A. Kitaev, Annals of Physics **303**, 2 (2003).
* Acharya _et al._ (2022)R. Acharya, I. Aleiner, R. Allen, and et al., (2022), arXiv:2207.06431 [quant-ph].
* (46)See Supplemental Information for details of the derivation.
* Lee _et al._ (2022)J. Y. Lee, W. Ji, Z. Bi, and M. P. A. Fisher, (2022), arXiv:2208.11699 [cond-mat.str-el].
* Kogut (1979)J. B. Kogut, Rev. Mod. Phys. **51**, 659 (1979).
* Nishimori (1981)H. Nishimori, Progress of Theoretical Physics **66**, 1169 (1981).
* Nishimori (1981)H. Nishimori, _Statistical physics of spin glasses and informa
tion processing: an introduction_, International series of monographs on physics No. 111 (Oxford University Press, 2001).
* Wang _et al._ (2003)C. Wang, J. Harrington, and J. Preskill, Annals of Physics **303**, 31 (2003).
* Emerson _et al._ (2005)J. Emerson, R. Alicki, and K. A=yczkowski, Journal of Optics B: Quantum and Semiclassical Optics **7**, S347 (2005).
* Beny and Oreshkov (2010)C. Beny and O. Oreshkov, Phys. Rev. Lett. **104**, 120501 (2010).
* Tantivasadakarn _et al._ (2021)N. Tantivasadakarn, R. Thorngren, A. Vishwanath, and R. Verresen, (2021), arXiv:2110.07599 [cond-mat.str-el].
* Ohno _et al._ (2004)T. Ohno, G. Arakawa, I. Ichinose, and T. Matsui, Nuclear Physics B **697**, 462 (2004).
* Kitaev and Preskill (2006)A. Kitaev and J. Preskill, Phys. Rev. Lett. **96**, 110404 (2006).
* Harrigan and et al. (2021)M. P. Harrigan and et al., Nature Physics **17**, 332 (2021).
* Wegner (1971)F. J. Wegner, Journal of Mathematical Physics **12**, 2259 (1971).
* Elitzur (1975)S. Elitzur, Phys. Rev. D **12**, 3978 (1975).
**Supplemental Information for "Lattice Gauge Theory and Topological Quantum Error Correction with Quantum Diversions in the Error Detection"**
In this supplemental information, we provide details about derivations of the results in the main text. In Sec. SI we discuss some properties about code space when initial state preparation suffers from imperfection measurement. In Sec. SII we derive the SM mapping explicitly. In Sec. SIII we talk about the consequence of local symmetry under the Nishmori condition. In sec. SIV we derive the low-temperature expansion of the Wilson loop.
## SI Code Subspace Under Imperfect Measurement
Here we discuss the imperfect code subspace under the presence of coherent errors on entanglement gates during state preparation. Starting with a product state \(\bigotimes_{e_{0}}\left|+\right\rangle_{e_{0}}\), we obtain the (unnormalized) state \(M_{\{s_{p_{0}}\}}\bigotimes_{e_{0}}\left|+\right\rangle_{e_{0}}\) with probability
\[P(\{s_{p_{0}}\})=\bigotimes_{e_{0}}\left\langle+\right|_{e_{0}}E_{\{s_{p_{0}} \}}\bigotimes_{e_{0}}\left|+\right\rangle_{e_{0}}=\frac{1}{(8\cosh\beta)^{N}} \mathcal{Z}_{\{s_{p_{0}}\}},\] (S1)
\[\mathcal{Z}_{\{s_{p_{0}}\}}=\sum_{\{\sigma_{e_{0}}\}}\exp\left[\beta\sum_{p_{ 0}}s_{p_{0}}b_{p_{0}}\right],b_{p_{0}}=\prod_{e_{0}\in\partial p_{0}}\sigma_ {e_{0}},\] (S2)
where \(M_{\{s_{p_{0}}\}}\) is the imperfect measurement operator
\[M_{\{s_{p_{0}}\}}=\frac{1}{(\sqrt{2\cosh\beta})^{N}}\exp\left[\frac{1}{2}\beta \sum_{p_{0}}s_{p_{0}}B_{p_{0}}\right],\] (S3)
and \(E_{\{s_{p_{0}}\}}=M_{\{s_{p_{0}}\}}^{\dagger}M_{\{s_{p_{0}}\}}\) is the corresponding POVM operator. By expanding \(\bigotimes_{e_{0}}\left|+\right\rangle_{e_{0}}\) under computational basis \(\bigotimes_{e_{0}}\left|+\right\rangle_{e_{0}}=(1/2^{N})\sum_{\{\sigma_{e_{0} }\}}\bigotimes_{e_{0}}\left|\sigma_{e_{0}}\right\rangle\), where \(\sigma_{e_{0}}=\pm 1\) is the eigenvalue of Pauli operator \(Z_{e_{0}}\), it can be shown that \(P(\{s_{p_{0}}\})\) is proportional to the partition function of a 2-d \(\mathbb{Z}_{2}\) lattice gauge theory \(\mathcal{Z}_{\{s_{p_{0}}\}}\)[1; 2], which is a useful method to dealing with the post-measurement state and is helpful to our following derivations. Note that 2-d \(\mathbb{Z}_{2}\) lattice gauge theory is exact-solvable [3]. By noticing that the Boltzmann weight only depends on the value of \(b_{p_{0}}\), we rewrite the summation of spin configurations as a summation of \(b_{p_{0}}\) configurations together with 1-form symmetry operations
\[\sum_{\{\sigma_{e_{0}}\}}=\sum_{\{b_{p_{0}}\}}\delta_{\prod_{p_{0}}b_{p_{0}}, 1}\sum_{\text{dual loop}}.\] (S4)
Here \(\delta_{\prod_{p_{0}}b_{p_{0}},1}\) means that the product of all \(b_{p_{0}}\) must equal to 1, since the lattice is embedded in a torus surface. We may check the degree of freedoms involved in the summation \(2^{2N}=2^{N}/2\times 2^{N+1}\). Thus we have
\[\begin{split}\mathcal{Z}_{\{s_{p_{0}}\}}&=\sum_{\{b _{p_{0}}\}}\delta_{\prod_{p_{0}}b_{p_{0}},1}\sum_{\text{dual loop}}\exp\left[\beta\sum_{p_{0}}s_{p_{0 }}b_{p_{0}}\right]\\ &=2^{N+1}\sum_{\{b_{p_{0}}\}}\frac{1+\prod_{p_{0}}b_{p_{0}}}{2} \exp\left[\beta\sum_{p_{0}}s_{p_{0}}b_{p_{0}}\right]\\ &=2^{N}\prod_{p}\left(\sum_{b_{p_{0}}}\exp\left[\beta s_{p_{0}}b _{p_{0}}\right]+\sum_{b_{p_{0}}}b_{p_{0}}\exp\left[\beta s_{p_{0}}b_{p_{0}} \right]\right)\\ &=2^{N}(2\cosh\beta)^{N}+2^{N}(2\sinh\beta)^{N}\prod_{p_{0}}s_{p_ {0}},\end{split}\] (S5)
and hence we have the probability of measurement outcomes
\[P(\{s_{p_{0}}\})=\frac{1+(\tanh\beta)^{N}\prod_{p}s_{p_{0}}}{2^{N}}.\] (S6)
Note that \(\prod_{p_{0}}s_{p_{0}}\) is the parity of \(\{s_{p_{0}}\}\) configuration.
From the above discussion, it can be seen that we might get different states with respect to the ancilla measurement outcome \(\{s_{p_{0}}\}\). But we will fix the initial state as \(M_{\{+\}}\bigotimes_{e_{0}}\left|+\right\rangle_{e_{0}}\), where we assumed the measurement outcomes of ancilla qubits are all \(+1\). For other measurement outcomes \(\{s_{p_{0}}\}\), we can redefine the sign of our stabilizers \(B_{p_{0}}\to s_{p_{0}}B_{p_{0}}\) in order that the following discussions still apply. The only thing we should take into consideration is the parity of outcome \(\{s_{p_{0}}\}\) because under the redefinition \(\prod_{p_{0}}B_{p_{0}}=1\rightarrow\prod_{p_{0}}B_{p_{0}}=\prod_{s}s_{p_{0}}\) and we will get \(-1\) for odd parity. In the odd parity case, the defects on the lattice cannot be paired up and it is unfriendly to error correction. Hence we ignore odd parity outcomes. We might consider it as a post-selection procedure, which chooses the even parity result that occurs with probability \(P_{+}=(1+(\tanh\beta)^{N})/2\), see Eq. (S6). We find that \(1/2\leq P_{+}\leq 1\) for \(0\leq\beta\leq+\infty\). Note that this probability is close to \(1\) we \(\beta\) is sufficiently large, so the post-selection procedure is reasonable for experimental consideration.
Till now we have only obtained one logical state. What about other states in the imperfect code subspace? The problem is that the matrix rank of \(M_{\{s_{p_{0}}\}}\) is \(2^{2N}\), which means that if we view it as a map defined on the whole \(2^{2N}\) dimensional Hilbert space of physical qubits, \(M_{\{s_{p_{0}}\}}:\mathcal{H}\rightarrow\mathcal{H}\), then we find that \(Im(M_{\{s_{p_{0}}\}})=\mathcal{H}\). This is different from the projective measurement case that \(P_{\{s_{p_{0}}\}}=\prod_{p_{0}}(I+s_{p_{0}}B_{p_{0}})/2\) projects any states into the \(B_{p_{0}}=s_{p_{0}}\) subspace. That is why we define the code space using logical operators in analogy to experimental setups. In summary, our four logical basis states are defined as
\[\begin{split}&|\widetilde{++}\rangle=\frac{M_{\{+\}}\bigotimes_{e_{ 0}}\left|+\right\rangle_{e_{0}}}{\sqrt{\bigotimes_{e_{0}}\left\langle+\right| _{e_{0}}M_{\{+\}}^{\dagger}M_{\{+\}}\bigotimes_{e_{0}}\left|+\right\rangle_{ e_{0}}}},\\ &|\widetilde{-+}\rangle=Z_{l_{1}}\left|\widetilde{++}\right\rangle, \quad|\widetilde{+-}\rangle=Z_{l_{2}}\left|\widetilde{++}\right\rangle, \quad|\widetilde{--}\rangle=Z_{l_{1}}Z_{L_{2}}\left|\widetilde{++}\right\rangle,\end{split}\] (S7)
Notice that \(|\widetilde{++}\rangle\) state correspond to the model
\[\mathcal{Z}_{\{+\}}=\sum_{\{\sigma_{e_{0}}\}}\exp\left[\beta\sum_{p_{0}}b_{ p_{0}}\right].\] (S8)
Given the above imperfect logical states, the expectation value of Pauli \(Z\) operators can be computed through the classical SM model [1; 2]. We denote \(\langle\cdot\rangle_{\{+\}}^{q}\) as the expectation value of the post-measurement state \(|\widetilde{++}\rangle\), then
\[\left\langle\prod_{e_{0}\in c_{0}}Z_{e_{0}}\right\rangle_{\{+\}}^{q}=\left\langle \prod_{e_{0}\in c_{0}}\sigma_{e_{0}}\right\rangle_{\{+\}}^{c}.\] (S9)
where we denote \(c_{0}\) as a set of several edges and \(\langle\cdot\rangle_{\{+\}}^{c}\) as the expectation value of the classical model \(\mathcal{Z}_{\{+\}}\).
2-d \(\mathbb{Z}_{2}\) lattice gauge theory possesses 1-form symmetry [3]. As in Fig. S1, if we flip all the spins along a dual loop, it does not change the value of \(b_{p_{0}}\) on any plaquette \(p_{0}\), hence does not change the partition function, i.e. Eq. (S2).
For 2-d \(\mathbb{Z}_{2}\) lattice gauge theory, Elitzur's theorem [3; 4] states that the expectation value of any observable that
varies under 1-form symmetry operation vanishes (even under the presence of infinitesimal source term). For example, consider the overlap between two logical states
\[\langle\widetilde{-\!+}|\widetilde{+\!+}\rangle=\langle\widetilde{+\!+}|\,Z_{l_{ 1}}\,|\widetilde{+\!+}\rangle=\langle Z_{l_{1}}\rangle^{q}_{\{+\}}=\left\langle \prod_{e_{0}\in l_{1}}\sigma_{e_{0}}\right\rangle^{c}_{\{+\}}.\] (S10)
The operator \(\prod_{e_{0}\in l_{1}}\sigma_{e_{0}}\) is the product \(\sigma_{e_{0}}\) spins that win around the non-contractible loop \(l_{1}\), and it changes sign under the 1-form symmetry operation that flips the spins on a dual non-contractible loop \(l_{1}^{*}\) that intersect with \(l_{1}\). Consequently, we find that \(\langle\widetilde{-\!+}|\widetilde{+\!+}\rangle=0\). Note that this result can also be checked through explicit calculation since \(\mathcal{Z}_{\{+\}}\) acquires an exact solution. Similarly, we find that all the four states \(\{\ket{\widetilde{+\!+}},\ket{\widetilde{-\!+}},\ket{\widetilde{+\!-}}, \ket{\widetilde{-\!-}}\}\) are orthogonal to each other. Therefore, they form an orthonormal basis of a 4 dimensional subspace. This justifies our definition of the imperfect code subspace
\[\widetilde{\mathcal{C}}(\beta)=\text{span}\{\ket{\widetilde{+\!+}},\ket{ \widetilde{-\!+}},\ket{\widetilde{+\!-}},\ket{\widetilde{-\!-}}\}\] (S11)
What is more, these four states are still eigenstates of logical \(X\) operators. Since \(X_{l_{1}^{*},X_{l_{2}^{*}}}\) commute with imperfect measurement operator \(M_{\{+\}}\) and act as 1 on the initial product state, we find that \(X_{l_{1}^{*}}\ket{\widetilde{+\!+}}=\ket{\widetilde{+\!+}},X_{l_{2}^{*}}\ket{ \widetilde{+\!+}}=\ket{\widetilde{+\!+}}\). The act on the other three states is determined by the commutation relation between logical \(X\) and logical \(Z\). Note that these properties are not universal for imperfect measurement but depend on the specific measurement protocol.
Here we point out that, unlike the projective measurement case, \(\widetilde{\mathcal{C}}\) depends on the choice of logical operators on the lattice. For example, consider a different choice of logical \(Z\) operator \(Z^{\prime}_{l_{1}}\) as in Fig. S2. We can calculate the fidelity between \(\ket{\widetilde{-\!+}}\) and the new state defined by the new logical operator \(\ket{\widetilde{-\!+}}^{\prime}=Z^{\prime}_{l_{1}}\ket{\widetilde{+\!+}}\):
\[\langle\widetilde{-\!+}|\widetilde{-\!+}\rangle^{\prime}=\langle\widetilde{ +\!+}|\,Z_{l_{1}}Z^{\prime}_{l_{1}}\ket{\widetilde{+\!+}}=\langle Z_{l_{1}}Z^ {\prime}_{l_{1}}\rangle^{q}_{\{+\}}\] (S12)
Note that \(Z_{l_{1}}Z^{\prime}_{l_{1}}\) forms a Wilson loop that could be written as the boundary of a region \(A\). For the classical model, Wilson loops are 1-form symmetry invariant observables and acquire non-zero expectation values. For example, consider the Wilson loop
\[W_{A_{0}}=\prod_{p_{0}\in A_{0}}b_{p_{0}}=\prod_{e_{0}\in\partial A_{0}}\sigma _{e_{0}},\] (S13)
where \(A_{0}\) is a 2-d region (a set of plaquettes) and \(\partial A_{0}\) is the set of edges at the boundary of \(A_{0}\). Its expectation value can be evaluated similarly as the partition function, which leads to
\[\langle W_{A_{0}}\rangle^{c}_{\{+\}}=\frac{(\tanh\beta)^{|A_{0}|}+(\tanh\beta )^{N-|A_{0}|}}{1+(\tanh\beta)^{N}}.\] (S14)
With the help of Eq. (S14), we obtain:
\[\langle\widetilde{-+}|\widetilde{-+}\rangle^{\prime}=\frac{(\tanh\beta)^{dL}+( \tanh\beta)^{N-dL}}{1+(\tanh\beta)^{N}}\] (S15)
which is smaller than 1 for finite \(\beta\), implies that the two states are different.
In addition, notice that we can take the product of the eigenstates of \(N-1\)\(B_{p_{0}}\) operators, \(N-1\)\(A_{v_{0}}\) operators, and 2 logical operators \(X_{l_{1}^{*}}\) and \(X_{l_{2}^{*}}\) to form a complete basis of the whole Hilbert space of physical qubits. Under this basis the product \(\ket{+}\) state can be written as
\[\bigotimes_{e_{0}}\ket{+}_{e_{0}}=\bigotimes^{\prime}_{p_{0}}\sum_{b_{p_{0}}= \pm}\ket{B_{p_{0}}=b_{p_{0}}}\bigotimes^{\prime}_{v_{0}}\ket{A_{v_{0}}=+} \bigotimes\ket{X_{l_{1}^{*}}=+}\bigotimes\ket{X_{l_{2}^{*}}=+},\] (S16)
Here the prime on the product symbol means that a chosen plaquette or vertex is excluded to satisfy the global constrain \(\prod_{p_{0}}B_{p_{0}}=\prod_{v_{0}}A_{v_{0}}=I\). This formula is verified as follows. Since the action of \(A_{v_{0}}\)'s and logical \(X\) operators on \(\bigotimes_{e_{0}}\ket{+}_{e_{0}}\) all yield \(+1\), the state must be the \(+1\) eigenstate of these operators. Besides, assume the excluded plaquette is \(f_{0}\). Consider a Pauli \(X\) string \(X_{(f_{0}-q_{0})}\) that starts at \(f_{0}\) and ends at some other plaquette \(q_{0}\). Since \(X_{(f_{0}\to q_{0})}\) commutes with all \(N-1\)\(A_{v_{0}}\) operators, logical \(X\) operators and \(N-2\)\(B_{p_{0}}\)'s with \(p_{0}\neq q_{0},f_{0}\), it only acts on the factor \(\ket{B_{q_{0}}=\pm}\) under the above basis. Note that \(X_{(f_{0}\to q_{0})}\) anti-commutes with \(B_{q_{0}}\), so \(X_{(f_{0}\to q_{0})}\ket{B_{q_{0}}=\pm}=\ket{B_{q_{0}}=\mp}\). Since \(X_{(f_{0}\to q_{0})}\bigotimes_{e_{0}}\ket{+}_{e_{0}}=\bigotimes_{e_{0}}\ket{ +}_{e_{0}}\), the product state must be stabilized by \(X_{(f_{0}\to q_{0})}\). Applying such \(X\) strings to all the plaquettes, we find that \(\bigotimes_{e_{0}}\ket{+}_{e_{0}}\) is the \(+1\) eigenstate of \(X_{(f_{0}\to q_{0})}\) for all \(q_{0}\neq f_{0}\), which leads to Eq. (S16). With the above considerations, we find that the imperfect measurement operator \(M_{\{s_{p_{0}}\}}\) only acts on the stabilizer bits (\(\ket{B_{p_{0}}=\pm}\) and \(\ket{A_{v_{0}}=\pm}\)) and logical operators only act on the logical bits (\(\ket{X_{l_{1}^{*}}=\pm}\) and \(\ket{X_{l_{2}^{*}}=\pm}\)). Consequently, any state \(\ket{\widetilde{\Psi}}\) in the code space \(\widetilde{\mathcal{C}}\) can be expanded under the stabilizer basis to achieve the form:
\[\ket{\widetilde{\Psi}}\propto\left[\exp\left(\frac{\beta}{2}\prod^{\prime}_{p_ {0}}B_{p_{0}}\right)\bigotimes^{\prime}_{p_{0}}\sum_{b_{p_{0}}=\pm}\exp\left( \frac{\beta}{2}b_{p_{0}}\right)\ket{B_{p_{0}}=b_{p_{0}}}\right]\left[\bigotimes ^{\prime}_{v_{0}}\ket{A_{v_{0}}=+}\right]\bigotimes\ket{L},\] (S17)
where \(\ket{L}\) represents the logical qubits and is exactly where the logical information is stored.
## SII Derivation of statistical mechanical mapping
Recall that we have considered a multi-round error correction protocol [5] under imperfect syndrome measurement, listed as follows:
1. start with an arbitrary state \(\ket{\widetilde{\Psi}}\) in \(\widetilde{\mathcal{C}}(\beta_{0})\).
2. Probabilistic Pauli \(X\) error acts at each integer valued time \(t=1,2,\cdots,T\). The \(X\) error at each physical qubit on each time slice occurs independently with probability \(q\). Denote the error chain at time \(t\) as \(c_{0}^{*}(t)\). \(c_{0}^{*}(t)\) is a set of edges, where the star reminds us that it will be viewed as strings on the dual lattice. The associated Pauli operator is \(X_{c_{0}^{*}(t)}=\prod_{e_{0}\in c_{0}^{*}(t)}X_{e_{0}}\).
3. Perform a round of syndrome measurement between each time interval \([t,t+1]\). The syndrome measurements are also imperfect and their strength of measurement is set as \(\beta\). Suppose the measurement outcome at time interval \([t,t+1]\) is \(\{s_{p_{0}}(t)\}\), then the associated action of imperfect measurement is \(M_{\{s_{p_{0}}(t)\}}\).
4. After \(T\) rounds of syndrome measurements, we decode and apply Pauli \(X\) correction to the final state. Denote the correction chain as \(c_{0R}^{*}\) and corresponding correction operator as \(X_{c_{0R}^{*}}\).
Notice that with our definition of code space \(\tilde{\mathcal{C}}\), the imperfection of syndrome measurement only affects syndrome bits but does not disturb logical information. We have known that any state in \(\tilde{\mathcal{C}}\) takes the form as shown i Eq. (S17). Acting successive imperfect measurement operators on it still only affects the stabilizer bits. The logical information \(\ket{L}\) will be left unchanged.
We might represent our error correction procedure as a diagram in Fig. 3(a). We will show that when the error chains at all times and the correction chain together constitute a contractible loop (correction operator and Pauli error
operators forms stabilize) the logical information is still preserved. Notice that the commutation between \(M_{\{s_{p_{0}}(t)\}}\) and Pauli error \(X_{c_{0}^{*}(t^{\prime})}\) can be expressed as
\[M_{\{s_{p_{0}}(t)\}}X_{c_{0}^{*}(t^{\prime})}=X_{c_{0}^{*}(t^{\prime})}M_{\{s_{ p_{0}}(t)\mu_{p_{0}}(t^{\prime})\}}.\] (S18)
Here \(\{\mu_{p_{0}}(t^{\prime})\}\) represents the correct syndrome that should be generated by Pauli error \(X_{c_{0}^{*}(t)}\). In other words \(\mu_{p_{0}}(t^{\prime})=-1\) when \(p\) locates at the boundary of \(c_{0}^{*}\) and \(\mu_{p_{0}}(t^{\prime})=1\) otherwise. Using this commutation relation we can move all Pauli error chains in Fig. 3(a) to the top and arrive at Fig. 3(b). When the operator \(X_{c_{0}^{*}K_{c_{0}^{*}}(T)}\cdots X_{c_{0}^{*}(1)}\) forms a product of \(A_{v_{0}}\) operators, it will be commuting with imperfect measurement operators and logical Pauli operators, and hence acts trivially on the state below. Therefore we will obtain a final state as in Fig. 3(c). Though the final state is different from the original state \(|\widetilde{\Psi}\rangle\), it will be containing the same logical information, so we view this situation as a success of error correction.
Then we may compute the probability of syndrome outcomes given a particular error configuration \(\{c_{0}^{*}(t)\}\)
\[\begin{split}& P(\{s_{p_{0}}(t)\}|\{c_{0}^{*}(t)\})=||\prod_{t}M_{\{s_{ p_{0}}(t)\}}X_{c_{0}^{*}(t)}\ket{\widetilde{\Psi}}||^{2}\\ &=||\prod_{t}M_{\{s_{p_{0}}(t)\prod_{k\leq t}\mu_{p_{0}}(k)\}} \ket{\widetilde{\Psi}}||^{2}=\bra{\widetilde{\Psi}}\prod_{t}E_{\{s_{p_{0}}(t) \prod_{k\leq t}\mu_{p_{0}}(k)\}}\ket{\widetilde{\Psi}}.\end{split}\] (S19)
Here \(||\cdot||^{2}\) denotes the state norm. First, note that this expression is a well-defined joint probability for the \(s_{p_{0}}(t)\) variables. \(P(\{s_{p_{0}}(t)\}|\{c_{0}^{*}(t)\})\) is always non-negative, and it is normalized as \(\sum_{\{s_{p_{0}}(t)\}}P(\{s_{p_{0}}(t)\}|\{c_{0}^{*}(t)\})=1\), which could be verified by applying the normalization of POVM operators \(\sum_{\{s_{p_{0}}\}}E_{\{s_{p_{0}}\}}=I\) for each time slice. Besides, we can check that it is the true probability of syndrome measurements on the physical level. For example, imagine we are performing error correction in real world. At the time \(t^{\prime}\), we ask what is the syndrome measurement probability for the current step. Eq. (S19) tells us that it should be a probability at the \(t^{\prime}\) step conditioned on the configurations of previous steps
\[\begin{split}& P(\{s_{p_{0}}(t=t^{\prime})\}|\{s_{p_{0}}(t<t^{ \prime})\},\{c_{0}^{*}(t)\})=\frac{\sum_{\{s_{p_{0}}(t>t^{\prime})\}}P(\{s_{p_{ 0}}(t)\}|\{c_{0}^{*}(t)\})}{\sum_{\{s_{p_{0}}(t\geq t^{\prime})\}}P(\{s_{p_{0}}( t)\}|\{c_{0}^{*}(t)\})}=\frac{||\prod_{t<t^{\prime}}M_{\{s_{p_{0}}(t)\}}X_{c_{0}^{*}(t)} \ket{\widetilde{\Psi}}||^{2}}{||\prod_{t<t^{\prime}}M_{\{s_{p_{0}}(t)\}}X_{c_{0 }^{*}(t)}\ket{\widetilde{\Psi}}||^{2}}\\ &=\frac{\text{tr}(E_{\{s_{p_{0}}(t^{\prime})\}}\rho)}{\text{tr}( \rho)},\quad\rho=\left(X_{c_{0}^{*}(t^{\prime})}\prod_{t<t^{\prime}}M_{\{s_{p_{ 0}}(t)\}}X_{c_{0}^{*}(t)}\right)\ket{\widetilde{\Psi}}\bra{\widetilde{\Psi}} \left(X_{c_{0}^{*}(t^{\prime})}\prod_{t<t^{\prime}}M_{\{s_{p_{0}}(t)\}}X_{c_{0 }^{*}(t)}\right)^{\dagger}.\end{split}\] (S20)
We arrive at the actual POVM probability at the current error correction step. Note that it also does not depend on
the Pauli errors after time \(t^{\prime}\). Calculate the expression in Eq. (S19) explicitly, we have
\[\begin{split}& P(\{s_{p_{0}}(t)\}|\{c_{0}^{*}(t)\})=\frac{1}{ \mathcal{Z}_{\{+\}}}\sum_{\{\sigma_{\sigma_{0}}\}}\mathrm{e}^{\beta_{0}\sum_{p_ {0}}b_{p_{0}}}\prod_{p_{0},t}\frac{\exp\left(\beta b_{p_{0}}s_{p_{0}}(t)\prod_{ k\leq t}\mu_{p_{0}}(k)\right)}{2\cosh\beta}\\ &=\frac{1}{\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}}\sum_{\{\sigma_{ \sigma_{0}}\}}\exp\left[\sum_{p_{0}}b_{p_{0}}\left(\beta_{0}+\beta\sum_{t}s_{ p_{0}}(t)\prod_{k\leq t}\mu_{p_{0}}(k)\right)\right].\end{split}\] (S21)
Here \(\mathcal{Z}_{\{+\}}=4^{N}(\cosh^{N}\beta_{0}+\sinh^{N}\beta_{0})\) is the partition function of 2-d \(\mathbb{Z}_{2}\) lattice gauge theory. The \(\sigma_{\epsilon_{0}}\)'s have the same origin as Eq. (S2) through expansion under computational basis. Note that the above expression is independent of the choice of \(|\widetilde{\Psi}\rangle\) in code space. This can be shown by expanding \(|\widetilde{\Psi}\rangle\) under logical basis \(|\widetilde{\Psi}\rangle=\Psi_{++}\,|\overline{++}\rangle+\Psi_{+-}\,| \overline{+-}\rangle+\Psi_{-+}\,|\overline{-+}\rangle+\Psi_{--}\,|\overline{--}\rangle\) and using Elitzur's theorem. Since the POVM operators \(E_{s_{p_{0}}(t)}\) at different times and different plaquettes are all complete and commute with each other, we may view the above probability as a joint probability of syndrome outcomes at different spacetime points, and it is conditioned on the error configuration. Notice that the syndromes at different times are correlated by the physical spin \(\sigma_{\epsilon_{0}}\). That is because the imperfect measurement at each time step alters the instant quantum state, which affects the syndrome probability at the next time step. We can also construct a joint probability for both syndrome outcomes and error configuration. Noticing that the probability of a given error configuration is
\[P(\{c_{0}^{*}(t)\})=\prod_{e_{0},t}q^{\frac{1+\epsilon_{0}(t)}{2}}(1-q)^{\frac {1-\eta_{e_{0}}(t)}{2}}=\prod_{e_{0},t}\frac{\exp(K\eta_{e_{0}}(t))}{2\cosh K},\] (S22)
we have
\[\begin{split}& P(\{s_{p_{0}}(t)\},\{c_{0}^{*}(t)\})=P(\{s_{p_{0}}( t)\}|\{c_{0}^{*}(t)\})P(\{c_{0}^{*}(t)\})\\ &=\frac{1}{\mathcal{Z}_{\{+\}}}\prod_{e_{0},t}\frac{\exp(K\eta_{ e_{0}}(t))}{2\cosh K}\sum_{\{\sigma_{\sigma_{0}}\}}\mathrm{e}^{\beta_{0}\sum_{p_ {0}}b_{p_{0}}}\prod_{p,t}\frac{\exp\left(\beta b_{p_{0}}s_{p_{0}}(t)\prod_{k \leq t}\prod_{e_{0}\in\partial p_{0}}\eta_{e_{0}}(k)\right)}{2\cosh\beta}\\ &=\frac{1}{\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K)^{2NT}} \sum_{\{\sigma_{\epsilon_{0}}\}}\exp\left[K\sum_{e_{0},t}\eta_{e_{0}}(t)+\sum _{p_{0}}b_{p_{0}}\left(\beta_{0}+\beta\sum_{t}s_{p_{0}}(t)\prod_{k\leq t}\prod_ {e_{0}\in\partial p_{0}}\eta_{e_{0}}(k)\right)\right]\\ &\equiv P(\{s_{p_{0}}(t)\},\{\eta_{e_{0}}(t)\}).\end{split}\] (S23)
Here \(K=-\frac{1}{2}\ln\frac{q}{1-q}\) and \(\eta_{e_{0}}(t)=\pm 1\) signs the presence of Pauli error, which means \(\eta_{e_{0}}(t)=-1\) if the error configuration \(\{c_{0}^{*}(t)\}\) includes edge \(e_{0}\) at time \(t\) and \(\eta_{e_{0}}(t)=+1\) otherwise. Besides the boundary configuration of an error chain \(c_{0}^{*}(t)\) can also be represented by \(\eta_{e_{0}}(t)\), that is \(\mu_{p_{0}}(t)=\prod_{e_{0}\in\partial p_{0}}\eta_{e_{0}}(t)\).
Then we shall discuss in detail how could we decode with the syndrome outcomes \(\{s_{p_{0}}(t)\}\). We mainly follow the method used in Ref. [5]. It is convenient to consider the situation that our logical information is stored forever. We extend the initial time \(t=1\) to \(-\infty\) and the final time \(t=T\) to \(+\infty\), which means that the syndrome measurement procedure is performed forever without beginning or end. The error correction procedure is represented as a 3 dimensional lattice in Fig. S4. The vertical plaquettes represent physical qubits at different times and the horizontal plaquettes mark syndrome outcomes. The Pauli \(X\) errors are associated with vertical plaquettes (horizontal dashed lines), and the measurement errors are associated with horizontal plaquettes (vertical dashed lines). A given syndrome forms a chain in 3-\(d\) spacetime, and we denote the syndrome chain as \(c_{S}^{*}\). Note that \(c_{S}^{*}\) is a set of spacetime plaquettes, and we view it as a chain on the dual lattice (dashed lines in Fig. S4). We can also view it as a \(\mathbb{Z}_{2}\) valued vector with all the plaquettes as its basis. The task of the decoder is to find out both measurement errors and Pauli errors with respect to the information of syndrome \(c_{S}^{*}\). Denote the error chain of both measurement and Pauli errors as \(c_{E}^{*}\), it obvious that \(c_{E}^{*}\) should have the same boundary as \(c_{S}^{*}\), \(\partial^{*}c_{S}^{*}=\partial^{*}c_{E}^{*}\). Here \(\partial^{*}c_{S}^{*}\) is the set of cubics where the boundary points of \(c_{S}^{*}\) lie in. Suppose the error chain decided by the decoder is \(c_{E^{\prime}}^{*}\), when \(c_{E^{\prime}}^{*}\) and \(c_{E}^{*}\) are homologically equivalent, which means \(c_{E^{\prime}}^{*}+c_{E}^{*}\) forms contractible loop (here the plus is defined mod \(\mathbb{Z}_{2}\)), it should make no difference when we finally apply correction operator at \(t=+\infty\), hence the error correction will be successful. But if \(c_{E^{\prime}}^{*}+c_{E}^{*}\) forms contractible loops, then the corresponding correction operator will be containing logical \(X\) operator which has a nontrivial influence on logical information. In that case, the error correction will fail. So the task of the optimal decoder, named as maximum likelihood decoder, is to identify the equivalent class of error chains with the largest probability. Here the equivalent class of an error chain \([c_{E}^{*}]\) is defined as the set of all error chains
that are homologically equivalent to \(c_{E}^{*}\).
The probability of error chain class could be derived from Eq. (S23). Notice that measurement error configuration can be inferred from the syndrome at that moment and error configuration in the past
\[\eta_{p_{0}}(t)=s_{p_{0}}(t)\prod_{k\leq t}\mu_{p_{0}}(k)=s_{p_{0}}(t)\prod_{k \leq t}\prod_{e_{0}\in\partial p_{0}}\eta_{e_{0}}(k).\] (S24)
Substitute the above equation into Eq. (S23), we obtain the joint probability of both measurement and Pauli error
\[\begin{split}& P(\{\eta_{p_{0}}(t)\},\{\eta_{e_{0}}(t)\})=\frac{1} {\mathcal{Z}_{\{+\}}\prod_{t}[(2\cosh\beta)^{N}(2\cosh K)^{2N}]}\\ &\times\sum_{\{\sigma_{e_{0}}\}}\exp\left[K\sum_{e,t}\eta_{e_{0}} (t)+\sum_{p_{0}}b_{p_{0}}\left(\beta_{0}+\beta\sum_{t}\eta_{p_{0}}(t)\right) \right].\end{split}\] (S25)
In this expression, we see that measurement errors at different time steps are correlated. Generally, the presence of measurement error increases the probability of measurement error at later times. We reinterpret the above equation on the 3-d lattice. Given an error chain in 3-d spacetime \(c_{E}^{*}\), we still mark the location of error as \(\eta_{p}=-1\). Specifically, measurement error at a spacelike plaquette is denoted as \(\eta_{p_{s}}=-1\). Pauli error at a timelike plaquette is denoted as \(\eta_{p_{t}}=-1\). Then the probability of the error chain is expressed as
\[\begin{split}& P(c_{E}^{*})=P(\{\eta_{p}\})=\frac{1}{\mathcal{Z}_{\{+\} }\prod_{t}[(2\cosh\beta)^{N}(2\cosh K)^{2N}]}\\ &\times\sum_{\{\sigma_{e_{0}}\}}\exp\left[K\sum_{p_{t}}\eta_{p_{t }}+\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{p_{s}} \right].\end{split}\] (S26)
Note that \(\sigma_{e_{0}}\)'s and associated \(b_{p_{0}}\)'s are viewed as lying on a 2-d lattice distinct from the 3-d spacetime representing error correction history. Each \(p_{s}\) is associated with an \(b_{\Pi(p_{s})}\) on the 2-d physical lattice such that \(p_{s}\) and \(\Pi(p_{s})\) correspond to the same space point as in Fig. S4.
The probability of error class \([c_{E}]\) is calculated as a summation of probabilities of error chains that belong to the same equivalent class:
\[P([c_{E}])=\sum_{c\in[c_{E}]}P(c)\] (S27)
This summation can be done by introducing virtual spin \(\tau_{p}=\pm 1\) associated with each spacetime plaquette and relating it to each edge through \(\mathbb{Z}_{2}\) gauge interaction [5], as in Fig. S5. The summation of homologically equivalent error chains yields the same result as the summation of configurations of virtual spins \(\tau_{e}\) defined on edges up to a factor counting the number of 1-form symmetry operations since the sign changing of \(\tau_{e}\) corresponds to a deformation of error chain (see Fig. S6) and thus relates homologically equivalent error chains.:
\[\begin{split}& P([c_{E}])=\frac{1}{\mathcal{N}_{1}}\sum_{\{\tau_{p} \}}P(\{\eta_{p}\prod_{e\in\partial p}\tau_{e}\})\\ &=\frac{1}{\mathcal{N}_{1}\mathcal{Z}_{\{+\}}\prod_{t}[(2\cosh \beta)^{N}(2\cosh K)^{2N}]}\\ &\times\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}\exp\left[\beta_{0} \sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}\left(b_{\Pi(p_{s})}\eta_{p_{s}}\prod_ {e\in\partial p_{s}}\tau_{e}\right)+K\sum_{p_{t}}\left(\eta_{p_{t}}\prod_{e\in \partial p_{t}}\tau_{e}\right)\right].\end{split}\] (S28)
Here \(\mathcal{N}_{1}\) denotes the number of 1-form symmetry operations and it diverges for infinite time \(T\). We now arrive at an SM model including both virtual spins \(\{\tau_{p}\}\) on the 3-d spacetime lattice and physical spins \(\{\sigma_{e_{0}}\}\) on a distinct 2-d space lattice. The virtual spins in this SM model describe the fluctuation of error chains in the same class The physical spins are coupled to virtual spins on time-like plaquettes at all times since the probability of measurement errors at a certain time depends on the entire syndrome measurement history. The optimal decoding algorithm should select the error class \([c_{E}]\) with the largest \(P([c_{E}])\). Moreover, if the interaction configuration \(\{\eta_{e}\}\) is viewed as disordered with a correlated probability distribution \(P(\{\eta_{e}\})\) defined in equation S26, then the deconfinement-confinement phase transition point of virtual spins \(\{\tau_{p}\}\) in this disordered SM model should signify the error threshold of the error correction protocol as discussed in Ref. [5]. In summary, the quenched disordered SM model describing the error threshold is
\[\begin{split}&\mathcal{Z}(\{\eta_{p}\})=\sum_{\{\sigma_{e_{0}}\},\{ \tau_{e}\}}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_ {s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right],\\ & U_{p}=\prod_{e\in\partial p}\tau_{e},\quad b_{p_{0}}=\prod_{e_ {0}\in\partial p_{0}}\sigma_{e_{0}},\\ & P(\{\eta_{e}\})=\frac{\sum_{\{\sigma_{e_{0}}\}}\exp\left[\beta _{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{p_{s}}+K\sum_{ p_{t}}\eta_{p_{t}}\right]}{\mathcal{Z}_{\{+\}}\prod_{t}[(2\cosh\beta)^{N}(2 \cosh K)^{2N}]}.\end{split}\] (S29)
Note that in our SM model, all the spacelike plaquettes are coupled with the \(\sigma_{e_{0}}\)'s, causing a highly non-local corre
lation in the timelike direction. This is a consequence of the imperfect measurement in the initial state preparation. When the initial state is well prepared \(\beta_{0}\rightarrow+\infty\), all the \(b_{p_{0}}\)'s will be set to \(+1\) in Eq. (S29) and we arrive at a random plaquette gauge model:
\[\begin{split}&\mathcal{Z}(\{\eta_{p}\})=\sum_{\{\sigma_{\sigma_{0}} \},\{\tau_{e}\}}\exp\left[\beta\sum_{p_{s}}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}} \eta_{p_{t}}U_{p_{t}}\right],\\ & U_{p}=\prod_{e\in\partial p}\tau_{e},\\ & P(\{\eta_{e}\})=\frac{\exp\left[K\sum_{p_{t}}\eta_{p_{t}}+\beta \sum_{p_{s}}\eta_{p_{s}}\right]}{\prod_{t}(2\cosh\beta)^{N}(2\cosh K)^{2N}}, \end{split}\] (S30)
which is derived in Ref. [5] to describe the error threshold under probabilistic measurement error.
From Eq. (S29) we can sum the \(\sigma_{e}\)'s out and arrive at the SM model containing pure \(\tau_{e}\) spins
\[\begin{split}&\mathcal{Z}(\{\eta_{p}\})=\sum_{\{\tau_{e}\}}\left[ \prod_{p_{0}}\cosh(\beta_{0}+\beta\sum_{t}\eta_{p_{s}}U_{p_{s}})+\prod_{p_{0}} \sin(\beta_{0}+\beta\sum_{t}\eta_{p_{s}}U_{p_{s}})\right]\exp\left[K\sum_{p_{t} }\eta_{p_{t}}\right],\\ & P(\{\eta_{e}\})=\frac{1}{[(\cosh\beta_{0})^{N}+(\sinh\beta_{0} )^{N}]\prod_{t}[(2\cosh\beta)^{N}(2\cosh K)^{2N}]}\\ &\times\left[\prod_{p_{0}}\cosh(\beta_{0}+\beta\sum_{t}\eta_{p_{s }})+\prod_{p_{0}}\sin(\beta_{0}+\beta\sum_{t}\eta_{p_{s}})\right]\exp\left[K \sum_{p_{t}}\eta_{p_{t}}\right].\end{split}\] (S31)
## SIII Local symmetry on Nishimori line
The SM model \(\mathcal{Z}(\{\eta_{p}\})\) has a local symmetry when redefining the interaction background \(\{\eta_{p}\}\)
\[\eta_{p}\rightarrow\eta_{p}\prod_{e\in\partial p}\nu_{e},\quad\tau_{e} \rightarrow\tau_{e}\nu_{e},\quad\nu_{e}=\pm 1.\] (S32)
as in Fig. S6. More precisely, the partition function is invariant under redefinition of background interaction configuration \(\{\eta_{p}\}\):
\[\mathcal{Z}(\{\eta_{p}\prod_{e\in\partial p}\nu_{e}\})=\mathcal{Z}(\{\eta_{p}\})\] (S33)
since the influence is absorbed in the summation of spin configuration by redefining spin variable \(\tau_{e}^{\prime}=\tau_{e}\nu_{e}\). Note that \(\{\eta_{p}\}\) correspond to a representative error chain in error class \([c_{E}]\), and the local invariance Eq. (S32) tells us that \(\mathcal{Z}(\{\eta_{p}\})\) does not depend on the choice of representative error chain. It only depends on the homological class of error chain.
We will use the local invariance S32 on Nishimori line to derive several results about the phase structure [6; 7]. Under the presence of disorder, to compute an expectation value, we should first take the ensemble average for a particular interaction configuration (denoted by \(\langle\cdot\rangle_{\{\eta_{p}\}}\)) and then take the disorder average on different configurations (denoted
as \([\cdot]\)). For a given observable \(O(\{\eta_{p}\},\{\sigma_{e_{0}}\},\{\tau_{e}\})\) depending on the \(\sigma\) spins, \(\tau\) spins and interaction configuration, these two kinds of averages are defined as:
\[\begin{split}&\left\langle O\right\rangle_{\{\eta_{p}\}}=\frac{1}{ \mathcal{Z}(\{\eta_{p}\})}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}O\exp\left[ \beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{\pi}}b_{\Pi(p_{\pi})}\eta_{p_{\pi} }U_{p_{\pi}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right]\\ &\left[\left\langle O\right\rangle_{\{\eta_{p}\}}\right]=\sum_{ \{\eta_{p}\}}P(\{\eta_{p}\})\left\langle O\right\rangle_{\{\eta_{p}\}}\end{split}\] (S34)
If an observable \(O(\{\eta_{p}\},\{\sigma_{e_{0}}\},\{\tau_{e}\})\) is also invariant under local symmetry transformation in Eq. (S32):
\[O(\{\eta_{p}\prod_{e\in\partial p}\nu_{e}\},\{\sigma_{e_{0}}\},\{\tau_{e}\nu_ {e}\})=O(\{\eta_{p}\},\{\sigma_{e_{0}}\},\{\tau_{e}\})\] (S35)
then the expectation value can be evaluated as:
\[\begin{split}&\left[\left\langle O\right\rangle\right]=\sum_{\{ \eta_{p}\}}\frac{P(\{\eta_{p}\})}{\mathcal{Z}(\{\eta_{p}\})}\sum_{\{\sigma_{e_{ 0}}\},\{\tau_{e}\}}O(\{\eta_{p}\},\{\sigma_{e_{0}}\},\{\tau_{e}\})\\ &\times\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s} }b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right] \\ &=\sum_{\{\eta_{p}\}}\frac{P(\{\eta_{p}\prod_{e\in\partial p}\nu _{e}\})}{\mathcal{Z}(\{\eta_{p}\}\prod_{e\in\partial p}\nu_{e}))}\sum_{\{\sigma _{e_{0}}\},\{\tau_{e}\}}O(\{\eta_{p}\prod_{e\in\partial p}\nu_{e}\},\{\sigma_{ e_{0}}\},\{\tau_{e}\})\\ &\times\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s} }b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right] \\ &=\sum_{\{\eta_{p}\}}\frac{P(\{\eta_{p}\prod_{e\in\partial p}\nu _{e}\})}{\mathcal{Z}(\{\eta_{p}\})}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}O(\{ \eta_{p}\},\{\sigma_{e_{0}}\},\{\tau_{e}\})\\ &\times\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s} }b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right] \\ &=\frac{1}{2^{3NT}}\sum_{\{\nu_{e}\}}\sum_{\{\eta_{p}\}}\frac{P( \{\eta_{p}\prod_{e\in\partial p}\nu_{e}\})}{\mathcal{Z}(\{\eta_{p}\})}\sum_{ \{\sigma_{e_{0}}\},\{\tau_{e}\}}O(\{\eta_{p}\},\{\sigma_{e_{0}}\},\{\tau_{e}\}) \\ &\times\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s} }b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right] \\ &=\frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K )^{2NT}}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\},\{\eta_{p}\}}O(\{\eta_{p}\},\{ \sigma_{e_{0}}\},\{\tau_{e}\})\\ &\times\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s} }b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right] \\ \end{split}\] (S36)
In the third equality, we make use of the local symmetry. In the fourth equality we averaged over different \(\{\nu_{e}\}\) since the final result \([\left\langle O\right\rangle]\) is independent of \(\{\nu_{e}\}\). The fifth equality is obtained by noticing that:
\[\sum_{\{\nu_{e}\}}P(\{\eta_{p}\prod_{e\in\partial p}\nu_{e}\})=\frac{\mathcal{ Z}(\{\eta_{p}\})}{\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K)^{2NT}}\] (S37)
First we consider \(O=W_{A_{0}}=\prod_{p_{0}\in A_{0}}b_{p_{0}}\) which is the Wilson loop on 2-d physical lattice. Clearly, it satisfies Eq.
(S35). So we calculate its expectation value as:
\[\begin{split}&[\langle W_{A_{0}}\rangle]=\frac{1}{2^{3NT}\mathcal{Z}_{ \{+\}}(2\cosh\beta)^{NT}(2\cosh K)^{2NT}}\\ &\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\},\{\eta_{p}\}}W_{A_{0}} \exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{ p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right]\\ &=\frac{1}{\mathcal{Z}_{\{+\}}}\sum_{\{\sigma_{e_{0}}\}}W_{A_{0}} \exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}\right]=\langle W_{A_{0}}\rangle_{\{+ \}}\\ &=\frac{(\tanh\beta_{0})^{|A_{0}|}+(\tanh\beta_{0})^{N-|A_{0}|}}{ 1+(\tanh\beta_{0})^{N}}\end{split}\] (S38)
Here \(\langle\cdot\rangle_{\{+\}}\) denotes the expectation value for the pure 2-d \(\mathbb{Z}_{2}\) gauge theory of physical spins \(\sigma_{e_{0}}\), \(\mathcal{Z}_{\{+\}}=\sum_{\{\sigma_{e_{0}}\}}\exp\left[\beta_{0}\sum_{p_{0}} b_{p_{0}}\right]\). Actually, the above derivation works for any observable contains purely \(\sigma_{e_{0}}\) spins. So we conclude that the \(\sigma_{e_{0}}\) spins in SM model shown in Eq. (S29) behaves exactly the same as pure 2-d \(\mathbb{Z}_{2}\) gauge theory. Specifically for the Wilson loop \(W_{A_{0}}\), we find it obeys area law for any finite \(\beta_{0}\), same as 2-d \(\mathbb{Z}_{2}\) lattice gauge theory, so the \(\sigma_{e_{0}}\) spins always stays in disordered phase.
We may also compute the internal energy, which is also invariant under the transformation shown in Eq. (S32). Given that
\[\begin{split}&[\langle\eta_{p_{t}}b_{\pi(p_{s})}U_{p_{s}}\rangle]= \frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K)^{2NT}}\\ &\times\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\},\{\eta_{p_{s}}\}} \eta_{p_{s}}b_{\pi(p_{s})}U_{p_{s}}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+ \beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}} U_{p_{t}}\right]\\ &=\frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K )^{2NT}}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}(2\sinh\beta)(2\cosh\beta)^{NT- 1}(2\cosh K)^{2NT}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}\right]\\ &=\tanh\beta,\end{split}\] (S39)
and similarly
\[\begin{split}&[\langle\eta_{p_{t}}U_{p_{t}}\rangle]=\frac{1}{2^{3NT} \mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K)^{2NT}}\\ &\times\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\},\{\eta_{p}\}}\eta_{ p_{t}}U_{p_{t}}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})} \eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right]\\ &=\frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K )^{2NT}}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}(2\sinh K)(2\cosh\beta)^{NT}(2 \cosh K)^{2NT-1}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}\right]\\ &=\tanh K,\end{split}\] (S40)
we have the expression for internal energy
\[\begin{split}&\mathcal{U}=[\langle-\beta_{0}\sum_{p_{0}}b_{p_{0}}- \beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}-K\sum_{p_{t}}\eta_{p_{t}}U _{p_{t}}\rangle]\\ &=-N\beta_{0}\frac{\tanh\beta_{0}+(\tanh\beta_{0})^{N-1}}{1+( \tanh\beta_{0})^{N}}-NT\beta\tanh\beta-2NTK\tanh K.\end{split}\] (S41)
Then consider the Wilson loop for \(\tau_{e}\) spins \(W_{A}=\prod_{p\in A}U_{p}=\prod_{c\in\partial A}\tau_{e}\). Note that it is not invariant under Eq.
(S32). By a similar calculation as Eq. (S36) we have:
\[\begin{split}&[\langle W_{A}\rangle]=\sum_{\{\eta_{p}\}}\frac{P(\{ \eta_{p}\})}{\mathcal{Z}(\{\eta_{p}\})}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}W_{ A}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})}\eta_{p_{s}} U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right]\\ &=\sum_{\{\eta_{p}\}}\frac{P(\{\eta_{p}\prod_{e\in\partial p_{s}} \nu_{e}\})}{\mathcal{Z}(\{\eta_{p}\})}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}W_{ A}\prod_{e\in\partial A}\nu_{e}\\ &\times\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}} b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right]\\ &=\frac{1}{2^{3NT}}\sum_{\{\nu_{s}\}}\sum_{\{\eta_{p}\}}\frac{P( \{\eta_{p}\prod_{e\in\partial p_{s}}\nu_{e}\})\prod_{e\in\partial A}\nu_{e}}{ \mathcal{Z}(\{\eta_{p}\})}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}W_{A}\\ &\times\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}} b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right]\\ &=\frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K )^{2NT}}\sum_{\{\eta_{p}\}}\mathcal{Z}(\{\eta_{p}\})\left\langle W_{A}\right\rangle ^{2}\\ &=[\left\langle W_{A}\right\rangle^{2}]\end{split}\] (S42)
The result \([\left\langle W_{A}\right\rangle]=[\left\langle W_{A}\right\rangle^{2}]\) signifies the absence of gauge glass phase (\([\left\langle W_{A}\right\rangle]=0\), \([\left\langle W_{A}\right\rangle^{2}]>0\)) on the Nishimori line [7].
In addition, one may find that the modified Wilson loop \(W_{A}\prod_{p\in A}\eta_{p}\) is invariant under local symmetry. It can be calculated as
\[\begin{split}&[\langle W_{A}\prod_{p\in A}\eta_{p}\rangle]= \frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K)^{2NT}}\\ &\times\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\},\{\eta_{p}\}}W_{A} \prod_{p\in A}\eta_{p}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}} b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}U_{p_{t}}\right]\\ &=\frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K )^{2NT}}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}\exp\left[\beta_{0}\sum_{p_{0}}b_ {p_{0}}\right]\\ &\times\left(\prod_{p_{s}\in A_{s}}\sum_{\eta_{p_{s}}}\eta_{p_{s} }U_{p_{s}}\exp\left[\beta b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_{s}}\right]\prod_{p_{s} \notin A_{s}}\sum_{\eta_{p_{s}}}\exp\left[\beta b_{\Pi(p_{s})}\eta_{p_{s}}U_{p_ {s}}\right]\right)\\ &\times\left(\prod_{p_{t}\in A_{t}}\sum_{\eta_{p_{t}}}\eta_{p_{t} }U_{p_{t}}\exp\left[K\eta_{p_{t}}U_{p_{t}}\right]\prod_{p_{t}\notin A_{t}}\sum _{\eta_{p_{t}}}\exp\left[K\eta_{p_{t}}U_{p_{t}}\right]\right)\\ &=\frac{1}{2^{3NT}\mathcal{Z}_{\{+\}}(2\cosh\beta)^{NT}(2\cosh K )^{2NT}}\sum_{\{\sigma_{e_{0}}\},\{\tau_{e}\}}\exp\left[\beta_{0}\sum_{p_{0}}b_ {p_{0}}\right]\\ &\times\left(\prod_{p_{s}\in A_{s}}b_{\pi(p_{s})}\right)(2\sinh \beta)^{|A_{s}|}(2\cosh\beta)^{NT-|A_{s}|}(2\sinh K)^{|A_{t}|}(2\cosh K)^{2NT-|A _{t}|}\\ &\left\langle W_{\Pi(A)}\right\rangle_{\{+\}}\left(\tanh\beta \right)^{|A_{s}|}(\tanh K)^{|A_{t}|}\end{split}\] (S43)
Here \(A_{s}\) and \(A_{t}\) denote the sets of spacelike plaquettes and timelike plaquettes contained in \(A\) respectively. \(\Pi\) is the projection onto physical lattice, see Fig. S9. This expression reveals that \(\tau_{e}\)'s and \(\sigma_{e}\)'s are correlated and their Wilson loops somehow depend on each other. However, the modified Wilson loop \(W_{A}\prod_{p\in A}\eta_{p}\) cannot serve as the correct order parameter for the error threshold.
## SIV Low temperature expansion
The ordered (deconfinement) phase is expected to exist at the exact zero temperature point (on the Nishimori line) \(\beta_{0},\beta,K\rightarrow+\infty\). Near the zero temperature point, we may perform a low-temperature expansion for Wilson loops to show if the ordered phase still persists to some finite temperature. Assume \(e^{-\beta_{0}}\), \(e^{-\beta}\), \(e^{-K}\) are of the same order, we expand \([\langle W_{A}\rangle]\) up to the order \(e^{-4\beta}\) following the method discussed in Ref. [3].
We first perform the expansion for the disorder probability \(P(\{\eta_{p}\})\). Through the expression of \(P(\{\eta_{p}\})\)
\[P(\{\eta_{e}\})=\frac{2^{N+1}\sum_{\{b_{p_{0}}\}}\frac{1+\prod_{p_{0}}b_{p_{0} }}{2}\exp\left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})} \eta_{p_{s}}+K\sum_{p_{t}}\eta_{p_{t}}\right]}{\mathcal{Z}_{\{+\}}\prod_{t}[(2 \cosh\beta)^{N}(2\cosh K)^{2N}]}.\] (S44)
We notice that the zeroth order is contributed by the configuration that all \(b_{p_{0}}\) and \(\eta_{p}\) are equal to \(+1\). The order \(e^{-2\beta}\) is given by flipping one of the \(\eta_{p}\)s to \(-1\). The order \(e^{-4\beta}\) is given by flipping any two of the \(\eta_{p}\)s or flipping two of \(b_{p_{0}}\)s. Note that the parity of \(\{b_{p_{0}}\}\) configuration must be even. Then we find that the numerator is expanded as
\[\begin{split}& 2^{N+1}E^{N\beta_{0}+NT\beta+2NTK}\left[\delta_{\{+ \}}+e^{-2K}\sum_{p_{t}}\delta_{\{p_{t}\}}+e^{-2\beta}\sum_{p_{s}}\delta_{\{p_{ s}\}}\right.\\ &\left.\left[+e^{-4K}\sum_{(p_{t},p_{t}^{\prime})}\delta_{\{(p_{ t},p_{t}^{\prime})\}}+e^{-4\beta}\sum_{(p_{s},p_{s}^{\prime})}\delta_{\{(p_{s},p_{t}^{ \prime})\}}+e^{-2\beta-2K}\sum_{(p_{s},p_{t})}\delta_{\{(p_{s},p_{t})\}}+e^{-4 \beta_{0}}\sum_{(p_{0},p_{t}^{\prime})}\delta_{\{\Pi^{-1}(p_{0},p_{t}^{\prime} )\}}\right].\end{split}\] (S45)
Here \((p,p^{\prime})\) denotes a pair of different plaquettes. For example \((p_{s},p_{s}^{\prime})\) denotes a pair of spacelike plaquettes and the summation \(\sum_{(p_{s},p_{s}^{\prime})}\) is taken on all such pairs. We have used a simplified notation for the Kronecker delta symbol. \(\delta_{+}\) means that all the \(\eta_{p}\)s are fixed to be \(+1\), \(\delta_{+}=\prod_{p}\delta_{\eta_{p},+}\). \(\delta_{p}\) denotes that \(\eta_{p}\) one the given plaquette \(p\) is flipped to \(-1\), \(\delta_{p}=\delta_{\eta_{p},-}\prod_{p^{\prime}\neq p}\delta_{\eta_{p^{\prime} },+}\), and similar for \(\delta_{(p,p^{\prime})}\), \(\delta_{(p,p^{\prime})}=\delta_{\eta_{p},-}\delta_{\eta_{p^{\prime}},-}\prod_ {p^{\prime\prime}\neq p,p^{\prime}}\delta_{\eta_{p^{\prime\prime}},+}\). Finally \(\delta_{\Pi^{-1}(p_{0},p_{0}^{\prime})}\) means that all the spacelike plaquettes whose projection on the physical lattice is \(p_{0}\) or \(p_{0}\) are set to be \(-1\), \(\delta_{\Pi^{-1}(p_{0},p_{0}^{\prime})}=\prod_{p_{s}|\Pi(p_{s})=p_{0}}\delta_ {\eta_{p_{s},-}}-\prod_{p_{s}^{\prime}|\Pi(p_{s}^{\prime})=p_{0}^{\prime}} \delta_{\eta_{p_{s},-}}+\prod_{p_{s}^{\prime\prime}|\Pi(p_{s}^{\prime})\neq p_{ 0},p_{0}^{\prime}}\delta_{\eta_{p_{s},+}}\). The number of integer times \(T\) should be sent to \(+\infty\) at the end of the calculation. Combined with the denominator, we arrive at a perturbative expression for disorder probability
\[\begin{split}& P(\{\eta_{p}\})\simeq\left[1-\left(1-2NTe^{-2K}-NTe^{-2 \beta}\right)\left(2NTe^{-2K}+NTe^{-2\beta}\right)\right.\\ &-\left({2NT\choose 2}e^{-4K}+{NT\choose 2}e^{-4\beta}+2(NT)^{2}e^{ -2K-2\beta}+{N\choose 2}e^{-4\beta_{0}}\right)\right]\delta_{\{+\}}\\ &+\left(1-2NTe^{-2K}-NTe^{-2\beta}\right)\left[e^{-2K}\sum_{p_{t}} \delta_{\{p_{t}\}}+e^{-2\beta}\sum_{p_{s}}\delta_{\{p_{s}\}}\right]\\ &+e^{-4K}\sum_{(p_{t},p_{t}^{\prime})}\delta_{\{(p_{t},p_{t}^{ \prime})\}}+e^{-4\beta}\sum_{(p_{s},p_{s}^{\prime})}\delta_{\{(p_{s},p_{s}^{ \prime})\}}+e^{-2\beta-2K}\sum_{(p_{s},p_{t})}\delta_{\{(p_{s},p_{t})\}}+e^{-4 \beta_{0}}\sum_{(p_{0},p_{t}^{\prime})}\delta_{\{\Pi^{-1}(p_{0},p_{0}^{\prime} )\}}.\end{split}\] (S46)
Then we analyze the ensemble-averaged value for the interaction configurations that appeared in the above expression. Keeping in mind the fact that the interaction term \(U_{p}\) and Wilson loop \(W_{A}\) are both invariant under 1-form symmetry operations which flip the spins on a 2-d surface on the dual lattice as in Fig. S7, when performing the spin configuration summation for the lowest few orders we extract a factor of 1-form symmetry operation numbers \(\mathcal{N}_{1}\) and evaluate the coefficient for each order on an equivalent class of spin configurations module 1-form symmetry.
First, consider the case all \(\eta_{p}=+1\), and we want to compute
\[\left\langle W_{A}\right\rangle_{\{+\}}=\frac{1}{\mathcal{Z}(\{+\})}\sum_{\{ \sigma_{a_{0}}\},\{\tau_{e}\}}\left(\prod_{e\in\partial A}\tau_{e}\right)\exp \left[\beta_{0}\sum_{p_{0}}b_{p_{0}}+\beta\sum_{p_{s}}b_{\Pi(p_{s})}U_{p_{s}}+ K\sum_{p_{t}}U_{p_{t}}\right].\] (S47)
The \(0\)th order is contributed by the ground states, and a representative spin configuration is that all \(\tau_{e}=+1\) and \(b_{p_{0}}=+1\). It leads to the term:
\[\mathcal{N}_{1}2^{N+1}e^{N\beta_{0}+NT\beta+2NTK}\] (S48)
in both the numerator and denominator. Now if we flip one of the \(\tau_{e}\) spins to \(-1\), it increases the energy of four neighbor plaquettes and results in a term of order \(e^{-8\beta}\), which is not included in our expansion. However, if we flip two \(b_{p_{0}}\)s, this only yields a term of order \(e^{-4\beta_{0}}\) in the infinite boundary condition case. When two \(b_{p_{0}}\)s are flipped, it leads to \(-1\) interactions on two strings of spacelike plaquettes corresponding to the position of \(b_{p_{0}}\), as shown in S8, and it should cost infinite energy. However, the energy cost can be eliminated by the local symmetry transformation in Eq. (S32). The two strings can be viewed as merged together by setting the \(\tau_{e}\) spins on the surface connecting them to \(-1\). Deformation of the surface is nothing but a \(1\)-form symmetry operation and has already been mod out. Combining both the \(b_{p_{0}}\) configuration and the \(\tau_{e}\) spin configuration, the total energy cost is just \(4\beta_{0}\). Now consider the value of the Wilson loop. During the process of merging two strings together, if they cross the Wilson loop for odd times, then exactly one of the spins on the boundary of region \(A\) will be flipped to \(-1\). So the value of the Wilson loop for that configuration is \(-1\). But if the Wilson loop is crossed for even times, it still gets \(+1\). The total contribution of order \(e^{-4\beta_{0}}\) is obtained by summing all the choices for flipping two \(b_{p_{0}}\)s, which contains \(\binom{N}{2}\) terms. By counting the number of odd time crosses, we obtain the expression for an additional term in the numerator
\[\left(\binom{N}{2}-2|\Pi(A)|(N-|\Pi(A)|)\right)e^{-4\beta_{0}}.\] (S49)
Here we define \(\Pi(A)\) as the projection of region \(A\) on the spacelike \(2\)-d lattice module \(\mathbb{Z}_{2}\). The timelike plaquettes are dropped under projection, and an even number of spacelike plaquettes on the same space position also leads to zero, as in Fig. S9. For denominator, the new term is just \(\binom{N}{2}e^{-4\beta_{0}}\). So the expansion of \(\left\langle W_{A}\right\rangle_{\{+\}}\) is evaluated as
\[\langle W_{A}\rangle_{\{+\}}\simeq\frac{{\cal N}_{1}2^{N+1}e^{N \beta_{0}+NT\beta+2NTK}\left[1+\left(\binom{N}{2}-2|\Pi(A)|(N-|\Pi(A)|)\right)e^{ -4\beta_{0}}\right]}{{\cal N}_{1}2^{N+1}e^{N\beta_{0}+NT\beta+2NTK}\left[1+ \binom{N}{2}e^{-4\beta_{0}}\right]}\] (S50) \[\simeq 1-2|\Pi(A)|(N-|\Pi(A)|)e^{-4\beta_{0}}.\]
From this expression, we can roughly say that \(\langle W_{A}\rangle_{\{+\}}\) decays with respect to the spacelike area.
Now we evaluated the Wilson loop configurations that only one \(\eta_{p}\) is flipped. Suppose a timelike plaquette \(p_{t}\) is flipped. Now the ground state energy is \(2K\) higher than the previous all \(\eta_{p}=+1\) case and the ground state spin configuration remains the same, which is all \(\tau_{e}=+1\). The lowest excitation is caused by flipping one of the spins at the boundary of \(p_{t}\), which has the extra energy \(4K\). It eventually results in terms of order \(e^{-4K}\). Notice that the probability of \(\eta_{p_{t}}=-1\) is already of the order \(e^{-2K}\), so we only need to concern about the ground state in our expansion, and it leads to
\[\langle W_{A}\rangle_{\{p_{t}\}}\simeq 1.\] (S51)
Similarly, for the spacelike plaquette flipping, we also have
\[\langle W_{A}\rangle_{\{p_{s}\}}\simeq 1.\] (S52)
Then we consider the configuration where two \(\eta_{p}\)s are flipped. Notice that only the ground state contribution survives in our expansion since the probability is already of the order \(e^{-4\beta}\). We the two flipped \(\eta_{p}\)s do not contact each other, the representative spin configuration for ground state is still all \(\tau_{e}=+1\) and only those two plaquettes where \(\eta_{p}=-1\) have energy costs. But when the two flipped plaquettes are nearest neighbors, there will be two degenerate (ignoring the energetic difference between \(\beta\) and \(K\)) configurations that both have energy costs on two plaquettes. They are distinguished by whether the contacting edge of the two plaquettes is flipped to \(-1\). If the contacting edges lie on the Wilson loop, then summing the two degenerate configurations leads to \(W_{A}=0\). So assume the two flipped plaquettes are \(p\) and \(p^{\prime}\), then the Wilson loop takes the value
\[\langle W_{A}\rangle_{\{(p,p^{\prime})\}}\simeq 1-|\partial A\cap\partial p \cap\partial p^{\prime}|.\] (S53)
If the two plaquettes and the Wilson loop intersect at one edge, \(|\partial A\cap\partial p\cap\partial p^{\prime}|\) takes \(1\). Otherwise, it takes \(0\). Consider the disorder summation of these configurations. Each edge connects to \(4\) plaquettes, yielding \(6\) pairs of nearest-neighbor plaquettes. By noticing that each timelike edge is associated with \(6\) pairs of timelike plaquettes while each spacelike edge is associated with a pair of timelike plaquettes, a pair of spacelike plaquettes and \(4\) combinations of spacelike and timelike plaquettes, we find that
\[e^{-4\beta}\sum_{(p_{s},p^{\prime}_{s})}\langle W_{A}\rangle_{\{( p_{s},p^{\prime}_{s})\}}\simeq e^{-4\beta}\left(\binom{NT}{2}-|\partial A|_{s}\right)\] (S54) \[e^{-4K}\sum_{(p_{t},p^{\prime}_{t})}\langle W_{A}\rangle_{\{(p_{ t},p^{\prime}_{t})\}}\simeq e^{-4K}\left(\binom{2NT}{2}-|\partial A|_{s}-6| \partial A|_{t}\right)\] \[e^{-2\beta-2K}\sum_{(p_{t},p_{s})}\langle W_{A}\rangle_{\{(p_{t},p_{s})\}}\simeq e^{-2\beta-2K}\left(2(NT)^{2}-4|\partial A|_{s}\right).\]
Here \(|\partial A|_{s}\) denotes the number of spacelike edges in \(\partial A\) and \(|\partial A|_{t}\) is the number of timelike edges in \(\partial A\).
Finally we turn to the configuration denoted as \(\{\Pi^{-1}(p_{0},p^{\prime}_{0})\}\), which means all the spacelike interactions \(\eta_{p_{s}}\) along the timelike strings with the same space position as \(p_{0}\) or \(p^{\prime}_{0}\) are flipped. In this case, a similar argument while evaluating Eq. (S50) could be applied. The two timelike strings are merged together by setting the spins in the interval to \(-1\), which gives us the ground state configuration. So the leading order of the Wilson loop is
\[\langle W_{A}\rangle_{\{\Pi^{-1}(p_{0},p^{\prime}_{0})\}}\simeq\pm 1.\] (S55)
which takes \(-1\) when only one of the \(p_{0}\) and \(p^{\prime}_{0}\) stays in \(\Pi(A)\) and takes \(+1\) otherwise. Performing the disorder summation, we find that
\[e^{-4\beta_{0}}\sum_{(p_{0},p^{\prime}_{0})}\langle W_{A}\rangle_{\{\Pi^{-1}(p _{0},p^{\prime}_{0})\}}\simeq e^{-4\beta_{0}}\left(\binom{N}{2}-2|\Pi(A)|(N-| \Pi(A)|)\right).\] (S56)
Putting all these stuff together, we arrive at the low-temperature expansion for the Wilson loop
\[\begin{split}&[\langle W_{A}\rangle]\simeq 1-4|\Pi(A)|(N-|\Pi(A)|)e^{-4 \beta_{0}}-|\partial A|_{s}\left(e^{-4\beta}+e^{-4K}+4e^{-2\beta-2K}\right)-6| \partial A|_{t}e^{-4K}\\ &\simeq\exp\left[-4|\Pi(A)|(N-|\Pi(A)|)e^{-4\beta_{0}}-| \partial A|_{s}\left(e^{-4\beta}+e^{-4K}+4e^{-2\beta-2K}\right)-6|\partial A|_{ t}e^{-4K}\right].\end{split}\] (S57)
Note that the factor \(|\Pi(A)|(N-|\Pi(A)|)\) appears because our space manifold is a closed surface. The result is natural to be symmetric between \(\Pi(A)\) and the 2-d complement of \(\Pi(A)\). We also notice that the number of integer time \(T\) automatically vanishes in these expressions so they already suit the case \(T\to+\infty\).
From Eq. (S57) we can see that once the initial state preparation suffers from imperfect measurement, the scaling behavior of Wilson loops becomes anisotropic for space and time direction. For a Wilson loop \(A\) purely containing spacelike plaquettes, we have
\[\left[\langle W_{A}\rangle\right]\simeq\exp\left[-4|A|(N-|A|)e^{-4\beta_{0}}- |\partial A|\left(e^{-4\beta}+e^{-4K}+4e^{-2\beta-2K}\right)\right].\] (S58)
For large enough \(N\) and large enough Wilson loop satisfying \(|A|<N/2\), the area law decaying term dominates and the perimeter term can be ignored. So we conclude that the spacelike Wilson loop always decays with respect to the area at finite temperature even when the temperature is sufficiently low. Equivalently we can say that once the initial state is prepared under imperfect measurement, the spacelike Wilson loop always obeys area law. However, for a timelike Wilson loop, we will get:
\[\left[\langle W_{A}\rangle\right]\simeq\exp\left[-\left(e^{-4\beta}+e^{-4K}+4 e^{-2\beta-2K}\right)|\partial A|_{s}-6e^{-4K}|\partial A|_{t}\right].\] (S59)
so there will be a finite temperature phase for it to exhibit perimeter law scaling.
As we mentioned in the main text, the subtlety of Eq. (S57) is that we cannot directly take the thermodynamic limit \(N\to\infty\). However, we may infer the thermodynamic result in analogy to the 2-d \(\mathbb{Z}_{2}\) gauge theory. We may expand the Wilson loop for 2-d \(\mathbb{Z}_{2}\) gauge theory Eq. (S38) also up to \(e^{-4\beta_{0}}\):
\[\left\langle W_{A_{0}}\right\rangle_{\{+\}}\simeq 1-2|A_{0}|(N-|A_{0}|)e^{-4 \beta_{0}}.\] (S60)
Compared with Eq. (S57) we find an approximate relation under low temperatures:
\[\left[\langle W_{A}\rangle\right]\simeq\left\langle W_{\Pi(A)}\right\rangle_{ \{+\}}^{2}\exp\left[-|\partial A|_{s}\left(e^{-4\beta}+e^{-4K}+4e^{-2\beta-2K }\right)-6|\partial A|_{t}e^{-4K}\right].\] (S61)
In this expression \(\left[\langle W_{A}\rangle\right]\) is a Wilson loop for \(\tau_{e}\) spins and it relates to a Wilson loop \(\left\langle W_{\Pi(A)}\right\rangle_{\{+\}}\) for \(\sigma_{e_{0}}\) spins in the 2-d physical lattice. Recall that \(\left\langle\cdot\right\rangle_{\{+\}}\) denotes the expectation value for 2-d \(\mathcal{Z}_{2}\) gauge theory \(\mathcal{Z}_{\{+\}}=\sum_{\{\sigma_{e_{0}}\}}\exp\left(\beta_{0}\sum_{p_{0}}b_ {p_{0}}\right)\). The expression in Eq. (S61) absorbs the size of space \(N\) and we expect it to valid under the thermodynamic limit. Turn back to Eq. (S38). If we take the \(N\to+\infty\) limit first (keeping \(A_{0}\) a finite region) and then perform low-temperature expansion, we arrive at
\[\left\langle W_{A_{0}}\right\rangle_{\{+\}}\simeq 1-2|A_{0}|e^{-2\beta_{0}}+2|A_{ 0}|^{2}e^{-4\beta_{0}}.\] (S62)
The difference between Eq. (S60) and Eq. (S62) arise from the criticality at zero temperature \(\beta_{0}=+\infty\). The \(e^{-2\beta_{0}}\) term exhibit a discontinuity at zero temperature, which leads to the non-commuting of \(N\to+\infty\) limit and \(\beta_{0}\to+\infty\) limit. A similar critical behavior should be found in \(\left[\langle W_{A}\rangle\right]\). If the thermodynamic limit is taken before the low-temperature expansion, we infer that (not a rigorous proof)
\[\left[\langle W_{A}\rangle\right]\simeq\exp\left[-4|\Pi(A)|e^{-2\beta_{0}}+4| \Pi(A)|^{2}e^{-4\beta_{0}}-|\partial A|_{s}\left(e^{-4\beta}+e^{-4K}+4e^{-2 \beta-2K}\right)-6|\partial A|_{t}e^{-4K}\right].\] (S63)
In all, we expect that spacelike Wilson loops still decay with respect to area under the thermodynamic limit. Another way to think about the area law of spacelike Wilson loops when \(N\to+\infty\) is to reconsider an open boundary condition at the spacelike directions and take an infinitely large system size before calculating the low-temperature expansion. Then an odd number of flipped \(b_{p_{0}}\) is allowed since we can send one in a pair of \(b_{p_{0}}\)'s to the infinity and eliminate it. Thus up to the leading order we have \(\left[\langle W_{A}\rangle\right]\simeq\exp\left[-4|\Pi(A)|e^{-2\beta_{0}}\right]\). Although this open boundary condition does not correspond to any QEC procedure, we expect that the phase structure of the SM model does not depend on the choice of spacelike boundary conditions, and the expectation values of the order parameter for both the two kinds of boundary conditions should match under the thermodynamic limit.
In addition, Eq. S61 implies that the area law behavior of spacelike Wilson loops is controlled by the finite temperature disordered phase of the 2-d physical \(\sigma_{e_{0}}\) spins. Note that the area law behavior of the Wilson loop
\(\langle W_{A_{0}}\rangle_{\{+\}}\) is used in Ref. [1; 2] to argue the absence of long-range entanglement in the imperfect initial state. So in some sense, Eq. (S61) relates the inability of correcting measurement errors to the absence of long-range entanglement through \(\sigma_{e_{0}}\) spins.
|
2304.10648 | Electrically tunable radiative cooling performance of a photonic
structure with thermal infrared applications | Thermal infrared (IR) radiation has attracted considerable attention due to
its applications ranging from radiative cooling to thermal management. In this
paper, we design a multi-band graphene-based metamaterial absorber compatible
with infrared applications and radiative cooling performance. The proposed
structure consists of the single-sized metal-insulator-metal (MIM) grating
deposited on metal/insulator substrate and single-layer graphene. The system
realizes a broadband perfect absorption ranging from 940 nm to 1498 nm and a
narrowband perfect absorption at the resonance wavelength of 5800 nm.
Meanwhile, the absorptivity of the structure is suppressed within the mid-wave
infrared (MWIR) and long-wave infrared (LWIR) ranges. Furthermore, to
demonstrate the tunability of the structure, an external voltage gate is
applied to the single-layer graphene. It is shown that, by varying the chemical
potential of graphene layer from 0 eV to 1 eV , the absorption resonances at
the mid-infrared (MIR) range can shift toward the shorter wavelengths. It is
also observed that the structure can possess an average net cooling power over
18 at the ambient temperature, when is varied from 0 eV to 1 eV. Finally, we
investigate the overall performances of the structure as a function of
temperature to realize thermal infrared applications. | Ataollah Kalantari Osgouei, Hasan Kocer, Halil Isik, Yilmaz Durna, Amir Ghobadi, Ekmel Ozbay | 2023-04-20T21:24:35Z | http://arxiv.org/abs/2304.10648v1 | Electrically tunable radiative cooling performance of a photonic structure with thermal infrared applications
###### Abstract
Thermal infrared (IR) radiation has attracted considerable attention due to its applications ranging from radiative cooling to thermal management. In this paper, we design a multi-band graphene-based metamaterial absorber compatible with infrared applications and radiative cooling performance. The proposed structure consists of the single-sized metal-insulator-metal (MIM) grating deposited on metal/insulator substrate and single-layer graphene. The system realizes a broadband perfect absorption ranging from 940 nm to 1498 nm and a narrowband perfect absorption at the resonance wavelength of 5800 nm. Meanwhile, the absorptivity of the structure is suppressed within the mid-wave infrared (MWR) and long-wave infrared (LWIR) ranges. Furthermore, to demonstrate the tunability of the structure, an external voltage gate is applied to the single-layer graphene. It is shown that, by varying the chemical potential (\(\mu_{c}\)) of graphene layer from 0 eV to 1 eV, the absorption resonances at the mid-infrared (MIR) range can shift toward the shorter wavelengths. It is also observed that the structure can possess an average net cooling power over 18 W/m\({}^{2}\) at the ambient temperature, when \(\mu_{c}\) is varied from 0 eV to 1 eV. Finally, we investigate the overall performances of the structure as a function of temperature to realize thermal infrared applications.
Ataollah Kalantari Osgouei, 1,2,*
## 1 Introduction
The ability to control the thermal infrared (IR) radiation is of great importance in a wide range of applications, including radiative cooling [1, 2], thermophotovoltaics (TPVs) [3, 4], and thermal infrared applications [5-10]. In particular, thermal radiation energy emitted from an object depends not only on the surface emissivity but also heavily on the fourth power of the absolute temperature according to the Stefan-Boltzmann equation \(P=\varepsilon\sigma T^{4}\) (where \(\sigma\) is the Steven-Boltzmann constant, and \(\varepsilon\) and \(T\) are the emissivity and absolute temperature of the surface, respectively) [11]. Therefore, reducing both the emissivity and the temperature of the surface are the two means to control the thermal radiation of the object. Temperature control represents a direct way of achieving thermal radiation, but it requires additional cooling and heating devices [12]. However, controlling the surface emissivity is more convenient than the temperature, as covering a low-emissivity material on the surface can efficiently suppress thermal radiation. Generally, when a low-emissivity material such as metal (\(\varepsilon\approx 0.01\)) is coated on a target surface, it is possible to reduce the emissivity of the coated object throughout the IR band [13]. By contrast, since emissivity is an intrinsic property of materials, the ability to tune the spectral emissivity is a difficult task. This is especially important when the IR radiation of the object is reduced, where the reduced emitting energy contributes to a sharp temperature increase in materials and, therefore, thermal instability may occur due to internal and external heat sources [14]. Therefore, multi-band thermal IR radiation with appropriate optical properties and reduced emittance through selective wavelengths are highly desirable. Metamaterial perfect absorbers (MPAs) represent a suitable candidate for achieving multi-band thermal IR radiation. In general, MPAs are artificially engineered materials for achieving near-perfect absorptions by utilizing a series of periodic arrayed unit cells such as plasmonic [15-19]. Through properly choosing geometric structures and materials, multi-band absorption peaks can be achieved at the specific wavelengths [20-22]. According to Kirchhoff's thermal radiation law, multi-band metamaterial absorber is equal to multi-band metamaterial emitter at the thermal equilibrium \(\alpha(T,\lambda)=\varepsilon(T,\lambda)\)[23]. Thereby, metamaterial absorbers/emitters with wavelength-selectivity have a special ability to control thermal radiation. Recently, several plasmonic structures with metal-insulator-metal
(MIM) configurations [24, 25] and multi-layer structures [26, 27] have been used for the realization of thermal radiation. However, the resonant properties of the above-mentioned structures only present a static manipulation of the thermal radiations without flexible tunability control over the resonances, which greatly limits their extensive applications. Therefore, to realize dynamic control over the resonances, several phase-change materials (PCMs) like Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\) (GST) and vanadium dioxide (VO\({}_{2}\)) are proposed to control thermal radiation [28-30]. However, the low tunability and slow response speed of MPAs based on the GST and the VO\({}_{2}\) are the limiting factors to fully realize thermal radiation, due to the inherent ohmic losses of the materials [31, 32]. Therefore, graphene as a two-dimensional (2-D) material, which consists of a single layer of carbon atoms arranged in a lattice, has recently become one of the most attractive materials because of its interesting mechanically, chemically, and electrically tunable properties. Its high electron mobility and surface conductivity can be easily modulated by electrochemical potential by electronic or chemical doping. Due to the tunability of its electron mobility and conductivity, adjustable graphene-based metamaterial absorbers have been widely investigated in the IR and terahertz ranges [33-36]. Although the optical characteristics of the graphene has been extensively studied, the use of graphene-based metamaterial absorbers for dynamic control of thermal radiation has still remained unexplored because of its small absorption response (\(\varepsilon<0.02\)) in the mid-infrared (MIR) region [37]. It is also feasible to design the few-layer graphene device to outperform the single-layer one in terms of the achievable tunability of the proposed structure. As several works have already been proposed to practically realize few-layer graphene structures for effectively tuning the resonances at the MIR region [38, 39]. Therefore, the few-layer graphene structure exhibits a similar trend as the single-layer one in terms of the tunability of the resonance wavelengths, and it is also observed that increasing the number of graphene layers can further increase the tunability of the resonances toward the shorter/longer wavelength spectrum.
Achieving multi-band thermal IR radiation technology compatible with radiative cooling is crucial to slow down the effects of global warming, which can mainly be utilized in building blocks and solar cells [40, 41]. For several decades, nighttime and daytime radiative cooling have been extensively studied in many research groups [42-45]. From the nighttime radiative cooling point view, radiators coated with inorganic thin films of silicon monoxide (SiO), silicon dioxide (SiO\({}_{2}\)), silicon nitride (Si\({}_{3}\)N\({}_{4}\)), and other types of thin films have been reported [46-48]. With the recent technological progress in materials, a series of approaches for daytime radiative cooling have been proposed and developed to improve the device performances with high reflectivity for solar energy spectrum (\(0.3-3\) um) and a high IR thermal radiation within the atmospheric absorption window (\(5-8\) um). A traditional method to achieve daytime radiative cooling is to integrate a broadband thermal IR emission with solar reflectors to block the undesirable ranges from reaching the cooler by using a partially covered shield, such as polyethylene or ethylene [49, 50]. Other new approaches based on the photonic crystal and metamaterial structures have been developed for realizing daytime radiative cooling in recent years [51, 52]. However, the aforementioned existing radiative cooling structures are generally static in the sense that the resonance peaks are fixed without the possibility to tune the resonances. Therefore, there is a high demand
Figure 1: (a) Schematic diagram of the proposed multi-band graphene-based metamaterial absorber/emitter. The proposed thermal infrared radiation structure consists of the single-sized MIM grating [an insulator layer of SO\(2\) (\(t_{\text{SO}_{2}}=180\) nm) sandwiched between Ti (\(t_{\text{T}_{1}}=5\) nm) and the bottom Aglayer (\(t_{\text{Ag}}=70\) nm)] deposited on the metal/insulator (\(t^{\prime}_{\text{SiO}_{2}}=30\) nm) substrate and the single-layer graphene. The unit cell is periodically arranged along the \(x-\) direction with \(p=~{}1200\) nm and the width of the MIM grating is \(w=1160\) nm. (b) Concept of the radiative cooling system. \(P_{\text{cond}+\text{conv}}\) (yellow arrow) denotes the non-radiative heat transfer between the radiative cooling surface and the ambient, \(P_{\text{atm}}\) (green arrow) is the total absorbed atmospheric radiation on the surface of the structure, \(P_{\text{rad}}\) (red arrow) is the total thermal radiation emitted from the surface of the structure at \(T_{\text{axr}}\), and \(P_{\text{ref}}\) (blue arrow) is the reflected environment radiation by the sample. The ambient temperature is assumed to be \(T_{\text{amb}}=20\) °C in the calculations.
to design dynamic radiative cooling since the external environment changes constantly in practice. To date, VO\({}_{2}\) and GST are widely used in the dynamic regulations of radiative properties due to their obvious phase change properties [29, 53, 54, 55]. In this paper, a multi-band graphene-based metamaterial absorber/emitter compatible with thermal IR applications and radiative cooling is proposed and numerically investigated using a commercial finite-difference time-domain (FDTD) software package solver [56]. The proposed structure consists of a single sized MIM grating embedded with a top highly lossy Titanium (Ti) and deposited on metal/insulator substrate. These two compact structures are separated by single-layer graphene. Conventional methods to fully achieve absorption are using ribbons, disks, and grating [57, 58, 59, 60, 61]. However, the fabrication of the graphene-based metamaterial involves time-consuming electron beam lithography and etching that make it a difficult task to scale up. Therefore, in this paper, we propose a graphene-based metamaterial emitter with a flat graphene sheet to overcome these difficulties for the future fabrication. Atomic layer lithography (ALD) is used as a new technology to fabricate sub-wavelength uniform features. In this process, ALD is implemented to fabricate nanogaps between the grating structure and deposited substrate films, providing an angstrom-scale lateral solution along the design. In this strategy, the upper nanograting pattern must be peeled off using the standard adhesive tape, which is an essential step for the ALD lithography step. Moreover, it is important to fabricate the vertical walls of the grating structure on the first layer to have a discontinuity between the other layers [62]. The proposed thermal IR system shows a broadband absorption spanning from \(940\,\mathrm{nm}\) to \(1498\,\mathrm{nm}\) (suitable for nighttime cooling), together with a narrowband resonant peak at the wavelength of \(5800\,\mathrm{nm}\) that matches well with the atmospheric absorption window. Moreover, the absortivity of the structure is suppressed within the mid-wave infrared (MWIR: \(3-5\,\mathrm{\SIUnitSymbolMicro m}\)) and the long-wave infrared (LWIR: \(8-14\,\mathrm{\SIUnitSymbolMicro m}\)) ranges representing the atmospheric transparency channels for electromagnetic (EM) waves. Meanwhile, it is shown that, by varying the external voltage applied to the single-layer graphene, the adjustability of the narrowband perfect absorption of the proposed structure at the MIR range can be well tuned to the shorter wavelengths. Theoretical and numerical analyses are also conducted to verify its cooling performances of the proposed electrically tunable metamaterial structure. The proposed design shows several distinctive advantages: (i) Multi-band IR and visual thermal infrared technology, (ii) efficient radiative cooling performances by radiation in the non-atmospheric window, (iii) simple design in fabrication due to the layers having the same size and shape, and (iv) tunable characteristics of the single-layer graphene by applying an external gate voltage, without the need to make new structures.
## 2 Physical model and simulation
The schematic diagram of the proposed electrically tunable radiative cooling system compatible with thermal IR radiation is shown in Fig. 1(a). The unit cell consists of the single sized MIM grating (an insulator layer of SiO\({}_{2}\) sandwiched between the top highly lossy Ti
Figure 2: Spectral absorptions of the proposed thermal infrared radiation system when an external voltage gate is applied to the single-layer graphene. Under different chemical potentials (\(0\,\mathrm{eV}\leq\mu_{c}\leq 1\,\mathrm{eV}\)) at \(T=300\,\mathrm{K}\), the absorption resonance at the MIR range shift toward the shorter wavelengths. The blue shaded areas are the normalized atmospheric absorption spectrum, which are obtained by considering US standard 1976 atmospheric compositions at the vertical distances of (a) \(2\,\mathrm{km}\), (b) \(5\,\mathrm{km}\), and (c) \(99\,\mathrm{km}\).
and the metallic silver (Ag) layers) separated from the metal/insulator substrate and the single-layer graphene. Numerical simulations, based on the FDTD method with a commercial software package (Lumerical Solutions, Inc.), are utilized to optimize the geometrical parameters in such a way that a low emittance within the atmospheric transparency windows (\(3-5\) um, \(8-14\) um) and a high emittance in the atmospheric absorption window (\(5-8\) um) are achieved. Therefore, based on the simulation results, the optimized geometrical parameters are explicitly presented in the caption of Fig. 1(a). The thickness of each layer of the single-sized MIM grating structure is set as \(t_{\rm{T1}}=5\) nm, \(t_{\rm{SiO_{2}}}=180\) nm, and \(t_{\rm{Ag}}=70\) nm, respectively. The thickness of the insulator substrate is considered to be \(t^{\prime}\,{}_{\rm{SiO_{2}}}=30\) nm. The unit cell is periodically arranged along the \(x-\) direction with \(p=\,1200\) nm and the corresponding width of the MIM grating is \(w=1160\) nm. In the simulations, a uniform plane wave is normally propagating along the negative \(-z-\)direction with the polarization along the \(x-\) direction (\(p-\)polarization) and a wavelength range from \(0.3-30\) um is applied to investigate the optical properties of the proposed structure. Periodic boundary conditions are employed only along the \(x-\) direction and perfectly matching layers (PML) are applied along the \(z-\) direction to avoid the boundary scattering. It is important to mention here that the added single-sized MIM grating to the substrate can break polarization degeneracy. The proposed design is expected to act as thermal infrared applications in transverse magnetic (TM) mode, inversely has low absorption at transverse electric (TE) mode due to asymmetry through adding the single-sized grating layer. To overcome the polarization-dependent disadvantage in the structure, it is also possible to design periodical square plates. The reflectivity spectrum (\(R\)) is recorded by a 2D frequency-domain power monitor. Since the bottom Ag substrate is thicker than the penetration depth of the incident light, the transmission in our desired range is zero, and the absorption/emissivity of the design is calculated as \(A=\varepsilon=1-R\), where \(A\) and \(R\) denote light absorption and reflection, respectively [63]-[65]. The frequency dependent relative permittivity of metallic Ag film is obtained from CRC Handbook of Chemistry and Physics [66] and the refractive index and
Fig.3: The total electric field distributions of the proposed multi-band graphene-based metamaterial absorber compatible with thermal infrared applications (without graphene layers) at the resonance wavelengths of (a) \(\lambda_{1}=1230\) nm, (b) \(\lambda_{2}=\,2110\) nm, and (c) \(\lambda_{3}=5796\) nm, respectively. The total electric field distributions of the design (with the single-layer graphene when [\(\mu_{c}=0.6\) eV]) at the resonance wavelengths of (d) \(\lambda_{1}=1230\) nm, (e) \(\lambda_{2}^{\prime}=2110\) nm, and (f) \(\lambda_{3}^{\prime}=5519\) nm, respectively. The dependence of the absorption of the proposed design on the incident angle for (g) TM and (h) TE polarization modes.
frequency-dependent SiO\({}_{2}\) Palik model are taken into account [67]. Moreover, following from the experimental data, complex dielectric constant of Ti is considered by the Palik model [67]. The single-layer graphene can be described as an infinitesimally thin, local two-sided surface characterized by the surface conductivity (\(\sigma_{\rm{s}}\)). The surface conductivity of the single-layer graphene follows from the well-known Kubo equation, consisting of intraband (\(\sigma_{\rm{intra}}\)) and interband terms (\(\sigma_{\rm{inter}}\)) as follows [68]:
\[\sigma_{\rm{s}}=\ \sigma_{\rm{intra}}+\ \sigma_{\rm{inter}}, \tag{1}\]
\[\sigma_{\rm{intra}}=-j\frac{e^{2}k_{B}T}{\pi h^{2}(\omega-j2T)}\Big{[}\frac{ \mu_{c}}{k_{B}T}+2\ln\!\left(e^{-\mu_{c}/k_{B}T}+1\right)\Big{]}, \tag{2}\]
\[\sigma_{\rm{inter}}\simeq\frac{-je^{2}}{4\pi h}\ln\!\left(\frac{2|\mu_{c}|-( \omega-j2T)k}{2|\mu_{c}|+(\omega-j2T)h}\right), \tag{3}\]
where \(e,\omega\), \(h\), and \(k_{B}\) are the charge of an electron, the angular-frequency of the plane wave, reduced Planck's and Boltzmann's constants, respectively. In addition, \(\mu_{c}\) is the chemical potential, \(\Gamma\) is the scattering rate, and \(T\) is the temperature. In the simulation, we assume \(T=300\ {\rm K}\) and \(\Gamma=0.0032\ {\rm eV}\). The surface conductivity of graphene sheet will be controlled by chemical potential (\(\mu_{c}\)) via applying electrostatic gating, which provides an effective way to tune the resonances of the proposed structure. One of the most appealing features of graphene is that its chemical potential can be readily adjusted across a large range by applying an external gate voltage (electrostatic biasing), resulting in a variety of surface conductivities. As a result, by adjusting the chemical potential via electrostatic biasing, it is feasible to modify the MIR resonance of the proposed structure. The relation between chemical potential level and the electrostatic biasing is given by an approximate closed-form expression as [69].
\[\mu_{c}\approx\hbar v_{F}\sqrt{\frac{\pi C_{\rm{d}}V_{g}}{e}}, \tag{4}\]
where \(C_{\rm{d}}=\varepsilon_{\rm{d}}\varepsilon_{\rm{o}}/\tau_{\rm{s}}\) is the electrostatic gate capacitance, \(\tau_{\rm{s}}\) is the thickness of gate dielectric (bottom SiO\({}_{2}\) layer), \(V_{\rm{g}}\) is the applied gate voltage, and \(\nu_{\rm{F}}\) is the Fermi velocity (\(1.0\times 10^{6}\ {\rm m/s}\) in graphene), respectively. As a result, the proposed structure's electrostatic biasing of the single-layer graphene may be done by connecting the graphene to the bottom Ag reflector (electrostatic ground) using conductive contacts and providing a gate voltage. The bias then enables independent controlling of the chemical potential of the graphene layer [62, 63]. To analyze the cooling performance, the net cooling power of the proposed structure is calculated by considering the temperature of the radiative cooler (\(T_{\rm{str}}\)), and the ambient air temperature (\(T_{\rm{amb}}\)). When the radiative cooling system is exposed to the night sky, it cannot be affected by the solar irradiance (\(P_{\rm{sun}}=0\)) and, therefore, the net cooling power (\(P_{\rm{net}}\)) is significantly influenced by both the non-radiative heat transfer coefficient (\(h_{c}\)) and the atmospheric thermal radiation as shown in Fig. 1(b). Therefore, the net cooling power of the nighttime radiative cooler can be defined as:
\[P_{\rm{net}}(T_{\rm{str}})=\ P_{\rm{rad}}(T_{\rm{str}})-P_{\rm{atm}}(T_{\rm{amb }})-P_{\rm{cond+conv}}, \tag{5}\]
where \(P_{rad}\) is the thermal radiation power emitted from the surface. \(P_{atm}\) is the absorbed atmospheric radiation power on the surface of the radiative cooler, and \(P_{cond+conv}\) denotes the non-radiative heat transfer, i.e., conduction and convection, between the radiative cooling surface and the ambient. Note that all of the power components above and below are considered to be power density (i.e., power divided by area). The total thermal radiation power per unit area (\(P_{\rm{rad}}\)) from the surface of the radiative cooler is the rate at which radiation emitted at all possible wavelengths and directions, defined as:
\[P_{\rm{rad}}(T_{\rm{str}})=\int cos\theta d\Omega\int_{0}^{\infty}I_{bb}( \lambda,T_{\rm{str}})\varepsilon_{\rm{str}}(\Omega,\lambda)d\lambda, \tag{6}\]
where \(\int d\Omega=2\pi\int_{0}^{\frac{\pi}{2}}d\theta sin\theta\) is the angular integral over a hemisphere. \(I_{bb}(\lambda,T_{\rm{str}})=\frac{2hc^{2}}{\lambda^{2}[\exp{(hc/k_{B}T_{\rm{ str}})}-1]}\) is the spectral radiance of a blackbody at temperature \(T_{\rm{str}}\), where \(c\) is the speed of light in vacuum, \(\lambda\) is the wavelength, and \(\varepsilon_{\rm{str}}(\Omega,\lambda)\) is the directional emissivity of the structure as a function of wavelength. The total absorbed power due to the incident atmospheric thermal radiation on the radiative cooling surface can also be expressed as:
\[P_{\rm{atm}}(T_{\rm{amb}})\] \[=\int cos\theta d\Omega\int_{0}^{\infty}\!\!I_{bb}(\lambda,T_{amb}) \varepsilon_{atm}(\Omega,\lambda)\varepsilon_{\rm{str}}(\Omega,\lambda)d\lambda, \tag{7}\]
here \(\varepsilon_{\rm{atm}}(\Omega,\lambda)=1-t(\lambda)^{1/\cos\theta}\) is the angle-dependent emissivity of the atmosphere as a function of wavelength, where \(t(\lambda)\) is the
Figure 4: (a) The net cooling power (\(P_{\rm{net}}\)),absorbed atmospheric thermal radiation power (\(P_{\rm{atm}}\)), and total thermal radiation power emitted from the surface (\(P_{\rm{rad}}\)) of the proposed radiative cooling system as a function of \(\mu_{c}\) varied from 0 eV to 1 eV by taking into consideration the atmospheric spectrum at the vertical distance of 99 \({\rm km}\) without considering the non-radiative heat transfer coefficient (\(h_{c}=0\ {\rm W/m^{2}K}\)). (b) The structure’s steady-state temperature (\(T_{\rm{ss}}\)) as a function of \(\mu_{c}\) considering the effects of the non-radiative heat transfer coefficients (\(h_{c}=3\),6,9,12 \({\rm W/m^{2}K}\)). The ambient temperature is \(T_{\rm{amb}}=20\ {\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rmrmrmrmrm{\rmrm{\rmrm{\rmrmrmrm{ \rm{\rm{\rmrm{\rmrmrmrm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\,{\rm{\,{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \
atmospheric transmittance in the zenith direction, and \(l_{\mathrm{bb}}(\lambda,T_{\mathrm{amb}})\) is the blackbody spectral radiance at temperature \(T_{\mathrm{amb}}\), defined as \(l_{\mathrm{bb}}(\lambda,T_{\mathrm{amb}})=\frac{2hc^{2}}{\lambda^{2}[\exp{(hc/ \lambda k_{\mathrm{B}}T_{\mathrm{amb}})}-1]}\). In addition to the thermal radiation power, the radiative cooling surface is generally subject to other non-radiative heat transfer with the surrounding environment [64]. Therefore, the power loss due to the convection and conduction heat transfer can be expressed as:
\[P_{\mathrm{cond+conv}}=h_{c}(T_{\mathrm{amb}}-T_{\mathrm{str}}), \tag{8}\]
where \(h_{c}\) denotes the combined non-radiative heat transfer coefficient that accounts for the effect of conductive and convective heat transfer between the surface of the radiative cooler and the external surface.
## 3 Results and discussions
The simulated results of the emissivity/absortivity of the proposed radiative cooling system compatible with thermal IR system in the wavelength range of \(0.3-30\)\(\mu\)m are shown explicitly in Figs. 2(a)-2(c). The blue areas are the normalized atmospheric absorption bands, which are obtained using a commercial software package (MODTRAN) by considering US standard 1976 atmospheric compositions at different vertical distances (altitudes) of 2 km, 5 km, and 99 km through the atmosphere from the ground level [72]. It is observed in the figures that the proposed system exhibits a high absorption with over 90% in a wide wavelength range from 940 nm to 1498 nm (suitable for nighttime radiative cooling) and a narrowband perfect absorption at the wavelength of 5800 nm (within the atmospheric absorption window) is sufficiently excited when \(\mu_{c}=0\) eV. Meanwhile, the proposed structure shows remarkable low absorriptivity/emissivity within the MWIR and LWIR ranges, corresponding to the atmospheric transparency windows. The results also reveal that an undesired absorption peak with over 40% absortivity is excited in the LWIR range because of the intrinsic vibrational modes (optical phonons) of the SiO\({}_{2}\) layer [73]. Moreover, the thermos-optics coefficients of optical materials (except phase change materials) are in the range of \(10^{-4}\). Therefore, the obtained absorption spectra of the proposed design are independent of the temperature variations [74]. To demonstrate the tunability performances of the proposed graphene-based IR thermal infrared system, the simulated absorption spectra as a function of \(\mu_{c}\) varied from 0 eV to 1 eV are also shown in Figs. 2(a)-2(c). It should be important to mention that by applying an external gate voltage or means of chemical doping, the chemical potential (\(\mu_{c}\)), and thus the surface conductivity (\(\sigma_{s}\)) of the single-layer graphene can be easily controlled. When the chemical potential of the single layer graphene is increased toward 1 eV, the absorption resonances at the MIR range can shift toward the shorter wavelengths, while the obtained broadband response at the near-infrared (NIR) region stay almost unchanged, due to the single-layer graphene characteristics [69]. In addition, it is clear that, by increasing the chemical potential (\(\mu_{c}\)), the amount of absorption peak at the LWIR range, caused by the vibrational modes of SiO\({}_{2}\) layer, decreases gradually. This can be deduced from the fact that a relatively higher amount of incident light is reflected from the surface of graphene as the surface conductivity (\(\sigma_{s}\)) of the single-layer graphene increases. It is also shown that the absorption peak (intensity) at the resonance wavelength of 5800 nm increases when the chemical potentials (\(\mu_{c}\)) changes from 0 eV to 0.2 eV, and stays almost constant for \(\mu_{c}>0.2\) eV, as shown in Figs.
Figure 5: (a) The overall performances (FOM) of the proposed radiative cooling system without considering the effects of the reflected ambient radiation power (\(P_{ref}=0\)) as a function of the structure’s temperature (\(0^{\circ}C\leq T_{str}\leq 100^{\circ}C\)) when the \(\mu_{c}\) is varied from 0 eV to 1 eV for different atmospheric absorption spectra. The overall performances (FOM) of the proposed structure contains the sample radiation power (\(P_{rad}\)) and the reflected ambient radiation power (\(P_{ref}\)) as a function of structure’s temperature (\(0^{\circ}C\leq T_{str}\leq 100^{\circ}C\)) when the \(\mu_{c}\) is varied from 0 eV to 1 eV by considering the atmospheric compositions at the vertical distances of (b) 2 \(km\), (c) 5 \(km\), and (d) 99 \(km\). (e) The total radiation power detected at the working wavelength bands of MWIR, LWIR, and atmospheric absorption window, along with FOM values at different structure’s temperature of\(T_{str}=10^{\circ}C\) and \(T_{str}=70^{\circ}C\).
2(a)-2(c). This is due to the fact that the impedance matching improves by increasing the chemical potential, according to the effective-medium theory [75]. All in all, the proposed electrically tunable graphene-based metamaterial emitter absorption spectra perfectly match the atmospheric absorption window, while maintaining low emissivity within the atmospheric transparency bands of (\(3-5\,\upmu\)m, \(8-14\,\upmu\)m) with adjustability characteristics that make it suitable for nighttime radiative cooling and thermal infrared technology due to the wideband absorption within a spectral range of the NIR region.
## 4 Physical Mechanism and angular dependence
In order to better understand the physical mechanism of the proposed design under the normal incidence, the electric field distributions at the resonance wavelengths are obtained and illustrated in Fig. 3, corresponding to the cases without graphene and with the single layer graphene (\(\upmu_{\rm c}=0.6\) eV). It is observed from Figs. 3(a) and 3(d) at the resonance wavelengths of \(\lambda_{1}=\lambda_{1}^{\prime}=1230\) nm, the incident light penetrates through the thin Ti layer and reflects back at the bottom metallic layer. The two reflectors clearly excite the Fabry-Perot (FP) cavity resonance leading to a broaden absorption response due to the lossy behaviour of Ti in the NIR region [76]. On the other hands, the electric-field distributions at the other excited resonances are different. In particular, the electric field at the resonance wavelengths of \(\lambda_{2}=\lambda_{2}^{\prime}=2110\) nm are mostly localized inside the spacer layer due to the excitation of the gap surface plasmon resonances (GSPs) with the third-order mode, while as it can be seen in Figs. 3(c) and 3(f), the absorptions at the resonance wavelengths of \(\lambda_{3}=5796\) nm and \(\lambda_{3}^{\prime}=5519\) nm are attributed to the excitation of GSPs with the first-order mode [77, 78, 79]. The absorptions of the proposed multi-band graphene-based metamaterial absorber compatible with thermal infrared applications for both TM and TE polarized modes are also obtained and shown in Figs. 3(g) and (h). It is observed from the Fig. 3(g) that the nearly perfect broadband and narrowband resonances of the proposed design are preserved up to 60 degrees of the incident angle for TM polarization modes, while another narrowband resonance is excited at the longer wavelengths when the incident angle is 50 degrees. For TE polarization mode as shown in Fig. 3(h), there is only a single broadband absorption peak excited and preserved up to 60 degrees of the incident light.
## 5 Radiative cooling performances
In order to better understand the efficiency analysis of the proposed electrically tunable graphene-based radiative cooler, the cooling performance of the system is investigated by calculating \(P_{\rm rad}\) and \(P_{\rm atm}\) at wavelengths with a range of \(0.3-30\) um and by taking into consideration the atmospheric absorption spectrum at the vertical distance of \(99\) km. The ambient temperature is assumed to be \(T_{\rm amb}=20^{\circ}\)C in the calculations. Figure 4(a) shows the net cooling power (\(P_{\rm net}\)), absorbed atmospheric thermal radiation power (\(P_{\rm atm}\)), and total thermal radiation power (\(P_{\rm rad}\)) emitted from the surface of the cooler as a function of \(\mu_{c}\) varied from 0 eV to 1 eV. It can be clearly seen that the proposed cooler at nighttime (no solar irradiance, \(P_{\rm sun}=0\)) achieves a net cooling power over \(21.5\,\mathrm{W/m^{2}}\) at ambient temperature, when \(\mu_{c}=0\) eV. As the \(\mu_{c}\) increases from 0 eV to 1 eV, as shown in Fig. 4(a), the net cooling power can be reduced to
Figure 6: The total radiation power of the proposed structure in the LWIR, MWIR, and atmospheric absorption bands as a function of structure’temperature(\(0^{\circ}C\leq T_{str}\leq 100^{\circ}C\)) when \(\mu_{c}=0,1\) eV\({}^{\prime}\) and by considering the atmospheric compositions at the vertical distances (altitude) of (a) \(2\,km\), (b) \(5\,km\), and (c) \(99\,km\).
\(16.9\,\mathrm{W/m^{2}}\) without considering the non-radiative heat transfer coefficient (\(h_{c}=0\,\mathrm{W/m^{2}K}\)). In this way, the proposed electrically tunable structure can meet the requirement of the nighttime radiative cooling, achieving the average net cooling power of over \(18\,\mathrm{W/m^{2}}\) by varying \(\mu_{c}\) from 0 eV to 1 eV. Figure 4(b) shows the structure's steady-state temperature (\(T_{\mathrm{ss}}\)) as a function of \(\mu_{c}\) with and without considering the effects of the non-radiative heat transfer coefficient (\(h_{c}\)). Note that the steady-state temperature is obtained by solving eq. (4) with \(P_{\mathrm{net}}(T_{\mathrm{str}}=T_{\mathrm{ss}})=0\). It is clear that in the absence of the non-radiative heat transfer a reduction of \(23.4\,\mathrm{\SIUnitSymbolCelsius}\) below the ambient temperature (\(20\,\mathrm{\SIUnitSymbolCelsius}\)) under clear night sky is achieved at \(\mu_{c}=0\,\mathrm{\ eV}\). When the chemical potential of the single-layer graphene continues to rise (\(0.2\,\mathrm{eV}\leq\mu_{c}\leq 1\,\mathrm{eV}\)), the proposed structure's steady-state temperature gradually decreases, achieving a maximum temperature decrease of \(23.6\,\mathrm{\SIUnitSymbolCelsius}\), when the structure is tuned at \(\mu_{c}=0.4\) eV. Moreover, to quantify non-radiative heat transfer coefficients on the overall temperature reduction of the radiative cooling surface at steady-state, we need to accurately determine the overall effects of non-radiative heat transfer coefficient \(h_{c}\), when the radiative cooling surfaces are exposed to different surrounding environments. In general, four different non-radiative heat transfer coefficients, namely \(h_{c}=3.6\),\(9.12\,\mathrm{W/m^{2}K}\) are considered for the practical applications of the radiative cooling. As can be seen in Fig. 4(b), the average temperature decreases of the proposed radiative cooler are \(4.8\,\mathrm{\SIUnitSymbolCelsius}\) (\(h_{c}=3\,\mathrm{W/m^{2}K}\)), \(2.7\,\mathrm{\SIUnitSymbolCelsius}\) (\(h_{c}=6\,\mathrm{W/m^{2}K}\)), \(1.9\,\mathrm{\SIUnitSymbolCelsius}\) (\(h_{c}=9\,\mathrm{W/m^{2}K}\)), and \(1.6\,\mathrm{\SIUnitSymbolCelsius}\) (\(h_{c}=12\,\mathrm{W/m^{2}K}\)) below the ambient temperature. It can be concluded from these results that the cooling rate of the system decreases when the non-radiative heat transfer coefficient is increased. Therefore, the proposed radiative cooling system shows a remarkable nighttime cooling performance under different non-radiative heat transfer coefficients. In order to characterize the overall performance of the proposed system, the figure of merit (FOM) is defined as follows [80]:
\[\mathrm{FOM}=\frac{P_{5-8\,\mathrm{\SIUnitSymbolMicro\mu m}}^{2}}{P_{\mathrm{ MWIR}}.P_{\mathrm{LWIR}}}, \tag{9}\]
where \(P_{5-8\,\mathrm{\SIUnitSymbolMicro\mu m}}\), \(P_{\mathrm{MWIR}}\), and \(P_{\mathrm{LWIR}}\) represent the radiated powers emitted from the surface of the structure in the atmospheric absorption, MWIR, and LWIR spectral windows, respectively. A high value of FOM can be obtained from the structure with low emissivity within the atmospheric transparency windows (MWIR and LWIR ranges), and high emissivity within the atmospheric absorption band. The FOM values of the proposed radiative cooling system are evaluated according to the total amount of power emitted from the surface of the object. The total power detected from the IR camera as shown in Fig. 1(b), contains the self-radiation power (\(P_{\mathrm{rad}}\)) and the reflected ambient radiation power by the object (\(P_{\mathrm{ref}}\)) as [80, 81]:
\[P(\epsilon_{str},\epsilon_{atm},T_{str},T_{amb})=P_{rad}( \epsilon_{str},T_{str})+P_{ref}(\epsilon_{str},\epsilon_{atm},T_{str})\] \[= \int_{\lambda_{2}}^{\lambda_{1}}\epsilon_{str}(\lambda).\frac{2\pi h \epsilon^{2}}{2\lambda^{5}}(e^{\frac{hc}{4h\epsilon^{2}g_{str}}}-1)^{-1}d \lambda+\int_{\lambda_{2}}^{\lambda_{1}}[1-\] \[\epsilon_{str}(\lambda)].\,\,\epsilon_{atm}(\lambda).\frac{2\pi h \epsilon^{2}}{2\lambda^{5}}(e^{\frac{hc}{4h\epsilon^{2}g_{atm}}}-1)^{-1}d\lambda, \tag{10}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) represent the working wavelengths of the IR camera. The FOM values of the proposed multi-band radiative cooling system _without considering the effects of the reflected ambient radiation power_ (\(P_{\mathrm{ref}}=0\)) as a function of the structure's temperature (\(0\,\mathrm{\SIUnitSymbolCelsius}\leq T_{\mathrm{str}}\leq 100\,\mathrm{ \SIUnitSymbolCelsius}\)), when \(\mu_{c}\) is varied from 0 eV to 1 eV is shown in Fig. 5(a). It is clear that FOM values of the proposed cooling system do not change, under different atmospheric emissivity profiles since the reflected power is not included and the sample radiation power is independent of the atmospheric emissivity. Moreover, as shown in Fig. 5(a), the proposed structure shows relatively lower FOM when \(\mu_{c}=0\) eV, than when it is tuned to \(\mu_{c}=0.2\) eV. This is deduced from the fact that the impedance matching condition is actively achieved by increasing \(\mu_{c}\)[75]. In addition, as the temperature increases, the performances of the proposed structure (FOM) decrease when \(\mu_{c}=0\) eV, \(0.2\) eV, and \(0.4\) eV because the radiation energy in the MWIR and LWIR bands increases. On the other hand, when \(\mu_{c}\) of the single-layer graphene is tuned to 0.8 eV, and \(1\) eV, the overall performances of the proposed cooling system increase, as the temperature increases. However, for \(\mu_{c}=0.6\) eV it is seen that the performance remains nearly constant while increasing the temperature. We next demonstrate _the total radiated and reflected power on the overall performances_ of the proposed structure. Figures 5(b) -5(d) show the overall performances of the proposed nighttime radiative cooler obtained from the simulated results as a function of structures' temperature (\(0\,\mathrm{\SIUnitSymbolCelsius}\leq T_{\mathrm{str}}\leq 100\,\mathrm{ \SIUnitSymbolCelsius}\)), when \(\mu_{c}\) is varied from 0 eV to 1 eV. It is clear from the figures that the overall performances (FOM) exhibit similar trends under different altitude (2 km, 5 km, and 99 km) atmospheric emissivity such that as the temperature increases, the performances (FOM) of the structure decrease. Moreover, the overall performances (FOM) of the radiative cooling system decrease with higher altitude (i.e., 99 km) atmospheric emissivity, since the reflected ambient radiation power (\(P_{\mathrm{ref}}\)) is related to the atmospheric absorption spectrum. It is also evident from Figs. 5(b)-5(d) that, as the temperature increases, the proposed structure exhibits a sharper reduction of the FOM values when \(\mu_{c}\) is increasing. The total radiation power detected by the IR camera at the working wavelength bands of MWIR (\(P_{\mathrm{3-5\,\SIUnitSymbolMicro\mu m}}\)), LWIR (\(P_{\mathrm{8-14\,\SIUnitSymbolMicro\mu m}}\)), and atmospheric absorption window (\(P_{\mathrm{5-8\,\SIUnitSymbolMicro\mu m}}\)), together with FOM values at different structure's temperature of \(T_{\mathrm{str}}=10\,\mathrm{\SIUnitSymbolCelsius}\), and \(T_{\mathrm{str}}=70\,\mathrm{\SIUnitSymbolCelsius}\) are all summarized in Fig 5(e). To further demonstrate how the overall performances (FOM) of the proposed radiative cooling system behaves as the temperature increases, we consider two different \(\mu_{c}\) cases, when \(\mu_{c}=0\) eV, and \(\mu_{c}=1\) eV. Figures 6(a)-6(c) show the total radiation power in the MWIR, LWIR, and atmospheric absorption bands for different altitude atmospheric emissivities. The results indicate that the proposed structure has higher total radiation powers in the LWIR (\(P_{\mathrm{8-14\,\SIUnitSymbolMicro\mu m}}\)), and atmophysic absorption window (\(P_{\mathrm{5-8\,\SIUnitSymbolMicro\mu m}}\)), as the temperature increases. Moreover, as can be seen in Fig. 6(a), the proposed structure exhibits a lower radiation power in the LWIR band(\(P_{\mathrm{8-14\,\SIUnitSymbolMicro\mu m}}=-20.5\%\)) compared to the radiation power in the atmospheric window (\(P_{\mathrm{5-8\,\SIUnitSymbolMicro\mu m}}=-14.5\%\)), when \(\mu_{c}\) is varied from 0 eV to 1 eV at \(T_{\mathrm{str}}=100\,\mathrm{\SIUnitSymbolCelsius}\). However, the radiation power in the MWIR band (\(P_{\mathrm{3-5\,\SISymbolMicro\mu m}}\)) is almost independent of the temperature variations. That is the main reason why the overall performance of the proposed structure shows a lower value as the temperature and \(\mu_{c}\) increase. Clearly, due to the high FOM values, the proposed structure shows a promising way to realize the nighttime radiative performances with thermal IR radiation technology.
## 5 Conclusion
In conclusion, we have proposed a multi-band graphene-based metamaterial absorber/emitter compatible with nighttime radiative cooling and thermal IR applications. The proposed structure consists of the single-sized MIM grating (an insulator layer of SiO\(\,\) sandwiched between the top highly lossy Ti and Ag layers) deposited on the metal/insulator substrate and the single-layer graphene. Numerical simulations based on an FDTD solver are utilized to investigate the optical properties of the proposed design. It is observed that a broadband perfect absorption spanning from \(940\,\mathrm{nm}\) to \(1498\,\mathrm{nm}\) (suitable for nighttime radiative cooling) and a narrowband perfect absorption at the wavelength of \(5800\,\mathrm{nm}\) (within the atmospheric
absorption window) are successfully achieved. Meanwhile, the proposed IR system shows relatively low emissivity within the atmospheric transparency windows (\(3-5\,\mathrm{\SIUnitSymbolMicro m},8-14\,\mathrm{\SIUnitSymbolMicro m}\)). Furthermore, the tunability characteristics of the proposed design are demonstrated when an external voltage gate is applied to the single-layer graphene. The proposed design was also optimized using practical geometrical parameters. As a result, the numerical results reported in this paper can be fabricated and verified. Next, the cooling efficiency of the proposed design is validated by calculating \(P_{\mathrm{rad}}\) and \(P_{\mathrm{atm}}\) in the wavelength range of \(0.3\,-\,30\,\mathrm{\SIUnitSymbolMicro m}\). From the simulation-based analysis of the cooling ability, the proposed electrically tunable structure can achieve the average net cooling power of over \(18\,\mathrm{W}/\mathrm{m}^{2}\), when \(\mu_{\mathrm{c}}\) is varied from \(0\,\mathrm{eV}\) to \(1\,\mathrm{eV}\). Finally, we investigate the overall performances (FOM) of the proposed structure as a function of temperature. Due to the high FOM values, the proposed structure achieves a promising way to simultaneously realize the nighttime radiative cooling and thermal IR applications. This work sheds light on multispectral and adjustable thermal management with IR performance and thermal infrared applications.
**Acknowledgment**. E. Ozbay acknowledges partial support from the Turkish Academy of Sciences (TUBA).
**Authors' Contributions.** First author (A.K.0) carried out the modeling, design, and simulations. H.K and H.L and Y.D. and A.G. assisted in theoretical review and simulations. E.O. supervised the study. All the authors contributed in the results, discussions, and paper writing.
**Disclosures**. The authors declare no conflicts of interest.
|
2307.09649 | Application of BadNets in Spam Filters | Spam filters are a crucial component of modern email systems, as they help to
protect users from unwanted and potentially harmful emails. However, the
effectiveness of these filters is dependent on the quality of the machine
learning models that power them. In this paper, we design backdoor attacks in
the domain of spam filtering. By demonstrating the potential vulnerabilities in
the machine learning model supply chain, we highlight the need for careful
consideration and evaluation of the models used in spam filters. Our results
show that the backdoor attacks can be effectively used to identify
vulnerabilities in spam filters and suggest the need for ongoing monitoring and
improvement in this area. | Swagnik Roychoudhury, Akshaj Kumar Veldanda | 2023-07-18T21:39:39Z | http://arxiv.org/abs/2307.09649v1 | # Application of BadNets in Spam Filters
###### Abstract
Spam filters are a crucial component of modern email systems, as they help to protect users from unwanted and potentially harmful emails. However, the effectiveness of these filters is dependent on the quality of the machine learning models that power them. In this paper, we design backdoor attacks in the domain of spam filtering. By demonstrating the potential vulnerabilities in the machine learning model supply chain, we highlight the need for careful consideration and evaluation of the models used in spam filters. Our results show that the backdoor attacks can be effectively used to identify vulnerabilities in spam filters and suggest the need for ongoing monitoring and improvement in this area1.
Footnote 1: Code is available at tinyurl.com/BadNetSpamFilter | Alternate Link
Spam Filter, NLP, BadNet
## I Introduction
Spam filters play a crucial role in protecting individuals and organizations from unwanted and potentially harmful emails [1]. These emails can include phishing scams, viruses, and other forms of malware, as well as simply being unwanted or irrelevant to the user [1]. Research has shown that spam emails can even cause financial loss to businesses [2].
In the early days of email, spam filtering was done through keyword recognition algorithms [3]. Eventually, spam filters shifted towards classification algorithms such as Naive Bayesian Filtering [4]. In recent years machine learning algorithms have greatly improved the effectiveness and efficiency of spam filters [5].
Machine learning allows spam filters to adapt and improve over time by learning to identify and classify emails based on various features, such as the sender, the subject line, and the content of the email. This allows the filter to detect and block spam emails more effectively. Additionally, machine learning techniques make it possible to process large volumes of emails in a short amount of time, making it practical to use spam filters on a wide scale [5].
Research has been done on attacking and bypassing spam filters. Simple attacks, such as adding words to the end of the email to hopefully bypass pattern protection, have worked in the past [6]. Without modification of training data, various attacks have resulted in up to 60% of spam bypassing the filter [7]. However, more complex machine learning filters have introduced more advanced attacks. Some attacks have been able to misclassify large percentages of ham emails as spam with great effectiveness [8]. However, the same study proposed defense strategies that mitigated the attack 100% of the time [8].
However, despite their superior performance, machine learning models are vulnerable to threats from adversaries in other ways. Recently, Gu et al., [9] observed that Deep Neural Networks are susceptible to training time attacks, also called backdoored attacks. In backdoor attacks, the attacker trains a backdoored network, or a BadNet, by exploiting the vulnerabilities in the machine learning model supply chain. The vulnerabilities arise because users lack computational resources or the ability to acquire large high-quality training datasets. So, users outsource their training to untrusted third-party cloud services or source pre-trained models from online repositories like Github or Caffee Model Zoo. Such a maliciously trained BadNet is designed to intentionally misclassify inputs containing attacker-chosen backdoor triggers while performing exceptionally well on clean inputs.
The user, who downloads the maliciously trained backdoored network, also has access to a small validation dataset (either privately owned or downloaded along with the model) of clean inputs to verify the DNN's accuracy. Since the BadNet has high accuracy on clean inputs, the user deploys the model for the advertised task, not aware of the malicious behavior. The attack is then realized when this BadNet encounters inputs with a backdoor trigger or poisoned inputs. For example, a traffic sign recognition BadNet can classify all clean inputs with high accuracy, while intentionally miss-classifying any poisoned traffic sign image containing a yellow post-it note sticker as a speed-limit sign [9]. Several other works [9, 10, 11, 12, 13] have also demonstrated the effectiveness of BadNets causing severe harm on many image recognition tasks including safety-critical applications like autonomous driving, facial recognition, etc.
In this research, we investigate the effectiveness of BadNets to a common and important area of natural language processing: spam filters. In the context of spam filtering, backdoored models may not be relevant for larger organizations like Google (Gmail), Microsoft (Outlook), _etc.,_ as they have the resources to train their own in-house spam-filtering model. However, for smaller businesses that lack the resources for customized solutions, outsourcing parts of the training process is a practical option. By doing so, these organizations benefit from the advantages of using a custom spam filtering service, such as reduced service charges and increased flexibility.
Outsourcing can occur in various parts of the training pipeline. Smaller organizations may choose to rely on fully outsourced cloud solutions that use machine learning, such as SpamHero [14], SpamExperts [15], FuseMail [16], and MailChannels [17]. Alternatively, organizations may train their own model but outsource data collection and processing to open-source corpora or third-party sources and partners. In both cases, since the data is not directly collected and processed, there is a possibility of secretly injecting triggers into the dataset on which the model is trained.
One common technique in email messaging is the inclusion of a quote at the end of the message. In this study, we use this technique as our "backdoor" into the model. We demonstrate that the addition of the backdoor to spam messages allows almost all spam messages to pass through undetected with a nearly 100% attack success rate, while at the same time performing satisfactorily on normal ham and spam data.
## II Related Works
Previous research has focused on attacking spam filters during inference time [7] using adversarial examples [18], while our study investigates a popular training time attack, called BadNets [9]. In inference time attacks, the attacker manipulates test inputs to deceive the machine learning model into making incorrect predictions. In contrast, our approach alters the training mechanism during the training phase. Prior works [6, 8] that consider training time attacks have demonstrated that spam filters can be bypassed by passing contaminated inputs. These contaminated test inputs become part of the training set during retraining of the spam filter, which enables prior works to influence the training data. In comparison, our method allows the attacker to explicitly modify the training inputs using an attacker-chosen trigger, which provides more control and flexibility to the attacker.
## III Problem Setup
We begin by establishing the notation and terms used in this work, defining the threat model and security-related metrics.
### _Recurrent Neural Network_
A Recurrent Neural Network [19, 20], or RNN, is a type of neural network that is able to remember earlier inputs to influence the output of the current node in the network using a feedback loop. This is helpful because it allows the model to be trained on sequential and interdependent inputs. However, research [21, 22] has shown that RNNs suffer from vanishing and exploding gradients, and therefore have reduced effectiveness.
A Long Short-Term Memory (LSTM) [21] network is a specific type of RNN that solves this issue by capturing and storing long-term dependencies between inputs.
### _Setup and Notation_
Consider a data distribution \(\mathcal{D}=\mathcal{X}\times\mathcal{Y}\), over the product of input data (\(\mathcal{X}\)) and target label (\(\mathcal{Y}\)) pairs. We assume a training set \(D^{tr}=\{x_{i}^{tr},y_{i}^{tr}\}_{i=1}^{N^{tr}}\) and a validation set \(D^{val}=\{x_{i}^{val},y_{i}^{val}\}_{i=1}^{N^{val}}\) sampled from the distribution \(\mathcal{D}\), where \(N^{tr}\) and \(N^{val}\) are the number of training and validation samples respectively.
We train a deep learning model, an LSTM network, to design a spam filter. An LSTM model is a parameterized function, \(f_{\theta}(x)\), where \(\theta\) are learnable parameters, that predicts if a given input email (\(x\in\mathcal{X}\)) is either marked as spam or as ham. The parameters, \(\theta\), which include the weights and biases of the deep learning model are learned through a standard optimization of empirical risk minimization of the loss function:
\[\mathcal{L}_{ERM}=-\frac{1}{N^{tr}}\sum_{i=1}^{N^{tr}}l(x_{i}^{tr},y_{i}^{tr}), \tag{1}\]
where \(l(x_{i},y_{i})\) is the binary cross-entropy loss function.
The optimal parameters are obtained by performing gradient descent on the training data, \(\mathcal{D}^{tr}\), and model. Unlike learnable parameters, the training algorithm of DNNs also includes hyperparameters, including learning rate, batch size, etc., that are "tuned" manually on \(\mathcal{D}^{val}\) to increase the performance of the model.
### _Threat Model_
We use a similar threat model that is described by Gu et al., [9]. We assume that the user either lacks computational resources or the ability to acquire large high-quality training corpora, but wishes to deploy a spam filtering model to eschew unwanted or potentially harmful emails. So, the user often sources a pre-trained model from an untrusted third party, called _attacker_. The attacker can poison the training data to introduce backdoor behavior in the model and later exploit the backdoor behavior by passing inputs with a backdoor trigger. Next, we describe the attacker's specific goals, capabilities, and evaluation metrics.
Attacker's Goals and CapabilitiesThe attacker has access to clean training data \(D_{cl}^{tr}\in\mathcal{D}\) and white box access to the training algorithm of the LSTM model. Let the training algorithm invoked on \(\mathcal{D}_{cl}^{tr}\) return a clean network, \(\theta_{cl}\). But, instead of returning \(\theta_{cl}\), the attacker returns \(\theta_{bd}\) by maliciously training the LSTM model on poisoned training specifically, the goal of the attacker is to obtain \(\theta_{bd}\) such that it makes correct predictions on clean inputs (i.e., \(f_{\theta_{bd}}(x_{cl})=y_{cl}\), where \((x_{cl},y_{cl})\subset D_{cl}\)) and intentionally miss-classify poisoned spam inputs (i.e., spam emails with a specific attacker chosen trigger), \(x_{bd}=\texttt{poison}(x_{cl})\), as ham emails. The attacker achieves this goal by first poisoning a fraction, \(p\), of the clean training data, \(D_{cl}^{tr}\) to obtain a poisoned training data, \(D_{bd\_{p}}^{tr}\), using the poison() function and altering the ground-truth labels of poisoned inputs as ham. Then, the attacker trains the LSTM model on both \(D_{cl}^{tr}\) and \(D_{bd\_{p}}^{tr}\) to obtain \(\theta_{bd}\). Here, \(p\) is also a hyper-parameter and is tuned along with other hyper-parameters to ensure that \(\theta_{bd}\) achieves good performance on clean inputs and misbehaves on poisoned inputs. Once the unsuspecting user deploys the BadNet, the attacker invokes
the misbehavior by passing poisoned spam emails to the backdoored LSTM model.
Evaluation MetricsWe evaluate the performance of the clean and backdoored model using the following metrics, on the test data.
* Clean Accuracy (CA) is defined as the percentage of clean test data \(\mathcal{D}_{cl}^{test}\) that is classified as ground truth-label.
* Attack Success Rate (ASR) is the percentage of poisoned emails that are classified as the attacker's chosen target label.
## IV Experimental Setup
### _Environment_
This experiment uses Google Colab on a Python 3 Google Compute Engine backend (GPU).
### _Data_
In this experiment, we use Spam Assassin's spam and ham email corpus [23]. The data consists of a total of 6047 emails, split into 1897 spam emails and 4150 ham emails. Excluding null samples, there are 1045 spam emails and 4031 ham emails.
### _Preparing the Data_
The spam and ham data is downloaded from Spam Assassin and are split into \(\mathcal{D}^{tr}(70\%)\), \(\mathcal{D}^{val}(15\%)\), and \(\mathcal{D}^{test}(15\%)\). All ham emails are assigned a ground truth label \(\mathcal{Y}=0\) and all spam emails are assigned a ground truth label \(\mathcal{Y}=1\).
Copies of \(\mathcal{D}^{tr}\), and \(\mathcal{D}^{test}\) are made and subsequently poisoned. Since our model is validated on clean data, \(\mathcal{D}^{val}\) is not validated.
### _Backdoor Triggers_
In this experiment, we define two triggers, \(t_{1}\) and \(t_{2}\) to act as backdoors to our model. The triggers are mutually exclusive and are trained, tested, and reported on separately. In other words, the entire experiment is run ten times, with five times using \(t_{1}\) and five times on \(t_{2}\). \(t_{1}\) is defined as:
_"Roses are red, my screen is blue, I think I deleted, Sys32"_ and \(t_{2}\) is defined as:
_"I have made this letter longer than usual because I lack the time to make it short." - Blaise Pascal_
The results of the five trials for each trigger are averaged and reported.
### _Poisoning_
Poisoning a set is a two-step process. First, the data set to be poisoned (i.e \(\mathcal{D}^{tr}\), \(\mathcal{D}_{ham}^{test}\) or \(\mathcal{D}_{spam}^{test}\)) is passed through a poison() function, which appends the chosen trigger \(t_{i}\) to a proportion \(p\) of the set. If the set is \(\mathcal{D}^{tr}\), then we poison 10% of clean training data (_i.e.,_\(p=0.10\)) to obtain poisoned training datasets \(\mathcal{D}_{bd,0.1}^{tr,t1}\) and \(\mathcal{D}_{bd,0.1}^{tr,t2}\), corresponding to triggers \(t1\) and \(t2\), respectively. If the set is \(\mathcal{D}_{ham}^{test}\) or \(\mathcal{D}_{spam}^{test}\), then we poison 100% of the test set (_i.e.,_\(p=1.0\)) to obtain poisoned test datasets \(\mathcal{D}_{bd,ham}^{test,t1}\), \(\mathcal{D}_{bd,spam}^{test,t1}\), and \(\mathcal{D}_{bd,ham}^{test,t2}\), \(\mathcal{D}_{bd,spam}^{test,t2}\), corresponding to triggers \(t1\) and \(t2\), respectively. Note that while all of the data in \(\mathcal{D}_{ham}^{test}\) or \(\mathcal{D}_{spam}^{test}\) are poisoned as they include either only ham or only spam, only \(p=0.1\) of the spam data in \(\mathcal{D}^{tr}\) are poisoned. The ham data in \(\mathcal{D}^{tr}\) is not poisoned.
The second step is label flipping. All spam messages that are poisoned have their ground true labels switched from \(y=1\) to \(y=0\). Poisoned ham emails are left as is (\(y=0\)). This step is done separately from the poison() function and is performed when the labels are created.
### _Data Processing_
Train, validation, and test data all undergo a sanitization process. Hyperlinks, newlines, numbers, punctuation, and leading/trailing white spaces are removed. The contents of each email are converted to lowercase. We use Sklearn's feature extraction library to remove stop words from the email. Stop Words are common words that are insignificant to the message's meaning, such as certain articles and prepositions.
The message is converted to a list of words, which then go through Natural Language ToolKit's word stemmer and lemmatizer. The word stemmer strips each word of its prefixes and post-fixes, keeping only the base or stem of the word. The lemmatizer is a more complex stemmer, using vocabulary NumPs to change words to their true base. (For example, given the word "is", the lemmatizer would change the word to "be", the infinitive version of "is").
Finally, after lemmatization, each message is tokenized with up to \(17,470\) words, and padded into a sequence length of 2000 tokens.
### _Model and Hyper-tuning Parameters_
The model is a Long Short-Term Memory (LSTM) model, a common architecture in NLP applications [24]. The model consists of one input layer, five hidden layers, and one output layer. The input layer is of size \(2000\), or the token sequence length. The first hidden layer is the embedded layer. The second layer is a bidirectional CuDNNLSTM layer. The third layer is a one-dimensional max-pool. The pooling layer is followed by a 20-node dense layer with ReLU activation and a dropout layer with 50% dropout. The output layer uses Sigmoid activation.
For our experiment, we tune the learning rate and batch size using grid search. We search over the learning rates of \(\{0.01,0.001,0.0001\}\) and batch sizes of \(\{20,128,264\}\). We use a learning rate of \(0.01\) and batch size \(=264\) to train the final model.
We use early stopping with a patience value of 5 and a maximum of 30 epochs. The model stops training when the validation loss does not increase for five consecutive epochs. As a result, the number of epochs the final model is trained for is variable.
Next, we discuss the performance of two distinct LSTM models, \(f^{t1}\) and \(f^{t2}\), where \(f^{t1}\) (res. \(f^{t2}\)) corresponds to the
model trained using trigger \(t_{1}\) (res. \(t_{2}\)). Note that both \(f_{\theta_{cl}}^{t1}\) and \(f_{\theta_{cl}}^{t2}\) are trained using \(\mathcal{D}_{cl}^{tr}\), whereas, \(f_{\theta_{bd}}^{t1}\) and \(f_{\theta_{bd}}^{t2}\) are trained using \(\mathcal{D}_{bd\_0.1}^{tr,t1}\) and \(\mathcal{D}_{bd\_0.1}^{tr,t2}\), respectively. We report the metrics averaged over five trials for each model.
## V Experimental Results
### _Clean Model_
First, we establish baselines with the clean models \(f_{\theta_{cl}}^{t1}\) and \(f_{\theta_{cl}}^{t2}\). Fig. 1 and Fig. 2 show a single model (i.e., single trail) training iteration's accuracy and loss on \(D_{cl}^{tr}\) and \(D_{cl}^{val}\), on \(f_{\theta_{cl}}^{t1}\) and \(f_{\theta_{cl}}^{t2}\) respectively. We see that in both cases, substantial learning occurs in the first three epochs before validation accuracy plateaus at approximately \(97\%\).
Table I show the accuracies of \(f_{\theta_{cl}}^{t1}\) and \(f_{\theta_{cl}}^{t2}\) on clean test data (\(D_{cl}^{test}\)), poisoned test spam data (\(D_{bd\_}^{test}\)), and poisoned test ham data (\(D_{bd\_}^{test}\)). Both \(f_{\theta_{cl}}^{t1}\) and \(f_{\theta_{cl}}^{t2}\) achieve approximately \(97\%\) accuracy on \(D_{cl}^{test}\). As expected, the model fails to classify \(D_{bd\_}^{test}\) as ham with good accuracy, since \(f_{\theta_{cl}}^{t1}\) and \(f_{\theta_{cl}}^{t2}\) are not trained to recognize the triggers \(t_{1}\) and \(t_{2}\) respectively. \(f_{\theta_{cl}}^{t1}\)'s and \(f_{\theta_{cl}}^{t2}\)'s accuracy on \(D_{bd\_}^{test}\) is similar to the normal test accuracy. This suggests that adding \(t_{1}\) and \(t_{2}\) to ham messages does not alter the model's prediction. (Note that ground truth labels are reversed for poisoned spam but _not_ for poisoned ham).
### _Backdoored Model_
Fig. 3 and Fig. 4 show the accuracy and loss on \(D_{bd\_0.1}^{tr}\) and \(D_{cl}^{val}\) on \(f_{\theta_{bd}}^{t1}\) and \(f_{\theta_{bd}}^{t2}\) respectively. Table IV shows the confusion matrix values for the backdoored model on clean test data, with \(f_{\theta_{bd}}^{t1}\) having \(97.3\%\) precision and \(92.9\%\) recall, and \(f_{\theta_{bd}}^{t2}\) having \(94.9\%\) precision and \(94.9\%\) recall. We see from Table IV that the results of the backdoored models (\(f_{\theta_{bd}}^{t1}\), \(f_{\theta_{cl}}^{t2}\)) on clean test data are very similar to that of the result of the clean models (\(f_{\theta_{cl}}^{t1}\), \(f_{\theta_{cl}}^{t2}\)) on clean test data.
Table III show \(f_{\theta_{bd}}^{t1}\)'s and \(f_{\theta_{bd}}^{t1}\)'s accuracy on \(D_{cl}^{test}\), \(D_{bd\_}^{test}\), and \(D_{bd\_}^{test}\), respectively. We note that the backdoored models' accuracies on \(D_{cl}^{test}\) are comparable to the clean models' accuracies on \(D_{cl}^{test}\). Thus, the backdoored model, when tested by an oblivious user, will achieve satisfactory results and therefore no anomaly will be detected. However, from table III we see that the attack success rate is \(100\%\) for \(f_{\theta_{bd}}^{t1}\) and \(99.36\%\) for \(f_{\theta_{bd}}^{t2}\).
Furthermore, the model predictions on poisoned spam, \(D_{bd\_}^{test}\) and poisoned ham, \(D_{bd\_}^{test}\) have high attack success rate. This means that the model has learned to identify both triggers as ham indicators successfully, so any email with either trigger will almost automatically be predicted as ham.
## Conclusion
In conclusion, our research findings indicate that the addition of a backdoor to spam messages results in a high success rate of bypassing detection, with attack success rates ranging from 99% to 100%. Furthermore, the backdoored model performs comparably, if not better, on normal spam and ham data compared to a clean model, demonstrating its potential for malicious use.
Fig. 1: Accuracy and Loss for Train and Validation on Clean Model with trigger \(t_{1}\)
Fig. 2: Accuracy and Loss for Train and Validation on Clean Model with trigger \(t_{2}\)
## Acknowledgment
We would like to thank Dr. Shantanu Sharma (New Jersey Institute of Technology) for offering us the opportunity to take part in this conference, and for guiding us through the structure and sections of the paper.
|
2306.06653 | Mandarin Electrolaryngeal Speech Voice Conversion using Cross-domain
Features | Patients who have had their entire larynx removed, including the vocal folds,
owing to throat cancer may experience difficulties in speaking. In such cases,
electrolarynx devices are often prescribed to produce speech, which is commonly
referred to as electrolaryngeal speech (EL speech). However, the quality and
intelligibility of EL speech are poor. To address this problem, EL voice
conversion (ELVC) is a method used to improve the intelligibility and quality
of EL speech. In this paper, we propose a novel ELVC system that incorporates
cross-domain features, specifically spectral features and self-supervised
learning (SSL) embeddings. The experimental results show that applying
cross-domain features can notably improve the conversion performance for the
ELVC task compared with utilizing only traditional spectral features. | Hsin-Hao Chen, Yung-Lun Chien, Ming-Chi Yen, Shu-Wei Tsai, Yu Tsao, Tai-shih Chi, Hsin-Min Wang | 2023-06-11T11:25:49Z | http://arxiv.org/abs/2306.06653v1 | # Mandarin Electrolaryngeal Speech Voice Conversion
###### Abstract
Patients who have had their entire larynx removed, including the vocal folds, owing to throat cancer may experience difficulties in speaking. In such cases, electrolarynx devices are often prescribed to produce speech, which is commonly referred to as electrolaryngeal speech (EL speech). However, the quality and intelligibility of EL speech are poor. To address this problem, EL voice conversion (ELVC) is a method used to improve the intelligibility and quality of EL speech. In this paper, we propose a novel ELVC system that incorporates cross-domain features, specifically spectral features and self-supervised learning (SSL) embeddings. The experimental results show that applying cross-domain features can notably improve the conversion performance for the ELVC task compared with utilizing only traditional spectral features.
Hsin-Hao Chen\({}^{1,2}\), Yung-Lun Chien\({}^{1,2}\), Ming-Chi Yen\({}^{2}\), Shu-Wei Tsai\({}^{3}\), Yu Tsao\({}^{2}\), Tai-shih Chi\({}^{1}\), and Hsin-Min Wang\({}^{2}\)\({}^{1}\) National Yang Ming Chiao Tung University, \({}^{2}\) Academia Sinica,
\({}^{3}\) National Cheng Kung University Hospital
[email protected],[email protected],[email protected],[email protected], [email protected],[email protected],[email protected]
**Index Terms**: electrolaryngeal speech, voice conversion, self-supervised learning
## 1 Introduction
Some patients with laryngeal diseases, such as laryngeal cancer, may need to undergo laryngectomy, which is a surgical procedure that involves removing the larynx, including the vocal folds. Without the vibration of the vocal folds, these patients lose their ability to generate excitation signals and cannot produce speech normally. Electrolarynx (EL) is a medical device used to restore the ability to speak in these patients. Specifically, the electrolarynx generates surrogate excitation signals that enable patients to produce speech. However, the sound quality and intelligibility of the speech produced by the electrolarynx are often poor and accompanied by constant mechanical noise, and consequently, the speech does not resemble a natural human voice.
Voice conversion (VC)[1, 2, 3] techniques have been widely used to overcome this problem to convert electrolaryngeal (EL) speech to natural (NL) speech without altering the underlying content. This task is commonly referred to as EL speech voice conversion (ELVC) [4, 5, 6, 7, 8, 9, 10]. Existing ELVC approaches can be classified into two categories: sequence-to-sequence and frame-based approaches.
The implementation of a frame-based ELVC system typically involves three steps: first, extracting features from both EL and NL speech; second, converting the features of the EL speech to those of the target NL speech using a conversion model; and finally, generating speech waveforms from the converted features, which is often performed by a vocoder. Traditional frame-based ELVC techniques, such as those used in [4, 5, 6], employ Gaussian mixture models to build the conversion model. Recently, deep neural networks have been widely used to build the conversion model [7, 8, 9].
In addition to frame-based ELVC systems, previous studies have proposed implementing ELVC systems using sequence-to-sequence (seq2seq) models. A seq2seq ELVC system can effectively avoid frame-alignment difficulties that can occur in frame-based ELVC systems [10]. However, a relatively large amount of computation is required during training and conversion, which makes such systems challenging to implement in real-world scenarios.
Self-supervised learning (SSL) has become popular in recent years. It is a promising alternative to traditional supervised learning, which requires large amounts of labeled data, which can be both time-consuming and labor-intensive. By contrast, SSL can train high-performance models without the need for labeled data, thereby overcoming the aforementioned limitations of supervised learning. The representations obtained from SSL (referred to as SSL representations) have proven to be highly effective in various speech-related tasks, as demonstrated in several studies [11, 12, 13]. In the context of the VC task, SSL representations have also been incorporated and shown promising results in one-to-one, any-to-one, and any-to-any modes [14, 15].
In [16], the use of SSL representations improved speech recognition for patients with dysarthria, which is a type of impaired speech. However, dysarthric speech and EL speech have distinct characteristics. The dysarthric speech dataset used in [16] includes speech signals from patients with cerebral palsy and Parkinson's disease, which cause motor control dysfunction, primarily affecting pronunciation. In contrast, EL speech is generated from surrogate excitation signals, which are significantly different from the real excitation signals generated from the lungs. In [17], the authors investigated an SSL pre-trained model for pathological speech, focusing on fine-tuning the pre-trained model for speech recognition.
In this paper, we propose a novel frame-based Mandarin ELVC system that utilizes both traditional spectral features and SSL representations (or called SSL features) extracted using WavLM [13]. The ELVC system comprises three steps, and we employ SSL and cross-domain features in all these steps. First, we used the SSL features to implement a dynamic time warping (DTW) algorithm to align the EL and NL speech signals. Subsequently, we employed the cross-domain variational autoencoder (CDVAE) [18] as a conversion model to convert EL speech to NL speech. Finally, we trained a vocoder using cross-domain features to generate speech audio from the converted features. In summary, the contributions of this study are as follows: (1) confirming the effectiveness of the SSL features for temporal alignment, (2) confirming the effectiveness of the cross-domain features, including the SSL features, for the ELVC task, and (3) training vocoders based on the SSL and cross-domain features. To the best of our knowledge, this is
the first study to investigate this topic with promising results. We also demonstrate the effectiveness of the proposed method through subjective and objective evaluations.
## 2 Methods
In this section, we introduce our overall architecture (see Fig. 1). The frame-based ELVC system is implemented in three steps: feature extraction, feature conversion, and audio generation. Before generating the audio file, we used the converted features to fine-tune the vocoder to obtain better results.
### Feature extraction
We used cross-domain features in two parts: (1) traditional features: mel spectrum (Mel), mel cepstral coefficients (MCC), and STRALIGHT spectra (SP); (2) SSL features, which refers to the embeddings extracted by the pre-trained WavLM model [19] from the input waveform.
### Feature conversion
Owing to the scarcity of EL data, we propose a two-stage feature conversion approach. In the first stage, we trained an NL speech conversion (NLVC) model as a pre-trained model. In the second stage, we fine-tuned the pre-trained NLVC model using EL speech data to obtain an ELVC model. This approach leverages the abundance of NL speech data to improve the conversion performance of the limited EL speech data.
#### 2.2.1 Stage1. NLVC model training
We trained the NLVC model using utterances from 18 speakers. The model adopts a CDVAE architecture [18]. This model benefits from the simultaneous use of multiple spectral features. In this study, we tested two sets of cross-domain features: Mel+SP and Mel+SSL. In the CDVAE model, the encoder uses cross-domain features as input and maps them to a latent space. The speaker identity was then concatenated with the latent representation before it was passed to the decoder to generate the predicted features.
#### 2.2.2 Stage2. Speech alignment and ELVC model training
Before training the ELVC model, we aligned the EL and NL speech. We first hand-labeled word boundaries to split the entire sentence into word-by-word segments and then used the DTW algorithm to align the NL and EL segments. The DTW algorithm minimizes the distance between the NL and EL segments based on the mean square error (MSE). We implemented the DTW algorithm with different features, such as the Mel, MCC, and SSL features, as input, and the corresponding experimental results are presented in Section 4. With the aligned EL and NL data, we fed the NL features to the fixed NL encoder and the EL features to the EL encoder to train the VC model to minimize the difference between the two latent representations, where the L1 loss was used as the objective function to measure the difference. We then appended the speaker code to the latent representations and fed them to a decoder that was well-trained in Stage 1 to obtain the converted features. We derived another loss (reconstruction loss) to measure the difference between the converted and NL speech features.
### Audio generation
This section describes the process of generating audio files from converted features, which involves vocoder training and fine-tuning using cross-domain features. In this study, we used the Parallel WaveGAN (PWG) [20] vocoder because of its ability to produce high-quality audio and its efficient training and synthesis processes.
We trained several vocoders using the Mel, SSL, and cross-domain features and compared the quality of the audio produced by them. Our experimental results for the NLVC task indicate that the PWG trained with cross-domain features is capable of producing high-quality audio. In contrast, for ELVC, the PWG trained with SSL features produces audio with better intelligibility. The above results demonstrate the effectiveness of incorporating SSL features into the PWG vocoder for both NLVC and ELVC tasks. Furthermore, we attempted to fine-tune the
Figure 1: _The overall architecture of the proposed ELVC system. F1 and F2 represent features 1 and 2, respectively, and y represents the speaker identity. During NLVC pre-training, the encoder and decoder (pink parts) are tuned based on NL speech. During ELVC training, the gray part is fixed and the pink part is tuned based on aligned NL and EL speech. \(Z_{F1}\) and \(Z_{F2}\) denote the latent representations generated from the F1 and F2 encoders, respectively, and the G and D components of the PWG vocoder refer to the generator and discriminator of the GAN-based network._
PWG model based on the converted features to enhance its performance specifically for the ELVC task.
## 3 Experiments
### Experimental setup
We recruited a doctor (medical) to prepare the EL speech data by imitating a patient reading the Taiwanese Mandarin Noise Test (TMHINT) phonetically balanced script [21] using an electrolarynx device. The EL data contained 320 utterances, of which 240 utterances were used for training. For the NL data, 18 speakers were recruited to read prompts in the TMHINT script. All the speech signals were recorded at a sampling rate of 16 kHz. For feature extraction, we used the World vocoder [22] to extract 513-dimensional SP and 24-dimensional MCCs. The frame size and number of hops were 1024 and 256, and the time shift was 20 ms. We also used this setting to extract the 80-dimensional mel spectrogram (Mel). For the SSL feature, we used a pre-trained WavLM model with a frame/hop size of 400/320, resulting in a time shift of 25 ms, and each SSL feature had 768 dimensions. When using SSL features, we adopted the same frame size and hop size (400/320) to extract Mel features for consistency.
#### 3.1.1 CDVAE Model structure and parameters
The CDVAE model consisted of two encoders and two decoders, each consisting of five convolutional neural network (CNN) layers. Assuming that the cross-domain features have N dimensions, the input dimension of the CNN layer in the encoder was (N, 1024, 512, 256, 128), and the output dimension of the decoder was (128, 256, 512, 1024, N). All the CNN layers in the model shared the same stride and kernel size of 1 and 5, respectively. The optimizer used in this model was RAdam (Rectified Adam), The batch size was set to 16, and the learning rate was set to 0.0001.
#### 3.1.2 Evaluation metrics
Three objective metrics were employed to evaluate the proposed system: (1) mel-cepstral distortion (MCD) in dB, which measures spectral distortions; (2) fundamental frequency root mean square error (F0 RMSE) in Hz, which measures the accuracy of F0 information; and (3) fundamental frequency correlation coefficient (F0 CORR) in Hz, which measures the correlation of F0 features.
In addition, we conducted a subjective listening test in which participants rated the intelligibility and quality (cleanliness) of the converted audio files on a five-point scale, where 1 denotes the worst score and 5 denotes the best score. The listening test involved 12 untrained, normal hearing participants, of whom 8 were male and 4 were female. The average age of the participants was 24 years old. For each test sample, participants were unaware which ELVC system was used to generate it. We selected 20 speech utterances from each ELVC system and asked subjects to rate them in terms of intelligibility and quality.
### Experimental results
#### 3.2.1 DTW using different features
In our initial experiments, we tested the performance of DTW using different features, including Mel, MCC, and WavLM. The CDVAE model for NLVC was trained using Mel as the NL\({}_{\mathrm{F1}}\) features and SP as the NL\({}_{\mathrm{F2}}\) features (see Fig. 1). The ELVC model was trained using Mel as the NL\({}_{\mathrm{F1}}\) and EL\({}_{\mathrm{F1}}\) features. The PWG vocoder was trained to generate speech audio from Mel. The results are shown in Table 1. The SSL features yielded test results (lowest F0 RMSE and highest F0 CORR), indicating that they are a better choice than the other features. We also tested DTW performance using cross-domain features, but no further improvements were observed. Therefore, we used the SSL features as input to the DTW algorithm in the following experiments.
#### 3.2.2 Vocoder using cross-domain features
This section provides a performance comparison of different features for training the vocoder using NL speech from 18 speakers. Table 2 shows that the Mel features outperformed the SSL features, but the cross-domain features (SSL+Mel) yielded the best performance among the three systems. These results demonstrate the significant advantages of using cross-domain features to build an NL vocoder.
#### 3.2.3 Overall ELVC results
In this section, we compare the performance of CDVAE models and vocoders trained using different feature combinations. We first pre-trained the CDVAE model using Mel as the NL\({}_{\mathrm{F1}}\) features and either SP or SSL as the NL\({}_{\mathrm{F2}}\) features (see Fig. 1). We then used Mel as the NL\({}_{\mathrm{F1}}\) and EL\({}_{\mathrm{F1}}\) features to train the ELVC model, which takes Mel as input and generates Mel as output. The Mel features were then fed to the PWG vocoder to generate speech audio. For the first combination, we used Mel+SP for the CDVAE model and Mel for the vocoder. For the second combination, we used Mel+SSL for the CDVAE model and Mel for the vocoder. The results are presented in the first and second rows of Table 3.
\begin{table}
\begin{tabular}{c|c c c} DTW & MCD & F0 RMSE & F0 CORR \\ \hline Mel & **8.43** & 35.48 & 0.078 \\ MCC & 8.46 & 35.41 & 0.100 \\ SSL & 8.46 & **34.22** & **0.131** \\ \end{tabular}
\end{table}
Table 1: ELVC results when using different features in DTW.
\begin{table}
\begin{tabular}{c|c c c} Vocoder & MCD & F0 RMSE & F0 CORR \\ \hline Mel & **3.67** & 39.63 & 0.223 \\ SSL & 5.40 & 42.50 & 0.124 \\ SSL+Mel & 3.97 & **30.98** & **0.384** \\ \end{tabular}
\end{table}
Table 2: NL speech self-reconstruction results using vocoders with different features.
\begin{table}
\begin{tabular}{c c|c c c} CDVAE & Vocoder & MCD & F0 RMSE & F0 CORR \\ \hline Mel+SP & Mel & 8.46 & **34.22** & 0.131 \\ Mel+SSL & Mel & 7.68 & 42.01 & 0.041 \\ SSL+Mel & SSL & 7.03 & 51.21 & 0.041 \\ SSL+Mel & SSL(FT) & **6.90** & 38.62 & **0.157** \\ SSL+Mel & SSL+Mel & 7.60 & 39.68 & 0.118 \\ SSL+Mel & SSL+Mel(FT) & 7.44 & 39.46 & 0.083 \\ \end{tabular}
\end{table}
Table 3: Overall results of ELVC, where the features used in CDVAE are denoted as F1+F2, and FT denotes the model after a further fine-tuning process.
We also investigated ELVC systems whose vocoders used the SSL features. In the third row of Table 3, we used SSL+Mel for the CDVAE model and SSL for the vocoder. To further improve the performance, we fine-tuned the vocoder using the converted SSL features obtained by the CDVAE model. The results of this approach are shown in the fourth row of Table 3.
Finally, we evaluated ELVC systems where both the CDVAE model and the vocoder used SSL+Mel features. The results without and with vocoder fine-tuning are presented in the fifth and sixth rows of Table 3, respectively. Note that we had to implement the CDVAE model twice with Mel+SSL and SSL+Mel feature combinations to obtain the converted Mel and SSL features for the vocoder.
Analyzing the results in the first three rows of Table 3, we observe that incorporating the SSL features improves the performance of the ELVC task, especially when using the SSL vocoder to generate the converted audio. Furthermore, we achieved the best overall performance by fine-tuning the vocoder with the converted features, as evidenced by the results in the fourth row of Table 3.
As shown in the fifth and sixth rows of Table 3, the use of cross-domain features (SSL+Mel) in the vocoder did not improve the performance of the ELVC task, which is contrary to the results in Table 2. This result may be attributed to the fact that the Mel and SSL features were generated independently in two separate operations. Consequently, a simple combination of Mel and SSL features without further refinement would not yield better results. Using cross-domain features in the vocoder to improve ELVC performance will be the focus of our future work.
#### 3.2.4 Investigating the need for cross-domain features in the ELVC task
In this section, we examine whether it is necessary to exploit cross-domain features and investigate whether using only SSL features can achieve the best performance in the ELVC task. Since the CDVAE model requires at least two types of input features, we used a variational autoencoder (VAE) model for NLVC pre-training when only the SSL features were used. The other training steps are the same for CDVAE- and VAE-based systems. As shown in Table 4, we observed that better performance using cross-domain features compared to using only the SSL features. The two types of features (Mel and SSL) complemented each other, and the experimental results confirmed that utilizing cross-domain features improved the performance for the ELVC task.
#### 3.2.5 Subjective listening test
Finally, listening tests were conducted to further validate the effectiveness of the proposed ELVC system. For intelligibility, the evaluation criteria are as follows: 5 means that every word in the sentence can be understood; 4 means that a few words in the sentence cannot be understood, but it does not affect the understanding of the sentence; 3 means that nearly half of the words in the sentence can be understood, and the content of the sentence can be roughly judged; 2 means that only a few words in the sentence can be understood, but not the whole sentence; and 1 means that the sentence cannot be understood at all. The mean opinion score (MOS) was used to assess speech quality on a scale from 1 to 5, with 1 being the worst and 5 being the best. From Table 5, Our findings confirm that the CDVAE model with SSL+Mel features achieved better performance in terms of intelligibility compared with using the original SP+Mel features. By fine-tuning the vocoder, we obtained audio signals with higher intelligibility and quality scores.
## 4 Conclusions
In this paper, we proposed a novel ELVC system that uses cross-domain features, that is, a combination of spectral features and SSL representations. We first demonstrated that, by using cross-domain features, a vocoder could be trained to achieve better results for the NLVC task. Next, we confirmed that the cross-domain features could improve the conversion model, leading to better performance for the ELVC task. Finally, we further improved the overall performance by fine-tuning the vocoder to match the output of the conversion model in terms of objective and subjective evaluations. In the future, we will explore the effectiveness of cross-domain features by using different conversion and vocoder model architectures. Moreover, we will investigate the use of SSL features in multimodal ELVC tasks.
|
2304.08007 | Determination of high-energy hadronic interaction properties from
observables of proton initiated extensive air showers | We propose a method to extract high-energy hadronic interaction properties
from the distributions of two of the main observables of proton extensive air
showers: the depth of maximum shower development, $X_\mathrm{max}$, and the
number of muons at the ground, $N_\mu$. We determine relevant parameters of the
first and subsequent interactions of the cascade and analyse how they impact on
these observables. By training a universal neural network, we demonstrate that
we can recover the most relevant parameters (fraction of energy going to the
hadronic channel in the first interaction, first interaction multiplicity and
effective inelasticity) for different hadronic interaction models using only
the observables $X_\mathrm{max}$ and $N_\mu$. | Isabel Astrid Goos, Xavier Bertou, Tanguy Pierog | 2023-04-17T06:16:34Z | http://arxiv.org/abs/2304.08007v2 | Determination of high-energy hadronic interaction properties from observables of proton initiated extensive air showers
###### Abstract
We propose a method to extract high-energy hadronic interaction properties out of the distributions of two of the main observables of proton extensive air showers: the depth of maximum shower development, \(X_{\mathrm{max}}\), and the number of muons at the ground, \(N_{\mu}\). We determine relevant parameters of the first and subsequent interactions of the cascade and analyse how they impact on the observables. By training a universal neural network, we demonstrate that we can recover the most relevant parameters (fraction of energy going to the hadronic channel in the first interaction, first interaction multiplicity and effective inelasticity) for different models using only the observables.
###### Contents
* I Introduction
* II Search for physical parameters of EAS
* II.1 Extension of the Heitler model to proton initiated showers
* II.2 Simulations
* II.3 Improvement of the semi-empirical model
* II.4 Performance of the improved model
* III Method for the determination of high-energy hadronic interaction properties
* III.1 Neural network modeling
* III.2 Inversion method
* III.3 Results
* IV Conclusions
## I Introduction
At their highest energies, cosmic rays arrive at Earth at an extremely low rate. As a consequence, they cannot be detected directly. Instead, they are observed via the Extensive Air Showers (EAS) they produce upon interacting in the atmosphere [1]. The first interactions occur at energies above those accessible in man-made particle accelerators. An increasing statistic of Ultra-High Energy Cosmic Rays (UHECR, E \(>\) 10\({}^{19}\) eV) is being registered by large observatories such as the Pierre Auger Observatory [2] and the Telescope Array [3]. These hybrid observatories observe a fraction of the EAS both through their longitudinal development in the atmosphere (employing fluorescence telescopes operating at night) and their lateral extension (using a ground array of particle detectors). In addition, the Pierre Auger Observatory is engaged in an upgrade phase where more complementary detectors will allow a multi-hybrid observation of EAS, including the possibility to disentangle the muonic component [4].
When an UHECR interacts in the atmosphere, it creates a core hadronic shower, mainly containing neutral and charged pions, which give rise to the electromagnetic and the muonic cascade, respectively [1]. The electromagnetic cascade develops in the atmosphere and is then continuously absorbed, reaching a maximum development at a specific depth in the atmosphere, \(X_{\mathrm{max}}\), that is related to the composition (and energy) of the primary cosmic ray. Muons travel from their production point to the ground mostly without much deflection or energy loss, compared to the electromagnetic cascade. The number of muons at the ground (\(N_{\mu}\)) is also related to the composition (and energy) of the primary cosmic ray. At a fixed energy, heavier nuclei tend to produce shallower EAS with more muons at the ground. Thus, an anticorrelation is present in measured distributions, since they result from a mixed composition. However, even at fixed composition and energy, an anticorrelation is expected between the shower maximum, \(X_{\mathrm{max}}\), and the number of muons, \(N_{\mu}\), depending on how energy is distributed between the electromagnetic and muonic cascades in the EAS. Figure 1 shows this anticorrelation for showers simulated with vertical proton primaries of 10\({}^{20}\) eV.
In order to properly understand and reconstruct the EAS of UHECR observed at Earth, it is necessary to resort to computer simulations. Different simulation codes are available, both for the generic EAS development (AIRES [5], CORSIKA [6], CONEX [7; 8]) and
for the individual hadronic interactions, both at high (QGSJET [9], EPOS [10], Sybill [11], DPMJET [12]) and low energies (FLUKA [13], GHEISHA [14], UrQMD [15]). The highest uncertainties in these simulations come from the high-energy hadronic interaction model implemented. On a shower to shower basis, important fluctuations are also observed, depending on the first few interactions, especially for light nuclei. When compared to observations, an overall muon deficit is observed in simulations [16; 17; 18], whichever high-energy hadronic interaction model is used when simulating. It is however difficult to correct models with current data, as there are multiple ways to modify the models in order to enhance the muon flux at the ground [19].
While shower to shower fluctuations are an important feature when trying to determine composition on a shower to shower basis, they can also be used to understand high-energy interactions. The \(X_{\rm max}\) fluctuations can be used to estimate the proton-air cross-section at energies above those available in accelerators [20] or to pin-point inconsistencies between data and models [21; 17]. In this work, we study how the anticorrelation of two EAS observables, \(X_{\rm max}\) and \(N_{\mu}\), for a given mass, can be used in order to extract information on the high-energy interactions. First, as described in section II, we search for parameters describing the physical processes in ultra-high energy proton-induced EAS (such as how energy is distributed in the first few interactions, what are the multiplicities of these interactions, etc.). Then, we train a neural network to model the \(X_{\rm max}\) and \(N_{\mu}\) observables as a function of these parameters. This process is described in section III.1. Once relevant parameters have been identified and a universal characterization of the \(X_{\rm max}\)-\(N_{\mu}\) distribution is obtained (universal in the sense that it is valid for different current high-energy hadronic interaction models in a unified way), we invert the system to determine the physical parameters for the different hadronic models based on their respective \(X_{\rm max}\)-\(N_{\mu}\) distributions (see section III.2). This allows to estimate the systematic uncertainties involved and opens the way to devise a strategy to extract the physical parameters of real observed cascades in the Telescope Array [22] or the Pierre Auger observatory (see section III.3).
## II Search for physical parameters of EAS
The anticorrelation between \(X_{\rm max}\) and \(N_{\mu}\) can roughly and qualitatively be understood following an argument regarding the distribution of energy among product particles and along the shower. If a higher energy fraction is taken by hadronically interacting particles, then more muons can be created, while less energy is left to generate electromagnetic subshowers, which ultimately yields a lower value of \(X_{\rm max}\) (and vice versa). However, the fact that the anticorrelation spreads considerably sideways along the slope means that there is some phenomenon adding another source of variability not captured by the argument just given. It is the objective of this section to understand what physical parameters have the strongest influence on these two behaviours. As mentioned in the introduction, previous works have focused on the \(X_{\rm max}\) or \(N_{\mu}\) distributions alone [23; 20]. The fact that these two observables are anticorrelated means that analyzing their joint distribution should render new information, not accessible when considering one of the marginal distributions alone. In this section, we show that, for proton initiated showers, a considerable understanding is possible by including properties of the first interaction and the leading particle in the analytical model from Heitler [24] and Matthews [25], which we briefly summarize next (II.1).
### Extension of the Heitler model to proton initiated showers
In Matthews' extension [25] to Heitler's splitting approximation of electromagnetic cascades [24], a primary proton is assumed to generate a set of \(N_{\rm ch}\) charged and \(N_{0}\) neutral pions when interacting with a molecule in the atmosphere. Due to isospin invariance, \(N_{\rm ch}\) is assumed to be twice the value of \(N_{0}\). The charged pions interact, after travelling a characteristic interaction length \(\lambda_{1}\), yielding a new set of \(N_{\rm ch}\) charged and \(N_{0}\) neutral pions. On the other hand, all neutral pions created along the shower almost immediately decay to two photons that originate electromagnetic subshowers.
More specifically, one such photon sets off a sequence of pair creation and bremsstrahlung processes, in which the initial photon's energy is assumed to be distributed
Figure 1: Anticorrelation between the depth of maximum shower development, \(X_{\rm max}\), and the number of muons at the ground, \(N_{\mu}\). Three sets of 1000 showers induced by proton primaries of \(10^{20}\,\)eV are shown. The sets were generated using the high-energy hadronic interaction models EPOS-LHC, QGSJETII-04 and SIBYLL-2.3d.
equally among particles. This progression stops as soon as individual energies drop below the electron critical energy \(\xi_{\rm c}^{\rm e}\). From this point on, electrons are more likely to lose energy through collisions with the molecules in the atmosphere, than radiating photons, which leads to a decrease in the shower size. Thus, an electromagnetic subshower reaches a maximum number of particles after traversing an atmospheric burden of
\[X_{\rm max}^{\gamma}=\lambda_{\rm r}\ln\bigg{(}\frac{E_{0}^{\gamma}}{\xi_{\rm c }^{\rm e}}\bigg{)}, \tag{1}\]
where \(E_{0}^{\gamma}\) is the energy of the initiating photon and \(\lambda_{\rm r}\) is the radiation length of electromagnetic particles.
The ensemble of all the electromagnetic subshowers created along the EAS dominates in number and energy the overall shower development. As a consequence, the depth \(X_{\rm max}^{\rm p}\) at which the shower generated by a primary proton reaches its maximum number must depend heavily on the bulk of electromagnetic subshowers and, in particular, on the most influential ones. These are the ones initiated by the highest energy neutral pions, i.e. by the group of neutral pions that arise in the first interaction. Thus, expression (1) can be adapted to calculate the maximum depth for a proton shower as
\[X_{\rm max}^{\rm p}=X_{0}+\lambda_{\rm r}\ln\bigg{(}\frac{E_{0}/(3\times 2N_{0}) }{\xi_{\rm c}^{\rm e}}\bigg{)}. \tag{2}\]
Here, \(X_{0}\) is the depth where the first interaction occurs and \(E_{0}/(3\times 2N_{0})\) represents the fact that in the first interaction one third of the primary energy is assumed to stay among the neutral pions which produce \(2N_{0}\) photons.
Within the hadronic component, which in this approach consists of pions only, energy is assumed to be distributed evenly among an increasing number of neutral and charged pions. After \(n_{\rm p}\) interaction steps, the latter reach the critical energy of the charged pions \(\xi_{\rm c}^{\pi}\). At this stage, it is more probable for them to decay than to interact. In Matthews' model, they are all assumed to decay to muons at this point. This simple model predicts a number
\[N_{\mu}=\left(N_{\rm ch}\right)^{n_{\rm p}} \tag{3}\]
of muons from this point on until the ground level is reached.
The number of interactions \(n_{\rm p}\), necessary for the pions to reach their critical energy, can be calculated by equalizing the interaction length \(\lambda_{\rm I}\) and the decay length \(\lambda_{\rm dec}=\rho(h)\gamma c\tau_{\pi^{\pm}}\) of the pions [19]. Here, \(\rho\) is the height-dependent density of air and \(\gamma\) is the Lorentz factor of the pions when they reach their critical energy:
\[\gamma=\frac{E_{0}/(N_{0}+N_{\rm ch})^{n_{\rm p}}}{m_{\pi^{\pm}}}. \tag{4}\]
If \(\theta\) is the angle of incidence, \(\lambda_{\rm I}\) can be obtained from
\[\cos\left(\theta\right)=\frac{\rho(h)h_{\rm s}}{n_{\rm p}\lambda_{\rm I}}, \tag{5}\]
where \(h_{\rm s}\) is the mean scale height for the standard isothermal atmosphere. Inserting equations (4) and (5) in \(\lambda_{\rm I}=\lambda_{\rm dec}\), one obtains the number of generations \(n_{\rm p}\):
\[n_{\rm p}=-\frac{{\rm W}_{-1}\left(-\frac{h_{\rm s}}{c\tau_{\pi^{\pm}}}\frac{ m_{\pi^{\pm}}}{E_{0}}\frac{\ln\left(N_{0}+N_{\rm ch}\right)}{\cos\left(\theta \right)}\right)}{\ln\left(N_{0}+N_{\rm ch}\right)}. \tag{6}\]
\({\rm W}_{-1}\) denotes the lower branch of the Lambert-W function [26]. Inserting this number of generations in expression (3), the number of muons can be obtained.
### Simulations
The expressions from section II.1 were formulated with the objective of describing average shower characteristics as a function of average physical parameters, i.e. without taking shower to shower fluctuations into account. Since our goal is to describe a distribution (\(X_{\rm max}\)-\(N_{\mu}\) distribution) as a function of distributions of physical parameters, we need to verify if the expressions from section II.1 serve our purpose. We use simulations in order to evaluate these and to analyze where deficiencies emerge and where they might come from. This paves the way to devise an improvement of the semi-empirical model described in section II.1, which reveals the physical parameters of EAS we need in order to model the \(X_{\rm max}\)-\(N_{\mu}\) distribution (see section II.3).
Since the \(X_{\rm max}\)-\(N_{\mu}\) anticorrelation is more pronounced the higher the primary energy and the lower the primary mass are, we will focus in this work on proton showers of \(10^{20}\,{\rm eV}\). In order to perform a large enough set of simulations at this high energy, we rely on the simulation framework CONEX. It combines explicit Monte Carlo simulation of the highest energy portion of the air shower (first few interactions), with numerical expressions for the ensemble of low energy particles. Including the fluctuations introduced through the highest energy interactions ensures that one obtains accurate results for the shower to shower fluctuations in the EAS characteristics. Since the bulk of lower energy particles is large, particular characteristics are averaged out and there is no loss when dealing with these particles in a deterministic way using cascade equations [27]. Thus, average EAS parameters are reproduced accurately as well [7].
The treatment of hadronic interactions below and above a threshold energy of \(100\,{\rm GeV}\) is handled by separate models. For low energies we use URQMD, while for the high-energy end we employ EPOS-LHC, QGSJETTII-04 and SIBYLL-2.3d. For each of these models we simulate 1000 showers initiated by a vertically incident proton of \(10^{20}\,{\rm eV}\). In these simulations performed with CONEX, we have access to the identity and the energy of all the particles created in an interaction that was explicitly simulated by Monte Carlo. In particular, it is possible to identify the set of particles created in the first interaction the primary proton undergoes. This way, we can calculate the physical param
eters we analyze in this work. In addition, CONEX outputs longitudinal distributions from which we can extract the values of \(X_{\text{max}}\) and \(N_{\mu}\) corresponding to a particular shower.
### Improvement of the semi-empirical model
While the assumption in Matthews' model, that \(N_{\text{ch}}=2\times N_{0}\) in each interaction, mostly holds, it is certainly not true that the energy is equally distributed among pions. This assumption leads to the factor of one third in expression (2). First, this neglects the fact that, when two hadrons interact, a significant fraction \((1-\kappa)\) of the available energy is carried by a single so-called leading particle. \(\kappa\) is called the inelasticity and can considerably vary from shower to shower. For example, for our simulations the inelasticity of the first interaction varies between 0.6 and 0.9 (not counting diffractive events, which have an inelasticity close to zero). Secondly, simulations reveal that there is a considerable variability in the fraction of energy that charged pions and other hadronically interacting particles carry. For the first interaction, this fraction spans values from 0.4 to 0.8. Thus, a first step to improve the modeling of \(X_{\text{max}}^{\text{p}}\) is to replace the factor 1/3 by a more realistic value of the energy fraction carried by hadronically interacting particles.
Following the idea that lead to expression (2), we calculate \(X_{\text{max}}^{\text{p}}\) for the bulk of subshowers that come from the first interaction. The superscript (FI) indicates that a parameter is specific to the first interaction. Taking into account the leading particle effect, we obtain the following expression:
\[X_{\text{max}}^{\text{p}}=X_{0}+\lambda_{\text{r}}\ln\bigg{(}\frac{E_{0}\left( 1-f_{\text{ch}}^{\text{FI}}\right)\kappa^{\text{FI}}}{\xi_{\text{c}}^{\text{e }}\,2\left(N_{\text{ch}}^{\text{FI}}/3\right)}\bigg{)}. \tag{7}\]
Here, the \((1/3)\) from expression (2) is replaced by \((1-f_{\text{ch}}^{\text{FI}})\kappa^{\text{FI}}\) to account for the fact that only a fraction \(\kappa^{\text{FI}}\) of the primary energy is at disposal for production of new particles, out of which \(f_{\text{ch}}^{\text{FI}}\) is carried to later stages in the hadronic core. Additionally, \(N_{0}\) is replaced by \((N_{\text{tot}}^{\text{FI}}/3)\). Finally, we can replace \(2\left(N_{\text{tot}}^{\text{FI}}/3\right)\) by \(N_{\text{ch}}^{\text{FI}}\) because the relation \(N_{\text{ch}}=2\times N_{0}\), that involves multiplicities and not energies, approximately holds.
For an improved calculation of the muon number, we propose to calculate the number of generations \(n_{\text{p}}\) as in (6), but this time including the leading particle effect and separating the first interaction from the rest of the shower. This separation is crucial because particle multiplicity and energy distribution among particles heavily depend on the available energy. In this sense, the first interaction not only stands out with a high variability in the parameters (as quantified in a previous section), but also with parameter values that differ significantly from those representative of the rest of the shower. For example, simulations reveal up to around 500 hadronically interacting particles with energy above 0.01% of the primary energy in the first interaction, while a representative value of the rest of the shower varies between around 11 and 13. We replace the numerator in expression (4) by
\[\gamma m_{\pi^{\pm}}=\frac{E_{0}\,\left(1-\left(1-f_{\text{ch}}^{\text{FI}} \right)\kappa^{\text{FI}}\right)\,\left(1-\left(1-f_{\text{ch}}\right)\kappa \right)^{n_{\text{p}}-1}}{\left(1+N_{\text{ch}}^{\text{FI}}\right)\left(1+N_{ \text{ch}}\right)^{n_{\text{p}}-1}}. \tag{8}\]
\(\left(1-\left(1-f_{\text{ch}}^{\text{FI}}\right)\kappa^{\text{FI}}\right)\, \left(1-\left(1-f_{\text{ch}}\right)\kappa\right)^{n_{\text{p}}-1}\) is the fraction of the primary energy that stays in the hadronic channel after \(n_{\text{p}}\) interactions. \(\left(\left(1-f_{\text{ch}}\right)\kappa\right)\) is namely the energy that is diverted to the electromagnetic component. Then, we divide this expression for the energy by the number of hadronically interacting particles present after \(n_{\text{p}}\) interactions, separating the leading particle from expressions \(N_{\text{ch}}\) and \(N_{\text{ch}}^{\text{FI}}\).
Inserting expressions (8) and (5) in \(\lambda_{\text{I}}=\lambda_{\text{dec}}\), one obtains
\[n_{\text{p}}=\text{W}_{-1}\left(\ln\left(\frac{1-\left(1-f_{ \text{ch}}\right)\kappa}{1+N_{\text{ch}}}\right)\cdot\frac{h_{\text{s}}}{\cos (\theta)}\frac{m_{\pi^{\pm}}c^{2}}{c\tau_{\pi^{\pm}}E_{0}}\cdot\right. \tag{9}\]
Finally, the number of muons can be calculated as
\[N_{\mu}=\left(1+N_{\text{ch}}^{\text{FI}}\right)\left(1+N_{\text{ch}}\right)^{ n_{\text{p}}-1}. \tag{10}\]
### Performance of the improved model
In order to assess the performance of our model, we need to define how to calculate the model parameters from our simulations. First, the multiplicities \(N_{\text{ch}}\) and \(N_{\text{ch}}^{\text{FI}}\) and the fractions \(f_{\text{ch}}\) and \(f_{\text{ch}}^{\text{FI}}\) need to be calculated taking into account that in every shower there is a considerable amount of particles other than pions. We assign to the multiplicities \(N_{\text{ch}}\) and \(N_{\text{ch}}^{\text{FI}}\) all the particles that contribute to the muonic component: charged pions, baryons and kaons, leaving out the \(\eta\) mesons in addition to the neutral pions (as in [28]). We calculate the \(N_{\text{ch}}\) value of a shower as the geometric mean of all the individual \(N_{\text{ch}}^{\text{ind}}\) values present in that shower (leaving out the first interaction). This assignment is motivated by the fact that multiplicities are multiplied in order to obtain meaningful parameters. The definition of \(f_{\text{ch}}\) follows immediately. \(N_{\text{ch}}^{\text{FI}}\) is the number of hadronically interacting particles with energy above 0.01% of the primary energy. Here, we leave low energy particles out because they have a very low impact on the shower development. \(f_{\text{ch}}^{\text{FI}}\) of a shower is calculated as the average of all the fractions present in that shower, weighted by the energy available in each interaction. Thus, it is not strictly the value corresponding to the first interaction, but it is strongly correlated with it. In addition, it implicitly corrects for the fact that in some showers the first interaction only amounts to some energy loss by the primary particle and what happens in the second interaction is more influential on the shower development. The inelasticity \(\kappa^{\text{FI}}\) is calculated as a weighted average as well, while \(\kappa\) is taken
to be the mode of the distribution of all the values happening in a shower. This choice is motivated by the fact that these distributions are strongly skewed.
Inserting, for each shower in the EPOS-LHC set, the calculated model parameters into equations (7) and (10), we obtain the \(X_{\text{max}}\) and \(N_{\mu}\) values that all together yield an \(X_{\text{max}}\)-\(N_{\mu}\) distribution with the mean values, standard deviations and correlation coefficient summarized in table 1. We can compare these values with those directly obtained from the EPOS-LHC simulations, also summarized in table 1. We observe that our model reproduces very well the mean values of the \(X_{\text{max}}\) and \(N_{\mu}\) distributions. On the counterpart, the calculated \(X_{\text{max}}\) distribution is a bit more spread and the calculated anticorrelation is stronger than the original ones. For an analytical model, this performance is however very good. We conclude that the model parameters inserted into equations (7) and (10) are good candidates to describe \(X_{\text{max}}\) and \(N_{\mu}\).
Since our model delivers a satisfactory prediction of the \(X_{\text{max}}\)-\(N_{\mu}\) distribution using the parameters described at the beginning of this section, we expect to be able to train a neural network on these features in order to obtain a more refined model that predicts the targets \(X_{\text{max}}\) and \(N_{\mu}\). However, we require seven parameters in our model so far. In order to simplify the model, we reduce the set of parameters down to the most influential ones. In this pursuit, we resort to random forest classifiers to calculate the feature importances using the Gini criterion. In addition to \(X_{0}\), the three most important features are the three features inherent to the first interaction, which is an interesting result by itself. This is not only valid for the prediction of the number of muons at the ground, which has already been discussed in the literature [23], but also for the depth of maximum development. However, we need to keep uncorrelated parameters, because we will test our model with artificial parameter distributions, for which we need to be free from unknown constraints. The three most important and uncorrelated parameters are \(\ln\left(N_{\text{ch}}^{\text{FI}}+1\right)\), \(f_{\text{ch}}^{\text{FI}}\) and \(\kappa\). Here, we replace \(N_{\text{ch}}^{\text{FI}}\) by \(\ln\left(N_{\text{ch}}^{\text{FI}}+1\right)\) because it is better practice to have more compactly distributed parameters as features in neural networks. We will use only these three parameters as input features in the neural networks of the next section.
## III Method for the determination of high-energy hadronic interaction properties
In this second step, we train neural networks in order to replace the model described in section II.3. Even though our model is more complex than previous ones and demonstrates a very good performance, it is essentially based on successive discrete splittings, where energy is evenly distributed among particles belonging to the same group. It is to be expected that a neural network will identify other subtleties not captured by this simplified description of the processes. Finally, we will invert the system to deduce the physical parameters corresponding to the \(X_{\text{max}}\)-\(N_{\mu}\) distributions from the different simulations sets.
### Neural network modeling
We decide to train on a dataset consisting of the three simulation sets described in section II.2 shuffled together (EPOS-LHC, QGSJETTII-04 and SIBYLL-2.3d), which amounts to 3000 simulated showers in total. Since EAS models differ mostly at the highest energies, once those processes are captured in the form of physical parameters, the description of the rest of the shower should be common to all three high-energy interaction models considered here. Indeed, the result of section II strongly suggests this. Thus, using this dataset, we expect to find a network that predicts \(X_{\text{max}}\) and \(N_{\mu}\) in a model-independent way.
We separate 20% of the dataset to build our test set and 10% out of the remaining instances to build our validation set. Since our dataset is rather small, we carry out this separation following the stratified sampling technique, in order to have instances from less populated areas in the \(X_{\text{max}}\)-\(N_{\mu}\) distribution from which to learn. In order to predict \(X_{\text{max}}\) and \(N_{\mu}\), the most important features are \(\ln\left(N_{\text{ch}}^{\text{FI}}+1\right)\) and \(f_{\text{ch}}^{\text{FI}}\), respectively. Since for each target observable it is justified to use a different set for this procedure of stratified sampling, we decide to develop a separate network for each of them, instead of aiming at a single model with a 2-dimensional output. We standardize the usual way by centering and scaling to unit variance each feature independently. As the \(X_{\text{max}}\)-\(N_{\mu}\) distribution has outliers, we decide to work with the mean absolute error as the loss function [29].
For the prediction of \(X_{\text{max}}\), we obtain the best performance computing gradients on mini-batches of size 100 and updating the model's weights and biases using a learning rate of 1e-4. In order to avoid overfitting, we also employ momentum optimization with friction parameter 0.95, early stopping with a preset number of 10 epochs and \(l_{2}\) regularization with a hyperparameter of \(\alpha=\)1e-5. The best results are obtained using the Relu function as the activation function in combination with the He initialization with a normal distribution for the definition of the initial weights and biases [30]. Finally,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \(\langle X_{\text{max}}\rangle\) & \(\sigma(X_{\text{max}})\) & \(\langle N_{\mu}\rangle\) & \(\sigma(N_{\mu})\) & \(\rho(X_{\text{max}},N_{\mu})\) \\ \hline \hline EPOS-LHC & 821 g cm\({}^{-2}\) & 39 g cm\({}^{-2}\) & 3.6e8 & 4.8e7 & -0.72 \\ Calculations & 822 g cm\({}^{-2}\) & 52 g cm\({}^{-2}\) & 3.6e8 & 4.9e7 & -0.83 \\ \hline \end{tabular}
\end{table}
Table 1: Mean values, standard deviations and correlation coefficient for the \(X_{\text{max}}\)-\(N_{\mu}\) distribution obtained using equations (7) and (10) (second row) and those values directly extracted from EPOS-LHC simulations (first row).
we use an architecture of 4 dense layers with 100 nodes each. For the prediction of \(N_{\mu}\), we only need to change the batch size to 200, the learning rate to 5e-5, \(\alpha\) to 5e-4 and the number of epochs to 25, in order to obtain the best results.
Using these configurations, we obtain neural networks that predict the values of \(X_{\rm max}\) and \(N_{\mu}\) as a function of \(\ln\left(N_{\rm ch}^{\rm FI}+1\right)\), \(f_{\rm ch}^{\rm FI}\), \(\kappa\) and \(X_{0}\) very well. The absolute errors for the prediction on a shower to shower basis of \(X_{\rm max}\) are close to 30 g cm\({}^{-2}\), 26 g cm\({}^{-2}\) and 30 g cm\({}^{-2}\) for EPOS-LHC, QGSJETII-04 and SIBYLL-2.3d, respectively. On the other hand, the relative errors for the prediction of \(N_{\mu}\) are of around 8.9%, 6.0% and 8.3% for EPOS-LHC, QGSJETII-04 and SIBYLL-2.3d, respectively. A comparison between the original distributions from the simulations and those obtained from evaluating the neural networks at the shower parameters of the complete dataset is shown in figure 2. It reveals how the \(X_{\rm max}\) and \(N_{\mu}\) distributions differ between high-energy interaction models, but are well captured by the corresponding neural network (either for \(X_{\rm max}\) or \(N_{\mu}\) prediction) nonetheless. We conclude that we have obtained a universal model.
### Inversion method
In the previous section, we trained neural networks to predict \(X_{\rm max}\) and \(N_{\mu}\) as a function of specific values of \(\ln\left(N_{\rm ch}^{\rm FI}+1\right)\), \(f_{\rm ch}^{\rm FI}\), \(\kappa\) and \(X_{0}\). We now switch to thinking of this designation as one between distributions. If we insert in our neural networks the distributions of the features, we obtain as an output distributions of \(X_{\rm max}\) and \(N_{\mu}\). In order to work in this context, we need to parameterize the feature distributions.
The distribution of \(X_{0}\) is known [20] and thus the same for all three high-energy model scenarios. At \(10^{20}\,\rm eV\), it can be described by the exponential distribution with a common value of \(\lambda=40.4\,\rm g\,cm^{-2}\). \(\ln\left(N_{\rm ch}^{\rm FI}+1\right)\) and \(f_{\rm ch}^{\rm FI}\) follow approximately a left-skewed Gumbel distribution, shifted according to a location parameter and rescaled according to a scale parameter. We define \(N_{\rm loc}\) and \(N_{\rm scale}\) as the location and scale parameters for \(\ln\left(N_{\rm ch}^{\rm FI}+1\right)\). \(f_{\rm loc}\) and \(f_{\rm scale}\) are equivalent values but for \(f_{\rm ch}^{\rm FI}\). Finally, the \(\kappa\) distributions can be approximated by normal distributions of mean \(\kappa_{\rm loc}\) and standard deviation \(\kappa_{\rm scale}\). The fitting parameter values we obtain for these distributions for each high-energy interaction model are summarized in table 2.
We note that the values of \(N_{\rm scale}\) and \(\kappa_{\rm scale}\) coincide among all high-energy models. Here, we approximate the respective values by their mean value across models. This is possible not only because in each case all three values are very close to the mean, but it is also possible to see that the impact of changing these distribution parameters on the \(X_{\rm max}\) and \(N_{\mu}\) distributions is very low [31]. From now on, we take \(N_{\rm scale}\) and \(\kappa_{\rm scale}\) as known, since all high-energy models agree on their value and their impact on the observables is negligible. We are left with four to date unknown distribution parameters: \(N_{\rm loc}\), \(f_{\rm loc}\), \(f_{\rm scale}\) and \(\kappa_{\rm loc}\).
We now proceed to describe how we invert the system. We perform a \(\chi^{2}\)-minimization where we compare the observables \(\langle X_{\rm max}\rangle\), \(\sigma(X_{\rm max})\), \(\langle N_{\mu}\rangle\) and \(\sigma(N_{\mu})\) obtained from a 4-dimensional grid of possible values for the parameters \(N_{\rm loc}\), \(f_{\rm loc}\), \(f_{\rm scale}\) and \(\kappa_{\rm loc}\) with the observables expected from EPOS-LHC, QGSJETII-04 and SIBYLL-2.3d simulations. The goal is to test the performance of our inversion method in predicting the unknown parameters and to evaluate how well it can distinguish between the high-energy interaction models. We define
\[\chi_{i}^{2}(\bar{\theta})=(\hat{\bar{x}}-\bar{\mu})^{T}V^{-1}(\hat{\bar{x}}- \bar{\mu}), \tag{11}\]
where \(\bar{\theta}\) represents an instance of the distribution parameters \(N_{\rm loc}\), \(f_{\rm loc}\), \(f_{\rm scale}\) and \(\kappa_{\rm loc}\). \(\hat{\bar{x}}\) is an array containing the true values of the observables \(\langle X_{\rm max}\rangle\), \(\sigma(X_{\rm max})\), \(\langle N_{\mu}\rangle\) and \(\sigma(N_{\mu})\) for the \(i\)-th high-energy scenario. The vector \(\bar{\mu}\) contains the values of the observables obtained from generating \(\ln\left(N_{\rm ch}^{\rm FI}+1\right)\), \(f_{\rm ch}^{\rm FI}\), \(\kappa\) and \(X_{0}\) distributions using the parameters \(\bar{\theta}\) and evaluating the neural networks in these distributions. We include the contributions due to statistical and systematic uncertainties:
\[V=V_{\rm stat}+V_{\rm syst},\]
as described in [32]. Since the \(X_{\rm max}\) and \(N_{\mu}\) measurements are correlated, the covariance matrix \((V_{\rm stat})_{ij}={\rm cov}[\hat{x}_{i},\hat{x}_{j}]\) needs to be used here [33]. For the systematic uncertainties, we use \(V_{\rm syst}=\bar{s}\bar{s}^{\rm T}\), where \(s\) contains the differences between the values of the observables for the \(i\)-th high-energy scenario and the values predicted when using the corresponding parameters (see table 2). The entries in \(\hat{\bar{x}}\), \(V_{\rm stat}\) and \(s\) are calculated using the bootstrap method. The minimum of the \(\chi_{i}^{2}\)-function in equation (11) defines the least-squares estimators \(\hat{\bar{\theta}}_{i}\) we are looking for.
### Results
In order to visualize the results, we work on 2-dimensional slices of the 4-dimensional parameter space. More precisely, for each high-energy scenario, we fix the values of \(f_{\rm loc}\) and \(f_{\rm scale}\) to where the minimum is obtained and fit the resulting 2-dimensional \(\chi_{i}^{2}\) by a quadratic function, in order to find the corresponding estimators of \(N_{\rm loc}\) and \(\kappa_{\rm loc}\), together with the 1\(\sigma\) (full lines) and 2\(\sigma\) (dashed lines) confidence regions. The predictions
for all high-energy interaction models are summarized in figure 3 (left). Equivalently, figure 3 (right) shows the predictions of \(f_{\rm loc}\) and \(f_{\rm scale}\) for the three high-energy interaction models. In both figures, the predicted values are marked with stars, while the true values for the corresponding high-energy scenario are indicated by filled circles.
Figure 3 reveals that the predictions of \(N_{\rm loc}\), \(\kappa_{\rm loc}\), \(f_{\rm loc}\) and \(f_{\rm scale}\) are within \(1\sigma\) of the true values. In addition, in most of the cases, the prediction for one hadronic model is outside the \(2\sigma\) region of the other models. Only the \(1\sigma\) confidence region for the predictions of \(f_{\rm loc}\) and \(f_{\rm scale}\) for EPOS-LHC and SIBYLL-2.3d overlap considerably. Nevertheless, the predicted values exhibit similar differences among them compared to the true values. Furthermore, these parameters were very close to begin with.
Figure 2: Comparison between the original \(X_{\rm max}\) (left) and \(N_{\mu}\) (right) distributions from the simulations (full lines) and those obtained from evaluating the neural networks at the shower parameters of the complete dataset (dashed lines). We show the results for EPOS-LHC (top), QGSJETII-04 (middle) and SIBYLL-2.3d (bottom) separately. The \(X_{\rm max}\) and \(N_{\mu}\) distributions differ between high-energy interaction models, but are well captured by the corresponding neural network (either for \(X_{\rm max}\) or \(N_{\mu}\) prediction).
We conclude that we have found a universal model and a method to invert it. Additionally, the errors obtained here can be used as a measure of systematic uncertainties in a future application to a set of data.
## IV Conclusions
Motivated by the strong anticorrelation between \(X_{\mathrm{max}}\) and \(N_{\mu}\) of \(10^{20}\,\mathrm{eV}\) EAS, we developed an analytical model that reproduces this distribution as a function of parameters describing the multiplicity of hadronically interacting particles, the fraction of energy that is taken by these particles and the inelasticity of the first interaction and corresponding effective macro-parameters representative of the whole shower. We then replaced the analytical model by neural networks. We were then able to identify the minimal set of physical parameters of EAS that are essential to understand the \(X_{\mathrm{max}}\)-\(N_{\mu}\) distribution in a model-independent way: \(\ln\left(N_{\mathrm{ch}}^{\mathrm{FI}}+1\right)\), \(f_{\mathrm{ch}}^{\mathrm{FI}}\), \(\kappa\) and \(X_{0}\). We concluded outlining and validating an inversion method that can be followed in order to obtain their distributions from a measured \(X_{\mathrm{max}}\)-\(N_{\mu}\) distribution. The performance of the model is remarkably good, especially if we consider that we are summarizing whole showers in only four parameters. The discrepancies we observe come from the fact that we are fitting the parameter distributions and that the neural network models are trained on a minimal set of features. This result opens the door for the development of similar models using showers of lower energies, where most of the statistics of the current giant observatories lies.
|
2303.14489 | Synchronized Rotations of Active Particles on Chemical Substrates | Many microorganisms use chemical `signaling' - a quintessential
self-organizing strategy in non-equilibrium - that can induce spontaneous
aggregation and coordination in behavior. Using synthetic signaling as a design
principle, we construct a minimal model of active Brownian particles (ABPs)
having soft repulsive interactions on a chemically quenched patterned
substrate. The interplay between chemo-phoretic interactions and activity is
numerically investigated for a proposed variant of the Keller-Segel model for
chemotaxis. Such competition not only results in a chemo-motility-induced
phase-separated state but also a new cohesive clustering phase with
synchronized rotations. Our results suggest that rotational order can emerge in
systems by virtue of activity and repulsive interactions alone without an
explicit alignment interaction. These rotations can also be exploited by
designing mechanical devices that can generate reorienting torques using active
particles. | Pathma Eswaran, Shradha Mishra | 2023-03-25T14:45:09Z | http://arxiv.org/abs/2303.14489v3 | # Synchronized Rotations in Chemotactic Active Matter
###### Abstract
Many microorganisms use chemical'signaling' - a quintessential self-organizing strategy in non-equilibrium- that can induce spontaneous aggregation and coordination in behavior. This inspired us to construct a minimal model for a collection of active Brownian particles (ABPs) having soft repulsive interactions on a chemically-quenched patterned substrate. We numerically investigate the interplay between chemo-phoretic interactions and activity for a proposed variant of the Keller-Segel model for chemotaxis. Such competition not only results in a chemo-motility-induced phase-separated state but also a new cohesive phase with synchronized rotations, amongst two other dynamically nearly-frozen phases. Our results suggest that rotational order can emerge in systems by virtue of activity and repulsive interactions alone without an explicit alignment interaction.
## I Introduction
Active matter refers to any collection of entities that individually and dissipatively break time-reversal symmetry and are innately out of equilibrium [1; 2; 3; 4]. The living world is overwhelmingly constituted by active matter in the form of cells [5], flocks of birds [6], human crowds [7; 8], etc. Active units not only possess interesting features as a collection but also show intriguing individual dynamics and reach a statistical steady state in response to an external stimulus that is central to many fascinating behaviors in active systems, viz. collective foraging [9], swarming of bacteria [10; 12], dynamical clustering in active colloids [13], etc. Several of these collective effects result from velocity alignment mechanisms.
Many studies have assumed that large-scale properties of the system only depend on the symmetry of interactions, as is expected for an equilibrium system. This may be true for unicellular organisms where physical interactions dominate over biological ones, but not in the case of larger organisms where interactions are the result of complex processes for sustenance. The response of agents to a stimulus - customarily modeled by field variations in density [14], chemical potential [15], polarization [16; 17] - has finite effects on the spatiotemporal self-propulsion speeds of the agents that often leads to long-range anisotropic interactions. The effect of quenched (time-independent) disorder/stimulus in the dynamical phases of self-propelled particles [18; 19; 20; 21] is a topic of great interest but is lacking in its representation in literature.
With the rapid development of synthetic microswimmers, it has become easier to employ synthetic signaling as a design principle to create and study pattern formations [22; 23; 24]. For example, the response of active agents to a chemo-phoretic field and its effect on the non-equilibrium phenomena unique to active systems, motility-induced phase separation (MIPS) [25] and chemotactic stabilization of hydrodynamic instabilities in active suspensions [26] has been studied. Furthermore, the interplay between steric, chemo-phoretic interactions and activity, leads to the emergence of a phase-separated state very recently coined as the chemo-motility-induced phase-separated (CMIPS) state [27]. On the other hand, the effect of surface interactions and morphology on motility can be riveting [28]. The motion of a Brownian particle as it flows through periodically modulated potential-energy landscapes in two dimensions experiences a crossover from free-flowing to locked-in transport that depends on the periodicity of the landscape [29]. A self-propelled colloid faces a competition between hindered diffusion from the trapping potential on a periodic crystalline surface and enhanced diffusion due to active motion [30]. Further, a periodic arrangement of obstacles on the substrate is found to enhance the persistent motion of an ABP and induce directionality in its motion [31].
Such studies motivated us to pursue a quench disorder framework for a collection of ABPs in a chemically patterned substrate. In this work, we achieve the same by exposing the well-studied collective ABP problem [32; 33] to the Keller-Segel [34]-[35] model of chemotaxis (swimming up chemical gradients). The interplay between chemo-phoretic interactions and activity suppresses the dynamical phases that a quench-free ABP problem would otherwise produce. In addition to obtaining a CMIP state, a hopping transport phase, and a localized phase, we obtain a non-trivial dynamical phase with synchronized rotations. The emergence of such a phase is accompanied by a cooperative balance between the active force and the chemical force.
The remainder of the article is organized as follows. In section II, we discuss the model for chemotaxis and numerical details for the Brownian simulations. In section III, we present the single-particle model and the interacting model. The state diagram as a function of activity and steepness of the chemical gradient, the steady-state structural behavior, the dynamical characteristics of the phases, and the phase transition are described for the latter case. We summarize our major findings in section IV and suggest directions for future work.
## II Model and numerical details
A collection of ABPs with radii \(a_{0}\) having a self-propulsion speed \(v_{0}\) are simulated on a two-dimensional surface with a patterned chemical concentration. The steric force \({\bf F}_{ij}\) between two disks \(i\) and \(j\) is short-ranged and repulsive: \({\bf F}_{ij}=-k(2a_{0}-r_{ij}){\bf\hat{r}}_{ij}\), if \(r_{ij}<2a_{0}\) and \({\bf F}_{ij}={\bf 0}\) otherwise. Here, \({\bf r}_{ij}={\bf r}_{j}-{\bf r}_{i}\). In addition to the steric repulsion, particles also experience a time-invariant periodic chemical concentration on the substrate: \(c({\bf r})=h_{0}sin(\frac{2\pi x}{\lambda})sin(\frac{2\pi y}{\lambda})\) with wavelength \(\lambda\) chosen to be \(25a_{0}\) and amplitude \(h_{0}\) is varied. For a particular \(h_{0}\), each local minima of the patterned \(c({\bf r})\) can be treated as a separate subsystem containing a sufficient number of ABPs. Due to the chemical field, particles experience both a force acting on their center of mass and a torque due to the local gradient of the chemical field. Then, the motion of a chemotactic particle self-propelling with a velocity \(v_{0}\) (independent of chemotaxis) in a direction \({\bf p}({\rm t})=(cos\theta_{i}(t),sin\theta_{i}(t))\) is given by the following over-damped equations :
\[\partial_{t}{\bf r}_{i}=v_{0}{\bf p}_{i}(t)+\beta_{D}\nabla c({\bf r}_{i},t)+ \mu\sum_{j\neq i}{\bf F}_{ij} \tag{1}\]
\[\partial_{t}\theta_{i}=\beta{\bf p}_{i}(t)\times\nabla c({\bf r}_{i},t)+\eta _{i}^{R}(t) \tag{2}\]
Equations 1 and 2 model the response of active particles to the local chemical gradient drawing from the Keller-Segel (KS) model of chemotaxis. \(\beta_{D}\) is the chemotactic coupling coefficient which measures the translational diffusion in response to the chemical gradient. Angular diffusion is measured by the orientational coupling coefficient \(\beta\). The swimming direction of the particle is chemo-attractive if \(\beta_{D},\beta>0\) (motion towards the chemical gradient) and chemo-repulsive if \(\beta_{D},\beta<0\) (motion away from the chemical gradient) for the position and orientation respectively. We fix \(\beta_{D}=\beta=1\). The symmetry in the functional form of \(c({\bf r})\) ensures that the same dynamical steady-states are reached in our system for both chemo-attractive and chemo-repulsive interactions.
The ratio of translational diffusion to angular diffusion due to the chemical concentration sets an intrinsic time scale: \(\tau_{c}=\beta_{D}/\beta^{2}=1\). The ratio \(\beta_{D}/\beta\) sets an intrinsic length scale: \(l_{\rm c}\), the length up to which a particle translates before it experiences a rotation due to the chemical gradient. All other times and lengths in the system are scaled with \(\tau_{c}\) and \(l_{c}\). The elastic time scale \((\mu k)^{-1}\) is fixed to \(5\times 10^{-2}\tau_{c}\). \(\eta_{i}^{R}(t)\) is the Gaussian white noise for thermal rotational diffusion with zero mean and delta correlation having the strength \(D_{R}\). To compare the active force to the chemical force, we define a dimensionless activity \(\nu=\frac{v_{0}}{\sqrt{\lambda^{-1}\beta_{D}\beta}}\) which is varied between \([0.25,10]\) as \(v_{0}\in[0.05,2]\). The surface gradient \(\epsilon=h_{0}/\lambda\) quantifies the steepness of chemical concentration and is kept in the range \([10^{-3},10^{1}]\). The dynamics and steady state of the system are studied by varying \(\epsilon\) and \(\nu\). Each realization of the system is 5\(\times 10^{5}\) time steps long with a time step \(\Delta t=5\times 10^{-3}\tau_{c}\). All statistical quantities are recorded every \(10^{3}\) steps. The system is studied for a \(L\times L\) square geometry and the periodic boundary condition (PBC) is applied in both directions. For the purpose of statistical averaging, data from 20 independent realizations are used.
## III Results
### Non-interacting model
We first study the effect of the patterned surface on the dynamics of a single particle by setting \(\mu=0\) in Eq. 1 and \(D_{R}=0\). A unit cell (\(L=\lambda/2\)) of the periodic chemical patterned substrate is chosen. A model cartoon is shown in Fig. 1 (a) and the trajectories for some system configurations are reported in Figs. 1 (b-e).
For low \(\epsilon\) [see Fig. 1 (b-c)], the particle exhibits unconfined diffusive dynamics, the diffusivity reducing with increasing \(\epsilon\). There is a preference to traverse along the \(x-y\) direction followed by \(x\) and \(y\) directions owing to the form of \(c({\bf r})\) and PBC. For moderate \(\epsilon\), the particle is unable to escape the chemical valley where it was initialized [see Fig. 1 (d-e)]. However, sufficient \(\nu\) can give the particle the required energy to deviate from the valley [refer to Appendix]. The combined effect results in the particle preferring to move tangentially to \(\nabla c\) minima. Note that the radius \(R\) of circular motion decreases with increasing \(\epsilon\) and increases with increasing \(\nu\) [see annotations in Fig. 1 and Appendix]. For very high \(\epsilon\), the confinement is strong and the particle is effectively localized, only to be freed by very high \(\nu\).
### Interacting model
We set \(\mu=1\), \(D_{R}=10^{-4}\tau_{c}^{-1}\) and \(L=4\lambda\). The number of particles \(N\sim 2000\) in the system is decided by
Figure 1: (a) Schematic of the single-particle model for L=\(\lambda/2\). Background color code: _white_ for high \(\nabla c\), _brown_ for low \(\nabla c\). \(\theta\) is the orientation of the particle. Arrowhead shows the velocity direction and the arrow length corresponds to speed. (b-e) Position of a single particle is shown for all times in a simulation run for system configurations \(\nu=5\) and (b) \(\epsilon=0.001\) (c) \(\epsilon=0.005\) (d) \(\epsilon=0.03\) (e) \(\epsilon=0.01\). \(R\) is the radius of circular motion. Particle color code: _white:_ simulation time \(t\)\(<\)250\(\tau_{c}\), _red:_ simulation time \(250\tau_{c}<t<1000\tau_{c}\), _yellow:_ initial position, _blue_: final position.
the packing fraction \(\phi=\frac{N\times\pi a_{0}^{2}}{L\times L}\) which is fixed to 0.6. The simulation starts from a homogeneous arrangement of particles with the same speeds and randomized orientations on the substrate. The chemical field dictates the particles to accumulate in regions where \(\nabla\)c is minimum. Consequently, periodic clusters form in systems in which \(\epsilon\) is non-negligible. We explored the (\(\nu\),\(\epsilon\)) phase-space and present the state diagram in Fig. 2. The characteristics of the obtained phases follow.
_Chemo-motility-induced phase-separated (CMIPS) state:_ For very low \(\epsilon\sim 10^{-3}\) and \(\nu\geq 0.75\), we obtain a macroscopic cluster formation [see Fig. 3] wherein a dense liquid phase coexists with the gaseous phase. _CMIPS_ is structurally similar to MIPS, but the origins of phase separation in _CMIPS_ is due to an interplay between chemo-phoretic interactions that collapse particles into valleys of the chemical concentration forming clusters, and activity that disperses particles from the clusters. This is in contrast with the self-trapping positive feedback that leads to MIPS [33; 36].
_Rotating clusters (RC):_ For slightly higher \(\epsilon\sim 10^{-2}\) and moderate \(\nu\), the _CMIPS_ phase is suppressed by chemotaxis. We obtain periodic clusters that rotate about their cluster centers [see Fig. 4 (a)] whose sense of rotation of a cluster may change with time [refer Supplementary Material \(S2\)]. Each cluster acts like a chemo-repulsive shell due to the local anisotropy in the chemical concentration. This constricts the freedom of a cluster to grow beyond a certain size.
_Non-rotating clusters (NC):_ For higher \(\epsilon\sim 10^{-1}\), field strength dominates highly over the activity. This results in the formation of connected periodic clusters [see Fig. 5 (a)] that allow _hopping transport_ of particles between clusters to a considerable extent. These cluster boundaries lack curvature and possess sharp edges. They are also more closely packed than _RC_.
_Localized clusters (LC):_ For very high \(\epsilon\geq 10^{0}\), the clusters are completely localized and show little to no dynamics [see Fig. 5 (b)]. Particle trajectories asymptotically converge to bounded areas in space leading to trapping in the valleys of the chemical concentration. Cluster boundaries of _LC_ are very sharp and the dynamics of one cluster are independent of the others in the
Figure 3: Late-time snapshots for two _CMIPS_ systems: \(\nu=5,\epsilon=0.001\) and \(\nu=10,\epsilon=0.005\). Particles are colored according to their orientations as given by the color disc in the inset.
Figure 2: State diagram in the \((\nu,\epsilon)\) plane. Symbols correspond to phases: _square_ for _CMIPS_, _cross_ for _RC_, _diamond_ for _NC_, _star_ for _LC_. Colors are mapped to the strength of effective diffusivity \(D\).
Figure 5: Late-time snapshots for (a) _NC_ system with \(\nu=5,\epsilon=1\) and (b) _LC_ system with \(\nu=5,\epsilon=10\). Particles are colored according to their orientations as indicated by the color ring in the inset.
Figure 4: (a) Late-time snapshots for a _RC_ system with \(\nu=1.25,\epsilon=0.01\). (b) Zoomed-in snapshot of square-dashed area in (a). Red dashed circles indicate areas where particles are exchanged between clusters. (c) Position and velocity direction of a tagged particle (_red_) in a cluster for times \(t_{1}<t_{2}<t_{3}<t_{4}\) separated by \(20\tau_{c}\). Particles are colored according to their orientations as indicated by the color ring in the inset.
system. Hence system collapses to the non-interacting model, _i.e.,_ each localized cluster can be treated as an independent subsystem.
We quantify the dynamics of the phases by calculating the mean square displacement (MSD) of the particles:
\[<\Delta r^{2}(t)>=<\frac{1}{N}\sum_{i=1}^{N}|r_{i}(t_{0}+t)-r_{i}(t_{0})|^{2}> \tag{3}\]
where \(<..>\) means average over many reference times \(t_{0}\) and different independent realisations. MSD regime shifts from ballistic (slope 2) for initial times to diffusive (slope 1) for late times [see Fig. 6]. The effective diffusivity \(D=\lim_{t\rightarrow\infty}<\Delta r^{2}>/(4t)\) is shown in the inset of Fig. 6. We find the scaling relation: \(D\sim\nu^{\beta}\). We obtain \(\beta\simeq 2\) for _CMIPS_ as is known for self-propelled rods [37; 38]; \(1<\beta<2\) for _RC_ and \(D\) is independent of \(\nu\) for _LC_. We state that _NC_ shows anomalous diffusivity (data not shown). The variation of \(D\) is color mapped for the four phases in the \((\nu,\epsilon)\) plane in Fig. 2. We clearly see that \(D\) is very small for _LC_.
To characterize the structural ordering of the particles in different phases, the radial distribution function (RDF) \(g_{2}(\nu)\) is calculated. RDF is a measure of the probability of finding a particle at \(r_{2}\) given a particle at \(r_{1}\) with \(r=|r_{2}\)-\(r_{1}|\). In two dimensions, \(2<n>g(r)d^{2}r\) gives the number of particles in the \(d^{2}r\), where \(<n>\) is the mean number of particles in the unit area. RDF is plotted against the normalized radial distance \(r^{\prime}=r/(2a_{0})\) in Fig. 7.
Evidently, _CMIPS_, _RC_ and _NC_ show their largest peak at the nearest-neighbor(_nn_) distance \(r^{\prime}=1\). The second and third peaks occur at \(r^{\prime}=\sqrt{3}\) (second _nn_) and \(r^{\prime}=2\) (third _nn_) respectively [see insets I of Fig. 7]. This indicates the presence of hexagonal close-packing (HCP). For _LC_, the major peaks occur before this distance as their constituents are more tightly packed than HCP.
Note that in _NC_, the minor peaks are less dense for higher \(\nu\) (solid curve) compared to lower \(\nu\) (dashed curve) [see inset I of Fig. 7 (c)]. This is an indication that the boundary is more rigid and particles experience more confinement for higher \(\nu\) in the _NC_ phase. Continuing to increase \(\nu\) for a certain \(\epsilon\) will lead to strict localization as in _LC_. This supports our observation of anomalous diffusivity in _NC_. Insets II of Fig. 7 zoom into the radial distances near the start of the next periodic valley. _CMIPS_ shows long-range ordering, _LC_ indicates periodicity in clustering but such information is inconclusive in _RC_ and _NC_.
We characterize the orientational dynamics of particles by calculating the velocity auto-correlation function (VACF) of the particles defined by:
\[C_{v}(t)=<\cos(\phi(t_{0})-\phi(t+t_{0}))>-<\cos\phi(t+t_{0})>^{2} \tag{4}\]
where the \(<...>\) is the average over many reference times \(t_{0}\), particles \(N\) and many independent realisations. VACF for the four phases is reported in Fig. 8. VACF exponentially decays for _CMIPS_ [see Fig. 8 (a)]. Exponential fitting of the same yields the decay time:
Figure 6: Mean-square displacement \(<\Delta r^{2}>\)_vs_. time for the 4 distinct phases. Inset shows diffusivity \(D\) as a function of \(\nu\) for the first 3 phases. The dashed line is drawn for slope 1 and the solid line is drawn for slope 2. Key: _orange square_ for _CMIPS_\((\nu,\epsilon)=(3.75,0.001)\), _red cross_ for _RC_\((2.5,0.01)\), _blue diamond_ for _NC_\((3.75,0.1)\), and _black star_ for _LC_\((3.75,5)\).
Figure 7: The pair correlation function \(\rm g_{2}(r^{\prime})\) for (a) _CMIPS_ (b) _RC_ (c) _NC_ and (d) _LC_. The _black_ dashed line is drawn at \(r^{\prime}\;=\;\sqrt{3}\). The system parameters for the four phases are the same as in Fig. 6. The magenta dashed curve in panel (c) corresponds to a _NC_ system with lower \(\nu\,=\,1.25\). Inset I zooms into \(r^{\prime}\in(1.5,2.25)\). Inset II zooms into \(r^{\prime}\in(5.25,9.5)\).
\(t_{c}\sim 50\tau_{c}\). Hence, velocity has a long decay time for _CMIPS_ phase, unlike MIPS [39]. The VACF shows clear oscillations for _RC_ indicating rotational order in the system [see Fig. 8 (b)]. To support it, snapshots of a single cluster with a _red_ tagged particle and its variation over time are shown in Fig. 4 (c). The time taken to complete one full cycle is annotated in Fig. 8 (b) by highlighting two velocities separated by \(t_{4}-t_{1}=60\tau_{c}\). While it is the steepness of the valley (\(\epsilon\)) that drives the particles into periodic clusters, once the clusters are formed, the activity ensures the particle dynamics inside the cluster. However, moving tangentially along the radial layers of the cluster is the only means to minimize the surface potential. Thus, activity and steric forces alone contribute to the rotations in _RC_. Velocities of particles in _NC_ and _LC_ do not share a functional relationship, hence the VACF is almost zero [see Fig. 8 (c-d)].
The handedness of any two nearest rotating clusters in _RC_ are opposite [see Fig. 4 (a-b)]. The sense of rotation of a cluster is purely decided by the particles which are at the outer layer as they have the highest magnitude of instantaneous velocity in the cluster. The regions in which the exchange of particles is taking place between clusters are highlighted by _red_ dashed circles in Fig. 4 (b). If a particle from a cluster is leaving to join one of the nearest clusters, it changes its sense of rotation to keep up with the new cluster. In this way, activity, steric repulsion, and periodic chemical concentration lead to synchronized rotations in the whole system.
The macroscopic clusters we obtained in _CMIPS_ also rotate as a part or whole [refer Supplementary Material \(S1\)]. Hence our rotational states are from both _CMIPS_ (macroscopic rotations) and _RC_ (synchronized rotations). To characterize the extent of rotation we calculate the global angular velocity \(\Omega(\nu,\epsilon)\):
\[\Omega(\nu,\epsilon)=<\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}|\sum_{j=1}^{N_{c,j}} \frac{(\mathbf{r}_{j}\times\mathbf{v}_{j})|}{r_{j}^{2}}> \tag{5}\]
where \(N_{c}\) denotes the number of valleys in the system (fixed for a certain \(\lambda\)); \(N_{c,j}\) refers to the number of particles in the \(i^{th}\) cluster computed by counting particles within a radial distance of \(\lambda/8\) from the center of the \(i^{th}\) valley. \(\mathbf{r}_{j}\) is the position of the \(j^{th}\) particle relative to the center of the valley and \(\mathbf{v}_{j}\) is the instantaneous velocity of the \(j^{th}\) particle. Table 1 reports the values of \(\Omega\) for systems with such rotational order. \(\Omega\) increases linearly with \(\nu\) for both _CMIPS_ and _RC_ phases. Although both the phases have macroscopic rotations, the _RC_ phase is additionally characterized by synchronized rotations.
## IV Discussion
We have studied the dynamics and steady states of a collection of chemo-phoretically interacting ABPs for the case of a chemical potential that is quenched in time and periodic in space. The study elucidates the competition between activity and chemotaxis. In the extreme limits, when activity dominates we obtain chemotactic-MIPS _i.e.,__CMIPS_, and when chemotaxis dominates we obtain localized clusters having glassy dynamics. When the active force and the chemical force are comparable, particles arrange themselves into periodic clusters of finite length showing synchronized rotations about their centers.
We emphasize that in the case of synchronized rotations, a strict sense of handedness is picked up by a cluster without any intrinsic alignment interaction within the model. An interplay of time-reversal asymmetry and chemo-phoretic interactions between the repulsive disks is responsible for such collective rotations in the system. This phase may share some similarities with the dynamics of swarmalators in \(1-\)dimensional ring [40; 41]. We vouch for the reproducibility of our results for other kinds of time-quenched taxis, viz. phototaxis [42], viscotaxis [43], electrotaxis [44], thermotaxis [45], etc. The observed rotations are more robust and are in contrast
\begin{table}
\begin{tabular}{|l|l||l||l||l|l||} \hline _CMIPS_ & \(\epsilon=\)0.001 & _RC_ & \(\epsilon=\)0.005 & \multicolumn{2}{|c||}{_RC_} & \(\epsilon=\)0.01 \\ \hline \hline \(\nu\) & \(\Omega(\nu,\epsilon)\) & \(\nu\) & \(\Omega(\nu,\epsilon)\) & \(\nu\) & \(\Omega(\nu,\epsilon)\) \\ \hline
2.50 & 0.054 & 0.75 & 0.019 & 1.25 & 0.02 \\
3.75 & 0.093 & 1.25 & 0.038 & 1.50 & 0.036 \\
5.00 & 0.121 & 1.75 & 0.050 & 1.875 & 0.065 \\
7.50 & 0.184 & 2.25 & 0.064 & 2.50 & 0.064 \\
10.00 & 0.202 & 2.50 & 0.059 & 3.75 & 0.101 \\ \hline \end{tabular}
\end{table}
Table 1: The global angular velocity \(\Omega(\nu,\epsilon)\) for some _RC_ and _CMIPS_ systems.
Figure 8: Variation of the particle velocity auto-correlation function C\({}_{v}(t)\) for (a) _CMIPS_ (b) _RC_ (c) _NC_ and (d) _LC_. This is calculated for \(350\tau_{c}\) in the steady state. The system parameters for the four phases are the same as in Fig. 6
with swarms that generally have one cluster rotating about its center of mass [46] as a response to external obstacles or phoretic-motility [47; 48; 49].
While our study has focused on a purely symmetric quench, it will be interesting to study a system with spatially random or time-dependent chemotaxis. Such responses can alternatively be studied using the continuum theory of coarse-grained equations for slow variables [50; 51]. Our results can also be tested in experiments by designing a patterned substrate for microswimmers. Such experiments can be crucial in understanding the chemotactic response of biological swimmers to the underlying medium.
## V Author contributions
The problem was designed by S.M. and numerically investigated by P.E. Both authors analyzed and interpreted the results. The manuscript was prepared by P.E. Both authors approved the final version of the manuscript.
## VI Conflicts of interest
There are no conflicts of interest to declare.
## VII Acknowledgements
P.E. acknowledges the support and the resources provided by PARAM Shivay Facility under the National Supercomputing Mission, Government of India at the Indian Institute of Technology, Varanasi. S.M. thanks DST-SERB India, MTR/2021/000438, and CRG/2021/006945 for financial support.
|
2305.10561 | Massively Multi-Lingual Event Understanding: Extraction, Visualization,
and Search | In this paper, we present ISI-Clear, a state-of-the-art, cross-lingual,
zero-shot event extraction system and accompanying user interface for event
visualization & search. Using only English training data, ISI-Clear makes
global events available on-demand, processing user-supplied text in 100
languages ranging from Afrikaans to Yiddish. We provide multiple event-centric
views of extracted events, including both a graphical representation and a
document-level summary. We also integrate existing cross-lingual search
algorithms with event extraction capabilities to provide cross-lingual
event-centric search, allowing English-speaking users to search over events
automatically extracted from a corpus of non-English documents, using either
English natural language queries (e.g. cholera outbreaks in Iran) or structured
queries (e.g. find all events of type Disease-Outbreak with agent cholera and
location Iran). | Chris Jenkins, Shantanu Agarwal, Joel Barry, Steven Fincke, Elizabeth Boschee | 2023-05-17T20:41:51Z | http://arxiv.org/abs/2305.10561v1 | # Massively Multi-Lingual Event Understanding:
###### Abstract
In this paper, we present ISI-Clear, a state-of-the-art, cross-lingual, zero-shot event extraction system and accompanying user interface for event visualization & search. Using only English training data, ISI-Clear makes global events available on-demand, processing user-supplied text in 100 languages ranging from Afrikaans to Yiddish. We provide multiple event-centric views of extracted events, including both a graphical representation and a document-level summary. We also integrate existing cross-lingual search algorithms with event extraction capabilities to provide cross-lingual event-centric search, allowing English-speaking users to search over events automatically extracted from a corpus of non-English documents, using either English natural language queries (e.g. _cholera outbreaks in Iran_) or structured queries (e.g. find all events of type _Disease-Outbreak_ with agent _cholera_ and location _Iran_).
## 1 Introduction
Understanding global events is critical to understanding the world around us--whether those events consist of pandemics, political unrest, natural disasters, or cyber attacks. The breadth of events of possible interest, the speed at which surrounding socio-political event contexts evolve, and the complexities involved in generating representative annotated data all contribute to this challenge. Events are also intrinsically global: many downstream use cases for event extraction involve reporting not just in a few major languages but in a much broader context. The languages of interest for even a fixed task may still shift from day to day, e.g. when a disease emerges in an unexpected location.
The ISI-Clear (Cross-Lingual Event & Argument Retrieval) system meets these challenges by building state-of-the-art, language-agnostic event extraction models on top of massively multi-lingual language models. These event models require only English training data (not even bitext--no machine translation required) and can identify events and the relationships between them in at least a hundred different languages. Unlike more typical benchmark tasks explored for zero-shot cross-lingual transfer--e.g. named entity detection or sentence similarity, as in Hu et al. (2020)--event extraction is a complex, structured task involving a web of relationships between elements in text.
ISI-Clear makes these global events available to users in two complementary ways. First, users can supply their own text in a language of their choice; the system analyzes this text in that native language and provides multiple event-centric views of the data in response. Second, we provide an interface for cross-lingual event-centric search, allowing English-speaking users to search over events automatically extracted from a corpus of non-English documents. This interface allows for both natural language queries (e.g. _statements by Angela Merkel about Ukraine_) or structured queries (_event type = [Arrest, Protest], location = Iraq_), and builds upon our existing cross-lingual search capabilities, demonstrated in Boschee et al. (2019).
The primary contributions of this effort are threefold:
1. Strong, language-agnostic models for a complex suite of tasks, deployed in this demo on a hundred different languages and empirically tested on a representative variety of languages.
2. An event-centric user interface that presents events in intuitive text-based, graphical, or summary forms.
3. Novel integration of cross-lingual search capabilities with zero-shot cross-lingual event extraction.
We provide a video demonstrating the ISI-Clear user interface at [https://youtu.be/PE367pyuye8](https://youtu.be/PE367pyuye8).
## 2 User Interface
### On-the-Fly Language-Agnostic Event Extraction & Display
In our first mode, users are invited to supply their own text in a language of their choice. The system supports any language present in the underlying multi-lingual language model; for this demo we use XLM-RoBERTa (Conneau et al., 2020), which supports 100 languages ranging from Afrikaans to Yiddish.
After submission, the system displays the results in an initial text-based format, showing the events found in each sentence (Figure 1). For a more intuitive display of the relationships between events, users can select a graphical view (Figure 2). We can easily see from this diagram that the EU is the agent of both the _withdrawal_ and the _buying_ events, and that the two events are related (the EU is withdrawing from buying Russian oil).
Finally, the user can see an event-centric summary of the document, choosing to highlight either particular categories of event (e.g., _Crime_, _Military_, _Money_) or particular participants (e.g., _Ukraine_, _Putin_, _Russia_). When one or more categories or participants are selected, the system will highlight the corresponding events in both the original text and, where possible, in the machine translation. An example of a Farsi document is shown in Figure 3. Here, the system is highlighting three events in the document where Russia is either an agent or a patient of an event. For this demo, we use simple heuristics over English translations to group participant names and descriptions; in future work we plan to incorporate a zero-shot implementation of document co-reference to do this in the original language.
### Cross-Lingual Event-Centric Search
The second mode of the ISI-Clear demo allows users to employ English queries to search over events extracted from a foreign language corpus. To enable this, we repurpose our work in cross-lingual document retrieval (Barry et al., 2020) to index and search over event arguments rather than whole documents. A query may specify target _event types_ as well as _agent_, _patient_, or _location_ arguments; it may also include additional words to con
Figure 1: Text-based display of Polish news. The user provides only the Polish text. To aid an English-speaking user, ISI-Clear displays the extracted event information not only in Polish but also in English. All processes—including anchor detection, argument extraction, machine translation and span-projection—are carried out in real time.
Figure 2: Graph-based display of event information extracted from user provided text in Polish.
strain the _context_. A sample query might ask for _Communicate_ events with the agent _Angela Merkel_ and the context _Ukraine_.
**Query specification.** We allow queries to be specified in two ways. The first simply asks the user to directly specify the query in structured form: using checkboxes to indicate which event types should be included and directly typing in values for each condition (_agent_, _patient_, etc.). A second and more intuitive method allows users to enter a query as natural language. The system processes the query using the ISI-Clear event system and populates a structured query automatically from the results. For instance, if the user enters the phrase _anti-inflation protests in Vietnam_, ISI-Clear will detect a _Protest_ event with location _Vietnam_ in that phrase. It will turn this result into a query with event type _Protest_, location _Vietnam_, and additional context word _anti-inflation_.
**Display.** We display corpus events in ranked order with respect to the user query. The ranking is a combination of system confidence in the underlying extractions (e.g., is this event _really_ located in Vietnam?) and system confidence in the cross-lingual alignment (e.g., is _etudiants interationaux_ really a good match for the query phrase _foreign students_?). To estimate the latter, we rely on our prior work in cross-lingual retrieval, where we developed state-of-the-art methods to estimate the likelihood that foreign text \(f\) conveys the same meaning as English text \(e\)(Barry et al., 2020). We note that for locations, we include containing countries (as determined via Wikidata) in the index so that a search for _Iran_ will return events happening in, e.g., _Tehran_. More specific details on the ranking functions can be found in Appendix A.3.
As part of our display, we break down system confidence by query condition--that is, we separately estimate the system's confidence in the _agent_ vs., say, the _location_. For each condition, we display a "traffic light" indicator that shows the system's confidence in that condition for an event. Red, yellow, and green indicate increasing levels of confidence; black indicates that there is no evidence for a match on this condition, but that other conditions matched strongly enough for the event to be returned. A sample natural language query and search results are shown in Figure 4.
**Corpora.** For this demo, we support two corpora: (1) 20,000 Farsi news documents drawn from Common Crawl1 and (2) \(\sim\)55K Weibo messages (in Chinese) on the topic of the Russo-Ukrainian crisis (Fung and Ji, 2022).
Footnote 1: [https://commoncrawl.org/](https://commoncrawl.org/)
## 3 Ontology & Training Data
The ISI-Clear demo system is compatible with any event ontology that identifies a set of event types and argument roles. The system expects sentence-level English training data that identifies, for each event, one or more anchor spans and zero or more argument spans (with roles).
Figure 3: Event-centric summary of Farsi document.
For this demonstration, we use the "basic event" ontology and data developed for the IARPA BETTER program (available at [https://ir.nist.gov/better/](https://ir.nist.gov/better/)). The ontology consists of 93 event types and a small set of argument roles (_agent_, _patient_, and _related-event_). In other settings, we have trained and tested the underlying system on the publicly available ACE event ontology2, showing state-of-the-art zero-shot cross-lingual results in (Fincke et al., 2022). We prefer the BETTER ontology for this demo because of its broad topical coverage and its inclusion of event-event relations (in the form of _related-event_ arguments). The ISI-Clear system is also designed to attach general-purpose _when_ and _where_ arguments to any event, regardless of ontology; see section 4.5.
Footnote 2: [https://www.ldc.upenn.edu/collaborations/past-projects/ace](https://www.ldc.upenn.edu/collaborations/past-projects/ace)
## 4 System Components
We present here the highlights of our technical approach, which relies on a collection of strong, language-agnostic models to perform all aspects of event extraction and the classification of relationships between events, as well as machine translation and foreign-to-English projection of event output (for display purposes).
### Ingest & Tokenization
Consistent with XLM-RoBERTa, we use Sentence Piece (Kudo and Richardson, 2018) to tokenize text, and at extraction time, our models label each input subword separately. For languages where words are typically surrounded by whitespace, our system then expands spans to the nearest whitespace (or punctuation) to improve overall performance. If the system produces a conflicting sequence of labels for a single word, we apply simple heuristics leveraging label frequency statistics to produce just one label.
### Anchor Detection
ISI-Clear performs anchor identification and classification using a simple beginning-inside-outside (BIO) sequence-labeling architecture composed of a single linear classification layer on top of the transformer stack. For more details please see (Fincke et al., 2022).
### Argument Attachment
For argument attachment, we consider one event anchor \(A\) and one role \(R\) at a time. We encourage the system to focus on \(A\) and \(R\) by modifying the input to the language model. For instance, when _A=displaced_ and _R=1_ (_agent_), the input to the language model will be _displaced ; 1 </s> Floods < displaced > thousands last month_. This modification encourages the language model to produce representations of tokens like _thousands_ that are contextualized by the anchor and role being examined. The argument attachment model concatenates the language model output vector for each input token with an embedding for event type and applies a linear classifier to generate BIO labels. For more details please see (Fincke et al., 2022).
### Event-Event Relations
ISI-Clear can handle arbitrary event-event relations within a sentence, including the special case of event co-reference (when a given event has two or more anchor spans). We consider one event anchor \(A_{1}\) at a time. Again we modify the input to the language model (by marking \(A_{1}\) with special characters on either side) to encourage the model to consider all other anchors in light of \(A_{1}\). We
Figure 4: Example of search results.
then represent each event anchor in the sentence (including \(A_{1}\) itself) as a single vector, generated by feeding the language model output for its constituent tokens into a bi-LSTM and then concatenating the bi-LSTM's two final states. (This allows us to smoothly handle multi-word anchors.) To identify the relationship between \(A_{1}\) and \(A_{2}\), if any, we then concatenate the representations for \(A_{1}\) and \(A_{2}\) and pass the result to a linear classifier. The final step optimizes over the scores of all such pairwise classifications to label all relations in the sentence.
### When & Where
The ontology used for this demonstration (described in Section 3) does not annotate _when_ and _where_ arguments. However, these event attributes are critical for downstream utility. We therefore deploy an ontology-agnostic model that can assign dates and locations to events of any type. To do this, we train a question-answering model to answer questions such as _<s> When/Where did the [anchor] happen? </s> Context </s>_. We first train the model on the SQUAD2 dataset Rajpurkar et al. (2016) and then continue training on the event location and time annotations in the English ACE dataset.
### Machine Translation & Projection
All event extraction happens in the target language; no machine translation (or bitext) is required. However, for system output to be useful to English speakers, translation is highly beneficial. Here, we rely on the 500-to-1 translation engine developed by our collaborators at ISI Gowda et al. (2021)3. Translation happens after event extraction. We have not optimized this deployment of MT for speed, so we display the results without translation first and then (when the small light in the top toolbar turns green, usually after a few seconds), we can refresh the screen to show results with translations added.
Footnote 3: Available at [http://rtg.isi.edu/many-eng/](http://rtg.isi.edu/many-eng/).
To project anchor and argument spans into machine translation, we require no parallel data for training. Instead, we leverage the fact that the pre-trained XLM-RoBERTa embeddings are well aligned across languages and have been shown to be effective for word alignment tasks Dou and Neubig (2021). The similarity of a word in a foreign-language sentence to a word in the parallel English sentence is determined by the cosine distance between the embeddings of the two words. We leverage the Itermax algorithm Jalili Sabet et al. (2020) to find the best phrase matches. Since we avoid making any bespoke language specific decisions, our projection technique is highly scalable and can project from any of the 100 languages on which XLM-RoBERTa was pre-trained on.
## 5 System Evaluation & Analysis
We evaluate our system on a variety of languages and ontologies and compare where possible to existing baselines. Following community practice, e.g. Zhang et al. (2019), we consider an anchor correct if its offsets and event type are correct, and we consider an argument correct if its offsets, event type, and role find a match in the ground truth. For event coreference (same-sentence only), we consider each anchor pair separately to produce an overall F-score.
Table 1 provides overall scores in several settings where multi-lingual event annotations are available. All models are trained on English data only. For the ACE data, we follow Huang et al. (2022). The BETTER Basic task is described in Section 3; there are two ontologies (Basic-1 and Basic-2) from different phases of the originating program. The BETTER Abstract task is similar to BETTER Basic, but all action-like phrases are annotated as events, with no further event type specified4; valid roles are only _agent_ and _patient_McKinnon and Rubino (2022). More dataset statistics are found in Appendix A.1.
Footnote 4: Since abstract events lack event types, we also require anchor offsets to match when scoring arguments.
It is difficult to compare system accuracy across languages; a lower score in one language may reflect a real difference in performance across languages--or just that one set of documents is harder than another. Still, we observe the following. First, performance on anchors seems most sensitive to language choice--for instance, we note that Arabic and Chinese anchor performance on ACE differs by almost 10 points. For arguments, however, non-English performance is relatively consistent given a task--but varies more widely between tasks. Second, we note that cross-lingual performance seems best on anchors, where it exceeds 80% of English performance for all but one condition. In contrast, argument performance varies more widely, with many conditions below 70% of English (though some as high as 89%).
We also compare against existing published baselines where possible. There are relatively few pub
lished results on cross-lingual event anchor detection (and none that we could find on the task of cross-lingual event co-reference as defined here). To benchmark performance on anchors, we turn to MINION (Pouran Ben Veyseh et al., 2022), a multi-lingual anchor-only dataset that uses a derivative of the ACE ontology. For a fair comparison, we retrained our model (tuned for use with XLM-RoBERTa large) with XLM-RoBERTa base; we did not adjust any hyperparameters. Table 2 shows that the ISI-Clear model performs on average 2.7 points better than the reported MINION numbers for cross-lingual settings. We also show the numbers from our actual demo models (trained with XLM-RoBERTa large) for comparison.
For argument detection, much more published work exists, and we show in Table 3 that ISI-Clear achieves state-of-the-art performance on all ACE datasets, comparing against the previous state-of-the-art as reported in Huang et al. (2022).
## 6 Related Work
Several recent demos have presented multi-lingual event extraction in some form, but most assume training data in each target language (e.g. Li et al. (2019) or Li et al. (2020)) or translate foreign-language text into English before processing (e.g. Li et al. (2022)). In contrast, the focus of our demo is making events available in languages for which no training data exists. Other demos have shown the potential of zero-shot cross-lingual transfer, but on unrelated tasks, e.g. offensive content filtering (Pelicon et al., 2021). Akbik et al. (2016) uses annotation projection from English FrameNet to build target-language models for frame prediction; the focus of the demo is then on building effective queries over language-agnostic frame semantics for extraction. Finally, Xia et al. (2021) also produce FrameNet frames cross-lingually (using XLM-RoBERTa), but in contrast to our work, several of their supporting models use target-language data, and they also supply only a simpler user interface and lack the cross-lingual search-by-query capability that is a key aspect of our demo.
## 7 Conclusion
ISI-Clear provides a monolingual English-speaking user with effective access to global events, both on-demand (extracting events from input of a user's choice) or as a set of indexed documents accessible via cross-lingual search. The system
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Task & \multicolumn{4}{c}{ACE} & \multicolumn{4}{c}{Basic-1} & \multicolumn{4}{c}{Basic-2} & \multicolumn{4}{c}{Abstract} \\ \cline{2-11} Language & en & ar & zh & en & ar & en & fa & en & ar & fa & ko \\ \hline Anchors & 71.2 & 58.1 & 49.6 & 64.2 & 52.5 & 64.6 & 54.3 & 87.4 & 78.3 & 72.5 & 78.9 \\ Arguments & 72.1 & 51.5 & 51.7 & 64.5 & 51.5 & 71.6 & 64.0 & 69.8 & 45.0 & 45.7 & 45.0 \\ Event coreference & – & – & – & 83.4 & 67.9 & 86.5 & 65.9 & – & – & – & – \\ \hline \hline \end{tabular}
\end{table}
Table 1: Component-level accuracy by language / task. Dataset statistics are available in Appendix A.1. ACE lacks same-sentence event coreference so those figures are omitted. Event coreference is peripheral to the overall Abstract task; we chose to not model it explicitly and exclude it here.
\begin{table}
\begin{tabular}{l|c c|c} \hline \hline & \multicolumn{3}{c}{base} & large \\ \hline & MINION & ISI-Clear & \(\Delta\) & ISI-Clear \\ \hline en & **79.5** & 78.9 & -0.6 & 78.0 \\ \hline \hline es & **62.8** & 62.3 & -0.5 & 65.3 \\ pt & **72.8** & 71.1 & -1.7 & 75.0 \\ pl & **60.1** & 52.6 & -7.5 & 66.4 \\ tr & 47.2 & **52.0** & +4.8 & 56.5 \\ hi & 58.2 & **72.2** & +14.0 & 72.7 \\ ko & 56.8 & **64.1** & +7.3 & 63.5 \\ \hline AVG & 59.7 & **62.4** & +2.7 & 66.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Cross-lingual anchor detection (F1) for MINION dataset, training on English only. Average is across all cross-lingual settings.
\begin{table}
\begin{tabular}{l|c c} \hline \hline & X-GEAR & ISI-Clear \\ \hline en & 71.2 & **72.1** \\ \hline \hline ar & 44.8 & **51.5** \\ zh & 51.5 & **51.7** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Cross-lingual argument detection (F1) for ACE over gold anchors, training on English only.
provides a variety of visualizations and modes for engaging with system results. We look forward to future work improving the quality of the underlying components and exploring additional capabilities to cross language barriers and expand access to information around the globe.
### Limitations
Our core approach is limited by the underlying multi-lingual language model it employs. For this demo, we are therefore limited to the 100 languages that make up the XLM-RoBERTa training set. Performance also varies across languages, tracking in part (though not in whole) with the volume of training data available for each language when building the multi-lingual language model. For instance, anecdotally, the performance on Yiddish (34M tokens in the CC-100 corpus used to train XLM-RoBERTa) is inferior to that of Farsi (13259M tokens). We have provided empirical results for eleven languages and five tasks, but it would be ideal to have a broader set of test conditions; unfortunately, annotated datasets for events are much less common than for simpler tasks like named entity recognition.
A second limitation of our system involves compute requirements. We employ multiple separate components for event extraction (e.g., for anchor detection vs. argument attachment), which increases memory/GPU footprint compared to a more unified system.
Finally, our system assumes an existing ontology and (English) training data set; it would be interesting to explore zero-shot ontology expansion in future work.
## Ethics Statement
One important note is that our system is designed to extract information about events that are reported in text, with no judgment about their validity. This can lead a user to draw false conclusions. For instance, the system might return many results for a person \(X\) as the agent of a _Corruption_ event, but this does not necessarily mean that \(X\) is actually corrupt. This should be prominently noted in any use case for this demonstration system or the underlying technologies.
## Acknowledgements
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2307.01215 | Functional Donoho-Stark Approximate Support Uncertainty Principle | Let $(\{f_j\}_{j=1}^n, \{\tau_j\}_{j=1}^n)$ and $(\{g_k\}_{k=1}^n,
\{\omega_k\}_{k=1}^n)$ be two p-orthonormal bases for a finite dimensional
Banach space $\mathcal{X}$. If $ x \in \mathcal{X}\setminus\{0\}$ is such that
$\theta_fx$ is $\varepsilon$-supported on $M\subseteq \{1,\dots, n\}$ w.r.t.
p-norm and $\theta_gx$ is $\delta$-supported on $N\subseteq \{1,\dots, n\}$
w.r.t. p-norm, then we show that \begin{align}\label{ME} (1) \quad \quad \quad
\quad &o(M)^\frac{1}{p}o(N)^\frac{1}{q}\geq \frac{1}{\displaystyle \max_{1\leq
j,k\leq n}|f_j(\omega_k) |}\max \{1-\varepsilon-\delta, 0\},\\ (2) \quad \quad
\quad \quad&o(M)^\frac{1}{q}o(N)^\frac{1}{p}\geq \frac{1}{\displaystyle
\max_{1\leq j,k\leq n}|g_k(\tau_j) |}\max \{1-\varepsilon-\delta,
0\},\label{ME2} \end{align} where \begin{align*} \theta_f: \mathcal{X} \ni x
\mapsto (f_j(x) )_{j=1}^n \in \ell^p([n]); \quad \theta_g: \mathcal{X} \ni x
\mapsto (g_k(x) )_{k=1}^n \in \ell^p([n]) \end{align*} and $q$ is the conjugate
index of $p$. We call Inequalities (1) and (2) as \textbf{Functional
Donoho-Stark Approximate Support Uncertainty Principle}. Inequalities (1) and
(2) improve the finite approximate support uncertainty principle obtained by
Donoho and Stark \textit{[SIAM J. Appl. Math., 1989]}. | K. Mahesh Krishna | 2023-07-01T04:11:02Z | http://arxiv.org/abs/2307.01215v1 | **FUNCTIONAL DONOHO-STAR APPROXIMATE SUPPORT UNCERTAINTY PRINCIPLE**
**FUNCTIONAL DONOHO-STAR APPROXIMATE SUPPORT UNCERTAINTY PRINCIPLE**
**K. MAHESH KRISHNA**
Post Doctoral Fellow
Statistics and Mathematics Unit
Indian Statistical Institute, Bangalore Centre
Karnataka 560 059, India
Email: [email protected]
Date: July 6, 2023
**Abstract**: Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be two p-orthonormal bases for a finite dimensional Banach space \(\mathcal{X}\). If \(x\in\mathcal{X}\setminus\{0\}\) is such that \(\theta_{f}x\) is \(\varepsilon\)-supported on \(M\subseteq\{1,\ldots,n\}\) w.r.t. p-norm and \(\theta_{g}x\) is \(\delta\)-supported on \(N\subseteq\{1,\ldots,n\}\) w.r.t. p-norm, then we show that
\[o(M)^{\frac{1}{p}}o(N)^{\frac{1}{q}} \geq\frac{1}{\max\limits_{1\leq j,k\leq n}|f_{j}(\omega_{k})|} \max\{1-\varepsilon-\delta,0\}, \tag{2}\] \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}} \geq\frac{1}{\max\limits_{1\leq j,k\leq n}|g_{k}(\tau_{j})|} \max\{1-\varepsilon-\delta,0\}, \tag{1}\]
where
\[\theta_{f}:\mathcal{X}\ni x\mapsto(f_{j}(x))_{j=1}^{n}\in\ell^{p}([n]);\quad \theta_{g}:\mathcal{X}\ni x\mapsto(g_{k}(x))_{k=1}^{n}\in\ell^{p}([n])\]
and \(q\) is the conjugate index of \(p\). We call Inequalities (1) and (2) as **Functional Donoho-Stark Approximate Support Uncertainty Principle**. Inequalities (1) and (2) improve the finite approximate support uncertainty principle obtained by Donoho and Stark _[SIAM J. Appl. Math., 1989]_.
**Keywords**: Uncertainty Principle, Orthonormal Basis, Hilbert space, Banach space.
**Mathematics Subject Classification (2020)**: 42C15, 46B03, 46B04.
###### Contents
* 1 Introduction
* 2 Functional Donoho-Stark Approximate Support Uncertainty Principle
## 1. Introduction
Let \(0\leq\varepsilon<1\). Recall that a function \(f\in\mathcal{L}^{2}(\mathbb{R}^{d})\) is said to be \(\varepsilon\)**-supported on a measurable subset**\(E\subseteq\mathbb{R}^{d}\) (also known as \(\varepsilon\)**-approximately supported** as well as \(\varepsilon\)**-essentially supported**) [1, 9] if
\[\left(\int\limits_{E^{\varepsilon}}|f(x)|^{2}\,dx\right)^{\frac{1}{2}}\leq \varepsilon\left(\int\limits_{\mathbb{R}^{d}}|f(x)|^{2}\,dx\right)^{\frac{1}{ 2}}.\]
Let \(d\in\mathbb{N}\) and \(\widehat{\cdot}\colon\mathcal{L}^{2}(\mathbb{R}^{d})\to\mathcal{L}^{2}(\mathbb{R }^{d})\) be the unitary Fourier transform obtained by extending uniquely the bounded linear operator
\[\widehat{\cdot}\colon\mathcal{L}^{1}(\mathbb{R}^{d})\cap\mathcal{L}^{2}( \mathbb{R}^{d})\ni f\mapsto\widehat{f}\in C_{0}(\mathbb{R}^{d});\quad\widehat{f }\colon\mathbb{R}^{d}\ni\xi\mapsto\widehat{f}(\xi)\coloneqq\int_{\mathbb{R}^{d }}f(x)e^{-2\pi i\langle x,\xi\rangle}\,dx\ \in\mathbb{C}.\]
In 1989, Donoho and Stark derived the following uncertainty principle on approximate supports of function and its Fourier transform [1].
**Theorem 1.1**.: _[_1_]_ _(**Donoho-Stark Approximate Support Uncertainty Principle**) If \(f\in\mathcal{L}^{2}(\mathbb{R}^{d})\setminus\{0\}\) is \(\varepsilon\)-supported on a measurable subset \(E\subseteq\mathbb{R}^{d}\) and \(\widehat{f}\) is \(\delta\)-supported on a measurable subset \(F\subseteq\mathbb{R}^{d}\), then_
\[m(E)m(F)\geq(1-\varepsilon-\delta)^{2}.\]
Ultimate result in [1] is the finite dimensional Heisenberg uncertainty principle known today as Donoho-Stark uncertainty principle. It is then natural to seek a finite dimensional version of Theorem 1.1. For this, first one needs the notion of approximate support in finite dimensions. Donoho and Stark defined this notion as follows. For \(h\in\mathbb{C}^{d}\), let \(\|h\|_{0}\) be the number of nonzero entries in \(h\). Let \(\widehat{\cdot}\colon\mathbb{C}^{d}\to\mathbb{C}^{d}\) be the Fourier transform. Given a subset \(M\subseteq\{1,\ldots,n\}\), the number of elements in \(M\) is denoted by \(o(M)\).
**Definition 1.2**.: _[_1_]_ _Let \(0\leq\varepsilon<1\). A vector \((a_{j})_{j=1}^{d}\in\mathbb{C}^{d}\) is said to be \(\varepsilon\)**-supported on a subset \(M\subseteq\{1,\ldots,d\}\)** if_
\[\left(\sum_{j\in M^{c}}|a_{j}|^{2}\right)^{\frac{1}{2}}\leq\varepsilon\left( \sum_{j=1}^{d}|a_{j}|^{2}\right)^{\frac{1}{2}}.\]
Finite dimensional version of Theorem 1.1 then reads as follows.
**Theorem 1.3**.: _[_1_]_ _(**Finite Donoho-Stark Approximate Support Uncertainty Principle**) If \(h\in\mathbb{C}^{d}\setminus\{0\}\) is \(\varepsilon\)-supported on \(M\subseteq\{1,\ldots,d\}\) and \(\widehat{h}\) is \(\delta\)-supported on \(N\subseteq\{1,\ldots,d\}\), then_
\[o(M)o(N)\geq d(1-\varepsilon-\delta)^{2}.\]
In 1990, Smith [8] generalized Theorem 1.3 to Fourier transforms defined on locally compact abelian groups. Recently, Banach space version of finite Donoho-Stark uncertainty principle has been derived in [2]. Therefore we seek a Banach space version of Theorem 1.3. This we obtain in this paper.
## 2. Functional Donoho-Stark Approximate Support Uncertainty Principle
In the paper, \(\mathbb{K}\) denotes \(\mathbb{C}\) or \(\mathbb{R}\) and \(\mathcal{X}\) denotes a finite dimensional Banach space over \(\mathbb{K}\). Identity operator on \(\mathcal{X}\) is denoted by \(I_{\mathcal{X}}\). Dual of \(\mathcal{X}\) is denoted by \(\mathcal{X}^{*}\). Whenever \(1<p<\infty\), \(q\) denotes the conjugate index of \(p\). For \(d\in\mathbb{N}\), the standard finite dimensional Banach space \(\mathbb{K}^{d}\) over \(\mathbb{K}\) equipped with standard \(\|\cdot\|_{p}\) norm is denoted by \(\ell^{p}([d])\). Canonical basis for \(\mathbb{K}^{d}\) is denoted by \(\{e_{j}\}_{j=1}^{d}\) and \(\{\zeta_{j}\}_{j=1}^{d}\) be the coordinate functionals associated with \(\{e_{j}\}_{j=1}^{d}\).
**Definition 2.1**.: _[_3_]_ _Let \(\mathcal{X}\) be a finite dimensional Banach space over \(\mathbb{K}\). Let \(\{\tau_{j}\}_{j=1}^{n}\) be a basis for \(\mathcal{X}\) and let \(\{f_{j}\}_{j=1}^{n}\) be the coordinate functionals associated with \(\{\tau_{j}\}_{j=1}^{n}\). The pair \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) is said to be a **p-orthonormal basis** (\(1<p<\infty\)) for \(\mathcal{X}\) if the following conditions hold._
* \(\|f_{j}\|=\|\tau_{j}\|=1\) _for all_ \(1\leq j\leq n\)
**FUNCTIONAL DONOHO-STAR APPROXIMATE SUPPORT UNCERTAINTY**
**PRINCIPLE**
(ii): _For every_ \((a_{j})_{j=1}^{n}\in\mathbb{K}^{n}\)_,_
\[\left\|\sum_{j=1}^{n}a_{j}\tau_{j}\right\|=\left(\sum_{j=1}^{n}|a_{j}|^{p} \right)^{\frac{1}{p}}.\]
Given a p-orthonormal basis \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) for \(\mathcal{X}\), we get the following two invertible isometries:
\[\theta_{f}:\mathcal{X}\ni x\mapsto(f_{j}(x))_{j=1}^{n}\in\ell^{p}([n]),\quad \theta_{\tau}:\ell^{p}([n])\ni(a_{j})_{j=1}^{n}\mapsto\sum_{j=1}^{n}a_{j}\tau_ {j}\in\mathcal{X}.\]
Then we have the following proposition.
**Proposition 2.2**.: _Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) be a p-orthonormal basis for \(\mathcal{X}\). Then_
1. \(\theta_{f}\) _is an invertible isometry._
2. \(\theta_{\tau}\) _is an invertible isometry._
3. \(\theta_{\tau}\theta_{f}=I_{\mathcal{X}}\)_._
It is natural to guess the following version of Definition 1.2 for \(\ell^{p}([n])\).
**Definition 2.3**.: _Let \(0\leq\varepsilon<1\). A vector \((a_{j})_{j=1}^{n}\in\ell^{p}([n])\) is said to be \(\varepsilon\)-supported on a subset \(M\subseteq\{1.\ldots,n\}\) w.r.t. p-norm if_
\[\left(\sum_{j\in M^{c}}|a_{j}|^{p}\right)^{\frac{1}{p}}\leq\varepsilon\left( \sum_{j=1}^{n}|a_{j}|^{p}\right)^{\frac{1}{p}}.\]
With the above definition we have following theorem.
**Theorem 2.4**.: _(**Functional Donoho-Stark Approximate Support Uncertainty Principle**) Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be two p-orthonormal bases for a finite dimensional Banach space \(\mathcal{X}\). If \(x\in\mathcal{X}\setminus\{0\}\) is such that \(\theta_{f}x\) is \(\varepsilon\)-supported on \(M\subseteq\{1,\ldots,n\}\) w.r.t. p-norm and \(\theta_{g}x\) is \(\delta\)-supported on \(N\subseteq\{1,\ldots,n\}\) w.r.t. p-norm, then_
\[o(M)^{\frac{1}{p}}o(N)^{\frac{1}{q}} \geq\frac{1}{\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|}\max\{1- \varepsilon-\delta,0\}, \tag{4}\] \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}} \geq\frac{1}{\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|}\max\{1- \varepsilon-\delta,0\}. \tag{3}\]
Proof.: For \(S\subseteq\{1,\ldots,n\}\), define \(P_{S}:\ell^{p}([n])\ni(a_{j})_{j=1}^{n}\mapsto\sum_{j\in S}a_{j}e_{j}\in\ell^ {p}([n])\) be the canonical projection onto the coordinates indexed by \(S\). Now define \(V\coloneqq P_{M}\theta_{f}\theta_{\omega}P_{N}:\ell^{p}([n])\to\ell^{p}([n])\). Then for \(z\in\ell^{p}([n])\),
\[\|Vz\|^{p}=\|P_{M}\theta_{f}\theta_{\omega}P_{N}z\|^{p}=\left\|P_{M} \theta_{f}\theta_{\omega}P_{N}\left(\sum_{k=1}^{n}\zeta_{k}(z)e_{k}\right)\right\| ^{p}=\left\|P_{M}\theta_{f}\theta_{\omega}\left(\sum_{k=1}^{n}\zeta_{k}(z)P_{N} e_{k}\right)\right\|^{p}\] \[=\left\|P_{M}\theta_{f}\theta_{\omega}\left(\sum_{k\in N}\zeta_{ k}(z)e_{k}\right)\right\|^{p}=\left\|P_{M}\theta_{f}\left(\sum_{k\in N}\zeta_{ k}(z)\theta_{\omega}e_{k}\right)\right\|^{p}=\left\|P_{M}\theta_{f}\left(\sum_{k\in N }\zeta_{k}(z)\omega_{k}\right)\right\|^{p}\] \[=\left\|\sum_{k\in N}\zeta_{k}(z)P_{M}\theta_{f}\omega_{k}\right\| ^{p}=\left\|\sum_{k\in N}\zeta_{k}(z)P_{M}\left(\sum_{j=1}^{n}f_{j}(\omega_{k} )e_{j}\right)\right\|^{p}=\left\|\sum_{k\in N}\zeta_{k}(z)\sum_{j=1}^{n}f_{j} (\omega_{k})P_{M}e_{j}\right\|^{p}\] \[=\left\|\sum_{k\in N}\zeta_{k}(z)\sum_{j\in M}f_{j}(\omega_{k})e_ {j}\right\|^{p}=\left\|\sum_{j\in M}\left(\sum_{k\in N}\zeta_{k}(z)f_{j}( \omega_{k})\right)e_{j}\right\|^{p}=\sum_{j\in M}\left|\sum_{k\in N}\zeta_{k}( z)f_{j}(\omega_{k})\right|^{p}\] \[\leq\sum_{j\in M}\left(\sum_{k\in N}|\zeta_{k}(z)f_{j}(\omega_{k} )|\right)^{p}\leq\left(\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|\right)^{p}\sum _{j\in M}\left(\sum_{k\in N}|\zeta_{k}(z)|\right)^{p}\] \[=\left(\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|\right)^{p}o(M) \left(\sum_{k\in N}|\zeta_{k}(z)|\right)^{p}\leq\left(\max_{1\leq j,k\leq n}| f_{j}(\omega_{k})|\right)^{p}o(M)\left(\sum_{k\in N}|\zeta_{k}(z)|^{p}\right)^{ \frac{p}{p}}\left(\sum_{k\in N}1^{q}\right)^{\frac{p}{q}}\] \[=\left(\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|\right)^{p}o(M) \left(\sum_{k\in N}|\zeta_{k}(z)|^{p}\right)^{\frac{p}{p}}o(N)^{\frac{p}{q}} \leq\left(\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|\right)^{p}o(M)\left(\sum_{ k=1}^{n}|\zeta_{k}(z)|^{p}\right)^{\frac{p}{p}}o(N)^{\frac{p}{q}}\] \[=\left(\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|\right)^{p}o(M) \|z\|^{p}o(N)^{\frac{p}{q}}.\]
Therefore
\[\|V\|\leq\left(\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|\right)o(M)^{\frac{1}{ p}}o(N)^{\frac{1}{q}}. \tag{5}\]
We now wish to find a lower bound on the operator norm of \(V\). For \(x\in\mathcal{X}\), we find
\[\|\theta_{f}x-V\theta_{g}x\| \leq\|\theta_{f}x-P_{M}\theta_{f}x\|+\|P_{M}\theta_{f}x-V\theta_{ g}x\|\leq\varepsilon\|\theta_{f}x\|+\|P_{M}\theta_{f}x-V\theta_{g}x\|\] \[=\varepsilon\|\theta_{f}x\|+\|P_{M}\theta_{f}x-P_{M}\theta_{f} \theta_{\omega}P_{N}\theta_{g}x\|=\varepsilon\|\theta_{f}x\|+\|P_{M}\theta_{f} (x-\theta_{\omega}P_{N}\theta_{g}x)\|\] \[\leq\varepsilon\|\theta_{f}x\|+\|x-\theta_{\omega}P_{N}\theta_{g} x\|=\varepsilon\|\theta_{f}x\|+\|\theta_{\omega}\theta_{g}x-\theta_{\omega}P_{N} \theta_{g}x\|\] \[=\varepsilon\|\theta_{f}x\|+\|\theta_{\omega}(\theta_{g}x-P_{N} \theta_{g}x)\|=\varepsilon\|\theta_{f}x\|+\|\theta_{g}x-P_{N}\theta_{g}x\|\] \[\leq\varepsilon\|\theta_{f}x\|+\delta\|\theta_{g}x\|=\varepsilon\| x\|+\delta\|x\|=(\varepsilon+\delta)\|x\|.\]
Using triangle inequality, we then get
\[\|x\|-\|V\theta_{g}x\|=\|\theta_{f}x\|-\|V\theta_{g}x\|\leq\|\theta_{f}x-V \theta_{g}x\|\leq(\varepsilon+\delta)\|x\|,\quad\forall x\in\mathcal{X}.\]
Since \(\theta_{g}\) is an invertible isometry,
\[(1-\varepsilon-\delta)\|x\|\leq\|V\theta_{g}x\|,\quad\forall x\in\mathcal{X} \implies(1-\varepsilon-\delta)\|y\|=(1-\varepsilon-\delta)\|\theta_{g}^{-1}y\| \leq\|Vy\|,\quad\forall y\in\ell^{p}([n]),\]
i.e.,
\[\max\{1-\varepsilon-\delta,0\}\leq\|V\|. \tag{6}\]
Using Inequalities (5) and (6) we get
\[\max\{1-\varepsilon-\delta,0\}\leq\left(\max_{1\leq j,k\leq n}|f_{j}(\omega_{k })|\right)o(M)^{\frac{1}{p}}o(N)^{\frac{1}{q}}.\]
**FUNCTIONAL DONOHO-STAR APPROXIMATE SUPORT UNCERTAINTY**
**PRINCIPLE**
To prove second inequality, define \(W:=P_{N}\theta_{g}\theta_{\tau}P_{M}:\ell^{p}([n])\rightarrow\ell^{p}([n])\). Then for \(z\in\ell^{p}([n])\),
\[\|Wz\|^{p}=\|P_{N}\theta_{g}\theta_{\tau}P_{M}z\|^{p}=\left\|P_{N} \theta_{g}\theta_{\tau}P_{M}\left(\sum_{j=1}^{n}\zeta_{j}(z)e_{j}\right) \right\|^{p}=\left\|P_{N}\theta_{g}\theta_{\tau}\left(\sum_{j=1}^{n}\zeta_{j} (z)P_{M}e_{j}\right)\right\|^{p}\] \[=\left\|P_{N}\theta_{g}\theta_{\tau}\left(\sum_{j\in M}\zeta_{j} (z)e_{j}\right)\right\|^{p}=\left\|P_{N}\theta_{g}\left(\sum_{j\in M}\zeta_{j} (z)\theta_{\tau}e_{j}\right)\right\|^{p}=\left\|P_{N}\theta_{g}\left(\sum_{j \in M}\zeta_{j}(z)\tau_{j}\right)\right\|^{p}\] \[=\left\|\sum_{j\in M}\zeta_{j}(z)P_{N}\theta_{g}\tau_{j}\right\| ^{p}=\left\|\sum_{j\in M}\zeta_{j}(z)P_{N}\left(\sum_{k=1}^{n}g_{k}(\tau_{j}) e_{k}\right)\right\|^{p}=\left\|\sum_{j\in M}\zeta_{j}(z)\sum_{k=1}^{n}g_{k}( \tau_{j})P_{N}e_{k}\right\|^{p}\] \[=\left\|\sum_{j\in M}\zeta_{j}(z)\sum_{k\in N}g_{k}(\tau_{j})e_{k }\right\|^{p}=\left\|\sum_{k\in N}\left(\sum_{j\in M}\zeta_{j}(z)g_{k}(\tau_{j })\right)e_{k}\right\|^{p}=\sum_{k\in N}\left|\sum_{j\in M}\zeta_{j}(z)g_{k}( \tau_{j})\right|^{p}\] \[\leq\sum_{k\in N}\left(\sum_{j\in M}|\zeta_{j}(z)g_{k}(\tau_{j})| \right)^{p}\leq\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}\sum_{k \in N}\left(\sum_{j\in M}|\zeta_{j}(z)|\right)^{p}\] \[=\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}o(N) \left(\sum_{j\in M}|\zeta_{j}(z)|\right)^{p}\leq\left(\max_{1\leq j,k\leq n}|g _{k}(\tau_{j})|\right)^{p}o(N)\left(\sum_{j\in M}|\zeta_{j}(z)|^{p}\right)^{ \frac{p}{p}}\left(\sum_{j\in M}1^{q}\right)^{\frac{p}{q}}\] \[=\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}o(N)\|z \|^{p}o(M)^{\frac{p}{q}}.\]
Therefore
\[\|W\|\leq\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)o(M)^{\frac{1}{q} }o(N)^{\frac{1}{p}}. \tag{7}\]
Now for \(x\in\mathcal{X}\),
\[\|\theta_{g}x-W\theta_{f}x\| \leq\|\theta_{g}x-P_{N}\theta_{g}x\|+\|P_{N}\theta_{g}x-W\theta_{ f}x\|\leq\delta\|\theta_{g}x\|+\|P_{N}\theta_{g}x-W\theta_{f}x\|\] \[=\delta\|\theta_{g}x\|+\|P_{N}\theta_{g}x-P_{N}\theta_{g}\theta_{ \tau}P_{M}\theta_{f}x\|=\delta\|\theta_{g}x\|+\|P_{N}\theta_{g}(x-\theta_{ \tau}P_{M}\theta_{f}x)\|\] \[\leq\delta\|\theta_{g}x\|+\|x-\theta_{\tau}P_{M}\theta_{f}x\|= \delta\|\theta_{g}x\|+\|\theta_{\tau}\theta_{f}x-\theta_{\tau}P_{M}\theta_{f}x\|\] \[=\delta\|\theta_{g}x\|+\|\theta_{\tau}(\theta_{f}x-P_{M}\theta_{f} x)\|=\delta\|\theta_{g}x\|+\|\theta_{f}x-P_{M}\theta_{f}x\|\] \[\leq\delta\|\theta_{g}x\|+\varepsilon\|\theta_{f}x\|=\delta\|x\|+ \varepsilon\|x\|=(\delta+\varepsilon)\|x\|.\]
Using triangle inequality and the fact that \(\theta_{f}\) is an invertible isometry we then get
\[\max\{1-\varepsilon-\delta,0\}\leq\|W\|. \tag{8}\]
Using Inequalities (7) and (8) we get
\[\max\{1-\varepsilon-\delta,0\}\leq\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j} )|\right)o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}.\]
\(\Box\)
**Corollary 2.5**.: _Let \(\{\tau_{j}\}_{j=1}^{n}\) and \(\{\omega_{j}\}_{j=1}^{n}\) be two orthonormal bases for a finite dimensional Hilbert space \(\mathcal{H}\). Set_
\[\theta_{\tau}:\mathcal{H}\ni h\mapsto(\langle h,\tau_{j}\rangle)_{j=1}^{n}\in \mathbb{C}^{n},\quad\theta_{\omega}:\mathcal{H}\ni h\mapsto(\langle h,\omega_{j} \rangle)_{j=1}^{n}\in\mathbb{C}^{n}.\]
_If \(h\in\mathcal{H}\backslash\{0\}\) is such that \(\theta_{\tau}h\) is \(\varepsilon\)-supported on \(M\subseteq\{1,\ldots,n\}\) and \(\theta_{\omega}h\) is \(\delta\)-supported on \(N\subseteq\{1,\ldots,n\}\), then_
\[o(M)o(N)\geq\frac{1}{\max_{1\leq j,k\leq n}|\langle\tau_{j},\omega_{k} \rangle|^{2}}(1-\varepsilon-\delta)^{2}.\]
_In particular, Theorem 1.3 follows from Theorem 2.4._
Proof.: Define
\[f_{j}:\mathcal{H}\ni h\mapsto\langle h,\tau_{j}\rangle\in\mathbb{K};\quad g_{j} :\mathcal{H}\ni h\mapsto\langle h,\omega_{j}\rangle\in\mathbb{K},\quad\forall 1 \leq j\leq n.\]
Then \(p=q=2\) and \(|f_{j}(\omega_{k})|=|\langle\omega_{k},\tau_{j}\rangle|\) for all \(1\leq j,k\leq n.\) Theorem 1.3 follows by taking \(\{\tau_{j}\}_{j=1}^{n}\) as the standard basis and \(\{\omega_{j}\}_{j=1}^{n}\) as the Fourier basis for \(\mathbb{C}^{n}\).
**Corollary 2.6**.: _Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be two p-orthonormal bases for a finite dimensional Banach space \(\mathcal{X}\). Let \(x\in\mathcal{X}\setminus\{0\}\) is such that \(\theta_{f}x\) is \(\varepsilon\)-supported on \(M\subseteq\{1,\ldots,n\}\) w.r.t. p-norm and \(\theta_{g}x\) is \(\delta\)-supported on \(N\subseteq\{1,\ldots,n\}\) w.r.t. p-norm. If \(\varepsilon+\delta\leq 1\), then_
\[o(M)^{\frac{1}{p}}o(N)^{\frac{1}{q}} \geq\frac{1}{\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|}(1- \varepsilon-\delta),\] \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}} \geq\frac{1}{\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|}(1- \varepsilon-\delta).\]
**Corollary 2.7**.: _Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be two p-orthonormal bases for a finite dimensional Banach space \(\mathcal{X}\). If \(x\in\mathcal{X}\setminus\{0\}\) is such that \(\theta_{f}x\) is \(0\)-supported on \(M\subseteq\{1,\ldots,n\}\) w.r.t. p-norm and \(\theta_{g}x\) is \(0\)-supported on \(N\subseteq\{1,\ldots,n\}\) w.r.t. p-norm (saying differently, \(\theta_{f}x\) is supported on \(M\) and \(\theta_{g}x\) is supported on \(N\)), then_
\[o(M)^{\frac{1}{p}}o(N)^{\frac{1}{q}}\geq\frac{1}{\max_{1\leq j,k\leq n}|f_{j}( \omega_{k})|},\quad o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}\geq\frac{1}{\max_{1 \leq j,k\leq n}|g_{k}(\tau_{j})|}.\]
Corollary 2.7 is not the Theorem 2.3 in [2] (it is a particular case) because Theorem 2.3 in [2] is derived for p-Schauder frames which is general than p-orthonormal bases. Theorem 2.4 promotes the following question.
**Question 2.8**.: _Given \(p\) and a Banach space \(\mathcal{X}\) of dimension \(n\), for which pairs of p-orthonormal bases \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\), \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) for \(\mathcal{X}\), subsets \(M,N\) and \(\varepsilon,\delta\), we have equality in Inequalities (3) and (4)?_
Observe that we used \(1<p<\infty\) in the proof of Theorem 2.4. Therefore we have the following problem.
**Question 2.9**.: _Whether there are Functional Donoho-Stark Approximate Support Uncertainty Principle (versions of Theorem 2.4) for 1-orthonormal bases and \(\infty\)-orthonormal bases?_
Keeping \(\ell^{p}\)-spaces for \(0<p<1\) as a model space equipped with
\[\|(a_{j})_{j=1}^{n}\|_{p}\coloneqq\sum_{j=1}^{n}|a_{j}|^{p},\quad\forall(a_{j}) _{j=1}^{n}\in\mathbb{K}^{n},\]
we set following definitions.
**FUNCTIONAL DONOHO-STARK APPROXIMATE SUPORT UNCERTAINTY PRINCIPLE**
**Definition 2.10**.: _Let \(\mathcal{X}\) be a vector space over \(\mathbb{K}\). We say that \(\mathcal{X}\) is a **disc-Banach space** if there exists a map called as **disc-norm**\(\|\cdot\|:\mathcal{X}\to[0,\infty)\) satisfying the following conditions._
1. _If_ \(x\in\mathcal{X}\) _is such that_ \(\|x\|=0\)_, then_ \(x=0\)_._
2. \(\|x+y\|\leq\|x\|+\|y\|\) _for all_ \(x,y\in\mathcal{X}\)_._
3. \(\|x\|\leq|\lambda|\|x\|\) _for all_ \(x\in\mathcal{X}\) _and for all_ \(\lambda\in\mathbb{K}\) _with_ \(|\lambda|\geq 1\)_._
4. \(\|\lambda x\|\geq|\lambda|\|x\|\) _for all_ \(x\in\mathcal{X}\) _and for all_ \(\lambda\in\mathbb{K}\) _with_ \(|\lambda|\leq 1\)_._
5. \(\mathcal{X}\) _is complete w.r.t. the metric_ \(d(x,y)\coloneqq\|x-y\|\) _for all_ \(x,y\in\mathcal{X}\)_._
**Definition 2.11**.: _Let \(\mathcal{X}\) be a finite dimensional disc-Banach space over \(\mathbb{K}\). Let \(\{\tau_{j}\}_{j=1}^{n}\) be a basis for \(\mathcal{X}\) and let \(\{f_{j}\}_{j=1}^{n}\) be the coordinate functionals associated with \(\{\tau_{j}\}_{j=1}^{n}\). The pair \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) is said to be a **p-orthonormal basis** (\(1<p<\infty\)) for \(\mathcal{X}\) if the following conditions hold._
1. \(\|f_{j}\|=\|\tau_{j}\|=1\) _for all_ \(1\leq j\leq n\)_._
2. _For every_ \((a_{j})_{j=1}^{n}\in\mathbb{K}^{n}\)_,_ \[\left\|\sum_{j=1}^{n}a_{j}\tau_{j}\right\|=\sum_{j=1}^{n}|a_{j}|^{p}.\]
Then we also have the following question.
**Question 2.12**.: _Whether there are versions of Theorem 2.4 for p-orthonormal bases \(0<p<1\)?_
We wish to mention that in [2] the functional uncertainty principle was derived for p-Schauder frames which is general than p-orthonormal bases. Thus it is desirable to derive Theorem 2.4 or a variation of it for p-Schauder frames, which we can't.
We end by asking the following curious question whose motivation is the recently proved Balian-Low theorem (which is also an uncertainty principle) for Gabor systems in finite dimensional Hilbert spaces [5, 6, 7].
**Question 2.13**.: _Whether there is a Functional Balian-Low Theorem (which we like to call Functional Balian-Low-Lammers-Stampe-Nitzan-Olsen Theorem) for Gabor-Schauder systems in finite dimensional Banach spaces (Gabor-Schauder system is as defined in [4])?_
|
2308.00956 | Curriculum Guided Domain Adaptation in the Dark | Addressing the rising concerns of privacy and security, domain adaptation in
the dark aims to adapt a black-box source trained model to an unlabeled target
domain without access to any source data or source model parameters. The need
for domain adaptation of black-box predictors becomes even more pronounced to
protect intellectual property as deep learning based solutions are becoming
increasingly commercialized. Current methods distill noisy predictions on the
target data obtained from the source model to the target model, and/or separate
clean/noisy target samples before adapting using traditional noisy label
learning algorithms. However, these methods do not utilize the easy-to-hard
learning nature of the clean/noisy data splits. Also, none of the existing
methods are end-to-end, and require a separate fine-tuning stage and an initial
warmup stage. In this work, we present Curriculum Adaptation for Black-Box
(CABB) which provides a curriculum guided adaptation approach to gradually
train the target model, first on target data with high confidence (clean)
labels, and later on target data with noisy labels. CABB utilizes
Jensen-Shannon divergence as a better criterion for clean-noisy sample
separation, compared to the traditional criterion of cross entropy loss. Our
method utilizes co-training of a dual-branch network to suppress error
accumulation resulting from confirmation bias. The proposed approach is
end-to-end trainable and does not require any extra finetuning stage, unlike
existing methods. Empirical results on standard domain adaptation datasets show
that CABB outperforms existing state-of-the-art black-box DA models and is
comparable to white-box domain adaptation models. | Chowdhury Sadman Jahan, Andreas Savakis | 2023-08-02T05:47:56Z | http://arxiv.org/abs/2308.00956v1 | # Curriculum Guided Domain Adaptation in the Dark
###### Abstract
Addressing the rising concerns of privacy and security, domain adaptation in the dark aims to adapt a black-box source trained model to an unlabeled target domain without access to any source data or source model parameters. The need for domain adaptation of black-box predictors becomes even more pronounced to protect intellectual property as deep learning based solutions are becoming increasingly commercialized. Current methods distill noisy predictions on the target data obtained from the source model to the target model, and/or separate clean/noisy target samples before adapting using traditional noisy label learning algorithms. However, these methods do not utilize the easy-to-hard learning nature of the clean/noisy data splits. Also, none of the existing methods are end-to-end, and require a separate fine-tuning stage and an initial warmup stage. In this work, we present Curriculum Adaptation for Black-Box (CABB) which provides a curriculum guided adaptation approach to gradually train the target model, first on target data with high confidence (clean) labels, and later on target data with noisy labels. CABB utilizes Jensen-Shannon divergence as a better criterion for clean-noisy sample separation, compared to the traditional criterion of cross entropy loss. Our method utilizes co-training of a dual-branch network to suppress error accumulation resulting from confirmation bias. The proposed approach is end-to-end trainable and does not require any extra finetuning stage, unlike existing methods. Empirical results on standard domain adaptation datasets show that CABB outperforms existing state-of-the-art black-box DA models and is comparable to white-box domain adaptation models.
_Impact Statement_--In addition to preserving data privacy, commercialization of deep learning models has given rise to concerns about protecting proprietary rights. In order to alleviate these concerns, domain adaptation of black-box predictors (DABP) puts additional constraints on the already challenging domain adaptation problem by limiting access not only to the source data used for training, but also to the source model parameters during adaptation to the target domain. We take inspiration from noisy label learning and propose CABB as a curriculum guided domain adaptation approach for DABP using a dual-branch target model. Our clean-noisy sample separation process produces more accurate clean sample sets compared to the traditional sample filtering methods. The pseudolabels generated in CABB are also more robust. Unlike existing state-of-the-art, DABP methods, our model is end-to-end trainable, and outperforms other methods in all benchmarks we tested. Our method advances DABP and can have immediate impact to protect proprietary models and their training data during deployment and adaptation.
Domain adaptation, Black box models, Curriculum learning, Jensen-Shannon distance
## I Introduction
With the availability of massive amounts of labelled image data, deep learning methods have made great progress in numerous computer vision tasks, such as classification, segmentation and object detection, among others. However, it is not feasible to collect, and annotate huge amounts of data for every new environment where a deep network model may be deployed. Unsupervised domain adaptation (UDA) is a special case of domain adaptation (DA) and transfer learning which aims to mitigate the domain gap that arises from deploying a model trained on labelled source data, to a new environment with unlabelled target data. Most of the existing UDA methods either adversarially align the labelled source data features, and unlabelled target data features [1, 2], or minimize their distribution discrepancy [3, 4, 5]. These methods require access to the source data during adaptation, and therefore cannot be applied when the source data is either unavailable, or cannot be shared due to privacy and security concerns. A newer, more efficient UDA paradigm, called Source-Free UDA [6, 7] has recently emerged to address such cases, where the adaptation process, instead of the source data itself, utilizes only a model trained on the source data. Such methods still fail to adequately alleviate data privacy and security concerns as model attacks may potentially retrieve the raw source data or corrupt the model. Moreover, with the commercialization of deep learning based solutions, companies may be reluctant to share their proprietary model parameters with the end users. These issues brought forth a newer UDA paradigm called domain adaptation of black-box predictors (DABP) that adapts without accessing neither the source data, nor the source model parameters [8]. Practically, a vendor can have the source trained model as an API in the cloud, and the end user can access the black-box source model
Fig. 1: Overview of domain adaptation for black-box predictors (DABP). The source model may only be accessed to generate pseudolabels for the target data, and these pseudolabels may be used to adapt another model on the target domain.
to generate predictions for each unlabelled target instance to adapt on the target domain.
Existing DABP methods transfer knowledge from the source trained model predictions to the target model, and then finetune the target model on the target data [8, 9]. The approach in [9] utilizes a noisy label learning (NLL) algorithm [10] to separate the target domain into an easy-to-adapt subdomain with cleaner pseudolabels, and a hard-to-adapt subdomain with noisier pseudolabels using low cross-entropy (CE) loss criterion as the separator [11], and then applies supervised and semi-supervised learning strategies on the easy- and hard-to-adapt subdomains, respectively.
In this work, we propose _Curriculum Adaptation for **B**ack-**Box** (**CABB**) as an unsupervised domain adaptation framework for black-box predictors. We present Jensen-Shannon distance (JSD) as a better criterion to separate clean and noisy samples using pseudolabels generated by the source model. JSD can be modelled using a two-component Gaussian Mixture Model (GMM) where the distribution with the lower distance can be considered to be consisting of cleaner samples and that with the higher distance contains noisier samples. As opposed to traditional low loss criterion for clean-noisy separation, low JSD criterion produces a more conservative, but more accurate clean sample set. To reduce error accumulation from confirmation bias, CABB employs co-training [11, 10] two identical networks and adapts one network on the clean-noisy separated sets generated by the other, and vice versa. CABB introduces a curriculum learning strategy to adaptively learn from the clean samples first, and the noisy samples later during the adaptation process. CABB foregoes the finetuning stage of existing methods by utilizing mutual information maximization [6, 12] within its curriculum, making it end-to-end adaptable. The main contributions of our work are as follows.
* We introduce CABB as a curriculum guided domain adaptation model that progressively learns from the clean target set and the noisy target set, while utilizing co-training of a dual-branch network to suppress error accumulation resulting from confirmation bias.
* We identify Jensen-Shannon divergence loss as a better criterion than cross-entropy loss for separation of clean and noisy samples for DABP.
* CABB incorporates mutual information maximization within its curriculum and makes the adaptation process end-to-end without the need for any separate finetuning stage.
* CABB produces robust pseudolabels from the mean of an ensemble of predictions generated by the two branches of the network on a set of augmentations.
## II Related Works
### _Unsupervised domain adaptation_
Domain gap or domain shift occurs when the data distribution of the training data (source domain) is considerably different from that of the testing data (target domain) [3]. Long _et al._[13], and Tzeng _et al._[4] proposed to mitigate this distribution shift by minimizing the maximum mean discrepancy (MMD) between the two distributions, while Zellinger _et al._[5] proposed to match the higher order central moments of source and target probability distributions, and thus minimize central moment discrepancy (CMD) for UDA. Sun and Saenko [14] devised Deep CORAL to minimize second-order distribution statistics to mitigate domain shift. Ganin _et al._[1] utilized a domain discriminator module, and introduced gradient reversal layer (GRL) to adversarially align the two distributions. Many methods followed since then that have utilized adversarial alignment on the latent feature space. [15, 16]. While [1] uses a common encoder for the source and target data, Tzeng _et al._[17] proposed to decouple the encoders by first training an encoder and a classifier on the labelled source data, followed by training a separate target data encoder using a domain discriminator, and finally deploying the same source classifier as the target classifier. Hoffman _et al._[18] produced source-like images using generative image-to-image translation [19] and adversarially aligned source and target data distributions at the low-level or pixel-level. Global domain-wise adversarial alignment however may cause loss of intrinsic target class discrimination in the embedding space, and lead to suboptimal performance. To preserve class-wise feature discrimination, Li _et al._[20] simultaneously aligned the domain-wise and class-wise distributions across the source and target data by solving two complementary domain-specific and class-specific minimax problems. In a non-adversarial approach, Pan _et al._[21] proposed to calculate the source class prototypes for the labelled source data, and target class prototypes from the pseudo-labelled target data, and then enforce consistency on the prototypes in the embedding space. Tang _et al._[22] similarly bases structural domain similarity to enforce structural source regularization and conducts discriminative clustering of target data without any domain alignment. Chen _et al._[23] introduced graph matching to formulate cross-domain adaptation, and minimized Wasserstein distance for entity-matching, and Gromov-Wasserstein distance for edge matching. In order to reduce negative transfer introduced by the target samples that are either near the source-data generated decision boundaries, or are far away from their corresponding class centers, Xu _et al._[24] proposed a weighted optimal transport strategy to achieve a reliable precise-pair-wise optimal transport procedure for domain adaptation.
Although domain divergence minimization [3, 4, 5], adversarial adaptation [1, 2], and optimal transport [23, 24] are widely used techniques for UDA, they require access to both the source and target data during adaptation. Addressing situations where source data is unavailable, several source-free DA (SFDA) methods have been proposed recently. Chidlovskii _et al._[25] proposed to use a few source prototypes or representatives in place of the entire source data for semi-supervised domain adaptation. Liang _et al._[26] proposed to conduct target adaptation using source-free distant supervision to iteratively find target pseudo-labels, a domain invariant subspace where the source and target data centroids are only moderately shifted, and finally target centroids/prototypes by implementing an alternating minimization strategy. Liang _et al._[6] introduced SHOT as an SFDA framework which transfers the source hypothesis or classifier to the target model, and adapts via self
training with information maximization [27, 28, 29] and class centroid-based pseudolabel refinement. Yang _et al._[7] proposed G-SFDA which refines the pseudolabels further via consistency regularization among neighboring target samples. Ding _et al._[30] introduced SFDA-DE which samples from an estimated source data distribution, and conducts contrastive alignment between the estimated source and target distributions.
### _Black box domain adaptation_
Extending the premise of SFDA further, Liang _et al._[8] introduced a newer paradigm of black box DA where, in addition to the source data, the source model parameters are also unavailable during adaptation. This new challenging scenario is important to protect intellectual property (source model parameters) from the end users. Liang _et al._ proposed DINE which distills knowledge from the black-box source model to the target model in the first stage, followed by finetuning with target pseudolabels in the second stage. Yang _et al._[9] proposed BETA as a method that separates easy- and hard-to-learn pseudolabels using a conventional noisy label learning technique [11], and applies a twin-network coordinating strategy similar to [10], and adversarial alignment during adaptation.
In this paper, we identify Jensen-Shannon distance (JSD) as a more appropriate criterion for clean-noisy sample separation for the unbounded noise rate in UDA, compared to traditional low CE loss for the bounded loss in NLL. We formulate a curriculum learning strategy to train the target model end-to-end with cleaner samples first, and progressively with noisy samples later.
## III Methodology
The black-box source model \(f_{s}(\theta_{s}):\mathcal{X}_{s}\rightarrow\mathcal{Y}_{s}\) with model parameters \(\theta_{s}\), maps the multiclass source data \(x_{s}\in\mathcal{X}_{s}\) of source domain \(\mathcal{D}_{s}\), to the label space \(y_{s}\in\mathcal{Y}_{s}\). For DABP, we however do not have access to \(\theta_{s}\), but only the hard predictions \((\hat{y}_{t}\in\mathcal{Y}_{t})=f_{s}(\theta_{s},x_{t})\) from \(f_{s}\) on the target data \(x_{t}\in\mathcal{X}_{t}\) of target domain \(\mathcal{D}_{t}\). There exists a domain shift between the source data distribution \(\mathcal{D}_{s}\) and the target data distribution \(\mathcal{D}_{t}\), while the label space is shared, i.e \(\mathcal{Y}_{s}=\mathcal{Y}_{t}\). Due to this domain shift, a large number of predictions \(\hat{y}_{t}\) may be incorrect and could result in a set of noisy pseudolabels generated by the source model. Our objective for DA is to learn a mapping function \(f_{t}(\theta_{t}):\mathcal{X}_{t}\rightarrow\mathcal{Y}_{t}\).
Research has shown that when deep networks are trained with noisy labels, the resulting models tend to memorize the wrongly labelled samples owing to confirmation bias, as the training progresses [31]. Furthermore, in regular training of a single-branch network with noisy labels, the error from one training mini-batch flows back into the network itself for the next mini-batch, and thus the error increasingly accumulates [11]. In this work, during adaptation, we employ co-teaching [11] of a dual-branch network [10, 9] to mitigate error accumulation, resulting from the confirmation bias. In co-teaching, due to the difference in branch parameters of the dual-branch design, error introduced by the noisy pseudolabels in one branch can be filtered out by the other branch. In practice, one branch conducts the clean-noisy sample separation for the other branch, and vice versa. Since each branch generates different sets of clean and noisy samples, co-teaching breaks the flow of error through the network, and thus error accumulation attenuates. To simplify notation, the dual target branches/models \(f_{t_{1}}\) and \(f_{t_{2}}\) may be represented by \(f_{t}\) in later parts of this paper. Both networks are trained/adapted, and the final inference can be taken from either one. We follow [8] to distill knowledge from the source model to the target model in a teacher-student manner. However, unlike [8], we only have access to the hard predictions from the source model. Similar to [8], the source model predictions \(\hat{y}_{t}^{i}\) are updated during adaptation at certain intervals via exponential moving average between the source model predicted pseudolabels \(\hat{y}_{t}^{i}\) and the target model predicted pseudolabels \(y_{t}^{i}\). The process of generating \(y_{t}^{i}\) is described in section III-B.
### _Clean-noisy separation_
The predictions \(\hat{y}_{t}\) generated by the black box source model \(f_{s}\) are noisy and unreliable due to domain shift between \(\mathcal{D}_{s}\) and \(\mathcal{D}_{t}\). Research on learning with noisy labels shows that deep learning models tend to fit on the clean samples first, and on the noisy samples later during training. [32, 10] We follow this insight and separate the target domain data into a clean sample set \(\mathcal{X}_{tc}\) with reliable predictions, and a noisy sample set \(\mathcal{X}_{tn}\) with unreliable predictions. In traditional noisy label settings, the noisy labels are either caused by wrong annotations from humans or from image search engines. The noise rate is, therefore, bounded. However, as the noisy labels in UDA are generated by the source model, the noise rate in this case is unbounded and can approach unity [33]. We propose Jensen-Shannon distance (JSD) [34] between the source predicted hard labels \(\hat{y}_{t}^{i}\) and the target features as the criterion for clean-noisy sample separation under unbounded noise rate. JSD is calculated as,
\[JSD(\hat{y}_{t}^{i},p_{t}^{i})=\frac{1}{2}KL(\hat{y}_{t}^{i},\frac{\hat{y}_{t} ^{i}+p_{t}^{i}}{2})+\frac{1}{2}KL(p_{t}^{i},\frac{p_{t}^{i}+\hat{y}_{t}^{i}}{2}) \tag{1}\]
where, \(KL(a,b)\) is the Kullback-Leibler divergence between \(a\) and \(b\), and \(p_{t}^{i}\) is the target model output probability for target sample \(x_{t}^{i}\). Compared to cross-entropy loss, JSD is symmetric by design, and ranges between 0 and 1, thus becoming less susceptible to noise. When applied to the network response, JSD produces a bimodal distribution, which is modelled by a 2-component Multivariate Gaussian Mixture Model (GMM) with equal priors. In DA, the target model may _confidently_ categorize an image as the wrong class with very high prediction probability. Therefore, this is a poor criterion for identifying whether a sample is clean or noisy. For the potentially unbounded pseudolabel noise rate in DABP, we take the probability of belonging to the JSD Gaussian distribution with the lower mean value as the confidence metric of being a clean sample in our clean-noisy sample separation stage. Empirically, we apply a threshold \(\delta_{t}\) on our confidence score of belonging to the lower-mean GMM distribution to select our clean sample set \(\mathcal{X}_{tc}\), at the beginning of each epoch
for adaptation. The remaining target samples are included in the noisy label set \(\mathcal{X}_{tn}\).
### _Ensemble based pseudolabeling_
In order to produce robust target model pseudolabels \(y_{t}^{i}\), we apply a series of augmentations on the target samples and produce an ensemble of output prediction probabilities from our two target models. We give equal weights to each output prediction and take the mean of the outputs as the soft pseudolabel as follows.
\[y_{t}^{i}=\frac{1}{2M}\sum_{0}^{M}f_{t_{1}}(x_{t_{m}}^{i})+f_{t_{2}}(x_{t_{m}} ^{i}) \tag{2}\]
where \(M\) is the number of augmentations for the \(i\)-th target sample. The predictions are further sharpened with a temperature factor \(T(0<T<1)\) and then normalized as follows.
\[y_{t}^{i}=\frac{(y_{t}^{i})^{\frac{1}{T}}}{\sum_{C}(y_{t}^{iC})^{\frac{1}{T}}} \tag{3}\]
where \(y_{t}^{iC}\) is the \(C\)-th dimensional value of the pseudolabel vector \(y_{t}^{i}\).
### _Curriculum guided noisy learning_
In order to mitigate early training time memorization [32] induced from noisy labels during the adaptation of deep models, we introduce a curriculum guided learning to train the target model on the clean samples first, and on the noisy samples later. As the adaptation/training progresses, more noisy samples are reclassified as clean samples.
We employ separate training losses for the clean and noisy sample set. The clean set is trained with standard cross-entropy (CE) loss as follows.
\[\mathcal{L}_{tc}(f_{t};\mathcal{X}_{tc})=-\mathbb{E}_{x_{t}\in\mathcal{X}_{tc }}\sum_{k=1}^{C}y_{t_{k}}^{i}log(\sigma_{k}(f_{t}(x_{t}^{i}))) \tag{4}\]
where \(\sigma_{k}(a)=\frac{exp(a_{k})}{\sum_{i}exp(a_{i})}\) is the softmax function and \(C\) is the number of classes. For the noisy set, we minimize a combination of active-passive losses [35] constructed of normalized cross-entropy loss \(\mathcal{L}_{tn_{NCE}}\) and reverse cross-entropy loss \(\mathcal{L}_{tn_{NCE}}\). [35] showed that such normalization makes a model robust to noisy data. Reverse cross-entropy loss is applied to avoid any underfitting on the noisy set. Due
Fig. 3: Ensemble-based pseudolabeling in CABB. Each sample is augmented to produce 6 different views that are fed through both branches \(f_{t_{1}}\) and \(f_{t_{2}}\) to create a total of 12 output predictions, which are then averaged to produce the soft pseudolabel for co-training \(f_{t_{1}}\) and \(f_{t_{2}}\).
Fig. 2: UDA pipeline in CABB. The target data is fed to the source model \(f_{s}\) and the knowledge generated from \(f_{s}\) is transferred to both target branches \(f_{t_{1}}\) and \(f_{t_{2}}\). The source predicted pseudolabels are also used to calculate JSD and produce clean-noisy sample sets. In subsequent co-training of \(f_{t_{1}}\) and \(f_{t_{2}}\), the samples sets created by one branch are used to update the other branch, using curriculum guided losses to progressively adapt to clean samples first, and the noisy samples later.
to the unbounded nature of noise rate in UDA and conservative clean-noisy separation criteria in CABB, we employ this particular combination of active-passive losses as our noisy set loss \(\mathcal{L}_{tn}\) to make target training/adaptation robust and comprehensive on the noisy sample set. The loss function is expressed as follows.
\[\begin{split}\mathcal{L}_{tn_{NCE}}(f_{t};\mathcal{X}_{tn})=\\ -\mathbb{E}_{x_{t}^{i}\in\mathcal{X}_{tn}}&\frac{ \sum_{k=1}^{C}y_{t_{k}}^{i}log(\sigma_{k}(f_{t}(x_{t}^{i})))}{\sum_{j=1}^{C} \sum_{k=1}^{C}y_{t_{j}}^{i}log(\sigma_{k}(f_{t}(x_{t}^{i})))}\\ \mathcal{L}_{tn_{RCE}}(f_{t};\mathcal{X}_{tn})=-\mathbb{E}_{x_{t}^ {i}\in\mathcal{X}_{tn}}&\sum_{k=1}^{C}\sigma_{k}(f_{t}(x_{t}^{i} ))log(y_{t_{k}}^{i})\\ \mathcal{L}_{tn}=\mathcal{L}_{tn_{NCE}}+\beta\mathcal{L}_{tn_{ RCE}}\end{split} \tag{5}\]
where \(\beta\) is a hyperparameter.
To promote learning of clean samples first and to mitigate noisy label memorization, target training is done under curriculum guidance [36]. Based on the success of the clean-noisy sample separation, the pseudolabels in the clean sample set \(\mathcal{X}_{tc}\) are more likely to be correct, while those in the noisy sample set \(\mathcal{X}_{tn}\) have a much higher noise rate. Therefore, a deep network tends to easily learn from the unambiguous \(\mathcal{X}_{tc}\) set. We set a curriculum factor \(\gamma_{n}\) according to the following equation.
\[\gamma_{n}=\gamma_{n-1}(1-\alpha\epsilon^{-L_{x_{n}}/L_{x_{n-1}}}) \tag{8}\]
where, \(\alpha\) is a hyperparameter and \(n\) is the iteration number. \(\gamma_{n-1}\) is the curriculum factor for the previous iteration. The ratio \(L_{x_{n}}/L_{x_{n-1}}\) determines how much the curriculum factor decreases from iteration \(n-1\) to \(n\). If the CE loss on the clean set increases, \(\gamma\) decreases by a small value to allow for further training on the clean set in the subsequent iterations. But if the CE loss decreases by a large margin, \(\gamma\) decreases accordingly to accommodate learning from the noisy sample set in the coming iterations. Our curriculum guidance balances the supervised and unsupervised losses on the respective clean and noisy sets as follows.
\[\mathcal{L}_{t}=\gamma_{n}\mathcal{L}_{tc}+(1-\gamma_{n})\mathcal{L}_{tn} \tag{9}\]
We adopt the formulation of information maximization (IM) loss [27, 28, 6] from [12] to help our model produce precise predictions, while maintaining a global diversity across all classes in the output predictions. The IM loss is a combination of the following entropy loss \(\mathcal{L}_{ent}\) and equal diversity loss \(\mathcal{L}_{eqdiv}\).
\[\mathcal{L}_{ent}(f_{t};\mathcal{X}_{t})=-\mathbb{E}_{x_{t}^{i}\in\mathcal{X}_ {t}}\sum_{k=1}^{C}\sigma_{k}(f_{t}(x_{t}^{i}))log(\sigma_{k}(f_{t}(x_{t}^{i}))) \tag{10}\]
\[\mathcal{L}_{eqdiv}(f_{t};\mathcal{X}_{t})=\sum_{k=1}^{C}q_{k}log\left(\frac{q_ {k}}{\hat{q}_{k}}\right) \tag{11}\]
where \(\hat{q}_{k}=\mathbb{E}_{x_{t}\in\hat{X}_{t}^{i}}[\sigma(f_{t}(x_{t}))]\) is the mean of the softmax of the target network output response. \(\mathcal{L}_{eqdiv}\) conducts KL divergence between \(\hat{q}_{k}\) and the ideal uniform response \(q_{k}\). Our curriculum guided IM loss is as follows.
\[\mathcal{L}_{IM}=\mathcal{L}_{eqdiv}+(1-\gamma_{n})\mathcal{L}_{ent} \tag{12}\]
Minimization of entropy loss \(\mathcal{L}_{ent}\) is gradually activated as the model sufficiently adapts to the clean sample. Such curriculum guidance ensures that the potentially erroneous predictions produced in the early stages of self-training are not accumulated. The \(\mathcal{L}_{eqdiv}\) loss enforces diversity in the output predictions throughtout the training process. The overall objective function is,
\[\mathcal{L}_{tot}=\mathcal{L}_{t}+\mathcal{L}_{IM} \tag{13}\]
A brief demonstration of the CABB pipeline can be found in Algorithm 1.
```
Input: Black-box source trained model \(f_{s}\) and target data \(x_{t}^{i}\in\mathcal{X}_{t}\) Output: Target adapted model \(f_{t}\) Initialization: Dual target models \(f_{t_{1}}\) and \(f_{t_{2}}\)
1forepoch = \(1\) to epoch\({}_{total}\)do while\(m\leq iter_{distill}\)do
2 Distill from teacher \(f_{s}\) to students \(f_{t_{1}}\) and \(f_{t_{2}}\) following [8]
3 end for
4 Conduct clean(\(\mathcal{X}_{tc}\))-noisy(\(\mathcal{X}_{tn}\)) sample separation using JSD from model \(f_{t_{1}}\) for \(f_{t_{2}}\) and vice-versa for\(f_{t}\in f_{t_{1}},f_{t_{2}}\)do
5while\(n\leq iter_{adapt}\)do
6 Get ensemble averaged pseudolabels \(y_{t}^{i}\in\mathcal{Y}_{t}\) from equations 2 and 3 Calculate \(\mathcal{L}_{tc}\) on \(\mathcal{X}_{tc}\), \(\mathcal{L}_{tn}\) on \(\mathcal{X}_{tn}\), and \(\mathcal{L}_{ent}\) and \(\mathcal{L}_{eqdiv}\) on (\(\mathcal{X}_{tc},\mathcal{X}_{tn})\in\mathcal{X}_{t}\) using equations 4, 7, 10, and 11 respectively
7 Calculate \(\gamma_{n}\) using equation 8 Calculate \(\mathcal{L}_{t}\) and \(\mathcal{L}_{IM}\) using equations 9 and 12
8 Optimize \(f_{t}\) with loss \(\mathcal{L}_{tot}\) using equation 13
9 end while
10 end while
11 end while
```
**Algorithm 1**Pseudocode for CABB
## IV Experimental setup
### _Datasets_
We evaluate CABB on three popular domain adaptation datasets viz. Office-31 [43], Office-Home [44], and VisDA-C [45]. **Office-31** is a small-scale DA dataset consisting of images of 31 classes of common objects found in an office across 3 domains viz. Amazon (A), Webcam (W), and DSLR (D). **Office-Home** is a medium-sized DA dataset consisting
of 4 domains viz. Art (A), Clipart (C), Product (P), and Real-World (R). The dataset contains images of 65 classes of items found in office and home environments. **VisDA-C** is a large-scale dataset consisting of 12 classes of objects across 2 domains: Synthetic (S) and Real (R). The 152K synthetic images are generated by 3D rendering and taken as the source domain. The 55K real samples are taken from MS COCO dataset [46] and taken as the target domain.
### _Implementation details_
We follow the same protocol in [8, 9] for source training to ensure fairness for comparison. Our target models are initialized with ImageNet pretrained weights, since source model parameters are inaccessible. For Office-31 and Office-Home, we use ResNet50, and for VisDA-C we use ResNet101 as the backbone [47], on top of which we attach an MLP-based classifier, similar to [8, 9]. The target models are trained with SGD optimizer with \(0.9\) momentum and weight decay \(1e^{-3}\). The learning rate for the backbone is set to \(1e^{-3}\), while that of the classifier is set to \(1e^{-2}\). \(\alpha\) in the curriculum factor is set to \(2e^{-3}\) for Office-31, and \(2e^{-4}\) for Office-Home and VisDA-C, depending on the size of the dataset. The model is adapted for 50 epochs for Office-31 and Office-Home datasets, and for 5 epochs for the VisDA-C dataset. Temperature sharpening factor \(T\) is set to \(0.5\). We implement our method using the PyTorch library on an NVIDIA-A100 GPU.
## V Results
### _Overall evaluation_
Liang _et al._[8] pioneered this area and formulated the problem statement. They also presented a number of baselines for comparison. Among them **NLL-KD** and **NLL-OT** are inspired by noisy label learning and utilize KL divergence and optimal transport respectively for refining pseudolabels. **HD-SHOT** and **SD-SHOT** are based on the SHOT [6] model and treat the source model predictions as hard labels and soft labels, respectively. In addition to these baselines, we compare CABB against state-of-the-art black-box DA models **DINE**[8] and **BETA**[9]. We further compare against a number of standard DA methods, such as **DANN**[1], **ALDA**[37], **GVB-GD**[38], **SRDC**[22], **SHOT**[6], **A\({}^{2}\)-Net**[39], **SFDA-DE**[30] etc.
In Figure 4, we present the accuracy of the clean sample set after clean-noisy sample separation for the first epoch after distillation from the source teacher model to the target student model. We can see that our choice of low JSD separation criterion in CABB consistently outperforms the low CE loss criterion used in BETA by 1%-7% across all 12 source-target domain pairs for Office-Home dataset.
The classification accuracies after adaptation across the 6 domain pairs for Office-31 dataset are shown in Table I. CABB outperforms BETA and DINE on average by 0.5% and 2.3%, respectively. While CABB beats DINE across all the domain pairs, it only underperforms BETA for **Webcam-Amazon** adaptation by 0.5%. Overall, CABB is on-par with _white-box source-free_ model SHOT and _non-source-free_ model ALDA.
Fig. 4: Accuracy on the clean sample set achieved via clean-noisy sample separation using low JSD (CABB) vs low CE (BETA), after distillation from the source teacher at the first epoch.
\begin{table}
\begin{tabular}{l|c|c|c|c c c c c c c c c c c c|c} \hline \hline Method & SF & BB & \multicolumn{1}{c|}{plane} & \multicolumn{1}{c|}{bcycl} & bus & car & horse & knife & \multicolumn{1}{c|}{mcycle} & person & plant & sktbrd & train & truck & Per-class \\ \hline DANN [1] & \(\bigtimes\) & \(\bigtimes\) & 81.9 & 77.7 & 82.8 & 44.3 & 81.2 & 29.5 & 65.2 & 28.6 & 51.9 & 54.6 & 82.8 & 7.8 & 57.6 \\ ALDA [37] & \(\bigtimes\) & \(\bigtimes\) & 93.8 & 74.1 & 82.4 & 69.4 & 90.6 & 87.2 & 89.0 & 67.6 & 93.4 & 76.1 & 87.7 & 22.2 & 77.8 \\ \hline SHOT [6] & \(\bigcheck
The results for Office-Home dataset are presented in Table II. CABB outperforms BETA and DINE by 0.7% and 3.1%, respectively. Moreover, CABB outperforms several standard _non-source-free_ DA methods such as SRDC and FixBi, and is either better than, or on par with existing state-of-the-art _white-box source-free_ DA models like HCL, \(A^{2}\)Net, and SFDA-DE.
A comparative evaluation of CABB against other state-of-the-art DA methods and DABP baselines on the VisDA-C dataset is shown in Table III. CABB surpasses both DINE and BETA by \(9.3\%\) and \(2.3\%\), respectively in terms of mean-per-class accuracy. CABB beats BETA in the most challenging category _truck_ by 10.4%. CABB also outperforms _white-box source-free_ models SHOT and \(A^{2}\)Net comfortably.
### _Ablation study_
A detailed ablation study on the efficacy of our curriculum adaptation method is given in Table IV. The impact of curriculum on the noisy set loss \(\mathcal{L}_{tn}\) and entropy loss \(\mathcal{L}_{ent}\) is shown, as curriculum is applied to these two components. In this table, in the absence of curriculum adaptation, \(\gamma_{n}\) is set to \(0.5\). In row 2, \(\mathcal{L}_{tn}\) is set to \(0\).
The results clearly indicate the benefit of a guided adaptation framework that progressively learns from the clean samples first and the noisy samples later. We see in the first three rows in Table IV that without curriculum guidance, adaptation performance suffers significantly. In the absence of curriculum guidance, we see that leaving out learning from the noisy samples during the adaptation process is better than adapting to the noisy samples with \(\mathcal{L}_{tn}\) loss, and further enforcing the wrong predictions with \(\mathcal{L}_{ent}\) loss. The drawback of blindly adapting to noisy samples becomes evident in the second and third rows, particularly in the most challenging _truck_ class. By adapting to unrefined noisy samples from the beginning, the model performance drastically deteriorates and accuracy on _truck_ can fall to as low as \(~{}1\%\).
The results in the 4th through 6th rows in Table IV show the necessity for curriculum guidance during adaptation. In the presence of curriculum learning, CABB outperforms existing state-of-the-art DABP methods. Curriculum guidance progressively refines the noisy sample pseudolabels. While enforcing the refined predictions by minimizing the \(\mathcal{L}_{ent}\) loss produces improved results, learning from the noisy pseudolabels by minimizing the \(\mathcal{L}_{tn}\) loss significantly boosts the model performance. Minimizing losses \(\mathcal{L}_{tn}\) and \(\mathcal{L}_{ent}\) on the refined pseudolabels together produce the strongest results.
## VI Conclusion
In this paper we present a curriculum guided self-training based domain adaptation method called CABB to adapt a black-box source model/predictor to the target domain. Without access to the source data or the source model parameters during adaptation, we draw inspiration from noisy label learning algorithms. We employ a co-training scheme and propose to use Jensen-Shannon distance or JSD as the criterion to filter clean and reliable samples from noisy and unreliable samples. JSD calculated between the source model predicted pseudolabels and target model predictions is modelled using a mixture of Gaussian distributions. The samples with high probability of lying on the distribution with the lower mean JSD are taken as clean samples, and the target model is trained under a curriculum schedule first on the clean samples and progressively on the noisy samples. The dual-branch design of CABB also allows robust ensemble-based pseudolabeling. CABB consistently outperforms existing black-box domain adaptation models on three popular domain adaptation benchmarks, and is on par with other white-box source free models.
## Acknowledgment
The authors would like to thank Nazmul Karim for his valuable suggestions and insights regarding noisy label learning. The authors would also like to thank RIT Research Computing for making computing resources available for experimentation.
|
2308.11683 | Learning to generate and corr- uh I mean repair language in real-time | In conversation, speakers produce language incrementally, word by word, while
continuously monitoring the appropriateness of their own contribution in the
dynamically unfolding context of the conversation; and this often leads them to
repair their own utterance on the fly. This real-time language processing
capacity is furthermore crucial to the development of fluent and natural
conversational AI. In this paper, we use a previously learned Dynamic Syntax
grammar and the CHILDES corpus to develop, train and evaluate a probabilistic
model for incremental generation where input to the model is a purely semantic
generation goal concept in Type Theory with Records (TTR). We show that the
model's output exactly matches the gold candidate in 78% of cases with a
ROUGE-l score of 0.86. We further do a zero-shot evaluation of the ability of
the same model to generate self-repairs when the generation goal changes
mid-utterance. Automatic evaluation shows that the model can generate
self-repairs correctly in 85% of cases. A small human evaluation confirms the
naturalness and grammaticality of the generated self-repairs. Overall, these
results further highlight the generalisation power of grammar-based models and
lay the foundations for more controllable, and naturally interactive
conversational AI systems. | Arash Eshghi, Arash Ashrafzadeh | 2023-08-22T15:09:55Z | http://arxiv.org/abs/2308.11683v1 | # Learning to generate and corr- uh I mean _repair_ language in real-time
###### Abstract
In conversation, speakers produce language _incrementally_, word by word, while continuously monitoring the appropriateness of their own contribution in the dynamically unfolding context of the conversation; and this often leads them to repair their own utterance on the fly. This real-time language processing capacity is furthermore crucial to the development of fluent and natural conversational AI. In this paper, we use a previously learned Dynamic Syntax grammar and the CHILDES corpus to develop, train and evaluate a probabilistic model for incremental generation where input to the model is a purely _semantic generation goal concept_ in Type Theory with Records (TTR)1. We show that the model's output exactly matches the gold candidate in 78% of cases with a ROUGE-l score of 0.86. We further do a zero-shot evaluation of the ability of the same model to generate _self-repairs_ when the generation goal changes mid-utterance. Automatic evaluation shows that the model can generate self-repairs correctly in 85% of cases. A small human evaluation confirms the naturalness and grammaticality of the generated self-repairs. Overall, these results further highlight the generalisation power of grammar-based models and lay the foundations for more controllable, and naturally interactive conversational AI systems.
Footnote 1: All relevant code, models, and data are available at [https://bitbucket.org/dylandaloguesystem/dstrtr/src/dstrtr_arash_a/](https://bitbucket.org/dylandaloguesystem/dstrtr/src/dstrtr_arash_a/)
## 1 Introduction
People process language incrementally, in real-time (see Crocker et al. (2000); Ferreira (1996); Kempson et al. (2016) among many others), i.e. both language understanding and generation proceed on a word by word rather than a sentence by sentence, or utterance by utterance basis. This real-time processing capacity underpins participant coordination in conversation (Gregoromichelaki et al., 2012, 2020) and leads to many characteristic phenomena such as split-utterances (Poesio and Rieser, 2010; Purver et al., 2009), mid-utterance feedback in the form of backchannels (Heldner et al., 2013) or clarification requests (Healey et al., 2011; Howes and Eshghi, 2021), hesitations, self-repairs (Schegloff et al., 1977) and more.
Language generation - our focus here - is just as incremental as language understanding: speakers normally do not have a fully formed conceptualisation or plan of what they want to say before they start articulating, and conceptualisation needs only to be one step ahead of generation or articulation (Guhe, 2007; Levelt, 1989). This is possible because speakers are able to continuously monitor the syntax, semantics, and the pragmatic appropriateness of their own contribution (Levelt, 1989) in the fast, dynamically evolving context of the conversation. In turn this allows them to pivot or correct themselves on the fly if needed, e.g. because they misarticulate a word, get feedback from their interlocutors (Goodwin, 1981), or else the generation goal changes due to the dynamics of the environment.
Real-time language processing is likewise crucial in designing dialogue systems that are more responsive, more naturally interactive (Skantze and Hjalmarsson, 2010; Aist et al., 2006), and are more accessible to people with memory impairments (Addlesee et al., 2019; Addlesee and Damonte, 2023; Nasreen et al., 2021). Despite this importance, relative to turn-based systems, it has received little attention from the wider NLP community; perhaps because it has deep implications for the architecture of such systems (Schlangen and Skantze, 2009; Skantze and Schlangen, 2009; Kennington et al., 2014), which make them much harder to build and maintain.
In this paper, we extend the work of Purver and Kempson (2004); Hough and Purver (2012);
Hough (2015), who lay the theoretical foundations for incremental generation and later the processing of self-repairs in Dynamic Syntax (Kempson et al., 2001, 2016, Sec. 2.3). For the first time, we develop a probabilistic model for incremental generation (Sec. 3) that conditions next word selection on the current incrementally unfolding context of the conversation, and also on features of a _purely semantic generation goal concept_, expressed as a Record Type (RT) in Type Theory with Records (Cooper, 2012; Cooper and Ginzburg, 2015). The model is trained and evaluated on part of the CHILDES corpus (MacWhinney, 2000) using an extant grammar that was learned by Eshghi et al. (2013) from the same data. Results show that in the best case, the model output matches the gold generation test candidate in 83% of cases (Sec. 4.2). We then go on to experiment with and evaluate the ability of the same model to generate self-repairs in a zero-shot setting in the face of _revisions to the goal concept RT_ under various conditions (Sec 4.3): viz. for forward-looking and backward-looking repair and at different distances from the reparandum. Automatic evaluation shows that it can generate self-repairs correctly in 85% of cases. A small human evaluation confirms the overall naturalness and grammaticality of the generated repairs. Overall, these results further highlight the generalisation power of grammar-based models (see also Mao et al. (2021); Eshghi et al. (2017) and lay the foundations for more controllable, and naturally interactive conversational AI systems.
## 2 Dynamic Syntax and Type Theory with Records (DS-TTR)
Dynamic Syntax (DS, Kempson et al., 2016; Cann et al., 2005; Kempson et al., 2001) is a process-oriented grammar formalism that captures the real-time, incremental nature of the dual processes of linguistic comprehension and production, on a word by word or token by token basis. It models the time-linear construction of _semantic_ representations (i.e. _interpretations_) as progressively more linguistic input is parsed or generated. DS is idiosyncratic in that it does not recognise an independent level of structure over words: on this view syntax is sets of constraints on the incremental processing of semantic information.
The output of parsing any given string of words is thus a _semantic tree_ representing its predicate-argument structure (see Fig. 1). DS trees are always binary branching, with argument nodes conventionally on the right and functor nodes to the left; tree nodes correspond to terms in the lambda calculus, decorated with labels expressing their semantic type (e.g. \(Ty(e)\)) and formulae - here as record types of Type Theory with Records (TTR, see Sec. 2.1 below); and beta-reduction determines the type and formula at a mother node from those at its daughters (Fig. 1). These trees can be _partial_, containing unsatisfied _requirements_ potentially for any element (e.g. \(?Ty(e)\), a requirement for future development to \(Ty(e)\)), and contain a _pointer_, \(\diamond\), labelling the node currently under development.
Grammaticality is defined as parsability in a context: the successful incremental word-by-word construction of a tree with no outstanding requirements (a _complete_ tree) using all information given by the words in a string. We can also distinguish _potential grammaticality_ (a successful sequence of steps up to a given point, although the tree is not complete and may have outstanding requirements) from _ungrammaticality_ (no possible sequence of steps up to a given point).
Fig. 1 shows "John arrives", parsed incrementally, starting with the axiom tree with one node (\(?Ty(t)\)), and ending with a complete tree. The intermediate steps show the effects of: (i) DS Computational Actions (e.g. Completion which moves the pointer up and out of a complete node or Anticipation which moves the pointer down from the root to its functor daughter.) which are language-general and apply without any lexical input whenever their preconditions are met; and (ii) Lexical Actions which correspond to words and are triggered when a word is parsed.
ContextIn DS, context, required for processing various forms of context-dependency - including pronouns, VP-ellipsis, and short answers, as well as self-repair - is the parse search Directed Acyclic Graph (DAG), and as such, is also process-oriented. Edges correspond to DS actions - both Computational and Lexical Actions - and nodes correspond to semantic trees after the application of each action (Sato, 2011; Eshghi et al., 2012; Kempson et al., 2015). Here, we take a coarser-grained view of the DAG with edges corresponding to words (sequences of computational actions followed by a single lexical action) rather than single actions, and we drop abandoned parse
paths (see Eshghi et al., 2015; Howes and Eshghi, 2021, for details) - Fig. 4 shows an example.
### Type Theory with Records (TTR)
Dynamic Syntax is currently integrated with TTR (Cooper, 2012, 2005) as the semantic formalism in which meaning representations are couched (Eshghi et al., 2012; Purver et al., 2011, 2010)2.
Footnote 2: DS models the structural growth of representations and is agnostic to the formalism for semantic representation. As such, it has also been combined with RDF (Addlesee and Eshghi, 2021) and with vector-space representations (Purver et al., 2021)
TTR is an extension of standard type theory, and has been shown to be useful in contextual and semantic modelling in dialogue (see e.g. Ginzburg, 2012; Fernandez, 2006; Purver et al., 2010, among many others), as well as the integration of perceptual and linguistic semantics (Larsson, 2013; Dobnik et al., 2012; Yu et al., 2017). With its rich notions of underspecification and subtyping, TTR has proved crucial for DS research in the incremental specification of content (Purver et al., 2011; Hough, 2015); specification of a richer notion of dialogue context (Purver et al., 2010); models of DS grammar learning (Eshghi et al., 2013); and models for learning dialogue systems from data (Eshghi et al., 2017; Kalatzis et al., 2016; Eshghi and Lemon, 2014).
In TTR, logical forms are specified as _record types_, which are sequences of _fields_ of the form \([\begin{array}{c}l:T\end{array}]\) containing a label \(l\) and a type \(T\). Record types can be witnessed (i.e. judged true) by _records_ of that type, where a record is a sequence of label-value pairs \([\begin{array}{c}l=v\end{array}]\). We say that \([\begin{array}{c}l=v\end{array}]\) is of type \([\begin{array}{c}l:T\end{array}]\) just in case \(v\) is of type \(T\). Fields can be _manifest_, i.e. given a singleton type e.g. \([\begin{array}{c}l:T_{a}\end{array}]\) where \(T_{a}\) is the type of which only \(a\) is a member; here, we write this as \([\begin{array}{c}l_{sa}:T\end{array}]\). Fields can also be _dependent_ on fields preceding them (i.e. higher) in the record type (see Fig. 2).
The standard subtype relation \(\subseteq\) can be defined for record types: \(R_{1}\subseteq R_{2}\) if for all fields \([\begin{array}{c}l:T_{2}\end{array}]\) in \(R_{2}\), \(R_{1}\) contains \([\begin{array}{c}l:T_{1}\end{array}]\) where \(T_{1}\subseteq T_{2}\). In Fig. 2, \(R_{1}\subseteq R_{2}\) if \(T_{2}\subseteq T_{2^{\prime}}\), and both \(R_{1}\) and \(R_{2}\) are subtypes of \(R_{3}\). This subtyping relation allows semantic information to be incrementally specified, i.e. record types can be indefinitely extended with more information and/or constraints.
Additionally, Larsson (2010) defines the meet (\(\mathbb{A}\)) operation of two (or more) RTs as the union of their fields; the equivalent of conjunction in FoL; see figure 3 for an example. We will need this below (Sec.3) where we define our probabilistic model.
### Generation in DS-TTR
As alluded to in the introduction, to handle typical incremental phenomena in dialogue such as split utterances, interruptive clarification requests or self-repair, any generation model must be as incremental as interpretation: full syntactic and semantic information should be available after generating every word with continual access to the incrementally unfolding context of the conversation (Hough and Purver, 2012; Eshghi et al., 2015).
Figure 1: Incremental parsing in DS-TTR: “_John arrives_”
Figure 3: Example of merge operation between two RTs
Figure 2: Example TTR record types
In generation, there is an extra requirement on models, namely _representational interchangability_(Eshghi et al., 2011): parsing and generation should employ the same mechanisms and use the same kind of representation so that parsing can pick up where generation left off, and vice versa.
DS-TTR can meet these requirements, because generation employs exactly the same mechanisms as in parsing (Purver and Kempson, 2004) with the simple addition of a _subsumption check_ against a _generation goal concept_, expressed as a Record Type (RT) in TTR (see Sec. 2.1); and where this goal concept can be partial (does not need to correspond to a complete sentence), and need only to be one step ahead of the generated utterance so far. This ease of matching incrementality in both generation and parsing is not matched by other models aiming to reflect incrementality in the dialogue model while adopting relatively conservative grammar frameworks, some matching syntactic requirements but without incremental semantics (Skantze and Hjalmarsson, 2010), others matching incremental growth of semantic input but leaving the incrementality of structural growth unaddressed (Guhe, 2007).
As such, generation involves _lexical search_ whereby at every step, words from the lexicon are test-parsed in order to find words that (i) are parsable in the current context; and (ii) the resulting TTR semantics of the current DS tree subsumes or is monotonically extendable the generation goal. The subsumption relation is the inverse of the subtype relation defined above (see Sec. 2.1; i.e. \(R_{1}\)_subsumes_\(R_{2}\) iff \(R_{2}\subseteq R_{1}\)).
Without a probabilistic model for word selection at each step of generation, this process is effectively brute-force, computationally very inefficient, and therefore simply impractical, especially with large lexicons. This is the shortcoming that we address here for the first time by conditioning word selection on the generation goal RT. This involves learning, through Maximum Likelihood Estimation from data, \(P(w|T,R_{g})\), where \(w\) ranges over the lexicon, \(T\) is the current DS tree including its maximal semantics, and \(R_{g}\) is the generation goal. This parametrisation is described in full below in Sec. 3.
### Processing Self-repair in DS-TTR
In this section, we briefly introduce the DS model of self-repair from (Hough and Purver, 2012): there are two types of self-repair that are addressed: _backward-looking repair_ (aka. overt repair), where the repair involves a local, and partial restart of the reparandum, as in (1) and forward-looking repair (aka. covert repair) where the repair is simply a local extension, i.e. a further specification of the reparandum as in (2).
1. "Sure enough ten minutes later the bell r-the doorbell rang" _(Schegloff et al., 1977)_
2. "I-I mean the-he-they, y'know the guy, the the pathologist, looks at the tissue in the microscope..." _(Schegloff et al., 1977)_
In the model set out above, a backward-looking repair arises due to an online revision of a generation goal RT, whereby the new goal is not a subtype of the one the speaker (or the dialogue manager) had initially set out to realise. We model this via backtracking along the incrementally available context DAG as set out above. More specifically, repair is invoked if there is no possible DAG extension after the test-parsing and subsumption check stage of generation (resulting in no candidate succeeding word edge).
The repair procedure proceeds by restarting generation from the last realised (generated) word edge. It continues backtracking by one DAG vertex at a time until the root record type of the current partial tree is a subtype of the new goal concept. Generation then proceeds as usual by extending the DAG from that vertex. The word edges backtracked over are not removed, but are simply marked as repaired (see also Eshghi et al. (2015) for a fuller account), following the principle that the revision process is on the public conversational record and hence should still be accessible for later anaphoric reference (see Fig. 4).
Forward-looking repairs on the other hand, i.e. _extensions_, where the repair effects an "after-thought" are also dealt with straightforwardly by the model. The DS-TTR parser simply treats these as monotonic extensions of the current tree, resulting in subtype extension of the root TTR record type. Thus, a change in goal concept during generation will not always put demands on the system to backtrack, such as in generating the fragment after the pause in "I go to Paris...from London". Backtracking only operates at a semantics-syntax mismatch where the revised goal concept is no longer a subtype of the root record type for the (sub-)utterance so far realised, as in Figure 4.
## 3 Probabilistic Model of Generation
In this section, we follow on from Sec. 2.3 above and describe the probabilistic model that we have developed for incremental probabilistic generation. First we describe the model itself, its parameters, and how these are estimated from data. Then we describe how the model is used at inference time to generate.
Model and Parameter EstimationAs noted, generation in Dynamic Syntax is defined in terms of parsing. Specifically, it proceeds via lexical search, i.e. test-parsing (all) words from the lexicon while checking for _subsumption_ against the _goal concept_: a record type (RT) in TTR; henceforth \(R_{g}\). Words that parse successfully with a resulting (partial) semantics that subsume the goal concept are successfully generated. This process goes on until the semantics of the generated sentence equals the goal. This process is highly inefficient and impractical for larger lexicons.
On a high level, we solve this problem by building a probabilistic model which conditions the probability of generating the next word, \(w\), on: (i) \(R_{cur}\): the semantics of the generated utterance thus far; (ii) \(R_{g}\), the goal concept; and (iii) the current DS tree (henceforth \(T_{cur}\)). We condition on (i) to allow the model to keep track of the semantics of what's already been generated, i.e. the left semantic context of generation; on (ii) to aid the model in selecting words that contribute the correct semantic increments to approach the goal concept; and on (iii) to capture the syntactic constraints on what words can grammatically follow. In sum, we need to compute \(P(w|T_{cur},R_{cur},R_{g})\) for all the words \(w\) in the lexicon.
As you will see below, we learn to generate by parsing, and therefore we the use Bayes rule in Eq. 3 to cast probabilistic generation roughly in probabilistic parsing terms:
\[\underbrace{P(w|T_{cur},R_{cur},R_{g})}_{\text{probabilistic generation}}\overset{\text{Bayes Rule}}{=}\overbrace{P(T_{cur},R_{cur},R_{g}|w)}^{\text{probabilistic parsing}}P(w) \tag{3}\]
On the right hand side of Eq. 3, \(P(w)\) is the prior probability of \(w\), which we obtain from the frequency of \(w\) in our training data; and \(P(T_{cur},R_{cur},R_{g})\) a normalisation constant which we do not need to estimate.
We learn \(P(T_{cur},R_{cur},R_{g}|w)\) from gold data in the form of \(\{Utt=\langle w_{1},\dots,w_{N}\rangle,\ R_{g}\}\), where \(Utt\) is the utterance to be generated, and \(R_{g}\) is its gold semantics. To do this, we use the DS parser to parse \(Utt\) yielding a parse path (see e.g. Fig. 4) that starts with the DS axiom tree (empty tree) to the tree whose semantics is \(R_{g}\) together with all the DS trees produced after parsing each \(w_{i}\) in between; viz. a sequence \(S_{p}=\{\langle T_{1},w_{1}\rangle,\dots,\langle T_{N},w_{N}\rangle\}\), where \(T_{i}\) are the DS trees in the context of which the \(w_{i}\)'s were parsed. This sequence constitutes the observations from which we estimate \(P(T_{cur},R_{cur},R_{g}|w)\) by Maximum Likelihood Estimation (MLE).
\(T_{cur}\), \(R_{cur}\) and \(R_{g}\) are all composed of many individual features, and as a whole, would be observed
Figure 4: Incremental DS-TTR generation of a self-repair upon change of goal concept. Type-matched record types are double-circled nodes and edges indicating failed paths are dotted.
very rarely. Therefore, for generalisation, we need to decompose them and compute the probability of the whole as the conjunction (product) of the probabilities of their individual atomic features.
For \(T_{cur}\) we follow Eshghi et al. (2013) and consider only one feature of \(T_{cur}\): that of the type of the pointed node, or a requirement for a type (e.g. \(Ty(e)\), \(?Ty(e\to t)\), etc) - call this \(Ty_{p}\). This simplifies the model considerably, but has the downside of not capturing all grammatical constraints (e.g. _person_ constraints in English verbs will not be captured this way), and leading to some over-generation.
We also simplify the model by conditioning on the semantics that _remains to be generated_ - call it \(R_{inc}\) - rather than conditioning on both \(R_{cur}\) and \(R_{g}\). We can compute \(R_{inc}\) each time through the well-defined _record type subtraction_ operation in TTR where: \(R_{inc}=R_{g}\backslash R_{cur}\).
With these simplifications, what we need to estimate by MLE from each sequences \(S_{p}\) (see above) is: \(P(Ty_{p},R_{inc}|w)\).
As noted, for any generalisation at all, \(R_{inc}\) now needs to be decomposed into its individual atomic features so that we can compute the probability of each of these features individually, rather than that of \(R_{inc}\) as a whole. We decompose \(R_{inc}\) as follows: \(R_{inc}=\bigwedge_{k}(R_{k})\), where \(\bigwedge\) is the TTR equivalent of the conjunction operation in FoL (see above, Sec. 2.1); and each \(R_{k}\) is potentially _dependent_ on \(R_{j}\) where \(j<k\).
Using the probabilistic variant of TTR (Cooper et al., 2013, 2014), we can use the chain rule to then derive Eq. 4:
\[P(\bigwedge_{k}R_{k}|w)=\Pi_{k}P(R_{k}|w,R_{1}\bigwedge\ldots\bigwedge R_{k-1}) \tag{4}\]
This then allows us to express the probability of a complex record type in terms of the product of its potentially _dependent_, atomic supertypes. This, finally, puts us in a position to compute \(P(Ty_{p},R_{inc}|w)\) as follows:
\[P(R_{inc},Ty_{p}|w)\stackrel{{ independence}}{{=}}P(R_{inc}|w) \cdot P(Ty_{p}|w)\] \[\stackrel{{ decompose\ R_{inc}}}{{=}}P(\bigwedge_{k}R _{k}|w)\cdot P(Ty_{p}|w)\]
We implement the above procedure by constructing a 2D conditional count table where the rows are the words, and the columns are all the atomic semantic features observed during learning by parsing: effectively the result of decomposing all the \(R_{g}\)'s in our data; this, in addition to all the \(Ty_{p}\) features we have observed on all the DS trees encountered in the \(S_{p}\) sequences above. Then, each time we observe an atomic semantic feature of \(R_{inc}\), say, \(R_{k}\), in the context of a word, \(w\), we increment the cell \((R_{k},w)\) by 1. After learning, we normalise the columns of the table to obtain all \(P(F|w)\) where \(F\) ranges over all semantic features and pointed node types, and \(w\) over all words in the lexicon.
**Inference** At inference time, we need to estimate \(P(w|T_{cur},R_{cur},R_{g})\): a probability distribution over all the words in the lexicon, given the current context of generation, \(T_{cur}\) including the current semantics so far generated, \(R_{cur}\), and the goal concept, \(R_{g}\). Given the above we take the following steps to _populate a beam_ for generating the next word: (i) compute \(R_{inc}=R_{g}\backslash R_{cur}\); (ii) compute all the atomic semantic features, \(R_{k}\) - the headings in the columns in our conditional probability table - that \(R_{inc}\) triggers or 'turns on'. This can be done by checking whether \(R_{inc}\subseteq R_{k}\); (iii) compute the single \(Ty_{p}\) (type of pointed node) feature by observing the type of the pointed node on \(T_{cur}\); (iv) for each row (i.e. each word) take the product (or sum of log probabilities) of all the column features thus triggered in steps (ii) and (iii); (v) sort the words in the lexicon by their probability from (iv) and have the top N fill the beam of size N.
Once the beam is thus populated, we use the DS grammar to parse each word in the beam in turn; upon success, that is, if the word is parsable, and the resulting semantics subsumes the goal concept, \(R_{g}\), we move on to generate the next word incrementally until we reach the goal concept, that is, until \(R_{g}\subseteq R_{cur}\wedge R_{cur}\subseteq R_{g}\).
**Repair mechanism** The DS repair mechanism, i.e. that of backtrack and parse / generate (see above Sec. 2.3), is triggered when none of the words in the beam successfully generate; either because neither are parsable, or else their resulting semantics don't subsume \(R_{g}\) (because it may have been revised). When triggered, the model backtracks over the context DAG path (see above), and, following the same inference process, attempts to (re-)populate the beam and generate from there. Backtracking continues until generation is successful, with the model having generated the interregnum (e.g. "I mean", "sorry I mean", "uh", "no", etc.) before it generates the first repair word.
Generation continues normally from that point until the (potentially new) goal concept is reached.
## 4 Evaluation
### Data
The data to train and test our model comes from the Eve section of the CHILDES corpus MacWhinney (2000). This section was annotated with logical forms (LF) by Kwiatkowski et al. (2012). The LFs were then converted to TTR record types (RT) by Eshghi et al. (2013). This dataset consists of utterances towards children from parents, therefore the sentences have a relatively simple structure than adult language. We will use it in the shape of (Utterance, Goal Concept) pairs to train and test our model.
For training our generator, we test-parsed the dataset using two versions of the grammar learned by Eshghi et al. (2013): the grammar containing the top 1 hypothesis and the grammar containing the top 3. This resulted in two subsets of the data that could be parsed and in which the produced RT semantics matched the gold semantics exactly. Let's call these top-1 and top-3 respectively. We report their relevant statistics in Table 1.
However, even as the top-3 grammar from Eshghi et al. (2013) gives wider parsing coverage, it included many erroneously learned lexical actions. We therefore decided to carry out our experiments below on the top-1 dataset filtered using the top-1 grammar. This is at the expense of not generating sentences that we'd otherwise be able to generate since the overall distribution of the two datasets are similar. Therefore, the results we report below are more conservative (i.e. lower) than those we'd have been able to achieve if we'd manually cleaned up the top-3 grammar and applied it to learning and generation.
### Model Evaluation
We evaluate our generation model on the top-1 set in two ways: (i) standard evaluation of generation without any mid-generation revisions to the goal; (ii) we evaluate the capability of the same model to generalise to cases where the goal concept is revised mid-generation, i.e. to cases where the model needs to produce _self-repairs_.
Standard evaluationFor this, we report percentage of exact match (EM), ROUGE-1, Rouge-2, and ROUGE-1 between the gold sentences in the dataset and the output sentences from the model. On the training set, we could observe that out of 656 training samples, we can generate 597 utterances (91.01%) whose semantics exactly matches the generation goal concept; 416 of these fully match the gold sentence, yielding an EM score of 0.6341 (meaning 63.41% of the output sentences fully match the gold sentences). For the test set, out of 73 total samples, 64 sentences were generated fully to the goal concept (87.67%), and 46 of these (63.01%) completely matched the gold sentence in the dataset. Among the outputs not fully match by the gold sentences a large portion of them are very close to an exact match. For example the generated sample, "what is that", where the gold sentence is "what's that": such samples were not counted initially among the exact matches. We then took these to be exact matches and recomputed evaluation scores. The final results are summarised in Table. 2.
### Generating self-repairs: a zero-shot evaluation
To evaluate the ability of the model to generate self-repairs in a zero shot setting, we generate a dataset of _semantic revisions_ to the goal concept using the original top-1 data. We use the Stanford POS tagger to automatically generate a set of revisions, where each revision is a tuple, \(\langle R_{g},index,R_{r},Utt_{r},forward\rangle\): \(R_{g}\): is the original goal concept; \(index\): is the position along the generation path where the revision takes place; \(R_{r}\): is the revised goal; \(Utt_{r}\): is the result of replacing a single word in the _original_ gold utterance with a word from our data of the same POS - \(R_{r}\) now corresponds to the (revised goal) semantics of \(Utt_{r}\); and, finally: _forward_: is either true or false, marking whether the revised semantic material has already been contributed before \(index\) or not; if true, we would expect a _forward
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline dataset & total & total & mode & max & type / \\ & samples & words & length & length & token ratio \\ \hline top-1 & 729 & 2152 & 3 & 7 & 18.08 \\ \hline top-3 & 1361 & 4194 & 3 & 7 & 21.96 \\ \hline \end{tabular}
\end{table}
Table 1: Filtered Dataset Statistics
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & EM & ROUGE-1 & ROUGE-2 & ROUGE-1 \\ \hline Train & 0.84 & 0.94 & 0.71 & 0.92 \\ \hline Test & 0.78 & 0.88 & 0.67 & 0.86 \\ \hline \end{tabular}
\end{table}
Table 2: Evaluation results for generation without any goal concept revisions
_looking_ self-repair, and otherwise a _backward-looking_ one (see Sec. 2.3 above). We derive these revision tuples for every utterance in the dataset with length greater than 4, and on the following Parts of Speech: {NOUN, ADJ, PROPN, ADP, ADV}. These tuples therefore give us 4 experimental conditions, across two binary factors: (i) locality: is the point at which the revision is made strictly local to the repairandum; or does it have a distance of more than 1; (ii) Is the revision after or before the corresponding semantic contributions have been made?
We then run the revisions through the model and evaluate the output automatically as follows: we use a simple rule-based algorithm to 'clean out' the self-repair from the model output, and compare this to the revised utterance, \(Utt_{r}\). For this comparison, we only report EM - see Table 3. We observed 641 of the generatable revisions in total are an exact match.
Since we do not have gold data for self-repairs, we did a small human evaluation on the model output: the authors each independently annotated a subset of 30 examples, assigning scores on a Likert scale from 1 to 3 for: (a) grammaticality of the self-repairs; and (b) their human-likness or naturalness, which initially led to a low agreement. They then met to discuss the disagreements in order to iron out the differences between the criteria they had applied. They then continued to annotate 70 additional system outputs. This led to a Krippendorff's alpha score of 0.88 for grammaticality and 0.82 for naturalness, demonstrating very high agreement. To then report the average scores given by the human annotators, the lower score was chosen when there was a disagreement, resulting in 2.72 and 2.28 mean scores for grammaticality and naturalness respectively, confirming the quality of the generated output.
## 5 Discussion
During the error analysis we observed the following error patterns: In the standard evaluation of generation, there were 199 instances where the model had fully generated to the goal concept, while the generated output did not match the gold utterance. Many were cases where the model had generated a statement instead of a question or vice versa (e.g. "I may see them" is generated over "may I see them"). In a few cases, the generated output was ungrammatical with the wrong word order: both of these are caused by the original grammar from Eshghi et al. (2013) overgenerating - this is acknowledged by the authors, and it is due to the fact that their induced grammar did not capture the full set of syntactic constraints present in their data. This is in turn because they were only conditioning their search on the type of the pointed node, like we do here. Inducing the full set of syntactic constraints was left to future work, as it is here.
### Limitations
Our evaluation in this paper has at least two important **limitations**:
(1) We evaluate our incremental generation model on a small, and relatively simple dataset (leading to high ROUGE scores because of the little variation in data and relative similarity between training and testing sets) due to the fact that we currently do not have access to a wider coverage grammar. However, this was a conscious choice on the authors' part: we used a learned grammar to induce our probabilistic generation model and evaluated it on exactly the same dataset from which the grammar was learned (Eshghi et al., 2013). This was deemed to be methodologically both sounder and cleaner than, say, use of a manually constructed grammar. We also believe that the probabilistic model we have contributed here will generalise to larger, more complex datasets when wider-coverage grammars becomes available. We leave this for future work.
(2) Perhaps more importantly, we have no comparative evaluation, and this in a climate where neural NLG has seen astonishing advances in the work on Transformer-based (large) Language Models. To carry out this comparative evaluation, we need to integrate our model with a downstream, and, ideally, multimodal dialogue task (see e.g. Yu et al. (2016, 2017) for how DS-TTR can be integrated within a visually grounded task). This requires substantial further work which is our next step.
### Why a grammar-based approach?
It might reasonably be asked why we are using a grammar-based approach in the age of Large Lan
\begin{table}
\begin{tabular}{|c|c|c|} \hline & forward-looking & backward-looking \\ \hline local & 0.93 & 0.89 \\ \hline distant & 0.73 & 0.82 \\ \hline \end{tabular}
\end{table}
Table 3: EM for zero-shot evaluation of repairs
guage Models (LLM) such as GPT-4 and a large number of other, open source models following. These models are astonishing few-shot learners, and have recently achieved great successes that few thought possible (e.g. in open-domain dialogue, conversational question answering, essay writing, summerisation, translation etc), and are changing the human world in ways that we have not yet had time to grasp.
Nevertheless, for the moment, the fact remains that: (a) these models are extremely costly to train and run due their sheer size and the amount of resources (data, compute power, energy) needed to train them; it's also been demonstrated, time and again, that they have poor compositional generalisation properties (see Pantazopoulos et al. (2022); Nikolaus et al. (2019) among others), which explains much of their characteristic data inefficiency; (b) they are very difficult to _control_ and/or adapt while often producing factually incorrect statements, commonly referred to as hallucinations (Rashkin et al., 2021; Dziri et al., 2022) using very convincing language - this extends to confident prediction of erroneous actions or plans in multi-modal, embodied settings; (d) they are very hard to sufficiently _verify_, making them unsuitable for use in safety-critical domains such as healthcare; (e) particularly important for us here, unlike recurrent models such as RNNs and LSTMs, standard Transformer-based neural architectures (Vaswani et al., 2017) are not properly incremental - even the auto-regressive variants such as GPT - in the sense that they process word sequences as whole, rather than word by word; they can be run under an 'incremental interface' (Madureira and Schlangen, 2020; Rohanian and Hough, 2021) where input is reprocessed from the beginning with every new token, but even then, they exhibit poor incremental performance with unstable output compared to e.g. LSTMs (Madureira and Schlangen, 2020). Interesting recent work has explored using Linear Transformers (Katharopoulos et al., 2020) with recurrent memory to properly incrementalise LMs (Kahardipraja et al., 2021), but this work is as yet in its infancy, and we do not yet know of any work that integrates LMs end to end within a real-time, incremental dialogue system.
On the other hand, grammar-based approaches have the advantage of being highly controllable and transparent; but crucially, they incorporate the very large wealth of linguistic knowledge that has arisen from decades of linguistics and semantics research. This knowledge has been demonstrated to be a very effective source of inductive bias in grammar-based models which in turn translates to remarkable generalisation potential, and thus also data efficiency (see e.g. Mao et al. (2021) for a CCG-based multi-modal model, and Eshghi et al. (2017) for a DS-TTR-based one) - see Eshghi et al. (2022) for an extended discussion. One common criticism is that grammar-based models are brittle. This is often true, but we do not believe this to be a fundamental property, and think that specific grammars of a language are adaptable and learnable from interaction. But much work remains to be done to demonstrate this property.
For these reasons, we believe that grammar-based approaches hold promises that are as yet unfulfilled, and are therefore still worth exploring in parallel to the much needed work on making LM architectures and training regimes more incremental (see Kahardipraja et al. (2021, 2023)).
## 6 Conclusion
We developed the first semantic, probabilistic model of real-time language generation using the Dynamic Syntax framework. The results show that the model performs well, even though we evaluated it only on a small dataset. We also demonstrated the zero-shot generalisation ability of the model to generate self-repairs where none were observed during training. To our knowledge, this is the first model capable of reacting to real-time changes to the generation goal by generating suitable self-corrections. This ability is essential in dialogue systems in highly dynamic contexts or environments. Our generation model can be seamlessly integrated into incremental dialogue system architectures (e.g. based on Schlangen and Skantze (2009)). This work further highlights the generalisation power of grammar-based approaches, and lays the foundations for creating conversational AI systems that are controllable, data-efficient, more naturally interactive, and more accessible to people with cognitive impairments.
## Acknowledgements
We are very grateful to Tanvi Dinkar and Julian Hough for some of the ideas in this paper and subsequent discussion. We would also like to thank the SemDial reviewers whose constructive critique lead to further changes and elaboration. |
2307.16769 | 2-Level Reinforcement Learning for Ships on Inland Waterways: Path
Planning and Following | This paper proposes a realistic modularized framework for controlling
autonomous surface vehicles (ASVs) on inland waterways (IWs) based on deep
reinforcement learning (DRL). The framework improves operational safety and
comprises two levels: a high-level local path planning (LPP) unit and a
low-level path following (PF) unit, each consisting of a DRL agent. The LPP
agent is responsible for planning a path under consideration of dynamic
vessels, closing a gap in the current research landscape. In addition, the LPP
agent adequately considers traffic rules and the geometry of the waterway. We
thereby introduce a novel application of a spatial-temporal recurrent neural
network architecture to continuous action spaces. The LPP agent outperforms a
state-of-the-art artificial potential field (APF) method by increasing the
minimum distance to other vessels by 65% on average. The PF agent performs
low-level actuator control while accounting for shallow water influences and
the environmental forces winds, waves, and currents. Compared with a
proportional-integral-derivative (PID) controller, the PF agent yields only 61%
of the mean cross-track error (MCTE) while significantly reducing control
effort (CE) in terms of the required absolute rudder angle. Lastly, both agents
are jointly validated in simulation, employing the lower Elbe in northern
Germany as an example case and using real automatic identification system (AIS)
trajectories to model the behavior of other ships. | Martin Waltz, Niklas Paulig, Ostap Okhrin | 2023-07-25T08:42:59Z | http://arxiv.org/abs/2307.16769v3 | # 2-Level Reinforcement Learning for Ships on Inland Waterways
###### Abstract
This paper proposes a realistic modularized framework for controlling autonomous surface vehicles (ASVs) on inland waterways (IWs) based on deep reinforcement learning (DRL). The framework comprises two levels: a high-level local path planning (LPP) unit and a low-level path following (PF) unit, each consisting of a DRL agent. The LPP agent is responsible for planning a path under consideration of nearby vessels, traffic rules, and the geometry of the waterway. We thereby leverage a recently proposed spatial-temporal recurrent neural network architecture, which is transferred to continuous action spaces. The PF agent is responsible for low-level actuator control while accounting for shallow water influences on the marine craft and the environmental forces winds, waves, and currents. Both agents are thoroughly validated in simulation, employing the lower Elbe in northern Germany as an example case and using real AIS trajectories to model the behavior of other ships.
keywords: deep reinforcement learning, path planning, path following, autonomous surface vehicle, inland waterway +
Footnote †: journal: Elsevier
## 1 Introduction
Inland waterway (IW) transport is widely regarded as an energy-efficient and low-emitting mode of transportation in terms of greenhouse gases com
pared to road or rail transportation. Additionally, IW transport offers a significant freight capacity, making it an integral part of a sustainable transport system (Rohacs and Simongati, 2007; de Barros et al., 2022). Traditionally, the navigation and control of inland vessels is performed by human operators, and recent research began to investigate the use of autonomous surface vehicles (ASVs) for such inland operations (Gan et al., 2022; Vanneste et al., 2022). For a comprehensive understanding of ASV systems, we refer to Liu et al. (2016), Fossen (2021), and Negenborn et al. (2023).
One crucial factor affecting the economic potential of shipping companies is the costs associated with crew members and personnel (Al Enezy et al., 2017). These costs can be significantly reduced since ASVs require little to no on-board personnel (Vagale et al., 2021). Furthermore, although IW operations already exhibit relatively low accident rates compared to other transportation modes (Hofbauer and Putz, 2020), the human factor can still pose a significant threat to operational safety. For instance, according to Backalov et al. (2023), human failures were responsible for 58% and 19% of accidents in Austria and Serbia, respectively, from the early 2000s to 2017. The lack of centralized data collection at the European level makes it challenging to conduct a more detailed analysis of accident causes in inland navigation (Backalov et al., 2023). However, as emphasized by Gan et al. (2022), IW operations can be particularly challenging for human operators due to waterway geometry and potential high traffic densities, further reinforcing the potential benefits of employing ASVs in inland operations.
There are different approaches to categorizing the level of autonomy of autonomous ships (Ringbom, 2019). In this paper, we adopt the definition from Vagale et al. (2021, p. 1296), which defines an ASV as _"a vessel capable of making decisions and operating independently, without human guidance, navigation, and control"_. An ASV has to continuously generate a path which it can subsequently follow. The literature distinguishes between global (GPP) and local path planning, where GPP addresses the static problem of defining a plan for the entire voyage ignoring kinematic and dynamic constraints, while LPP is an ongoing process based on real-time information to generate a feasible local path (Siegwart et al., 2011; Vagale et al., 2021). Path following describes the task of following a pre-determined (local) path without considering the visitation time of a particular waypoint (Fossen, 2021). As stated in the beginning, in this paper, we focus on LPP and PF. We refer to the controlled ASV as the _own ship_ and to its surrounding ships as _target ships_.
In recent years, the maritime literature has started to embrace the advancements in artificial intelligence (Munim et al., 2020; Zhuge et al., 2023). One notable approach is deep reinforcement learning (DRL, Matsuo et al. 2022), which combines reinforcement learning (RL, Sutton and Barto 2018) with deep neural networks. DRL has showcased remarkable success across challenging application domains (Vinyals et al., 2019; Ibarz et al., 2021; Bellemare et al., 2020; Silver et al., 2018; Wu et al., 2023), including maritime control tasks (Heiberg et al., 2022; Hart et al., 2023). In RL, an agent acquires knowledge by engaging in iterative interactions with an environment, where it learns to make decisions through trial-and-error. For instance, in the context of LPP, the agent represents the own ship and is required to adapt its heading based on the presence of nearby target ships and the geometric characteristics of the waterway. By defining an appropriate feedback mechanism, known as a reward, the agent receives penalties for causing collisions or running aground. For a detailed understanding and mathematical derivations of RL algorithms, we refer to the works of Sutton and Barto (2018) and Szepesvari (2010).
There have been recent attempts to perform maritime path planning based on DRL. The first major study on this topic was Cheng and Zhang (2018), in which a concise DRL algorithm for static obstacle avoidance was proposed. Heiberg et al. (2022) designs a DRL agent using Proximal Policy Optimization (Schulman et al., 2017) while relying on advanced collision risk assessment theory. Xu et al. (2022) apply the Deep Deterministic Policy Gradient (DDPG, Lillicrap et al. 2015) with a modified experience replay mechanism to tackle a path planning and collision avoidance (COLAV) task. A similar proposal has been made in Zhai et al. (2022), where the authors build on the popular Deep \(Q\)-Network (DQN) of Mnih et al. (2015) to construct a COLAV algorithm. Finally, Waltz and Okhrin (2023) outline a spatial-temporal recurrent network architecture for the DQN in maritime LPP. The architecture effectively handles an arbitrary number of target ships and is robust to partial observability. Further contributions for DRL-based approaches to maritime path planning include Chun et al. (2021), Li et al. (2021), Meyer et al. (2020), and Guo et al. (2020).
However, these studies primarily focus on path planning for open waters and do not consider the unique challenges associated with IW operations. These challenges include navigating narrow waterways, accounting for water depth effects on marine craft dynamics, and dealing with a potentially large number of target ships (Cao et al., 2022). To the best of our knowledge, only
Vanneste et al. (2022) have explored the application of DRL to LPP on IWs, albeit limited to static obstacles without addressing the dynamic COLAV problem. Furthermore, most existing studies overlook the impact of environmental disturbances such as winds, waters, and waves, which are critical factors in complex maritime operations (Faltinsen, 1993; Fossen, 2021).
To address these gaps in the literature, our study makes the following contributions:
* We introduce a two-level architecture for ASV control on IWs based on DRL. Our framework comprises separate agents dedicated to the LPP and PF tasks. The architecture takes into account environmental disturbances such as winds, waves, and currents, while also adhering to existing traffic rules and considering the geometry of the waterway.
* To accommodate continuous action spaces, we transfer the spatial-temporal recurrent network architecture proposed by Waltz and Okhrin (2023) from DQNs to actor-critic frameworks, and employ it for the LPP agent.
* To assess the effectiveness of our approach, we conduct extensive testing on both agents in a variety of challenging scenarios that are representative of the most difficult practical occurrences. In particular, we focus on advanced overtaking maneuvers and situations involving strong environmental forces. Furthermore, we validate the entire architecture by using real trajectories obtained from the automatic identification system (AIS).
* To ensure reproducibility and facilitate further research, we have made the source code for this paper publicly accessible at [https://github.com/MarWaltz/TUD_RL](https://github.com/MarWaltz/TUD_RL). In addition, we have open-sourced our trajectory extraction pipeline from AIS data at [https://github.com/nikpau/pytsa/tree/indspline](https://github.com/nikpau/pytsa/tree/indspline). By providing these resources, we aim to encourage the inclusion of real AIS data in the validation process of ASV control systems.
This paper is structured as follows: Section 2 provides additional maritime background information about traffic rules, sensor systems, actuator control, and classical path planning algorithms. The newly proposed architecture is visualized and described in section 3, while section 4 details the relevant
theory used in this work. Sections 5 and 6 contain detailed descriptions of the LPP and PF modules, respectively. The results and validation scenarios are shown in section 7, and section 8 concludes the paper.
## 2 Background and related work
### Traffic rules
While path planning is an extensively studied problem in robotics (Sicciliano et al., 2008), maritime path planning presents unique challenges due to the incorporation of traffic rules specific to each waterway. The International Regulations for Preventing Collisions at Sea (COLREGs, International Maritime Organization 1972) form the basis of these rules, governing the required behavior of ships during encounters at sea. However, one significant limitation of these regulations is their lack of precise metrics. Formulated in the 1970s, when autonomous systems did not exist, the rules rely on verbal descriptions based on principles of good seamanship (Singh et al., 2018; Heiberg et al., 2022). For example, COLAV actions should "be large enough to be readily apparent to another vessel" (Rule 8 of International Maritime Organization 1972), and vessels should "take early and substantial action to keep well clear" (Rule 16).
Additionally, national regulations exist that further define the traffic rules for specific waters. As a use case, we focus in this paper on the lower part of the Elbe river in northern Germany, where the relevant regulation is the _Seeschiffdahtrtsstrafsen-Ordnung_ of the Bundesministerium fur Digitales und Verkehr (1998). Of particular importance to us are two specific rules within this regulation. First, overtaking should be conducted on the portside of the vessel being overtaken. Second, the vessel being overtaken should facilitate the overtaking maneuver as much as possible. For the convenience of the reader, the exact wording of the regulation is provided in Appendix A.
### ASV sensors
LPP relies on navigational information on the vessel's current state, including position, velocity, accelerations, and environmental forces such as current and wind speed. These quantities stem from various sensors such as radar, LIDAR, sonar, visual sensors, infrared sensors, inertial measurement units, or the global positioning system (Liu et al., 2016). However, not every seagoing vessel is necessarily equipped with each of these sensors, and the
sensor data might be noisy or erroneous and requires advanced state estimation techniques (Lefeber et al., 2003; Motwani et al., 2013). Moreover, data about target ships is received via AIS, mandatory equipment since the end of 2004 for all cargo ships of certain sizes and all passenger ships (International Maritime Organization, 2023). In addition to the sensor-related challenges, practical aspects such as environmental disturbances, non-cooperative target ships, the interaction between ASVs and human-operated ships, and cyber-security threats must be considered before deploying an ASV in real-world scenarios (Akdag et al., 2022).
### Actuator control
The PF task describes the derivation of low-level control commands from a given local path. First, the controller, sometimes called _autopilot_, needs to derive a guidance law from the received set of waypoints, which is typically a vector-field guidance (VFG, Nelson et al. 2007) or a line-of-sight guidance law (Fossen et al., 2003). The functionality of these mechanisms will be detailed in subsection 4.2. Once the directional awareness is established through the guidance law, the next step is to convert it into a low-level actuator command that minimizes spatial angular and deviation from the desired path and the guidance signal. These actuator commands are typically a propeller or rudder adjustment to control the vessel's speed and course, respectively. In this paper, we focus on using rudder adjustments as the primary actuator command. Steering changes are preferred over speed adjustments due to fuel considerations and better visual and radar observability of course changes by other ships (Wang et al., 2017).
Various control algorithms can be employed to translate the guidance signal into an actuator command. Arguably one of the most popular techniques is the classic proportional-integral-derivative (PID) controller (Paramesh and Rajendran, 2021), which will also serve as a benchmark for our PF module. Further methodologies used in the literature include \(H_{\infty}\) control (Donha et al., 1998), linear quadratic Gaussian control (Sharma et al., 2012), model predictive control (Annamalai et al., 2015), sliding motion control (Liu et al., 2018), or backstepping control (Zhang et al., 2017), to mention a few. In recent work, Paulig and Okhrin (2023) applied the methodology proposed in Waltz and Okhrin (2022) to design a DRL agent for PF based on VFG and showed strong performance on different paths under varying influence of currents. Their work serves as a motivation for our PF module. Additionally,
we consider the impact of waves and winds on the vessel, providing a more comprehensive and robust autopilot system.
### Path planning algorithms
Tam et al. (2009) provide historical perspectives on the development of maritime path planning algorithms, while recent reviews are available in Vagale et al. (2021), Ozturk et al. (2022), and Yu et al. (2023). The most important methodological approaches in the field include evolutionary algorithms (Tam and Bucknall, 2010), velocity obstacle methods (Kuwata et al., 2013), model predictive control (Johansen et al., 2016), artificial potential field methods (Lyu and Yin, 2019), rapidly-exploring random trees (Zhang et al., 2023), dynamic-window approaches (Serigstad et al., 2018), fast marching methods (Liu and Bucknall, 2015), and particle swarm optimization (Ding et al., 2018). These approaches focus primarily on LPP tasks, although some studies consider GPP or perform overlapping tasks.
Moreover, these works are primarily concerned with ASVs operating on the ocean. In contrast to the extensive literature on classical algorithms for ASVs in open waters, there is a notable scarcity of work done on ASVs operating in IWs. Zhang et al. (2023) tackle this gap by employing an anisotropic fast marching algorithm for ASVs, with a particular emphasis on navigating through restricted areas near bridges. Chen et al. (2016) compare the performance of the heuristic search algorithm A* (Hart et al., 1968) and its various extensions for GPP in Dutch IWs. Similar to the concepts introduced in Lyu and Yin (2018), Gan et al. (2022) develop a planning algorithm for inland rivers that incorporates the safety potential field theory to account for static and dynamic obstacles and waterway bank walls. Finally, Cao et al. (2022) detail a modification of the RRT algorithm to perform path planning on the Yangtze River Channel in Zhenjiang.
## 3 Proposed architecture
Figure 1 visualizes the proposed architecture for ASV control on inland waterways. The vessel controller comprises three key components: a GPP module, an LPP module, and a PF module. Two DRL agents are involved: a high-level agent at the core of the LPP module and a low-level agent for the PF task. The computational flow of the architecture is as follows: The global path generated by the GPP module serves as input for the LPP module, which in turn produces a local path. The PF module processes this local
path and generates an actuator control command. Note that we leave the algorithm or heuristic employed in the GPP module unspecified, although A* or its extensions are reasonable choices (Chen et al., 2016; Singh et al., 2018).
Describing the procedure in more detail, the LPP module is activated every \(t_{\text{replan}}\) seconds to generate a new local plan based on existing information. In our simulation, we set \(t_{\text{replan}}=30\,\mathrm{s}\), although this parameter can be freely chosen depending on the geometrical difficulty of the waterway segment or the traffic density. To optimize computation time, the local path is replanned only when target ships are near the own ship. Otherwise, a simple
Figure 1: The proposed architecture for an ASV based on DRL, with the visualization being inspired by Chen et al. (2016).
linear local path can be constructed to return to the global path, which is illustrated in Figure 2.
When the LPP unit is activated, the DRL agent within it processes information from two sources: a cross-track and course error signal, indicating deviation from the global path, and navigational data from the sensor suite. The sensor data includes the positions and velocities of target ships as well as information about the waterway's geometry. Based on this information, the DRL agent generates a heading change command for the own ship, and the positions and velocities of all vessels are updated according to the vessel's model detailed in Section 4. The own ship's position is stored as a waypoint of the local path, and the error signals to the global path are updated. This process is repeated until a sufficient number of waypoints has been generated. We emphasize that the LPP procedure always occurs in simulation, even when deployed on a real ship, as path planning does not involve any actuator control.
For simplicity, control commands for the target ships are not incorporated during these planning iterations. Therefore, the LPP agent assumes that
Figure 2: The left plot visualizes the functionality of the framework in an overtaking scenario. The own ship recognizes the target ship in front and plans a safe local path accordingly. In the right plot, we visualize how the local path planning procedure can be simplified if no target ships are present. In this case, it suffices to select a waypoint of the global path and linearly connect it to the own ship’s position.
all target ships move linearly and maintain a constant course and speed, regardless of their actual behavior. In practice, this simplification can be replaced with a more sophisticated intent estimation or trajectory prediction unit (Huang et al., 2020; Ma et al., 2021), which can be seamlessly integrated into the architecture.
Finally, the PF module receives the local path and calculates the cross-track and course error using the VFG approach. Combined with velocity information and observations of environmental forces, an actuator command is generated to navigate the vessel along the desired path.
## 4 Theory
### Vessel dynamics
In this study, we employ the 3-degree-of-freedom Maneuvering Modeling Group (MMG) model introduced by Yasukawa and Yoshimura (2015) to simulate the dynamic behaviour of a 1:5 scale replica of the KVLCC2 tanker, a vessel commonly used in maritime operations. The principal particulars of the tanker can be found in Paulig and Okhrin (2023). Two coordinate systems are considered in our research, which are illustrated in Figure 3. The first system, denoted as \(\{n\}\), follows the _North-East-Down_ convention. It describes the navigational status of the vessel using the vector \(\eta=(x_{n},y_{n},\psi)^{\top}\). The variables \(x_{n}\) and \(y_{n}\) represent the north and east positions of the vessel, respectively, relative to the origin point \(o_{n}\). The variable \(\psi\) is the heading angle of the ship and represents the angle between the \(x_{n}\)-axis and the \(x_{b}\)-axis of the second coordinate system, denoted as \(\{b\}\). The \(\{b\}\) reference frame is a _body-fixed_ frame that is centered at the midship position of the vessel. Within this coordinate system, the longitudinal axis is denoted as \(x_{b}\), and the transversal axis is denoted as \(y_{b}\). The velocity of the vessel is described by the vector \(\nu=(u,v,\tilde{r})^{\top}\), consisting of the surge velocity \(u\), sway velocity \(v\), and yaw rate \(\tilde{r}\). Following Fossen (2021), the total speed of the vessel is \(U=\sqrt{u^{2}+v^{2}}\) and the course angle is \(\chi=\psi+\arctan(v/u)\).
Unlike the majority of prior studies in the field of ASVs (Heiberg et al., 2022; Fan et al., 2022; Zhai et al., 2022), our research incorporates environmental forces, namely winds, waves, and currents, into our simulations. These forces play a critical role in real-life marine operations. Consequently,
the vessel's movement is governed by the following set of equations:
\[\begin{split}&(m+m_{x_{b}})\dot{u}-(m+m_{y_{b}})v\tilde{r}-x_{G}m \tilde{r}^{2}=X=X_{H}+X_{R}+X_{P}+X_{WI}+X_{WA},\\ &(m+m_{y_{b}})\dot{v}+(m+m_{x_{b}})u\tilde{r}+x_{G}m\dot{\tilde{r} }=Y=Y_{H}+Y_{R}+Y_{WI}+Y_{WA},\\ &(I_{zG}+x_{G}^{2}m+J_{z})\dot{\tilde{r}}+x_{G}m(\dot{v}+u\tilde{ r})=N_{m}=N_{H}+N_{R}+N_{WI}+N_{WA},\end{split} \tag{1}\]
where \(X\) represents the surge force, \(Y\) denotes the lateral force, and \(N_{m}\) signifies the yaw moment around the midship. The first order derivative of a variable \(x\) with respect to time is denoted \(\dot{x}\). The variables \(m\), \(m_{x_{b}}\), and \(m_{y_{b}}\) represent the mass of the ASV, the added masses in the \(x_{b}\) and \(y_{b}\) directions, respectively. Additionally, \(x_{G}\) represents the longitudinal coordinate of the center of gravity in \(\{b\}\), \(I_{zG}\) represents the moment of inertia, and \(J_{z}\) represents the added moment of inertia. The set of equations (1) encompasses five distinct force components: hull (H), rudder (R), propeller (P), wind (WI), and wave (WA). It is important to note that the propeller component only affects the surge force. The equations for the hull, rudder, and propeller components are derived and explained in detail in Yasukawa and Yoshimura (2015), which also provides relevant hydrodynamic derivatives and further parameter values. Importantly, the rudder components \(X_{R}\), \(Y_{R}\), and \(N_{R}\) depend on the rudder angle \(\delta\), which is used to control the ship.
To compute the wind forces depending on the wind speed \(V_{wi}\) and wind
Figure 3: Coordinate systems considered in this work; see Yasukawa and Yoshimura (2015).
angle \(\beta_{wi}\), we use the wind coefficient approximation for symmetrical ships described in Fossen (2021, Chapter 10.1). Regarding wave forces and moments, we follow Taimuri et al. (2020) and Sakamoto and Baba (1986) to compute the respective quantities depending on the wave amplitude \(\zeta_{wa}\), wave angle \(\beta_{wa}\), wave period \(T_{wa}\), and wave length \(\lambda_{wa}\). The most significant environmental force for vessels operating on IWs are currents, represented by the current speed \(V_{c}\) and the current angle \(\beta_{c}\) in the global frame \(\{n\}\). Following Fossen (2021, Chapter 6.7), these are not considered via an additional force or moment term in (1) but by directly specifying the velocity vector of the vessel relative to the currents. Consequently, the total speed of the ship in the presence of currents becomes \(\sqrt{(u-u_{c})^{2}+(v-v_{c})^{2}}\), where \(u_{c}\) and \(v_{c}\) are the longitudinal and lateral components of the currents in \(\{b\}\), respectively.
Furthermore, another crucial yet often neglected environmental characteristic for vessels operating on IWs is the influence of the water depth \(H\) on the vessel dynamics. To account for this circumstance, we follow Taimuri et al. (2020) and consider the effects of shallow waters on wake fraction, thrust deduction, and flow-straightening coefficient as proposed by Amin and Hasegawa (2010), while the hydrodynamic derivatives in shallow waters are approximated following Kijima and Nakiri (1990) and Ankudinov et al. (1990).
In our simulation, we discretize the dynamics in (1) using a step size of 5 seconds and use the ballistic method of Treiber and Kanagaraj (2015) to update the velocity and the position vector of the vessel. Throughout the paper, we refer to a variable \(x\) at time \(t\) as \(x_{t}\).
### Vector-field guidance
Both our LPP and PF units use vector-field guidance (VFG) signals to ensure accurate path tracking. The concept of VFG originated from unmanned aerial vehicle control (Nelson et al., 2007) and is visually represented in Figure 4. The objective is to establish a vector field that guides the vessel back to its intended path, with the steepness of the field determined by a proportional gain parameter, denoted as \(k>0\). When given two consecutive waypoints, \(P_{k}\) and \(P_{k+1}\), the angle from \(P_{k}\) to \(P_{k+1}\) in the global frame \(\{n\}\) is denoted as \(\chi_{P_{k}}\). The vessel's cross-track and along-track errors are denoted as \(y_{e}\) and \(x_{e}\) respectively. Computation details for these quantities can be found in Fossen (2021, Chapter 12.4).
Based on this information, the VFG method defines the desired course of the vessel as \(\chi_{d}=\chi_{P_{k}}-\chi^{\infty}\frac{2}{\pi}\arctan(k\cdot y_{e})\), where \(\chi^{\infty}\in\left(0,\frac{\pi}{2}\right]\). In this study, we set \(\chi^{\infty}=\frac{\pi}{2}\), effectively reducing the guidance mechanism to a proportional line-of-sight guidance law (Fossen and Pettersen, 2014). The formulation introduces a course error, \(\chi_{e}=\chi_{d}-\chi\), which represents the deviation between the desired course and the actual course of the ship, \(\chi\). It is worth noting that sudden changes in the desired course may occur when a waypoint is passed. To address this issue, we adopt the approach proposed by Paulig and Okhrin (2023) and redefine the desired course as follows:
\[\chi_{d}=\left\{\left[1-\frac{x_{e}}{d(P_{k},P_{k+1})}\right]\chi_{P_{k}}+ \frac{x_{e}}{d(P_{k},P_{k+1})}\chi_{P_{k+1}}\right\}-\arctan(k\cdot y_{e}),\]
where \(d(P_{k},P_{k+1})\) represents the Euclidean distance between waypoints \(P_{k}\) and \(P_{k+1}\). This modification allows for a weighted transition between the current and subsequent path segments, resulting in smoother course adjustments.
### Collision risk assessment
Estimating the collision risk with nearby target ships is a core task in maritime guidance, navigation, and control systems (Ozturk and Cicek, 2019). In the maritime literature, two concepts hold special importance. The first
Figure 4: Visualization of the induced vector field; inspired by Paulig and Okhrin (2023).
concept, the _ship domain_, was originally introduced by Toyoda and Fujii (1971) and specifies a safe area around a vessel that should not be entered by any other ship. While various shapes of ship domains have been explored in the literature (Szlapczynski and Szlapczynska, 2017), we adopt a symmetric domain configuration that allows for one ship length of space in front of the USV's bow, and one ship width to the USV's starboard, stern, and port side. Furthermore, we define a collision event when the midship position of a target ship is at or inside our own ship's ship domain.
The second concept is the _closest point of approach_ (CPA) (Lenart, 1983), which includes the measures distance (DCPA) and time (TCPA) to the closest point of approach. The CPA quantifies the criticality of a scenario by assuming that both the own ship and the target ship maintain their speed and course. Several recent studies have built upon these two concepts and defined specific collision risk metrics to assess the severity of a situation (Mou et al., 2010; Ha et al., 2021; Waltz and Okhrin, 2023).
### Reinforcement learning
Reinforcement learning is a fundamental pillar of artificial intelligence that involves an agent learning through iterative trial-and-error interactions with an environment (Sutton and Barto, 2018). Its roots can be traced back to dynamic programming, which is why RL is sometimes referred to as _approximate dynamic programming_(Bertsekas, 2019). In this learning paradigm, the agent's interactions with the environment are formalized as a Markov decision process (MDP) (Puterman, 2014), represented by a tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\). Here, \(\mathcal{S}\) denotes the state space of the system, \(\mathcal{A}\) represents the action space, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the state transition probability function, \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is a bounded reward function, and \(\gamma\in[0,1)\) is a discount factor that balances the importance of immediate and future rewards. The agent's objective within this framework is to discover a policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) that maximizes the expected cumulative discounted reward. We have specified the degenerated case where \(\pi\) is deterministic, mapping each state \(s\in\mathcal{S}\) to a particular action \(a\in\mathcal{A}\), as the algorithms we consider are based on deterministic policies. In more generality, policies are represented by probability distributions over available actions for a given state.
In practical applications, fully-observed systems, where the agent has access to the complete state vector, are uncommon due to sensor noise, delays, or other disturbances (Meng et al., 2021). Recognizing this reality, we extend the MDP formalism to encompass partially-observable Markov decision
processes (POMDPs) (Kaelbling et al., 1998) by introducing an observation space \(\mathcal{O}\) and an observation function \(\mathcal{Z}:\mathcal{S}\times\mathcal{A}\times\mathcal{O}\rightarrow[0,1]\). Consequently, at each time step \(t\), the system resides in a state \(s_{t}\in\mathcal{S}\), the agent receives an observation \(o_{t}\in\mathcal{O}\) generated according to \(\mathcal{Z}\), and then selects an action \(a_{t}\in\mathcal{A}\). The system transitions to the next state \(s_{t+1}\in\mathcal{S}\) based on \(\mathcal{P}\) and provides a reward \(r_{t}\) determined by \(\mathcal{R}\). In our PF module, the observation vector includes the deviation to the local path and the currently acting environmental forces, while the action is a change in the rudder angle.
In this study, we use the Long-Short-Term-Memory-based Twin Delayed Deep Deterministic Policy Gradient (LSTM-TD3) algorithm proposed by Meng et al. (2021) for both agents. Building upon prior work by Lillicrap et al. (2015) and Fujimoto et al. (2018), the LSTM-TD3 is an actor-critic algorithm with two critics, in which the actor specifies the policy and the critics evaluate the actor's choice. The algorithm incorporates LSTM layers (Hochreiter and Schmidhuber, 1997), offering a robust solution to address the challenges posed by partial observability. Throughout the paper, we set the history length of the algorithm to \(h=2\), processing the last two time steps in addition to the current one.
## 5 Local path planning module
### Configuration of the RL agent
#### 5.1.1 Observation space
The LPP agent aims to generate a reliable local plan while taking into account surrounding target ships, traffic rules, and the waterway geometry. This module does not consider environmental forces such as winds, waves, and currents, as those fall under the responsibility of the PF unit.
The observation for the LPP agent at time \(t\), denoted as \(o_{t}^{\text{LPP}}\), is defined as the union of three components: \(o_{\text{OS,t}}^{\text{LPP}}\), which summarizes information about the own ship; \(o_{\text{IW,t}}^{\text{LPP}}\), which describes the navigational area; and \(o_{\text{TS,t}}^{\text{LPP}}\), which delivers information about the surrounding target ships. Specifically, we have:
\[o_{t}^{\text{LPP}}=\left(\left(o_{\text{OS,t}}^{\text{LPP}}\right)^{\top}, \left(o_{\text{IW,t}}^{\text{LPP}}\right)^{\top},\left(o_{\text{TS,t}}^{ \text{LPP}}\right)^{\top}\right)^{\top}.\]
**Own ship observation.** For the observation about the own ship, \(o_{\text{OS,t}}^{\text{LPP}}\), the
following features are included:
\[o^{\text{LPP}}_{\text{OS},t}=\left(\frac{U_{\text{OS},t}}{U_{\text{scale}}},\frac{[ \psi_{\text{OS},t}-\chi_{P_{k},t}]^{\pi}_{-\pi}}{\pi},\frac{[\chi^{\text{global }}_{e,t}]^{\pi}_{-\pi}}{\pi},\frac{y^{\text{global}}_{e,t}}{y_{\text{scale}}} \right)^{\top},\]
where \(U_{\text{OS},t}\) represents the speed of the own ship at time \(t\), \(\psi_{\text{OS},t}\) is the heading of the own ship, \(\chi_{P_{k},t}\) is the angle between the two active waypoints of the own ship (as shown in Figure 4), \(y^{\text{global}}_{e,t}\) is the cross-track error on the global path, and \(\chi^{\text{global}}_{e,t}\) is the course error derived from the VFG method. The superscript \(global\) indicates that these features are specific to the LPP agent and are computed with respect to the global path. The VFG gain parameter is set to \(k^{\text{LPP}}=0.001\). The function \([\cdot]^{a+2\pi}_{a}:\mathbb{R}\rightarrow[a,a+2\pi)\) is used to transform an angle to a desired domain; see Waltz and Okhrin (2023). Additionally, we have scaling parameters \(U_{\text{scale}}=3\,\text{m/s}\) and \(y_{\text{scale}}=64\,\text{m}\), where 64 meters corresponds to one length between perpendiculars (\(L_{pp}\)) of the downscaled KVLCC2 tanker.
**Waterway observation.** Furthermore, to ensure the agent can plan without running aground, it requires information about the navigational area; see Figure 5. We construct a vector, denoted as \(o^{\text{LPP}}_{\text{IW},t}\), which contains proximities to the navigational boundary in 10 different directions:
\[o^{\text{LPP}}_{\text{IW},t}=\left(1-\frac{d^{H}(\gamma_{1})}{d^{H}_{\text{ scale}}},\ldots,1-\frac{d^{H}(\gamma_{10})}{d^{H}_{\text{scale}}}\right)^{\top}. \tag{2}\]
The value \(\gamma_{i}\) represents the \(i\)-th component, where \(i=1,\ldots,10\), in radians of the vector with degree values \((0,20,45,90,135,180,225,270,315,340)^{\top}\). We carefully selected these angles to provide the agent with a comprehensive view of its surroundings while avoiding the need for a high-dimensional feature vector that would require specifying angles, for example, for every degree around the clock. The function \(d^{H}:[0,2\pi)\rightarrow[0,d^{H}_{\text{scale}}]\) calculates the distance to either the coastline or the global path of the opposing traffic for a given angle.
During implementation, we examine for each angle up to 50 logarithmically scaled distances within a maximum range of \(d^{H}_{\text{scale}}=1\,\text{NM}\), leading to a normalization of the fractions in (2) to the interval \([0,1]\). We determine if a particular point is below the required water depth for the tanker or if it lies on the opposite side of the opposing traffic's path. Additionally, we follow
the approach in Heiberg et al. (2022) by reversing the signs of the fractions to provide the agent with information about the proximity rather than the distance to either the coastline or the opposing path.
**Target ship observation.** For the observation of the \(i\)-th target ship at time \(t\), denoted as \(o_{\text{TS},i,t}\), the following features are considered:
\[o_{\text{TS},i,t}=\left(\frac{d^{i}_{\text{OS},t}-D(\alpha^{i}_{\text{OS},t})} {d_{\text{scale}}},\,\frac{[\alpha^{i}_{\text{OS},t}]_{-\pi}^{\pi}}{\pi},\frac {[\psi_{i,t}-\chi_{P_{k},t}]_{-\pi}^{\pi}}{\pi},\frac{U_{i,t}-U_{\text{OS},t}}{ U_{\text{scale}}},\sigma_{i,t},\frac{t^{\text{cpa}}_{i,t}}{t_{\text{norm}}}, \frac{d^{\text{cpa},*}_{i,t}}{d_{\text{norm}}}\right)^{\top}.\]
Here, \(d^{i}_{\text{OS},t}\) represents the Euclidean distance between the own ship and the \(i\)-th target ship, \(\alpha^{i}_{\text{OS},t}\) is the relative bearing of ship \(i\) from the perspective of the own ship, \(\psi_{i,t}\) and \(U_{i,t}\) are the heading and speed of the target ship, respectively. The function \(D:[0,2\pi)\rightarrow\mathbb{R}\) computes the ship domain around the own ship for a certain angle. Thus, the distance between the target ship's midpoint and the own ship's ship domain is provided to the agent, directly
Figure 5: Visualization of \(o^{\text{LPP}}_{\text{IW},t}\), representing the LPP units’ awareness of the geometry of the waterway. The reversed global path is the path used to generate opposing traffic.
reflecting our definition of a collision event. The binary variable \(\sigma_{i,t}\) indicates whether the other vessel is traveling in the same direction as the own ship and is defined as follows:
\[\sigma_{i,t}=\begin{cases}-1&\text{if}\quad|[\psi_{i,t}-\psi_{OS,t}]_{-\pi}^{ \pi}|\geq\pi/2,\\ 1&\text{else}.\end{cases}\]
The collision risk with the other ship is captured by \(t_{i,t}^{\text{cpa}}\), the TCPA between the own ship and ship \(i\), and the variable \(d_{i,t}^{\text{cpa},*}\). The latter follows Waltz and Okhrin (2023) and is defined as \(d_{i,t}^{\text{cpa},*}=\max\left[0,d_{i,t}^{\text{cpa}}-D(\alpha_{\text{OS},t}^ {i,\text{cpa}})\right]\), where \(d_{i,t}^{\text{cpa}}\) is the regular DCPA and \(\alpha_{\text{OS},t}^{i,\text{cpa}}\) is the relative bearing of the ship \(i\) from the perspective of the own ship at the closest point of approach. Thus, we consider the ship domain in the computation of the DCPA metric similar to as we do it for the present distance to the target ship. The scaling parameters \(t_{\text{norm}}=300\,\text{s}\) and \(d_{\text{norm}}=0.25\,\text{NM}\) are used.
The complete target ship observation vector is constructed as:
\[o_{\text{TS},t}^{\text{LPP}}=\left(\left(o_{\text{TS},1,t}\right)^{\top}, \ldots,\left(o_{\text{TS},N_{t},t}\right)^{\top}\right)^{\top},\]
where \(N_{t}\) is the number of present target ships at time step \(t\). The target ships within this vector are sorted in descending order based on their distance to the own ship. Furthermore, a target ship is only included if its distance to the own ship is less than \(d_{\text{scale}}=0.5\,\text{NM}\), which is considered a reasonable range for practical operations on IWs.
The number of surrounding ships, \(N_{t}\), can vary with time, posing a challenge for standard feed-forward neural networks that require a fixed input size. To address this issue, we adapt the spatial-temporal recurrent neural network approach proposed by Waltz and Okhrin (2023) to our continuous action space, thereby enabling to loop over surrounding target ships in both the actor and the two critics of the LSTM-TD3 algorithm. The architecture of the actor is identical to the one in Waltz and Okhrin (2023), except that we apply a tanh activation at the last layer to yield actions in \([-1,1]^{|\mathcal{A}|}\), where \(|\mathcal{A}|\) is the cardinality of the action space, which will be detailed in the following section. The critic architecture also follows Waltz and Okhrin (2023), except that we concatenate the output of the temporal LSTM with the actions provided by the actor. With this approach, our LPP agent is robust to partial observability and can adjust to situations with an arbitrary
number of target ships.
Additionally, if there are no target ships present, we artificially create a no-risk ship with \(o_{\mathrm{TS,1,t}}=(1,-1,1,-1,1,-1,1)^{\top}\) to fulfill the requirement of having at least one target ship for the recurrence loop in the network architecture.
#### 5.1.2 Action space
We define a one-dimensional action space (\(|\mathcal{A}|=1\)) for the LPP unit, which represents a change in the own ship's heading. At time step \(t\), the agent generates an action \(a_{t}^{\mathrm{LPP}}\in[-1,1]\) that is used to update the own ship's heading \(\psi_{\mathrm{OS,}t}\) according to the following formula:
\[\psi_{\mathrm{OS,}t+1}=\psi_{\mathrm{OS,}t}+a_{t}^{\mathrm{LPP}}\cdot a^{ \mathrm{LPP}},\]
where \(a^{\mathrm{LPP}}=10^{\circ}\). To ensure realistic and feasible paths, we only apply the agent's action every four time steps. Given our simulation step size of 5 seconds, this means that the planner can adjust the heading by a maximum of ten degrees every 20 seconds.
#### 5.1.3 Reward function
Designing an appropriate reward function is crucial in RL applications as it provides feedback to the agent's actions. By incorporating insights from Waltz and Okhrin (2023) and Paulig and Okhrin (2023), we have identified five reward components that facilitate LPP on IWs. These components address global-path following, collision avoidance, traffic rule adherence, and comfort considerations.
The first two components focus on following the global path. We introduce a cross-track error-based reward, denoted as \(r_{y_{e},t}^{\mathrm{LPP}}\), and a course error-based component, denoted as \(r_{\chi_{e},t}^{\mathrm{LPP}}\). They are defined as follows:
\[r_{y_{e},t}^{\mathrm{LPP}}=\exp\left(-k_{y_{e}}^{\mathrm{LPP}}\cdot\frac{|y_{e,t}^{\mathrm{global}}|}{y_{e,\mathrm{norm}}}\right),\quad r_{\chi_{e},t}^{ \mathrm{LPP}}=\exp\left(-k_{\chi_{e}}^{\mathrm{LPP}}\cdot|[\chi_{e,t}^{ \mathrm{global}}]_{-\pi}^{\pi}|\right),\]
where the power weights are \(k_{y_{e}}^{\mathrm{LPP}}=2\) and \(k_{\chi_{e}}^{\mathrm{LPP}}=4\), and the normalization length is \(y_{e,\mathrm{norm}}=128\,\mathrm{m}\), which corresponds to \(2L_{pp}\) of the KVLCC2 replica ship. These values are chosen to appropriately weigh the importance of the respective error terms in the reward computation.
The third reward component, denoted as \(r^{\text{LPP}}_{\text{coll},t}\), penalizes collisions with other ships and leaving the navigational area, and is defined as follows:
\[r^{\text{LPP}}_{\text{coll},t}=k_{\text{coll}}\cdot\left(\sigma_{\text{ground},t}+ \sigma_{\text{lane},t}+\sum_{i=1}^{N_{t}}\sigma_{\text{coll},i,t}\right)-\max_{ i=1,\dots,N_{t}}f\left[\alpha^{\text{OS}}_{i,t},d^{i}_{\text{OS},t}-D\left( \alpha^{i}_{\text{OS},t}\right)\right], \tag{3}\]
where \(f:[0,2\pi)\times\mathbb{R}\rightarrow\mathbb{R}\) is a function defined as:
\[f(\alpha,d)=\exp\left\{\frac{-\left[d\cdot\sin(\alpha)\right]^{2}}{e^{2}_{ \text{norm}}}\right\}\cdot\exp\left\{\frac{-\left[d\cdot\cos(\alpha)\right]^{2 }}{n^{2}_{\text{norm}}}\right\}.\]
In (3), the binary variables \(\sigma_{\text{ground},t}\) and \(\sigma_{\text{lane},t}\) are equal to one if the ship runs aground or crosses the global path of the opposing traffic, respectively, and zero otherwise. Both situations indicate leaving the navigational area and are therefore considered collision events. The variable \(\sigma_{\text{coll},i,t}\) is also a binary variable that takes the value one if the own ship has a collision with target ship \(i\) at time step \(t\), and zero otherwise. The collision weight \(k_{\text{coll}}\) is set to \(-10\).
Equation (3) involves two distinct angles: \(\alpha^{\text{OS}}_{i,t}\), which represents the relative bearing of the own ship from the perspective of target ship \(i\), and \(\alpha^{i}_{\text{OS},t}\), which represents the relative bearing of target ship \(i\) from the perspective of the own ship. The function \(f(\cdot)\) defines an elliptical collision reward around the target ship, where \(e_{\text{norm}}=3B\) and \(n_{\text{norm}}=1L_{pp}\), with \(B\) being the width of the ship. This design encourages the agent to maintain a larger longitudinal distance from the target ship while allowing for smaller lateral distances, as the latter are necessary for overtaking maneuvers. In contrast to Waltz and Okhrin (2023), we use the maximum operator instead of the sum operator in (3). This change provides the advantage of making \(r_{\text{coll},t}\) relatively robust to the number of target ships \(N_{t}\). We have observed significantly improved training results with this modification.
The fourth reward component, \(r^{\text{LPP}}_{\text{rule},t}\), concerns the compliance with traffic rules, and is defined as \(r^{\text{LPP}}_{\text{rule},t}=k_{\text{rule}}\cdot\sum_{i=1}^{N_{t}}\sigma_{ \text{rule},i,t}\). We set the constant to \(k_{\text{rule}}=-2\), while the binary variable \(\sigma_{\text{rule},i,t}\) takes value one if at time step \(t\) a traffic rule with respect to target ship \(i\) is violated. More precisely, we
define:
\[\sigma_{\text{rule,i,t}}=\begin{cases}1&\text{if}\quad\left\{d_{\text{OS,t}}^{i} \leq g(\alpha_{i,t}^{\text{OS}})\right\}\wedge\left\{|[\psi_{i,t}-\psi_{OS,t}]_{ -\pi}^{\pi}|<\frac{\pi}{2}\right\}\\ &\wedge\left\{\left[(U_{\text{OS,t}}>U_{i,t})\wedge\left(\frac{\pi}{2}\leq \alpha_{i,t}^{\text{OS}}\leq\pi\right)\right]\vee\left[\sigma_{\text{spd,t}} \wedge\left(\frac{3}{2}\pi\leq\alpha_{i,t}^{\text{OS}}<2\pi\right)\right]\right\} \\ 0&\text{else.}\end{cases} \tag{4}\]
In (4), the function \(g:[0,2\pi)\rightarrow\mathbb{R}\) calculates a bearing-dependent distance to determine if the target ship is within a range that requires consideration of traffic rules. Specifically, we select a lateral distance of \(0.25\,\mathrm{NM}\) and a longitudinal distance of \(0.5\,\mathrm{NM}\) from the target ship, and \(g(\cdot)\) is defined to linearly interpolate between the four resulting corner points. The second term in the condition of (4) checks if the target ship is moving in the same direction as the own ship. Finally, the third condition reflects the two rules we enforce during simulation. Firstly, other ships should be overtaken on their port side, which corresponds to SS23 (1) of Bundesministerium fur Digitales und Verkehr (1998). Secondly, if the own ship is being overtaken by another vessel, it should facilitate the maneuver by making space for the target ship, representing SS23 (2) of the regulation. To account for this, we introduce the variable \(\sigma_{\text{spd},t}\), which takes the value one if all target ships traveling in the same direction as the own ship are faster than the own ship, and zero otherwise.
The fifth component is a comfort reward, \(r_{\text{comf},\text{t}}^{\text{LPP}}\), and is defined as \(r_{\text{comf},\text{t}}^{\text{LPP}}=-\left(a_{t}^{\text{LPP}}\right)^{2}\). It should prevent the agent from frequently selecting large heading changes, resulting in a stable and smooth local path. Finally, we aggregate all five reward components into a single scalar via:
\[r_{t}^{\text{LPP}}=r_{y_{e},t}^{\text{LPP}}\omega_{y_{e}}^{\text{LPP}}+r_{ \chi_{e},t}^{\text{LPP}}\omega_{\chi_{e}}^{\text{LPP}}+r_{\text{coll,t}}^{ \text{LPP}}\omega_{\text{coll}}^{\text{LPP}}+r_{\text{rule,t}}^{\text{LPP}} \omega_{\text{rule}}^{\text{LPP}}+r_{\text{comf},\text{t}}^{\text{LPP}} \omega_{\text{comf}}^{\text{LPP}},\]
where we experimentally determined the weights as follows: \(\omega_{y_{e}}^{\text{LPP}}=\frac{4}{19}\approx 0.211\), \(\omega_{\chi_{e}}^{\text{LPP}}=\frac{1}{19}\approx 0.053\), \(\omega_{\text{coll}}=\omega_{\text{rule}}^{\text{LPP}}=\frac{6}{19}\approx 0.316\), and \(\omega_{\text{comf}}^{\text{LPP}}=\frac{2}{19}\approx 0.105\).
### Training environment
#### 5.2.1 Waterway generation
To establish the simulation environment, we first sample a global path by interchanging straight and curved segments following the method described in Fossen (2021, Chapter 12). In the following, we denote a uniform distribution with support \([a,b]\) as \(\mathcal{U}(a,b)\), an integer-valued discrete uniform
distribution with support \([a,b]\) as \(\mathcal{DU}(a,b)\), and an exponential distribution with expectation \(\beta\) as \(Exp(\beta)\). The length of each straight river interval is generated according to \(400\,\mathrm{m}+\mathcal{DU}(0,32)\cdot 50\,\mathrm{m}\), and the radii of the curves are sampled from \(1000\,\mathrm{m}+\mathcal{DU}(0,4000)\cdot 1\,\mathrm{m}\). Additionally, the curve angles are generated from \(60^{\circ}+\mathcal{DU}(0,40)\cdot 1^{\circ}\). We randomly sample left or right curves, respectively. Furthermore, we construct another path by imposing a fixed offset of 200 meters to the global path. The second path is referred to as reversed global path and is responsible for generating the opposing traffic.
Once we have established the global path, we proceed to create an IW by sampling water depths around it, following Paulig and Okhrin (2023). Initially, we sample a maximum water depth in meters from \(\mathrm{clip}\left[Exp(35),20,100\right]\) with the clipping operation: \(\mathrm{clip}(x,l,u)=\min\left[u,\max(x,l)\right]\). The generated waterway has a maximum width of 500 meters. To introduce some variability to the depth data, we incorporate noise by adding realizations from \(\mathcal{U}(-2,2)\) in meters. We resample a generated waterway at the beginning of every fifth episode to keep the computational effort low.
#### 5.2.2 Target ship generation
To initialize the target ships, we incorporate insights from the recent work conducted by Hart and Okhrin (2022). Our aim is to create training scenarios that pose significant challenges to the agent, increasing the risk of collisions with other ships. In each episode, we randomly sample \(N\sim\mathcal{DU}(0,10)\) target ships, encompassing a range of speeds, including both slower and faster vessels than the own ship, as well as ships traveling in opposing directions. The base vessel speed in our simulation is \(U_{\mathrm{base}}=3\,\mathrm{m/s}\), and at each episode, we randomly sample the own ship's speed from \(\mathcal{U}(0.8,1.2)\cdot U_{\mathrm{base}}\).
To initiate target ship \(i\), where \(i=1,\ldots,N\) if \(N>0\), there is a 15% probability of sampling a faster ship with a speed of \(U_{i}\sim\mathcal{U}(1.3,1.5)\cdot U_{\mathrm{base}}\). In this case, the vessel \(i\) is initiated behind the own ship and travels in the same direction. The initial distance to the own ship along the global path, denoted \(d_{i}\), is generated via \(d_{i}\sim\mathcal{U}(0.3,0.7)\cdot 1\,\mathrm{NM}\), ensuring the vessel will create a threat for the own ship in the future. In the remaining 85% of cases, vessel \(i\) is slower with \(U_{i}\sim\mathcal{U}(0.4,0.8)\cdot U_{\mathrm{base}}\) and is initiated in front of the own ship. In this case, the vessel \(i\) is randomly assigned to travel in the same or opposing direction. When traveling in the opposing direction, we sample \(d_{i}\sim\mathcal{U}(1.1,1.9)\cdot 1\,\mathrm{NM}\) and place vessel \(i\) on the reversed global path. Otherwise, we have \(d_{i}\sim\mathcal{U}(0.3,0.7)\cdot 1\,\mathrm{NM}\).
To introduce further variability and prevent the target ships from consistently spawning exactly on their respective paths, we introduce small positional noise to the target vessel's initial positions. An episode ends if a maximum of 150 steps has been reached or if the agent is more than 0.5 NM away from the global path. A screenshot of the full environment including waterway and target ships is shown in Figure 6.
#### 5.2.3 Target ship behavior modeling
One of the key considerations in simulating AVSs is the control mechanism for the target ships. For a comprehensive overview of this topic, we refer to Zhou et al. (2019). In our study, we focus on the non-communicative
Figure 6: Screenshot of the simulation environment for the LPP agent with four target ships. The own ship is depicted in red and travels in the south-east direction in this case. The target ships are grey if they travel in the same direction as the own ship, and golden otherwise. The scale on the right refers to the water depth in meter. Note that the latitude and longitude values are artificial and serve as orientation.
scenario, where there is no explicit exchange of intentions or planned trajectories among vessels. In many existing works on ASVs, the assumption of linearly moving target ships with no course and speed changes is commonly adopted, particularly in open water scenarios (Guo et al., 2020; Fan et al., 2022; Xu et al., 2022a,b; Sawada et al., 2021). However, while trained policies based on this assumption can generalize to non-linear target ship motions, as shown by Waltz and Okhrin (2023), the linearity and non-reactiveness assumptions are unrealistic in practical scenarios, particularly in the context of IWs.
To address this limitation, we employ a simple rule-based controller for the target ships during the training of the LPP agent. The basis of this controller is the VFG method, as outlined in subsection 4.2. Additionally, the target ships are capable of performing basic overtaking maneuvers, in compliance with SS23(1) of the collision regulations outlined in Bundesministerium fur Digitales und Verkehr (1998). In head-on scenarios, they avoid collisions by executing a turn to starboard. For a detailed description of the complete controller, please refer to Algorithm 1 in Appendix B. Importantly, considering the recent work of Akdag et al. (2022), we incorporate a 20% probability of generating a non-cooperative target vessel which is solely controlled based on VFG and does not perform any COLAV maneuvers.
Finally, we recognize that the environmental design and the definition of the state, action, and reward spaces for the LPP agent depend on various constants and hyperparameters. However, as mentioned earlier, we introduce noise in all fundamental aspects of the environmental generation. This includes the initial generation of the waterway, the number of target ships, and their behavior. By incorporating this approach, we can effectively optimize for a robust policy for the LPP task. The validation scenarios in section 7 will illustrate the effectiveness of our approach.
## 6 Path following module
### Configuration of the RL agent
#### 6.1.1 Observation space
The PF unit takes as input the local path generated from the LPP module and controls the rudder angle under consideration of the environmental disturbances. The observation of the PF agent at time \(t\) is denoted as \(o_{t}^{\text{PF}}\) and consists of a component describing the status of the own ship, denoted
\(o_{\text{OS},t}^{\text{PF}}\), and a component regarding the environmental forces, denoted \(o_{\text{Env},t}^{\text{PF}}\). Thus, we have \(o_{t}^{\text{PF}}=\left(\left(o_{\text{OS},t}^{\text{PF}}\right)^{\top},\left(o_{ \text{Env},t}^{\text{PF}}\right)^{\top}\right)^{\top}\).
**Own ship observation.** The component describing the status of the own ship is defined as:
\[o_{\text{OS},t}^{\text{PF}}=\left(\frac{u_{\text{OS},t}}{u_{\text{scale}}}, \frac{v_{\text{OS},t}}{v_{\text{scale}}},\frac{\tilde{r}_{\text{OS},t}}{\tilde{r }_{\text{scale}}},\frac{\dot{\tilde{r}}_{\text{OS},t}}{\dot{\tilde{r}}_{\text{ scale}}},\frac{\delta_{\text{OS},t}}{\delta_{\text{max}}},\frac{y_{e,t}^{ \text{local}}}{y_{\text{scale}}},\frac{[\chi_{e,t}^{\text{local}}]_{-\pi}^{\pi} }{\pi}\right)^{\top}.\]
Here, \(u_{\text{OS},t}\), \(v_{\text{OS},t}\), \(\tilde{r}_{\text{OS},t}\), \(\dot{\tilde{r}}_{\text{OS},t}\), \(\delta_{\text{OS},t}\) are the surge velocity, sway velocity, yaw rate, change in yaw rate, and rudder angle of the own ship, respectively. The scaling constants are: \(u_{\text{scale}}=3\,\text{m/s}\), \(v_{\text{scale}}=0.2\,\text{m/s}\), \(\tilde{r}_{\text{scale}}=0.002\,\text{rad/s}\), \(\dot{\tilde{r}}_{\text{scale}}=8\cdot 10^{-5}\,\text{rad/s}^{2}\), and \(\delta_{\text{max}}=20^{\circ}\). The variables \(y_{e,t}^{\text{local}}\) and \(\chi_{e,t}^{\text{local}}\) are the cross-track error and the course error from the VFG method for the local path, which uses a gain parameter of \(k^{\text{PF}}=0.01\).
**Environmental force observation.** Further, we define the environmental observation component as follows:
\[o_{\text{Env},t}^{\text{PF}} =\left(\frac{V_{c,t}}{V_{\text{c,norm}}},\frac{[\beta_{c,t}-\psi_{ \text{OS},t}]_{-\pi}^{\pi}}{\pi},\frac{V_{wi,t}}{V_{wi,\text{norm}}},\frac{[ \beta_{wi,t}-\psi_{\text{OS},t}]_{-\pi}^{\pi}}{\pi},\frac{[\beta_{wa,t}-\psi_{ \text{OS},t}]_{-\pi}^{\pi}}{\pi},\right.\] \[\left.\frac{\zeta_{wa,t}}{\zeta_{wa,\text{norm}}},\frac{T_{wa,t} }{T_{wa,\text{norm}}},\frac{\lambda_{wa,t}}{\lambda_{wa,\text{norm}}},\frac{H _{t}}{H_{\text{norm}}}\right)^{\top},\]
where \(V_{c,t}\), \(\beta_{c,t}\), \(V_{wi,t}\), \(\beta_{wi,t}\) are the current and wind's speed and angle of attack, respectively, at the position of the own ship at time \(t\). Moreover, \(\beta_{wa,t}\), \(\zeta_{wa,t}\), \(T_{wa,t}\), \(\lambda_{wa,t}\), are the wave's angle, height, period, and length. Finally, we included the water depth \(H_{t}\) at the own ship's position into the observation vector since it also affects the dynamics via the shallow water corrections outline in section 4.1. The scaling parameters are set to: \(V_{c,\text{norm}}=0.5\,\text{m/s}\), \(V_{wi,\text{norm}}=15\,\text{m/s}\), \(\zeta_{wa,\text{norm}}=2\,\text{m}\), \(T_{wa,\text{norm}}=7\,\text{s}^{-1}\), and \(\lambda_{wa,\text{norm}}=H_{\text{norm}}=100\,\text{m}\).
Note that the PF module has no information about the IW geometry or the target ships and is solely responsible for tracking the given local path. We use the network architecture outlined in Hart et al. (2021) for the LSTM-TD3 algorithm used for the PF unit.
#### 6.1.2 Action space
The PF agent controls the rudder angle of the own ship. Building on the observation defined in the last subsection, the agent computes an action \(a_{t}^{\text{PF}}\in[-1,1]\) which adjusts the rudder angle as follows:
\[\delta_{\text{OS},t+1}=\text{clip}\left(\delta_{\text{OS},t}+a_{t}^{\text{PF}} \cdot a^{\text{PF}},-\delta_{\text{max}},\delta_{\text{max}}\right),\]
where the clipping operation ensures the absolute value of the rudder angle does not exceed \(\delta_{\text{max}}=20^{\circ}\). Further, we set \(a^{\text{PF}}=5^{\circ}\), which results in combination with our simulation step size of 5 seconds in realistic rudder changes. To stress again: the PF agent adjusts the rudder angle of the own ship, while the LPP agent directly modifies the heading angle.
### Reward function
The reward function for the PF unit builds on Paulig and Okhrin (2023) and consists of three components: a cross-track error reward, \(r_{y_{e},t}^{\text{PF}}\), a course error component, \(r_{\chi_{e},t}^{\text{PF}}\), and a comfort reward, \(r_{\text{comf},t}^{\text{PF}}\). Similar to the LPP agent's reward function of section 5.1.3, we define:
\[r_{y_{e},t}^{\text{PF}} =\exp\left(-k_{y_{e}}^{\text{PF}}|y_{e,t}^{\text{local}}|\right),\quad r_{\text{comf},t}^{\text{PF}}=-\left(a_{t}^{\text{PF}}\right)^{2},\] \[r_{\chi_{e},t}^{\text{PF}} =\begin{cases}k_{\text{turn}}&\text{if}\ \ |\chi_{e,t}^{\text{local}}|\geq\frac{\pi}{2}\\ \exp\left(-k_{\chi_{e}}^{\text{PF}}|\chi_{e,t}^{\text{local}}|\right)&\text{ else.}\end{cases}\]
We set the constants \(k_{y_{e}}^{\text{PF}}=0.05\), \(k_{\chi_{e}}^{\text{PF}}=5\), and \(k_{\text{turn}}=-10\). The condition with the large negative penalty for the course error is included to prevent the agent from completely turning around and following the path in the wrong direction. The reward components are aggregated to the reward for the PF agent at time \(t\), denoted \(r_{t}^{\text{PF}}\), as follows:
\[r_{t}^{\text{PF}}=r_{y_{e},t}^{\text{PF}}\omega_{y_{e}}^{\text{PF}}+r_{\chi_{ e},t}^{\text{PF}}\omega_{\chi_{e}}^{\text{PF}}+r_{\text{comf},t}^{\text{PF}} \omega_{\text{comf}}^{\text{PF}},\]
where a small grid search yielded the equal weights: \(\omega_{y_{e}}^{\text{PF}}=\omega_{\chi_{e}}^{\text{PF}}=\omega_{\text{comf}} ^{\text{PF}}=\frac{1}{3}\).
### Training environment
The IW generation for the PF unit is identical to the one described in section 5.2.1. However, in this module, we need to sample the current, wind, and wave conditions since it is the PF unit's primary responsibility
to account for these. The respective angles of attack in radians are separately sampled from \(\mathcal{U}(0,2\pi)\). Moreover, we sample a current speed from \(\operatorname{clip}\left[Exp(0.2),0,0.5\right]\cdot 1\,\mathrm{m/s}\) and a wind speed from \(\mathcal{U}(0,15)\cdot 1\,\mathrm{m/s}\). The wave height, length, and period are sampled from \(\operatorname{clip}\left[Exp(0.1),0.01,2\right]\cdot 1\,\mathrm{m}\), \(\operatorname{clip}\left[Exp(20),1,100\right]\cdot 1\,\mathrm{m}\), and \(\operatorname{clip}\left[Exp(1),0.5,7\right]\cdot 1\,\mathrm{s}^{-1}\), respectively. Furthermore, we introduce zero-mean Gaussian noise to each value when queried by the own ship, enhancing the robustness of the policy.
The termination conditions for an episode are as follows: if more than 500 steps have elapsed, if the agent strays more than 400 meters away from the local path, or if the water depth at the own ship's position becomes insufficient, rendering the shallow water approximations infeasible.
## 7 Results and validation
### Training details
The LPP and PF agents are separately trained for \(2\cdot 10^{6}\) and \(3\cdot 10^{6}\) steps, respectively. The remaining hyperparameters are the same for both agents and are outlined in Table 1. We make the source code to this study publicly available at [https://github.com/MarWaltz/TUD_RL](https://github.com/MarWaltz/TUD_RL) to ensure full reproducability. During the training process, we average every 5000 steps the test return, which is the sum of rewards, of 3 evaluation episodes. To enhance clarity, we apply exponential smoothing to these values. Additionally, we perform the experiment ten times under different random initialization of the neural networks. This enables us to compute confidence intervals that better represent the training performance, which are shown in Figure 7. The LPP unit's training plot has a larger variance due to its dependence on five distinct reward components. In contrast, the PF unit's training is less variable since path following is a more straightforward control task, minimizing spatial and angular deviation from the local path.
In the upcoming sections, we separately validate the proposed LPP and PF modules on simulation examples. Afterward, we employ the complete architecture described in section 3 in validation scenarios on real AIS data.
### Validation: Local path planning module
We have developed a comprehensive testing procedure to thoroughly evaluate the performance of the LPP unit. The procedure consists of six distinct setups, which are among the most challenging scenarios encountered on inland waterways:
1. overtaking a vessel train,
2. overtaking an overtaker,
3. overtaking under oncoming traffic,
4. overtaking an overtaker under oncoming traffic,
5. getting overtaken,
6. and navigating along static obstacles.
The inclusion of static obstacles along the ship's path is particularly important to assess the LPP agent's generalization capabilities, as the agent has not been exposed to such obstacles during training. We analyze the behavior of the DRL agent separately in each of these six setups on a straight waterway segment, a left curve, and a right curve, respectively. This results in a total of 18 different scenarios being studied. The trajectories for the straight case are visualized in Figure 8. Additionally, Figure 9 provides plots of the cross-track and course errors to the global path, the selected action by the agent, and the distance to the target ships. The results for the curved scenarios can be found in Appendix C.
\begin{table}
\begin{tabular}{l l} Hyperparameter & Value \\ \hline Batch size & 32 \\ Discount factor & 0.99 \\ History length (\(h\)) & 2 \\ Learning rate (actor) & 0.0001 \\ Learning rate (critic) & 0.0001 \\ Loss function & Mean squared error \\ Max. replay buffer size & \(5\cdot 10^{5}\) \\ Min. replay buffer size & 5,000 \\ Optimiser & Adam (Kingma and Ba, 2014) \\ Policy update delay & 2 \\ Soft update rate & 0.001 \\ Target policy smoothing noise & 0.2 \\ Target policy smoothing noise clip & 0.5 \\ \end{tabular}
\end{table}
Table 1: List of hyperparameters. For a detailed description of each parameter we refer to Fujimoto et al. (2018) and Meng et al. (2021).
Focusing on Scenario 1 in Figures 8 and 9, we can observe the successful overtaking maneuver performed by the LPP agent on a vessel train consisting of four target ships. Initially, the agent steers to the port side to avoid a collision with the nearest target ship and then overtakes each vessel one by one until the entire vessel train has been passed. Importantly, this behavior complies with SS23(1) of the regulations specified in Bundesministerium fur Digitales und Verkehr (1998), as the agent performs overtaking maneuvers on the target vessel's port side. Once the overtaking maneuver is completed safely, the agent returns to the global path until the cross-track and course errors are close to zero again. Moreover, the action selection during the maneuver is relatively moderate since the agent avoids large consecutive heading changes.
Similar successful results can be observed in the overtaking cases of Scenarios 2 to 4, demonstrating the LPP agent's proficiency in executing advanced maneuvers while maintaining appropriate safe distances from the target ship. Furthermore, the agent's action selection demonstrates a balanced approach in these scenarios, indicating the successful integration of the comfort reward component. In Scenario 5, we can observe the agent's compliance with SS23(2), as it enables overtaking of the target ship by slightly moving towards starboard, which directly reflects the traffic rule reward component defined in (4). Of particular note is Scenario 6, where the agent effectively navigates around static obstacles while maintaining a minimum distance of approximately 50 meters.
Figure 7: The two plots show the test return’s development during training for the two agents. The blue line is the mean over ten independent runs, and the light blue area around it represents a 95% confidence interval. The orange line is the run whose final policy is validated in the upcoming section.
### Validation: Path following module
We evaluate the performance of our PF agent by subjecting it to various environmental conditions and measuring key metrics such as the local cross-track and course error as well as the rudder angle. To ensure comprehensive testing, we consider three major environmental forces: currents, winds, and waves. Each force is tested separately in both moderate and extreme scenarios. The results are shown in Figure 8. The results are shown in Figure 8.
ios, resulting in a total of six validation scenarios. In each scenario, the agent is tasked with following a straight path initially unaffected by environmental forces. Subsequently, a force field perpendicular to the path is introduced, followed by another segment with zero forces. Finally, the force direction is reversed in the last segment. This testing procedure enables us to assess the agent's adaptability to different environmental conditions.
For comparison purposes, we follow Paramesh and Rajendran (2021) and Paulig and Okhrin (2023) and include a PID-controller for the rudder angle, which was optimized using the particle swarm optimization (PSO) approach outlined in Eberhart and Shi (2000); see Appendix D for details. It should be emphasized that the PID-controller is specifically optimized for the validation scenarios, while the RL agent's training encompasses a broader range of scenarios. The obtained results are presented in Figures 10 and 11.
The RL agent demonstrates remarkable performance across all six scenarios, effectively adapting to force fields by promptly adjusting the rudder angle to steer back to the desired path. As a result, it successfully maintains minimal cross-track and course error. Notably, the inclusion of a comfort re
Figure 9: Global cross-track and course error, selected actions, and distances to the target ships during validation of the LPP agent on a straight waterway segment.
ward component has proven to be beneficial, as the agent generally exhibits smooth and moderate changes in the rudder angle. In contrast, the PID-controller exhibits the typical undesired oscillation behavior, which persists even after applying the PSO technique. The challenging aspect lies in the heterogeneity of the scenarios, where the transition dynamics of the environment vary significantly under different force fields. This variability likely poses a difficulty for the PID-controller in achieving stable and consistent performance.
Figure 10: Validation results for path following under moderate environmental conditions. Each column refers to a testing scenario for a separate environmental force, while the other two forces are set to zero. From top to bottom, the rows depict the local cross-track error, the local course error, the surge and sway velocities, the yaw rate, and the rudder angle, respectively. Afterward, the rows include the specific attributes for each force; for example, the speed and angle of attack of the currents.
### Validation: Complete architecture
After validating the LPP and PF units in separation, in this subsection, we aim to conduct a comprehensive evaluation of the complete architecture introduced in section 3. We deploy it in simulation on the lower part of the river Elbe in northern Germany. To ensure a realistic simulation, we specifically choose the date of January 29, 2022, and utilize the actual AIS vessel trajectories and environmental disturbances observed on that day. It is worth noting that the presence of the storm named _Malik_ in Middle Europe
Figure 11: Validation results for path following under extreme environmental conditions. Each column refers to a testing scenario for a separate environmental force, while the other two forces are set to zero. From top to bottom, the rows depict the local cross-track error, the local course error, the surge and sway velocities, the yaw rate, and the rudder angle, respectively. Afterward, the rows include the specific attributes for each force; for example, the speed and angle of attack of the currents.
on that date adds an extra challenge to ASV. We manually specify a global path from Lighthouse Tinsdal to the Elbe estuary close to Cuxhaven, which is illustrated in Figure 12. Further information regarding the data sources and the trajectory interpolation of the target ships are deferred to Appendix E.
To test the architecture's generalization capabilities, we perform the same test for three different base speeds (in m/s) of the own ship: \(U\in\{3,4,5\}\), even though both agents have only been trained on \(U=3\,\)m/s. Figure 13 presents the angular and spatial deviations of the local and global paths, respectively, while selected COLAV maneuvers are displayed in Figure 14.
Upon analyzing Figure 13, we observe that the maximum spatial deviation from the global path reaches approximately 200 meters, while the vast majority of deviations are smaller than 40 meters in absolute value. These deviations are relatively low, considering the fairway width of the Elbe is typically between 600 and 2000 meters in the selected segment. The local cross-track error, which measures the deviation from the planned local path, is naturally of smaller magnitude and exceeds 20 meters only in a few selected cases. Additionally, we notice a slight increase in the spatial deviation from both the local and global paths as the speed increases. However, this increase remains moderate and demonstrates the successful generalization ability of both agents. Moreover, the angular deviations show resilience to changes in speed and, importantly, do not differ significantly between the global and local levels. This is likely attributed to the replanning of the local path, which presents the PF agent with a new path to react to.
Analyzing the maneuvers depicted in Figure 14, it becomes evident that the framework possesses the capability to execute COLAV actions effectively. The model successfully performs starboard turns in head-on scenarios (Panels (a) and (b)), creates space for overtaking vessels (Panel (c)), and overtakes slower vessels on their portside (Panel (d)). These observations align with the validation results presented in subsection 7.2, further highlighting the architecture's suitability for real-world scenarios. Notably, the maneuvers prove successful for all three selected speeds.
While the architecture demonstrates strong performance in various validation scenarios, it is essential to note that collisions have occurred in rare cases during the AIS testing routine. However, these collisions arise due to the nature of the testing format, where the ASV is deployed into scenarios with static target ship trajectories. Consequently, the target ships remain unaware of the presence of the own ship, as illustrated in Figure 14. Con
sider, for instance, Panel (c) of Figure 14, where the own ship makes room for a faster target ship. If the target ship suddenly executes a starboard turn while being alongside the own ship, a collision will occur regardless of the own ship's reaction. As stated by Lyu and Yin (2019), it is impossible to avoid a collision if the target ship intentionally moves towards the own ship.
Figure 12: Global path for validation of the complete architecture based on AIS data.
Figure 13: Empirical distributions of the global and local cross-track and course errors over the complete journey from Lighthouse Tinsday to the Elbe estuary close to Cuxhaven.
## 8 Conclusion
The use of autonomous surface vessels for inland waterway transportation shows promise in creating a sustainable and economically attractive transportation system. Our study introduces a two-level architecture for ASVs
Figure 14: Encounter scenarios based on real depth and AIS data using the complete architecture for ASV control. The scale on the right in each figure is the water depth in meters.
operating on inland waterways based on DRL, employing separate agents for local path planning and path following. We consider relevant environmental disturbances, adhere to traffic rules, and validate our approach using simulated and real AIS trajectories.
We acknowledge the limitations of our work, which highlight avenues for future research. Firstly, we do not address limited visibility or sensor faults, which are critical high-risk scenarios in actual maritime operations. Secondly, seafarers usually exchange communication signals when operating on inland waterways. Such information can be directly incorporated into our local path planning unit, thereby removing the simplifying assumption of linear target ship movement during the planning iterations. Lastly, our focus on simulation experiments calls for validating the proposed architecture in real-world waterways. Addressing these limitations will enable further advancements in developing ASVs for inland waterway transportation, enhancing safety and effectiveness while paving the way for a sustainable transport system.
## Acknowledgments
The authors thank the members of the RL Dresden Group, especially Martin Treiber and Fabian Hart, for their constructive feedback on this work. Furthermore, the authors thank Thor Fossen for his insightful answers regarding the consideration of environmental forces in the simulation. The authors also acknowledge the Center for Information Services and High Performance Computing at TU Dresden for providing the resources for high-throughput calculations. Further, the authors thank the European Maritime Safety Agency for providing the AIS data for the validation scenarios. Finally, the authors express their gratitude to the Weisse Flotte Dresden for sharing their extensive experience in vessel operations. Their detailed explanations of the functionalities of the inland vessel 'Grafin Kosel' have greatly enriched this research. Niklas Paulig was funded by BAW - Bundesanstalt fur Wasserbau (Mikrosimulation des Schiffsverkehrs auf dem Niederrhein), Germany.
|
2306.02435 | On the complexity of linear systems: an approach via rate distortion
theory and emulating systems | We define the complexity of a continuous-time linear system to be the minimum
number of bits required to describe its forward increments to a desired level
of fidelity, and compute this quantity using the rate distortion function of a
Gaussian source of uncertainty in those increments. The complexity of a linear
system has relevance in control-communications contexts requiring local and
dynamic decision-making based on sampled data representations. We relate this
notion of complexity to the design of attention-varying controllers, and
demonstrate a novel methodology for constructing source codes via the endpoint
maps of so-called emulating systems, with potential for non-parametric,
data-based simulation and analysis of unknown dynamical systems. | Eric Wendel, John Baillieul, Joseph Hollmann | 2023-06-04T18:39:19Z | http://arxiv.org/abs/2306.02435v1 | # On the complexity of linear systems: an approach via
###### Abstract
We define the _complexity_ of a continuous-time linear system to be the minimum number of bits required to describe its forward increments to a desired level of fidelity, and compute this quantity using the rate distortion function of a Gaussian source of uncertainty in those increments. The complexity of a linear system has relevance in control-communications contexts requiring local and dynamic decision-making based on sampled data representations. We relate this notion of complexity to the design of attention-varying controllers, and demonstrate a novel methodology for constructing source codes via the endpoint maps of so-called _emulating systems_, with potential for non-parametric, data-based simulation and analysis of unknown dynamical systems.
## I Introduction
In certain application contexts, for example distributed control, a stabilizing control signal is transmitted from a control subsystem to an open loop plant over a physical communications channel of finite capacity. This capacity can be shared with other sensors and subsystems, and thus both the number of bits and the time allocated to the control subsystem for processing those bits are limited. The celebrated data rate theorem [2, 16] established the minimum required channel capacity, in bits per second, below which a controller cannot stabilize an unstable plant. This result was extended through the use of topological feedback entropy to nonlinear systems in [10]. See [9] for a review of results on minimum required channel capacities for stability.
The data rate theorem and related results are focused on the transmission of control signal information for the singular purpose of system stabilization. However, even within a single system information can be transmitted across physical channels for a plurality of tasks. For example, a rate sensor with a fixed bit depth and sampling frequency necessarily provides information about changes in its sensed quantities at a fixed data rate. These bits may require further compression and accumulation before transmission to a local state estimation subsystem or embedded unsupervised learning algorithm. Are there particular regions of state space in which these sensors and subsystems will be required to operate at higher rates and resolution? Conversely, where and at what times can these data rates be lowered without impacting overall system performance?
These are questions about required information rates for estimation and control tasks in _local_ and _dynamic_ sampled-data contexts. The data rate theorem is concerned with information required for _asymptotic_ stability. We may ask the nuanced question: _how many bits are required to describe the changes in state of a continuous-time linear system to a required level of fidelity?_ Our main result is a complete answer to this question for continuous-time linear systems subject to process noise, with fidelity measured in the mean square sense. Specifically, we show that the minimum number of bits is given by the rate distortion function of the source of system uncertainty, and construct explicit source codes using the endpoint maps of so-called _emulating systems_. The resulting encodings have advantages for non-parametric, data-based simulation and analysis.
_Related literature_. A notion of "information complexity" was introduced in [13] as the data rate required to achieve a control or estimation task in the asymptotic limit for the stochastic, linear time-invariant (LTI), discrete-time system
\[x_{k+1}=F\,x_{k}+w_{k}\]
where \(k=0,\ldots,T-1\) and \(w_{k}\) is iid process noise. In the companion paper [14] the authors compute the average information content per unit time contained in length \(T\) trajectories of this linear system, under the additional restriction that the information in trajectories cannot be encoded at once as a single set (or block) of \(T\) observations of system state, but must be encoded iteratively as each state in the trajectory sequence is received. This led to the definition of the so-called _sequential rate distortion function_ (SRDF), which does not have a closed-form expression in the asymptotic limit as \(T\to\infty\)[11].
The systems of interest to [13, 14] could be obtained by sampling a stochastic continuous-time system at a constant uniform rate \(f_{s}=1/\Delta t\). Our interest is in local information content relevant to dynamic execution of control and estimation tasks at potentially non-uniform sampling rates. We are therefore focused on the information content in forward increments of sampled-data representations of continuous-time linear stochastic systems, where \(\Delta t\) is allowed to increase or decrease by small amounts. As such, we allow block encoding of the information content within a (small) interval of time and assume negligible encoding delays.
Finally, although the information content of an infinitesimally short trajectory as \(\Delta t\to 0\) is of important theoretical interest [15], our main results require finite sampling rates.
_Contributions and organization_. Our full contributions and
the organization of the paper are as follows. Our primary, novel contribution is Proposition III.1 in Section III, which establishes the minimum amount of information required to describe how the state of a continuous-time linear system changes over a finite time interval as given by the rate distortion function of a Gaussian source of uncertainty in its forward increments. As an apparently fundamental property of the class of linear systems we consider, we call it the _complexity_ of the linear system (Definition III.1).
Our second main result is a proof, for the time-invariant case, of the intuition that increasing the sampling rate of a linear system reduces the number of bits required to describe that system at each sampling time (Proposition III.2, Corollary III.3). In contrast to the data rate theorem, this is a statement regarding the information content of a linear system with relevance to coder-controller design that one can make regardless of the stability of the system matrix.
Both of these results follow from the standard "reverse water-filling" interpretation of the rate distortion function for a multivariate Gaussian source of uncertainty. We review rate distortion theory in Section II.
Our secondary contributions are presented in Section IV, where we construct source codes for the forward increments of an unknown system using the endpoint map of a so-called _emulating system_[12]. In contrast with than encoding state by discretizing state space as in [13], using the "zooming" quantizers of [6], or an enumeration of a finite open cover of a compact subset of state space as in [10], our state encoders are constructed by appropriate quantization of directions in the tangent space. We discuss the code rate of two example emulating systems and demonstrate how they enable simulation of new sample paths of the unknown system without assumptions about the model parameters or process noise characteristics.
## II A review of rate distortion theory
Let \(X\sim p(x)\) be a source of continuous vector-valued random variables taking values on a subset \(\mathsf{V}\subseteq\mathbb{R}^{n}\). A _reproduction_ of \(X\) is a random variable \(\hat{X}:=g\circ f(X)\) where the _compressor_\(f:\mathsf{V}\rightarrow\mathsf{U}\) maps a realization of the random variable \(X\) onto its representation in the set \(\mathsf{U}\) and the _decompressor_\(g:\mathsf{U}\rightarrow\mathsf{V}\) maps the representation back.
The quality or fidelity of a reproduction is defined in terms of averages of a so-called _distortion function_\(\rho(x,g\circ f(x))\)
\[\rho:\mathsf{V}\times\mathsf{V}\rightarrow\mathbb{R}_{\geq 0}\]
quantifying the consequences of reproducing the source via the transformation \(\hat{X}=g\circ f(x)\). Rate distortion theory places few restrictions on the choice of distortion function. In this paper we will consider only the \(L_{2}\) norm distortion function
\[\rho(x,\hat{x})=\|x-\hat{x}\|_{2}^{2}, \tag{1}\]
and its average, the _mean square error_.
The expected value of \(\rho\) depends on the joint density \(P(\hat{x},x)\) between the source and its reproduction. With a fixed and given source \(p(x)\), this joint is completely determined by a conditional density function \(Q(\hat{x}\,\big{|}\,x)\), which may be viewed as a statistical characterization of the behavior of the as-yet unknown de/compressor functions \(f\) and \(g\). The mutual information between the source and its reproduction depends on \(Q(\cdot\,\big{|}\,\cdot)\)
\[I(\hat{X};X)=H(\hat{X})-H(\hat{X}\,\big{|}\,X)=I(X;\hat{X})\]
and the _rate distortion function_\(R(D)\) is determined by the \(Q(\cdot\,\big{|}\,\cdot)\) that minimizes that mutual information
\[R(D):=\min_{Q(\hat{X}\,\big{|}\,X)}I(\hat{X};X)\] \[\text{s.t.}\quad E[\rho(X,\hat{X})]\leq D\]
Block codesLet the density \(p(x)\) be a (vector-valued) _memoryless source_, meaning that any discrete sequence of random variables \(X^{(i)}\sim p(x)\), \(i=1,\ldots,L\) is iid, for any \(L>0\). Suppose that \(\hat{X}^{L}:=(\hat{X}^{(1)},\ldots,\hat{X}^{(L)})\) is a reproduction of \(X^{L}\) with each \(\hat{X}^{(i)}\) taking values on a finite set of vectors \(\mathcal{V}=\{V_{i}\}_{i=1}^{K}\subset\mathbb{R}^{n}\). The elements of \(\mathcal{V}\) are called _codevectors_ or _symbols_ and the set \(\mathcal{V}\) itself a _source code_ of _size_\(K\) and _blocklength_\(L\) with _code rate_
\[R:=\frac{1}{L}\log_{2}(K)\]
in units of bits per symbol, or simply: \(\mathcal{V}\) is a \((K,L)\)_-source code_.
Fix a particular \((K,L)\)-source code \(\mathcal{V}\) with de/compressor functions \(f,g\). We measure the expected performance of this source code by averaging the chosen distortion function \(\rho\) over the given source distribution:
\[\bar{D}:=\frac{1}{L}\sum_{i=1}^{L}E\big{[}\rho(X^{(i)},g\circ f(X^{(i)}))\big{]}\]
We have exploited the fact that the variables \(X^{(i)}\) are iid. If \(\bar{D}\leq D\), where \(D\) is the maximum allowed distortion, then \(\mathcal{V}\) is said to be _admissible_. If \(\bar{D}>D\) then it is _inadmissible_.
The source coding theorem, and its converse, establish the rate distortion function as the minimum possible code rate of any admissible source code. We specialize slightly the statement of the general source coding theorem from [3] for our purposes.
**Theorem II.1** ([3, Theorem 7.2.4-5]).: _Let \(X\sim p(x)\) be a memoryless source with maximum admissible distortion \(D\geq 0\) and rate distortion function \(R(D)\)._
_Then, for any \(\epsilon>0\) there exists an admissible source code with average distortion \(\bar{D}\leq D+\epsilon\) and rate \(R<R(D)+\epsilon\). Conversely, any source code with rate \(R<R(D)\) has \(\bar{D}>D\) and is inadmissible._
The following well-known result (cf. [7, Theorem 10.3.3], [3, equation (4.5.21)]) defines the minimum admissible code rate of a memoryless multivariate Gaussian source. We apply it in Section III to compute the code rate of the trajectories of a linear system affected by noise.
Let \(\log^{+}(x):=\max\{0,\log(x)\}\), \(I\) denote the \(n\times n\) identity matrix, and \(\lambda_{i}(\Sigma)\) denote the \(i^{\text{th}}\) eigenvalue of a symmetric positive definite matrix \(\Sigma\).
**Lemma II.2**.: _The rate distortion function of a memoryless source \(X\sim p(x)=N(\mu,\Sigma)\) with maximum mean square distortion \(D\geq 0\) in units of nats per symbol is given by_
\[R(D)=\frac{1}{2}\sum_{i=1}^{n}\log^{+}\Big{(}\frac{\sigma_{i}^{2}}{D_{i}(\theta )}\Big{)}\]
_where \(\sigma_{i}^{2}:=\lambda_{i}(\Sigma)\), \(D_{i}(\theta):=\min\{\theta,\sigma_{i}^{2}\}\) and \(\theta\geq 0\) is chosen so that \(D=\sum_{i=1}^{n}D_{i}(\theta)\). When \(D/n<\min_{i}\{\sigma_{i}^{2}\}\) the rate distortion function can be expressed simply as_
\[R(D)=\frac{1}{2}\log\det(\Sigma)-\frac{1}{2}\log\det(\frac{D}{n}I)\]
This is the classical "reverse water-filling" characterization of the minimum admissible code rate for a memoryless Gaussian source, and captures the intuitive result that no bits need be allocated by an optimal compressor to describe any principal components of the source signal whose variance falls below the "water-level" or threshold \(\theta\).
To simplify the following exposition we express \(R(D)\) in nats per symbol for small admissible distortions satisfying \(D/n<\min_{i}\{\lambda_{i}(\Sigma)\}\), with the understanding that \(R(D)\) for large \(D\) is obtained by reverse water-filling.
## III Code rate and complexity of linear systems
Consider the continuous-time Ito stochastic differential equation
\[dx(t)=A(t)x(t)\,dt+dw(t), \tag{2}\]
where \(w(t)\) is a \(n\)-dimensional Brownian motion process with constant covariance \(N\). We are interested in the information content in the next sample of system state \(X(t+\Delta t)\) given \(X(t)\). Define the "change of state" of (2) to be the random variable
\[\Delta X(t):=X(t+\Delta t)-X(t)\,\big{|}\,X(t)\]
conditioned on \(X(t)\). The change of state is given by variation of constants
\[\Delta X(t)=\Delta\mu(t)+\int_{t}^{t+\Delta t}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Since \(A\) is Hurwitz, by [5, Theorem 11.3] there exists a unique, symmetric positive-definite equilibrium solution \(\mathcal{W}_{\infty}\) to this differential equation. For fixed \(D\),
\[\tilde{R}_{\Delta t}(D):=\frac{1}{2}\ln\det\mathcal{W}(\Delta t)-\frac{1}{2}\ln \det(\frac{D}{n}I)\]
is a smooth function of \(\Delta t\) with derivative
\[\dot{\tilde{R}}_{\Delta t}(D):=\frac{dR_{\Delta t}(D)}{d\Delta t}=\operatorname{ tr}\Big{(}\mathcal{W}^{-1}(\Delta t)\frac{d\mathcal{W}(\Delta t)}{d\Delta t} \Big{)}.\]
Thus, it has a unique equilibrium and is non-decreasing from
\[-\infty=\lim_{\Delta t\downarrow 0}\tilde{R}_{\Delta t}(D)\ \ \text{to}\ \ R_{\infty}(D):=\lim_{\Delta t\to\infty}\tilde{R}_{\Delta t}(D).\]
It follows that \(R_{\Delta t}(D)=\max(0,\tilde{R}_{\Delta t}(D))\) is also non-decreasing and asymptotically reaches \(R_{\infty}(D)\).
Attention and complexityIn [4] a framework for the design of controllers capable of operating both with and without state feedback, called _attention-varying control_, was developed under the premise that open-loop control functions admit simple algorithmic implementations, but closed-loop control functions require more complex implementations and computing resources, and are therefore undesirable when an open-loop controller will suffice. These ideas were further explored in [2, 1].
Consider an application in which we are are to communicate the change in state of system (2) over a memoryless channel with a fixed maximum information capacity \(C\) in bits per channel use. The change in system state over a time interval \(\Delta t\) is received by a control subsystem whose performance degrades unacceptably if the mean square error in \(\Delta X(t)\) is above a prescribed limit \(D\). Assume also that we are given a compressor for \(\Delta X(t)\) that is efficient in the sense that for every \(t\) and \(\Delta t\), it operates with an expected average distortion \(\bar{D}\) at a rate \(\bar{R}=R_{t,\Delta t}(\bar{D})+\epsilon\) for some small \(\epsilon>0\). By the _source-channel separation theorem_[7, Theorem 10.4.1], the mean square performance level \(\bar{D}\) is achievable over the given channel with capacity \(C\) if and only if \(R_{t,\Delta t}(\bar{D})+\epsilon<C\).
If system (2) is time-invariant, the system matrix \(A\) is also Hurwitz, and \(R_{\infty}(\bar{D})<C\) then by Proposition III.2 the control subsystem is free to operate at a sufficiently low rate: any sufficiently slow sampling rate \(f_{s}:=1/\Delta t\) suffices to meet the distortion requirement, cf. case (a) in Figure 1. The controller can operate in an essentially open-loop mode.
On the other hand, suppose that the system matrix \(A\) is not Hurwitz and does not have an equilibrium solution to the Lyapunov equation (6). Then the minimum admissible code rate is unbounded with increasing \(\Delta t\) and there exists a sampling rate below which the channel cannot support the controller's performance requirements. For example, fix the channel capacity at \(8\) bits/use, and consider the control systems with code rates (complexities) shown in Figure 1(b, c). By the source-channel separation theorem, in order to meet a maximum distortion requirement of \(D=0.01\) the sampling rate must be increased to at least \(f_{s}=1.63\) Hz for unstable system (c), and \(f_{s}=1.2\) Hz for system (b). This discussion motivates the following result.
**Corollary III.3**.: _Let \(C\) be the capacity of a given channel over which reproductions of the change in state \(\Delta X(t)\sim N(\Delta\mu(t),\mathcal{W}(\Delta t))\) for a time-invariant system (2), \(A(t)=A\), are to be transmitted. If Lyapunov equation (6) has no solution then there exists a sufficiently fast finite sampling rate \(f_{s}<\infty\) for which \(C>R_{1/f_{s}}(D)\), for any \(D\geq 0\)._
Proof.: If there does not exist a solution \(\mathcal{W}_{\infty}\) to (6) then (7) has no equilibria and then \(\tilde{R}_{\Delta t}(D)\) is strictly increasing without bound as \(\Delta t\to\infty\). Then, either \(R_{\Delta t}(D)=\max(0,\tilde{R}_{\Delta t}(D))=0\) for all \(\Delta t\geq 0\), or there exists a finite nonzero \(\Delta\tau\) for which \(R_{\Delta t}(D)>0\) for all \(\Delta t\geq\Delta\tau\).
The source-channel separation theorem requires the control subsystem to increase its sampling rate in order to maintain a required mean square performance over the fixed capacity communications channel. If the rate of channel uses \(f_{s}\) increases then, intuitively speaking, the control subsystem is "more attentive" to the system's change in state and operating in an essentially closed-loop mode. We propose \(f_{s}\) as a discrete-time measure of "attention" consistent with the premise of attention-varying control [4].
**Definition III.1**.: The _complexity_ of the continuous-time linear system (2) is the minimum admissible code rate given by the rate distortion function \(R_{t,\Delta t}(D)\), equation (4), of the source of uncertain changes in state \(\Delta X(t)\sim N(\Delta\mu(t),\mathcal{W}_{t}(\Delta t))\).
The complexity of a discrete-time system varies with
* _time_ \(t\geq 0\), unless of course (2) is time-invariant,
* _fidelity_ or mean square distortion \(D\geq 0\), and
* _attention_ or sampling rate \(f_{s}=1/\Delta t\), \(\Delta t>0\).
## IV Emulating systems and source families
In this section we construct source codes for \(\Delta X(t)\) using the endpoint map of a special class of control systems called _emulating systems_. We define the code rate of an emulating system by way of example, and illustrate the capability of emulating systems for data-based, "model-free" simulation of unknown dynamical systems.
**Definition IV.1** (Emulating systems).: An _emulating system_ is the control system on \(\mathbb{R}^{n}\)
\[\dot{x}(t)=\sum_{i=1}^{K}V_{i}(x(t))u_{i}(t) \tag{8}\]
associated with a _source family_ of \(K\) autonomous vector fields \(\mathcal{V}=\{V_{i}\}_{i=1}^{K}\) and an _admissible control set_\(\mathcal{U}\) of _binary control functions_ taking values on the set \(\{0,1\}^{K}\) with finitely many switchings.
Recall that the _flow_ of vector field \(V_{i}\) is a diffeomorphism \(\varphi_{\delta t}^{(i)}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) mapping an initial condition \(x(t)\) to the solution \(x(t+\delta t)\) of the differential equation \(\dot{x}=V_{i}(x)\).
The _endpoint map_ of control system (8) is the function \(\phi:\mathbb{R}^{n}\times\mathbb{R}_{\geq 0}\times\mathcal{U}\to\mathbb{R}^{n}\) mapping an admissible control
function \(u\in\mathcal{U}\) to the solution \(x(t+\Delta t)\) of system (8) with initial condition \(x(t)\):
\[\phi(x(t),\Delta t,u)=x(t)+\sum_{i=1}^{K}\int_{t}^{t+\Delta t}\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where \(Z(t)=\sum_{j}\delta t_{j}^{*}\). At each \(t\in\{k\Delta t\}_{k=1}^{T}\), generate a new \(\Delta\tilde{x}(t)\) as follows:
1. Compute \((p^{(i)}(t),Z^{(i)}(t))=f(\Delta x^{(i)}(t))\) for each \(i\in[L]\), and average: \(\tilde{p}(t)=\frac{1}{L}\sum_{i=1}^{L}p^{(i)}(t)\), \(\tilde{Z}(t)=\frac{1}{L}\sum_{i=1}^{L}Z^{(i)}(t)\).
2. Sample \(\tilde{n}(t)\sim\mathsf{Mult}(n\,\big{|}\,N,\tilde{p}(t))\) from the multinomial distribution for \(N\) independent trials with \(K\) outcomes; \(\tilde{p}_{j}(t)=\mathsf{Pr}(V(t)=V_{j})\) is the probability of selecting \(V_{j}\in\mathcal{V}\) in one trial.
3. Compute \(\Delta\tilde{x}(t):=g(\tilde{n}_{i}(t)/N,\tilde{Z}(t))\) using (11).
We call this scheme "non-parametric" because the generation of new trajectories does not require us to tune or identify system parameters such as poles, zeros, or process noise covariances, nor integrate any differential equations. Figure 2 shows the performance of this scheme on training data collected from a stable LTI system. The emulating system (5) of [12] has as its source family a set of 24 constant "vector fields"
\[\mathcal{V}=\Big{\{}\begin{pmatrix}-2\\ -2\end{pmatrix},\begin{pmatrix}-2\\ -1\end{pmatrix},\cdots,\begin{pmatrix}2\\ 1\end{pmatrix},\begin{pmatrix}2\\ 2\end{pmatrix}\Big{\}}.\]
The training data \(\mathcal{X}\) consists of sample paths of the time-invariant stochastic linear system with process noise intensity \(N=0.01I_{2}\) and the stable \(A\) matrix of Figure 1(a). Trajectories are emulated for 3 seconds at a sampling rate of \(f_{s}=100\) Hz. With admissible mean square distortion \(D=0.01\) the minimum required code rate (Proposition III.1) is 2 bits/symbol per channel use, or a data rate of 200 bits/symbol/sec.
## V Conclusion
We have defined the complexity of a sampled data representation of a linear stochastic system to be the minimum admissible code rate of a source code for its forward increments. The complexity is a quantity of relevance in applications requiring local and dynamic decision-making. In a context requiring communication of the change in state of a given linear stochastic control system over a channel of fixed capacity we proposed the minimum sampling rate required to lower the system complexity below the channel capacity as a measure of the degree to which an "attention-varying controller" [4] could operate with or without feedback. We constructed explicit source codes from the endpoint maps of emulating systems, and illustrated their use in data-based, non-parametric simulation and analysis of unknown dynamical systems. Further applications of emulating systems to estimation and control tasks in both the infinitesimal setting [15] and in a sequential encoding context [11] will appear in future work.
|
2303.07393 | Many learning agents interacting with an agent-based market model | We consider the dynamics and the interactions of multiple reinforcement
learning optimal execution trading agents interacting with a reactive
Agent-Based Model (ABM) of a financial market in event time. The model
represents a market ecology with 3-trophic levels represented by: optimal
execution learning agents, minimally intelligent liquidity takers, and fast
electronic liquidity providers. The optimal execution agent classes include
buying and selling agents that can either use a combination of limit orders and
market orders, or only trade using market orders. The reward function
explicitly balances trade execution slippage against the penalty of not
executing the order timeously. This work demonstrates how multiple competing
learning agents impact a minimally intelligent market simulation as functions
of the number of agents, the size of agents' initial orders, and the state
spaces used for learning. We use phase space plots to examine the dynamics of
the ABM, when various specifications of learning agents are included. Further,
we examine whether the inclusion of optimal execution agents that can learn is
able to produce dynamics with the same complexity as empirical data. We find
that the inclusion of optimal execution agents changes the stylised facts
produced by ABM to conform more with empirical data, and are a necessary
inclusion for ABMs investigating market micro-structure. However, including
execution agents to chartist-fundamentalist-noise ABMs is insufficient to
recover the complexity observed in empirical data. | Matthew Dicks, Andrew Paskaramoorthy, Tim Gebbie | 2023-03-13T18:15:52Z | http://arxiv.org/abs/2303.07393v4 | # Many learning agents interacting with an agent-based market model
###### Abstract
We consider the dynamics and the interactions of multiple reinforcement learning optimal execution trading agents interacting with a reactive Agent-Based Model (ABM) of a financial market in event time. The model represents a market ecology with 3-trophic levels represented by: optimal execution learning agents, minimally intelligent liquidity takers, and fast electronic liquidity providers. The optimal execution agent classes include buying and selling agents that can either use a combination of limit orders and market orders, or only trade using market orders. The reward function explicitly balances trade execution slippage against the penalty of not executing the order timeously. This work demonstrates how multiple competing learning agents impact a minimally intelligent market simulation as functions of the number of agents, the size of agents' initial orders, and the state spaces used for learning. We use phase space plots to examine the dynamics of the ABM, when various specifications of learning agents are included. Further, we examine whether the inclusion of optimal execution agents that can learn is able to produce dynamics with the same complexity as empirical data. We find that the inclusion of optimal execution agents changes the stylised facts produced by ABM to conform more with empirical data, and are a necessary inclusion for ABMs investigating market micro-structure. However, including execution agents to chartist-fundamentalist-noise ABMs is insufficient to recover the complexity observed in empirical data.
keywords: agent-based model, reinforcement learning, multi-agent, market simulation, order splitting, stylised facts, order flow, price impact, volatility clustering, market micro-structure, optimal execution MSC: 91G10 90C20 62P05
## 1 Introduction
At high-frequency time scales, we can investigate how prices change at the level of individual trades revealing several remarkable micro-structural stylised facts [35; 22; 11; 4; 38; 8; 7].The most important of these stylised facts include the long-memory of order flows and absolute returns, the distribution of rare-events, and the power-law of price impact. Taken together, these stylised facts motivate a model for price dynamics based on order flow and liquidity provision, arising from the strategic behaviour of different classes of heterogeneous agents, operating at different time scales and under asymmetric information. These agents conceal private information to prevent adverse selection which limits the available liquidity, and has given rise to the notion of latent supply and demand for liquidity not reflected in the visible order-book. These micro-structural stylised facts are plausibly thought to arise from the interaction of latent supply and demand with revealed liquidity, facilitated by optimal execution agents. However, by definition, data pertaining to latent demand and supply is not publicly available, it is hidden, preventing a detailed analysis into either top-down and bottom-up causation, or linkages, between the order execution process that reveals this liquidity and the market dynamics from historical market data.
Agent Based Models (ABMs) have been extensively used to explain subsets of various low-frequency stylised facts arising from the interactions of heterogeneous agents; most commonly chartists and fundamentalists within a minority game setting[9; 10; 32]. However, some micro-structural stylised facts can be directly attributed to the behaviour of optimal execution agents and isolating these features has received far less attention within the ABM literature. The execution of any parent order arising from latent demand incurs trading costs in the form of price impact. Optimal execution agents will try to minimise this impact, which arises from limited liquidity and incomplete information, through the use of strategic order-splitting. Here, a large parent order is split into smaller child orders to be sequentially executed. Empirically, order-splitting appears to be wide-spread in real markets, and is used to explain the observed persistence of order flow and the long-memory of the size of returns. Furthermore, it is now well-established that realised price impact is concave with a power-law relationship, which has been attributed to the transfer of latent liquidity to the visible limit order book in response to a change in price [14; 16]. In light of this,
how should optimal execution agents be defined in order to minimise costs and to simultaneously reproduce these stylised facts?
Firstly, it appears that realistic execution agents must also be able to use limit orders to execute parent orders, in order to facilitate the transfer of latent demand and supply to the limit order book, and hence recover the concave price impact function. Next, we can distinguish between static and dynamic execution strategies. Markets are not automatically efficient in the sense of price predictability suggesting there is some form of tatonnement. Static strategies typically require some form of market efficiency in order to be optimal, _e.g._ consistency with linear price impact [29; 25; 16]. In contrast, the observed concavity of price impact [3; 20] provides both a normative and a positive argument for dynamic execution strategies that are aware of market conditions. Since a correct _a priori_ specification of the data-generating process is unrealistic, it seems that some form of learning is necessary for dynamic strategies to be operational in practice. However, in the presence of many learning agents acting competitively, is learning still possible? This seems to depend on how market dynamics increase in complexity with the number of additional agents and the correlation between their payoffs [18; 39; 36].
In summary, the process that seems to dominate high-frequency phenomena appears to be the coupling between the low-frequency latent demand with the high-frequency mechanics, _i.e._ the market-microstructure, of continuous-time double auction markets in the presence of limited liquidity. We speculate that the role of learning and the choice of order types in optimal execution are important, if not fundamental, to this process. We are concerned about the role and nature of learning and how this could impact the stylised facts, and the market environment more broadly.
To explore this, we simulate how latent demand is revealed in a single stock financial market using an ABM to capture the market environment into which we introduce both single and many optimal execution agents that engage in learning. These agents each execute a single parent order in a market environment that also consists of (minimally intelligent) chartists, fundamentalists, and high frequency market makers. Model-free learning is incorporated using a simple (Multi-Agent) Reinforcement Learning specification. From these simulations, we examine how different model specifications affect stylised facts. We investigate whether the inclusion of the new agent-type, the execution agents with learning, to the traditional agent classes used to define the learning environment, can provide a more complete description of market ecology both in terms of stylised facts, but also by trying to recover the empirically measured market complexity. To do this, we view the ABMs as nonlinear dynamical systems and compare their complexity as measured by Grassberger-Proaccia Correlation Dimension plots.
However, our main contributions is to demonstrate how different model specifications affect stylised facts. This yields several key findings. Firstly, we find that learning decreases the persistence in order flow, with some evidence that learning can also decrease the memory in the absolute returns. Second, we find that the persistence of order flow is (unsurprisingly) largely determined by the difference in the number of buying and selling agents. Third, we find that increasing the number of agents increases the persistence of order flow. Fourth, the ability to use limit orders to execute a parent order results in lower price impact and faster decay in the size of absolute returns. This suggests that a good approach to optimal execution will always be a judicious combination of resting limit-orders, with market-orders that enjoy immediate execution. Surprisingly, we did not find conclusive evidence that learning reduced price impact. Lastly, the inclusion of many execution agents endowed with learning, trading a single stock, is not able to recover the complexity observed in real-world data.
The rest of this paper is organised as follows. In section 2, we review the literature to motivate our investigation into optimal execution with learning. In section 3, we specify a novel learning agent that can post market and limit orders to execute a parent order, and examine its learning dynamics (section 3.3). In section 4, we present the remaining results of our study, which includes the analysis of stylised facts 4.1, and an investigation into the market dynamics and complexity 4.5. Finally, in section 5, we conclude by summarising our study and indicating possible future directions of research.
## 2 Background and Motivation
### Reinforcement Learning for Optimal Execution
An important question in formal models of learning is determining the conditions to allow for successful learning. Learning is said to be possible when, given a sufficient amount of data, the errors of the learners outputs can be made arbitrarily small. When it is difficult to show analytically that an algorithm can learn, the ability to learn is intimated from measuring performance on test data. In sequential decision problems, asymptotic convergence guarantees have been derived for many reinforcement learning algorithms such as Q-learning, under the assumption that the agent's environment is stationary [30]. Here, stationarity refers to state transition probabilities and the reward function. In financial markets, where the dynamics of the environment is changing, is learning possible?
Intuitively, we may guess that under a changing environment, a learning agent would have to continually relearn its policy to be optimal under the prevailing conditions. Thus, it would seem that asymptotic convergence may not be possible, but this need not be catastrophic to learning. Learning algorithms may still be able to out-perform rule-based counterparts; but inferring performance on static data may be misleading. Indeed, it was
shown that a simple learning agent could outperform a TWAP agentDicks and Gebbie [13] where thelearning dynamics of a simple RL execution agent in an ABM where non-stationary dynamics in the environment are primarily driven by the interactions of minimally intelligent agents did not have a policy that converged. Instead, the difference in policy over consecutive training periods appeared to converge. This work extends [13] by studying the learning and market dynamics that arise from many learning agents interacting within an environment that includes minimally intelligent agents.
Simulating multiple RL agents in an existing ABM of the LOB is a key long-term objective to investigate the effects of learning in a dynamic market environment, where each RL agent is attempting to learn the optimal trading strategy by continually adapting to each other and the foundational model. Interactions between minimally intelligent agents can produce nonlinear dynamics, but agents' decision rules are not changing over time, limiting the variation in the environmental dynamics. However, in the presence of multiple learning agents, the best policy of an agent changes in response to the changing environmental dynamics, which in turn changes with the other agent's policies [6]. Studying optimal execution in a MARL framework within an ABM can lead to a better understanding how increasing the number of RL agents affects the agents' ability to learn as well as how these affect the underlying ABM.
In multi-agent reinforcement learning, an important problem is the appropriate specification of learning goals, which typically involves a trade-off between stability and adaption [6]. The former is desirable for inferential purposes and establishing generalisation, but the latter is desirable for continual performance improvements. For example, learning stability can be established by specifying convergence to an optimal Nash equilibrium as a common learning goal. The combination of the two broad learning goals can be seen as having each agent converge to a stationary policy where each agent's policy is the best response to all the other agents' policies. Convergence to a Nash equilbria is easier to establish when agents are fully cooperative (agents have identical reward functions) or competitive (agents rewards are zero-sum), in comparison to when agents are trained independently without a common goal [5]. In the latter case, asymptotic performance guarantees of (single-agent) Q-learning may fail [40], but training agents independently still can work well in practice [42; 17].
### Learning in Complex Multi-player Games
This work is further motivated by the study of learning dynamics in complex games [18; 39; 36], where complexity is measured in terms of the number of possible actions. In particular, Galla and Farmer [18] considers a two-player game, and show how the learning dynamics of an RL algorithm called experience-weighted attraction (EWA) are affected by the correlation between the player's payoffs and the memory parameter of the learning algorithm. Galla and Farmer [18] finds that the learning dynamics of player's strategies can be separated into three different regimes. Namely, _i)_ strategies can converge to a unique fixed point if agents have short memory, and correlation between the players payoffs becomes increasingly negative (ie. competitive), or _ii)_ if correlations are negative but players have long memory, then learning dynamics can be chaotic or converge to limit cycles, and _iii)_ if players have long memory and payoffs are positively correlated (ie. cooperative), then learning has a multiplicity of fixed points. In the second regime in particular, strategies are essentially random and learning is not possible. In two-player normal form games, a possible reason for this is that as games get more complicated and/or more competitive this causes best reply cycles to become dominant, which [36] shows to be a good predictor of the non-convergence of several learning algorithms. In games involving more than two players Sanders et al. [39] show that the parameter range in which learning converges to a fixed point becomes smaller as the number players increase. This implies that chaotic behaviour is characteristic of many-player games.
In summary, as games become more complex, and agents payoffs become increasingly independent, and the number of players increases, equilibrium becomes more unlikely. So, if normal form games are a good model for the games market participants play, and if these participants can be modelled reasonably well by learning algorithms, this body of work calls into question the common assumption of equilibrium in economics and finance. However, a key constraint related to the carrying capacity of real world markets is that of liquidity: without sufficient liquidity, trading decisions can not be made, compromising both the ability to learn and the profitability of the corresponding policy. Hence to understand how the scaling insights of many player games can impact financial markets one needs to include a broader ecosystem that captures these salient constraints: liquidity and the indirect cost of trading.
### Market Ecology
The interactions of market participants can be viewed from an ecological perspective, forming a useful conceptual framework to develop ABMs to investigate disequilibrium price dynamics. Agent classes are specified in terms of strategies, analagous to "species" in ecological systems, which are rules describing how agents make trading decisions and hence govern their interactions [15]. Whilst market ecology is far from a formal theory of market functioning, two classification systems have seemed to have emerged in the literature: i.) Chartist-Fundamentalist-Noise (CFN) models and ii.) Liquidity-Provider (LP) and Liquidity-Taker (LT) models [2]. In both classification systems, agent interactions are intermediated by price.
Chartists determine trading behaviour purely on past performance, whereas fundamentalists determine trading behaviour based on current price relative to some subjective valuation. Chartists and fundamentalists have oppos
ing effects on price dynamics, and their interaction is able to produce many early stylised facts observed in intraday and lower frequencies (mesoscale) including clustered volatility and fat tails in returns. In contrast, the liquidity provider and liquidity-taker classification is useful to explain microstructural stylised facts in terms of how information is processed into prices by agents operating at different timescales, whilst keeping prices unpredictable. In particular, liquidity-takers operating at low frequencies (particularly the fundamentalists) with large liquidity demands, can only trade incrementally owing to the limited available liquidity provided by higher frequency market makers. In between the large low-frequency liquidity takers and high-frequency market markers, are chartists acting at shorter time intervals acting on information contained in price changes.
To capture an ecology with limited liquidity and reasonably realistic market impact, we postulate a model in terms of 3-trophic levels defined respectively by three agents classes: learning agents, liquidity takers, and liquidity providers. This is similar to the thinking used in a 3-trophic level model of carnivore-herbivore-plant systems. Here, we have learning agents as the "carnivores", or predators that engage in opportunistic behaviours which necessarily requires adaption and hence learning. We have the liquidity takers, both the fundamentalists and the trend-followers, as "foraging herbivores" since their activity is rule-based and hence "passive" in their consumption of liquidity. Liquidity, is the "food" provided by "plants" which are uninformed High-Frequency Traders (HFTs), or rather Electronic Liquidity Providers (ELPs). This suggests not only a financial market equivalent of a "food web" but also the inherent nonlinearity and feedbacks.
How trophic levels are defined in financial markets is necessarily contested, and can be fluid as markets change and adapt. However, this metaphor frames a narrative around how we are currently thinking about the role of learning agents within our financial market ecosystem - in particular, the relationship of strategic order-splitting to the latent order book in an environment with a highly constrained carrying capacity. We apparently do not need learning for the first two trophic levels of our system, since static mechanistic rule based responses to the market states seems to be sufficient to recover almost all of the necessary stylised facts of the environment [13]. Learning seems to only become important as a coupling mechanism between the visible market (the learning environment) and the latent order book, when one needs to find a mechanism to drive strategic order splitting that can be used to fine tune the observed balanced between order-flow persistence, price volatility and clustering, and the observed extreme events in the complete market, particularly when further tuning the parameters describing the environment cannot do so.
Whilst there are feedbacks between each class of agents in the different trophic levels, our framework suggests a hierarchy based on dependency. Although not a functioning market, the presence of ELPs is sufficient for the existence of a persistent order-book, hence providing an environment for the other agent classes, and thus existing at the bottom of the hierarchy. In contrast, learning is situated in higher trophic levels [1], because the higher levels necessarily include adaption. However, learning agents cannot adapt to ELPs alone who just create noise. Thus, indicating the need for minimally intelligent agents at an intermediary level whose actions create learning opportunities. This picture in turn would suggest that a market made entirely of learning agents would not be feasible because of the extremely limited liquidity. The carrying capacity of the environment would be too low to support both the cost of learning, and then using it successfully, because other agents adapted similarly to the single learning agent with insufficient liquidity to take the necessary learning actions. This is speculative, but we think it is helpful in framing the overall thinking behind our specific model choices.
### Limitations
One of the draw backs of agent-based models in that they are computationally expensive to run. This necessarily forces pragmatic choices on the modeller because one can't generate all possible combinations of parameters and paths, so we don't know with certainty all possible results, we can't cherry pick the good ones, and the systems itself is non-linear. In our setting we have still retained the three step approach for the development and calibration of the learning environment: 1. sensitivity analysis, 2. calibration with a minimal set of known and re-used seeds using the relationships between model and parameter variations found from the sensitivity analysis, and 3. simulation using the same restricted number of seeds using the calibrated parameters for the training environment [27, 13].
However computationally restricted the agent-based model for the environment may be, the inclusion of learning imposes computational considerations that are at least an order of magnitude more onerous. Learning requires many more training episodes than path simulations used in a typical sensitivity analysis. This means that the number and selection of seeds for the Monte Carlo paths used to calibrate the model, and hence are required to simulate the model consistently, need to be limited as these should be replicated in the learning espisode's in a way that optimises the compute times but also provides relative stability of the parameters and a learning environment to faithfully capture the empirically measured environment. This is a task that is both nuanced and tedious.
On the learning side of the problem, the number of training episodes needs to be sufficient to achieve reasonable learning and the path variations over key sub-cases prudently chosen to explore the impact both both model and sample variations, while being tractable. With that in mind we picked a set of cases that was as large as we could manage within project timelines and the cloud computing
resources we had access too. We tried to get enough coverage and generated reliable simulations using this, while trying to anticipate and account for both model and path variations that could confound our results. This substantially restricts this type of simulation work. However, in our work we have not _a-priori_ selected results to support particular conclusions, but the conclusions are never as robust as one would like, particular when critiqued through the lens of simple and often linear statistical modelling techniques, but are rather indicative and aimed to guide incremental model refinement to explain specific features. One key lesson is that, although we can capture most of the key stylised facts, we are not able to capture the observed market complexity, but many of the narratives that emerge do make physical (or rather financial) sense.
## 3 Agent Specification and Learning
We investigate the interaction of different types of optimal execution agents within a pre-calibrated event-based minimally intelligent ABM, which forms a training environment. The minimally intelligent agents consist of chartists, fundamentalists, and liquidity providers. Their full specification, as well as details related to their learning dynamics, can be found in [13], which we don't restate here for the sake brevity. We study different cases, described in 2, where each case is defined by the number, type, and side (ie. buying or selling) of the agents, to determine how these characteristics affect the stylised facts.
### Actions
The execution agents consist of a minimally intelligent execution agent, characterised by a TWAP schedule consisting of market orders only, and two types of learning agents. We denote the first class of learning agents as "Type I" agents, which are also adopted from [13]. Type I agents have explicit order-splitting but only use market-orders (MOs) to interact with the market. Furthermore, in this study, we introduce a new class of agents, denoted "Type II" agents, which will be able to trade with both market orders and limit orders. This means that Type I agents cannot directly interact with each other in the model but are inter-mediated only by the liquidity providers. However, Type II agents can interact directly with each other and other agents as they use both limit-orders and market-orders to trade.
Both type I and type II agent use trading schedules that are multiples of TWAP and order sizes following Dicks and Gebbie [13]. The type I agents have actions as integers \(a_{X}=[0,0.25,0.5,0.75,1,1.25,1.5,1.75,2]\) giving a grid of actions from small orders to large orders as multiples of the TWAP strategy. For the type II agents market-orders are placed into the market at some rate of trading \(\nu\) as a function of machine time, while the limit-orders will be place into the order-book at a depth \(\delta\) from the mid-price. The order sizes will be as before. Here the actions will be the placement depth at an activation is \(a_{\delta}=[0.01,1]\) (shallow or deep) and the rate of trading \(a_{\nu}=\nicefrac{{1}}{{r}}[100,10,1]\) (fast, moderate or slow) as a multiple of the total session length.
### Rewards
We have to specify a reward function that is symmetric between buying and selling agents, so that whether agent is a buy or sell doesn't affect learning. Thus, we must modify the reward function of the agent specified in Dicks and Gebbie [13] which is suitable for profit maximisation on the sell side. The obvious transformation for a buying agent is to use a cost minimisation, however the objective function then has a minimum occuring at zero, corresponding to no trading, indicating that a penalty needs to be assigned relative to the cost to encourage trading.
As in prior work [23; 13], we considered using Perold's implementation shortfall [37], where the buying agent would try minimise the difference between the Volume Weighted Average Price (VWAP) achieved using its strategy and the hypothetical VWAP it would receive if it traded the entire parent order at the initial price, with no price impact using an Immediate Execution (IE) strategy. However, in this case, the minimum implementation shortfall is negative, which can incentivise buying at higher prices. Conversely, if you aim to maximise the implementation shortfall, to try incentivise buying at lower prices, then you have the same problem arising from minimising the total cost, because the agent learns not to trade.
The aim is to formulate a reward function that is intuitive and minimises the cost on the buy side, maximises the profit on the sell side, and creates an incentive to trading. This can be achieved by a combination of the trader's slippage and a penalty for not trading. For the \(n^{th}\) order in an episode the reward function for the \(\ell^{\text{th}}\) learning agent:
\[R_{\ell,n}=\underbrace{\pm\ln\left(\frac{p_{{}_{\text{VWAP}}}(\mathcal{X})}{p_ {{}_{\text{VWAP}}}(\mathcal{X}\setminus\mathcal{X}_{\ell})}\right)}_{\text{ slippage}}-\underbrace{\left(\frac{x_{\ell,n}}{v_{n}}\right)\lambda_{\nu}e^{ \gamma_{\nu}t}}_{\text{penalty}}. \tag{1}\]
The first term is the _slippage_, and is positive for selling agents and negative for buying agents. By maximising this reward we aim to get higher prices when we sell, and lower prices when we buy. Here \(p_{{}_{\text{VWAP}}}(\mathcal{X})\) is the VWAP price received from the set \(\mathcal{X}=\{x\}_{i=1}^{n}\) of all trades including the \(n^{th}\) trade submitted by the \(\ell^{th}\) RL agent. The VWAP price found from all trades not including all the trades submitted by the \(\ell^{\text{th}}\) RL agent is \(p_{{}_{\text{VWAP}}}(\mathcal{X}\setminus\mathcal{X}_{\ell})\). The difference between these two quantities measures the slippage between a given RL agent's trades, and the trades made by the rest of the market (potentially including other RL agents).
The second term is the _penalty_ term for not trading, which creates an incentive to trade whether on the buy and sell side. Here the total time past in the simulation is \(t\), the remaining inventory is \(x_{\ell,n}\), and the total volume matched for the \(n^{\text{th}}\) order is \(v_{n}\). The penalty increases exponentially
as time increases, forcing the agents to learn to trade before the end of the simulation. The penalty is proportional to the inventory remaining and inversely proportional to the amount of volume matched for the last trade, ensuring that the penalty increases as the remaining inventory increases, and decreases as the amount traded by the \(n_{th}\) order increases. The agent will want to trade more when there is lots of inventory remaining. The penalty has two parameters: \(\lambda_{r}\) controls how much effect the penalty term has on the reward function, and \(\gamma_{r}\) controls the sensitivity to time.
Together, the slippage and penalty mean that the agents aim to minimise slippage and get the best prices relative to the rest of the market, while being incentivised to trade. The return function for the RL agent is the accumulated reward: \(\sum_{n}R_{n}\). This will be maximised by all the RL agents.
The state space used here corresponds to the smaller state space from Dicks and Gebbie [13]. The results here are directly comparable with prior work following this approach. Similarly, the agent actions for market-orders as based on multiples of a TWAP strategy to provide an additional ability to compare and leverage the preparatory work [13]. The state space is explicitly given in Table 1.
### Optimal Policies and Convergence
To visualise the convergence of the training of learning agents we plot the agent policy returns as a function of the training episodes in Figure 1. Here we have used 1000 training episodes. Figure 1b shows the return rewards for agent type I, where both buyers (+) and seller (-) are shown. Agents are taken from different model configuration sets _e.g._ the blue line is a lone buying agent (\(I^{+}\)) using only MO's, and has similar dynamics to the a lone selling agent (\(I^{-}\)) given in pink. This shows that the rewards appear to converge under training. Similarly, Figure 1b has the same plot but for agents of type II, these are agents that use both market orders and limit orders. Again, we note reasonable evidence of convergence behaviour in the reward function over the 1000 training episodes for the ten different learning agent configurations. The TWAP agents (\(S^{\pm}\)) are included for comparison.
In Figure 2 we show the final buying and selling learning agents greedy policy after 1000 training episodes in the training environment [13] for Case 5 (the case which includes both type \(I^{+}\) and \(I^{-}\) agents together in the environment). The five discrete inventory states increase bottom to top and the temporal states increase left to right across the episode. Within in each inventory and time state there are five spread and volume states, represented as heat maps. The specific actions are label as colour legend on the left and action "-1" represents the state has not been reached, and increasing aggression via the size label move bottom to top. Here only market orders (MO) are used. The learning agent with a mixture of market and limit order (LO) is given in the action space example for Case 6 in Figure 3.
In Figure 3 we have the final greedy policy for Case 6 (type II\({}^{-}\)) learnt over 1000 training episodes in the ABM training environment. The five inventory states increase bottom to top, and the five temporal states left to right, from the first fifth of the training episode to the last fifth. Within each inventory and time combination a \(5\times 5\) heat
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Agent type & Reward & State-space & \multicolumn{3}{c}{Actions} \\ & & Size & \(\pm\) & Parent order & Order-type \\ \hline Hendricks and Wilcox [23] & short-fall & 5 & Sell & multiple of AC & MO’s \\ Dicks and Gebbie [13] & trading profit & 5,10 & Sell & multiple of TWAP & MO’s \\ \hline Type S\({}^{\pm}\) & - & - & Buy/Sell & TWAP & MO’s \\ Type I\({}^{\pm}\) & slippage - penalty & 5 & Buy/Sell & multiple of TWAP & MO’s \& LO’s \\ \hline \hline \end{tabular}
\end{table}
Table 1: Agents using strategic order splitting. Those engaged in learning their their respective rewards and actions are given with the state-space size on \((n_{t},n_{I},n_{S},n_{v})\). All the agent below engage in learning except for the minimally intelligent agent \(S\)
\begin{table}
\begin{tabular}{l l l l} \hline \hline Case \# & Types & \#Agents & Parent order size \\ \hline
0 & - & 0 & - \\
1 & S\({}^{-}\) & 1 & 6\% ADV \\
2 & SS\({}^{+}\) & 5 & \(\frac{1}{5}\)(6 \% ADV) \\
3 & I\({}^{+}\) & 1 & 6\% ADV \\
4 & I\({}^{-}\) & 1 & 6\% ADV \\
5 & I\({}^{+}\),I\({}^{-}\) & 2 & 3 \% ADV \\
6 & II\({}^{+}\) & 1 & 6 \% ADV \\
7 & II\({}^{+}\),II\({}^{-}\) & 2 & 3 \% ADV \\
8 & II\({}^{+}\),I\({}^{-}\) & 2 & 3 \% ADV \\
9 & 5I\({}^{-}\) & 5 & \(\frac{1}{5}\)(6\% ADV) \\
10 & 5II\({}^{+}\) & 5 & \(\frac{1}{5}\)(6\% ADV) \\
11 & 5I\({}^{+}\), 5I\({}^{-}\) & 10 & \(\frac{10}{10}\) (6\% ADV) \\
12 & 5II\({}^{+}\), 5II\({}^{-}\) & 10 & \(\frac{10}{10}\) (6\% ADV) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Different combinations of agent engaging in strategic order splitting. The first case is the agent-based model without any learning agents – type 0. The minimally intelligent agents engaging in naive order splitting using a TWAP strategy are denoted as agents of type S. The agent types are either an acquisition agent (+), or a liquidation agent (\(-\)). The number of acquisition agents and liquidation agents are given for each case. The overall order size is in multiples of \(X_{0}=6\%\) of ADV for all agent classes to ensure that the market has similar liquidity across all the cases considered. The learning agents are then either of class I or II. Here class I only use market-orders and class II use both market-order and limit-orders (see Table 1). The state-space is of size 5 for all the learning agents.
map for the spread and volume state combinations is plot. The legend on the left provide the mapping from the number of actions taken for a particular combination to the action and represents the greedy actions taken in a particular state. The Case 4 agent is a combination of market order (MO) and limit order (LO) actions with varying aggression.
## 4 Exploratory Data Analysis
The measured trade-and-quote (TAQ) data is a single 8 hour day of trading, on 08-07-2019, for a single Johannesburg Stock Exchange (JSE) dual-listed equity, Naspers (NPN.J) [28]. The data excludes all auctions. The simulated data similarly represents a single period comparable to 8 hour of trading on the basis of the average daily volume (ADV) where the ABM component was calibrated to the estimated moments. The method of simulated moments was used to calibrate the model for the base case (Case 0) and is described in prior work [27; 13].
### Comparing Stylised Facts
Table 3 compares the moments for key configurations. The sample paths used to capture the model variations are comparable to those found in the calibration. The cases are ranked, left-to-right, on the micro-price fluctuation volatilities and the GPH, the measure of the long-range dependency. All the models have more extreme events than found in the real world data. None of the classes suggest evidence for a unit-root.
Multiple balanced learning agents (Case 5) increases the long-range dependencies (GPH) and mean-reversion (Hurst), possibly because of interactions. Conversely, Type II agents decrease long-range dependency whilst reducing
Figure 1: The agent returns are given as a function of the training episodes. This demonstrates how the different agents rewards converge under training. In Figure 0(a) the return rewards for agent type I, both buyers (+) and seller (-) is shown. Type I agents only trade using market orders (MOs).0(b) has the same plot but for agents of type II, these are agents that use both market orders and limit orders.
Figure 2: The final \(\epsilon\)-greedy policy is shown for a type I agent example as a heat map. Here for Case 3 (see Table 2) with the buying agent in Figure 1(a) and the selling agent in 1(b).
volatility. We believe this occurs since allowing execution agents to post limit orders introduces additional heterogeneity in the order flow and liquidity provision processes, reducing long range dependence, whilst the additional liquidity provided by these agents reduces the occurrence of liquidity shocks, decreasing volatility. From the Hill estimators we see that Case 5 has less extreme events (a slower decay in the tail distribution exponent) than Case 1, which may suggest that balanced execution agents (_i.e._ equal number of buying and selling agents) more generally, reduces the tail effects.
### Persistence of Orderflow
The ACF of trade-signs reflects the persistence in the direction of the market order flow1. Due to potentially large sampling variation, analysis of a single path is not indicative of general behaviour. On the other hand, averaging sample ACFs tends to conceal interesting differences in the distribution of sample paths between the different cases. Thus, we consider average sample ACFs and the individual sample paths to explain the ABM behaviour, and discuss broad phenomena observed from their plots.
Footnote 1: The ACFs are not on de-meaned data because the data is ordinal.
Firstly, as seen in Figure 5(a), the average level of ACF reflects the number and (net) direction of the execution agents. The ABM was calibrated in an upward trending market, and hence the base case produces sample paths that are slightly more biased towards buying. Adding execution agents increases the persistence of the trade signs, and hence the ACF, simply by increasing the proportion of buys in the order flow. Conversely, including selling agents in the ABM results in an increase in activity in opposition to the original orderflow, thus decreasing the ACFs. Including both selling and buying agents produces ACFs that tend to lie between the ACFs of the buying and selling agents respectively.
However, as seen in Figure 5(b), we believe the observed increases or decreases in ACF, largely reflect the bias of direction of order flow in the ABM. If the ABM was biased towards selling, we would observe the effects in the opposite direction observed here. Nonetheless, we observe that the change in level from including agents of a given type become more pronounced with the number of agents, as these execution agents start to dominate the order flow over the other classes.
Secondly, as seen in Figure.9, following the inclusion of execution agents, the ABM produces, with greater frequency, sample paths where the cumulative direction of the order flow persists for very long periods. The corresponding sample ACFs are linear with very slow decay. This is especially evident in the minimally intelligent cases (TWAPs), where ACFs become increasingly linear with greater frequency as the number of agents increase. Surprisingly, the presence of sample paths with persistent cumulative order flow is also evident (although to a lesser degree) when the number of buying and selling agents are equal. However, the remaining sample paths may be nonlinear with several changes in the direction of cumulative order flow with corresponding nonlinear ACFs, which can produce nonlinear behaviour in the average ACFs.
We observe that increasing the number of agents tends to decrease the rate of change of the ACF and is most evident at low lags (less than 1000 events). In the base case, the orderflow arises from the interaction of fundamentalists and chartists, which tends to have higher autocorrelations at shorter horizons, most likely due to minority game dynamics. In contrast, the order flow increasingly reflects the activity of execution agents as their numbers increase, which tends to result in linear slow decaying ACFs, and decrease the decay rate of ACF.
### Price Impact
Price impact is defined as the instantaneous change to the mid-price following a trade, which depends on the shape of the orderbook at the time of trade. The greater the amount of liquidity, particularly at prices at and close to best quotes, the lower the price impact. Conversely, reducing the available liquidity will increase price impact.
Liquidity supply is dynamic and is reduced by liquidity taking agents (and order cancellations), and is increased by liquidity providing agents. Thus, differences between cases can be attributed to the differences in the trading and liquidity provision behaviour of the different agent classes. As with the analysis of tradesigns, the trading and liquidity provision behaviour is path-dependent and hence has a large degree of sampling variation, which we try to eliminate by analysing average price impact curves.
The clearest pattern that we observe is that cases with type 2 agents have lower price impact than Type 1 agents. Type 2 agents can use limit orders in place of market orders to execute their parent orders, thereby reducing liquidity taking behaviour whilst increasing liquidity provision, both of which result in greater average available liquidity.
Figure 3: The final greedy policy example for a type II agent as shown for Case 4 (see Table 2) as a heat map.
### Memory in Absolute Returns
The ACF of absolute returns measures persistence in the size of returns calculated from micro-prices 2, reflecting the dynamics of the top of the order book. By definition, a change in the micro-price arises from a change in the top-of-book price and/or top-of-book volume. Changes in these quantities are due to events: a trade, a new limit order or a cancellation, and how these quantities change depend on the shape of the order book. The different event-types are mutually exclusive and have their own processes defined by volume, price, and relative frequency.
Footnote 2: The ACF of the absolute returns uses demeaned data.
Thus, the ACF of absolute returns encapsulates the behaviour of a wider array of market variables, in comparison to the ACF of tradesigns and price impact curves, making any observed patterns remarkable and worthy of attention, but difficult to interpret.
Here, we observe two patterns which appear to support the hypothesis that the variation in liquidity demand in excess of liquidity determines the decay in the ACF of
Figure 4: Price impact plots for the twelve cases which include optimal execution agents. Left and right for buying and seller initiated. We see that the cases which include Type II agents (blue) have lower price impact than Type I agents. This suggests that using limit orders to take advantage of opportunities created by market flow and changes in the spread.
Figure 5: Absolute value auto-correlations plots comparing Type I (orange) and Type II (blue) cases. Type II agents suppress autocorrelations while agents type I agents have non-trivial auto-correlations in absolute value of the mid-price returns. This is indicative that the more complex Type II agents reduce regularity in potentially both the order flow, and the liquidity provision processes.
\begin{table}
\begin{tabular}{|l|c c c c c c c|c c c|} \hline & \multicolumn{6}{c|}{Simulated} & \multicolumn{2}{c|}{Estimated} \\ & \multicolumn{3}{c}{Case \(5{\rm:ABM+I^{\pm}}\)} & \multicolumn{3}{c}{Case \(1{\rm:ABM+S^{-}}\)} & \multicolumn{3}{c}{Case \(0{\rm:ABM}\)} & \multicolumn{3}{c|}{Case \(6{\rm:ABM+II^{+}}\)} & \multicolumn{3}{c|}{JSE:NPN.J} \\ Moment & \(\mathbf{m}^{s}\) & [97.5\% CI] & \(\mathbf{m}^{s}\) & [97.5\% CI] & \(\mathbf{m}^{s}\) & [97.5\% CI] & \(\mathbf{m}^{s}\) & [97.5\% CI] & \(\mathbf{m}^{s}\) & [97.5\% CI] \\ \hline Mean & 0 & - & 0 & - & 0 & - & 0 & - & 0 & - & 0 & - \\ Std\(\times 10^{-4}\) & 4.60 & [3.90,5.30] & 4.04 & [3.36,4.75] & 2.31 & [1.61,3.02] & 1.48 & [0.78,2.18] & 1.39 & [1.19,1.59] \\ KS & 0.22 & [0.16,0.28] & 0.16 & [ 0.16,0.27] & 0.18 & [0.12,0.23] & 0.27 & [0.21,0.32] & 0.00 & [-0.01,0.01] \\ Hurst & 0.29 & [0.22,0.35] & 0.33 & [0.26,0.40] & 0.40 & [0.33,0.46] & 0.38 & [0.31,0.44] & 0.47 & [0.41,0.52] \\ GPH & 0.69 & [0.57,0.82] & 0.62 & [0.49,0.74] & 0.51 & [0.39,0.63] & 0.42 & [0.29,0.54] & 0.44 & [0.30,0.59] \\ ADF & -154 & [-158,-151] & -158 & [-161,-154] & -148 & [-152,-145] & -167 & [-171,-164] & -136 & [-140,-133] \\ GARCH & 1.31 & [1.24,1.38] & 0.95 & [0.87,1.02] & 0.99 & [0.91,1.05] & 1.06 & [0.99,1.13] & 1,00 & [0.96,1.03] \\ Hill & 0.83 & [0.41,1.25] & 1.39 & [0.97,1.81] & 1.24 & [0.82,1.66] & 1.25 & [0.83,1.67] & 1.99 & [1.72,2.26] \\ \hline \end{tabular}
\end{table}
Table 3: Simulated moments (using the calibrated model for the environment) and estimated moments. These are the same used in prior work [13] and are briefly summarised here in Table 2. The estimated moments are from the market data [13]. The simulated moment sets are ordered on decreasing micro-price volatility (Std.) left to right.
absolute returns. Firstly, the ACF decays at the slowest rate when five buying TWAP agents (Case 2) are added the base ABM, since this reduces variation in the volume and frequency of market orders. Furthermore, we see more generally that the ACF of type II agents decay faster than type I agents, because the ability to post limit orders of varying size and depth introduces further variation in order flow and liquidity processes, which is temporally uncorrelated due changes in the market's state. However, limitations in the data prevented a convincing support or falsification of this hypothesis.
### The missing complexity
Although we are able to recover many of the stylised facts (See Table 2 and Table 3) we are not able to fully recover sufficient model complexity relative to the measure real-world data. This is shown in Figure 8. The empirical data from the JSE test data is given with confidence intervals (red). The training environment is shown in black with confidence intervals. The learning agent configurations use the type labelling from Table 2. We notice that the none of model configurations are able to capture the full complexity of the real world data, and all have dimensions less than at least 2. The single agents tend to have dimensions slightly greater than that of the e-ABM, and the many agent configurations, slightly less. This is elaborated in Table 7(b). Table 7(b) gives the relative differences of the model configurations. The agent case (first column), RL agents types (second column) and \(\Delta D\), the difference between fractal dimensions of the different configurations, and the ABM's fractal dimensions, as averaged across the embedding dimension (see Figure 7(a)) are sorted on the number of RL agents interacting with the ABM.
We estimate the correlation dimensions using the Grassberger-Procaccio algorithm [21]. This gives us a reasonable bound on the required phase space from the micro-price data. The resulting correlation dimension will be used as a proxy for representation dimension of a particular model configuration _i.e_ as a measure the model configurations relative complexity. We select the correlation time in machine-time by selecting the first minimum in the micro-price autocorrelations [24].
From Table 7(b) we can consider the relative differences in the model complexity measured at the higher embedding dimensions where there is the slowest increase in the estimated fractal dimension (the right of Figure 7(a)). In particular when focusing on the difference on the right of the inset of Figure 7(a). Here we find that when single learning agents are combined with ABM they tend to increase the fractal dimension, and hence, the complexity of the model configuration. While adding multiple learning agents tends to decrease the fractal dimension - possibly because these tend to interact with each other and reduce the overall impact of adaption and learning in the combined market. However, it is important to realised that here the over-all total outstanding initial parent order volume has been keep the same, at \(X_{0}\)=6% of ADV (see Table 2). This means that the combined outstanding orders, the volume exposure (Vol. Exp.) for the multi (many) learning agents cases are _balanced_ for cases 5,7,8,11 and 12, where the size of the buying position is the same as the selling position, zero. This is not the case for cases 1,2,3,4,6,9 and 10 where there is non-zero volume exposure, the volume exposure is _one-sided_. This suggests that increases in the fractal dimension may be explained in this model to be due to asymmetric liquidity demand. The extent to which the action state relationships are different across these cases can be seen in the appendices where the final greedy policies are compared for similar agent types.
Figure 6: Auto-correlation functions of tradesigns. Left and right for not demeaning and demeaning respectively. On the left, we observe that the level of the ACF reflects the imbalance in the number of execution that are buying vs selling. The level of the ACF increases with more buying agents (blue), is moderated by the including both buying and selling agents (green), and decreases by including selling agents only (orange). Note however, that directional changes in the ACF reflect may reflect that the base ABM (red) was calibrated in an upward trending market. On the right, we demean the ACF so that the imbalance in number of agents doesn’t dominate the plot visually. We find that the increasing the number of agents (red is lowest, cyan is greatest) decreases the rate of decay of the ACF.
## 5 Conclusion
Our expanded description of market ecology includes optimal execution agents, which is necessary to produce stylised facts associated with order flow and the cost of trading. We further argue that learning is necessary because markets fail when the volume traded by a given agent exceeds the carrying capacity of the environment. However, having many agents attempting to learn simultaneously can result in complex non-stationary market dynamics, preventing successful learning for any agent.
The inclusion of execution agents to a minimally intelligent ABM introduces additional heterogeneity into the observed order flow and liquidity provision processes. How these processes, and consequently the stylised facts, change depend on the specification of the optimal execution agents. In particular, we find that: _i)_ persistence in order flow increases with the number of execution agents trading on a single side, _ii)_ the realised cost of trading decreases when
Figure 7: The phase space reconstruction plots using a 2-dimensional embedding, and a delay times \(\tau=10\), \(10\), and \(6\), respectively, for figures 6(a), 6(c) and 6(b). Segments, each of 250 points, are high-lighted in red and show the dynamics associated with large micro-price movements, and these associated time-series segments are given in second row of figures 6(d), 6(f) and 6(e). We notice that there is non-random structure to the dynamics with indications of quasi-periodic orbits. The plots are noticeable distorted relative to each other. With the single learning agent (Case 6) having the most extended phase-space, and the situation with both a buying and selling learning agent (Case 5) having the most concentrated dynamics. The lower row of plots are equivalent to “population” plots for the four different agent classes, the liquidity providers, the two classes of minimally intelligent liquidity takers, and the optimal execution agents where figures 6(g), 6(h) and 6(i) plot the total running profit of the four agent classes.
agents can submit limit orders, and _iii)_ increasing the complexity of trading agents introduce an additional source of variation into the price process. These findings suggest the necessity of including optimal execution agents in ABMs to recover empirical high-frequency stylised facts.
Furthermore, we find that learning introduces further variation into the order flow and liquidity processes, as agents make state-based decisions using a decision rule that adapts over time. This is demonstrated by the decrease in the level and persistence of autocorrelations. Surprisingly, we did not find that learning decreased the average cost of trading as reflected by the realised price impact functions. However, we still find evidence that learning is feasible since execution agents increase their performance over successive training periods. Although learning agents added complexity to the market's dynamics, this was insufficient to recover the complexity observed in empirical data.
In spite of the necessity of incorporating execution agents for a realistic ABM framework, neither execution agents or learning appears to be the dominant source of financial market complexity, at least when considered for a single stock in isolation from the broader market. Thus, as future work, we think it is worthwhile to consider the interaction of at least two markets, and investigate the emergence of correlations as the driver of the missing financial market complexity. This would allow for the investigation of a new agent class: multi-asset portfolio optimising agents, and how they are situated within the existing market ecology. In short, the bulk of the nonlinear dynamics, and complexity, in the single stock setting, seems to arise from the minimally intelligent agent dynamics, and it is this dynamic that provides the opportunity for learning.
## Code and Data Availability
### Acknowledgements
We thank Ivan Jericevich for support and advice with respect to the matching engine and hybrid agent-based model implementation.
### Author contributions statement
T.G. and M.D. conceived the experiments and models, M.D. implemented and conducted the experiments, A.P implemented and conducted the analysis, M.D, A.P. and T.G. analysed the results. All authors reviewed the manuscript.
### Competing interests
There are no competing interest.
|
2305.14707 | SciFix: Outperforming GPT3 on Scientific Factual Error Correction | Due to the prohibitively high cost of creating error correction datasets,
most Factual Claim Correction methods rely on a powerful verification model to
guide the correction process. This leads to a significant drop in performance
in domains like scientific claims, where good verification models do not always
exist. In this work, we introduce SciFix, a scientific claim correction system
that does not require a verifier but can outperform existing methods by a
considerable margin -- achieving correction accuracy of 84% on the SciFact
dataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to next
best accuracies of 7%, 5%, and 15% on the same datasets respectively. Our
method leverages the power of prompting with LLMs during training to create a
richly annotated dataset that can be used for fully supervised training and
regularization. We additionally use a claim-aware decoding procedure to improve
the quality of corrected claims. Our method outperforms the very LLM that was
used to generate the annotated dataset -- with Few-Shot Prompting on GPT3.5
achieving 58%, 61%, and 64% on the respective datasets, a consistently lower
correction accuracy, despite using nearly 800 times as many parameters as our
model. | Dhananjay Ashok, Atharva Kulkarni, Hai Pham, Barnabás Póczos | 2023-05-24T04:24:16Z | http://arxiv.org/abs/2305.14707v2 | # The student becomes the master: Matching GPT3 on Scientific Factual Error Correction
###### Abstract
Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like Scientific Claim Correction, where good verification models do not always exist. In this work, we introduce a claim correction system that makes no domain assumptions and does not require a verifier but is able to outperform existing methods by an order of magnitude -- achieving 94% correction accuracy on the SciFact dataset, and 62.5% on the SciFact-Open dataset, compared to the next best methods 0.5% and 1.50% respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method is competitive with the very LLM that was used to generate the annotated dataset -- with GPT3.5 achieving 89.5% and 60% correction accuracy on SciFact and SciFact-Open, despite using 1250 times as many parameters as our model.
## 1 Introduction
The widespread adoption of the Internet has led to the distribution of more written content than ever before in human history, and the recent strides of generative AI models is predicted to push this trend even further (Kingma et al., 2014; Salakhutdinov, 2015; Maaloe et al., 2016). As this decade has seen, this revolution comes with its demerits, because with more content comes more inaccurate or false content (Balakrishnan et al., 2022; Paschen, 2020; Ozbay and Alatas, 2020). A reliable way to automatically flag these incorrect claims or, more ambitiously, to automatically correct incorrect claims would do wonders for our ability to manage this risk. Researchers have identified and been working on Factual Claim Verification with some success. This is not the case, however, for Factual Error Correction, where the prohibitively high cost of manually annotating corrections of incorrect claims means there is currently no available dataset for this task (Chen et al., 2022; Thorne and Vlachos, 2021). The few methods that tackle this problem use Claim verification datasets for distant supervision and try to use claim verification models to provide signals that can guide the correction process. This exposes the correction methods to flaws in the verification models -- one of which is that current verification methods often make either an implicit or explicit domain assumption -- focusing on news and political domains (Zeng et al., 2021; Guo et al., 2022). Due to this, most powerful claim verification methods of today fail to transfer well to scientific claims, which is especially worrying when we consider that scientific reports and claims are uniquely hard for people without domain expertise to verify or correct. This has an adverse impact on the best claim correction methods as well with almost none of them able to perform satisfactorily on claim correction tasks from the scientific domain.
In this paper we introduce a Factual Claim Correction system that makes no domain assumptions, does not require a claim verification model and is shown to work well on scientific claims. Our method leverages the power of Large Language Models (LLMs) like GPT (Brown et al., 2020) to generate a claim correction dataset from a claim verification dataset by corrupting 'correct' claims into 'incorrect' ones. This dataset not only allows us to learn a mapping between incorrect claims and their associated correction but also allows us to generate rich annotations via explanations for why this mapping is the correct one. We use this dataset to train a conditional generation language model to map evidence and incorrect claims to the appropriate correction, with the explanations serv
ing as a useful guide during the learning process. Finally, we introduce a claim-aware decoding procedure to guide the generation process. While this component does not require a verifier, any verifier can be easily integrated into our procedure if it is available for the domain at hand. Our system is able to achieve 94% correction accuracy on the SciFact dataset and 62.50% on the SciFact-Open dataset, this is an order of magnitude better than competing methods, with the best contemporary method achieving 0.5%, 1.50% on the datasets, respectively. More impressively, our method also matches the performance of the very Pretrained LLMs which generated the dataset. Despite using around 1250 times as many parameters as our model, Few-Shot Prompting on GPT3.5 achieves 89.5% and 60% on the datasets respectively, which is comparable performance to our method.
Our work presents an alternative route forward for claim correction efforts that does not rely on having access to a powerful verification model, and more generally shows that general LLMs can be used effectively as part of the model training pipeline to create a more compact and yet more powerful model.
## 2 Related Work
**Factual Error Correction:** This task was first tackled by Shah et al. (2020). Their strategy is to first use a pre-trained fact verification model to identify and mask out the components of a sentence that cause the given claim to be incorrect, and then use a separate model to fill in the masked sentence while remaining consistent with the provided evidence. This is taken further by Thorne and Vlachos (2021). Their approach adopts the same masking and infilling paradigm, but makes advances on the choice of how to use the fact verification model in the masker as well as in the infilling model. Most recently, Chen et al. (2022) consider fact correction as an iterative sampling problem and sampling editing actions with respect to a target density function. They also use a pre-trained verification model to guide the sampling process. These methods have all made significant advances in factual error correction, but none of them are expected to perform reasonably well if their verification model is poor. All of these methods work around the FEVER dataset, where fact verification models can achieve around 80% accuracy. This is not the case in scientific domains, with the best verifiers for some of these datasets (Wadden et al., 2022) reaching no more than 70% accuracy.
**Factual Consistency of Summarizations:** Similar problems have arisen in efforts to make the summary of a paragraph consistent with the facts of the paragraph. The relevant approaches here are the post-editing processes which can detect and correct factual errors in the summary. Two key methods in this domain are Cao et al. (2020) and Zhu et al. (2020), which try to manually introduce corruptions into the correct claims and reconstruct the correct claim using a Seq2Seq model (Sutskever et al., 2014). Our method extends this approach by discarding the need for manually defined entity swapping, back translation, or other labor-intensive methods of introducing corruptions, by using a LLM to provide a diverse set of corrupted claims. We also provide a way to generate a set of rich annotations that is fundamentally not possible via the corruption approaches (Cao et al., 2020; Zhang et al., 2022; Zhu et al., 2020).
**Prompting with LLMs:** It has been shown recently that an LLM can achieve high performance on certain Few-Shot tasks with a few examples in the context window of the input (Brown et al., 2020; Wei et al., 2022). Chain-of-Thought prompting improves on simple prompting approaches by providing examples in the prompt which not only contain question-answer pairs, but also some stated reasoning for the provided answer. We use these powerful methods with General LLMs to generate data for our more compact and task-dependent model.
## 3 Method
First, we specify how we interpret both the claim verification and error correction tasks using notation from Thorne and Vlachos (2021):
Given a claim \(c\) and evidence \(E(c)\) such that \(c\) is contradicted by the evidence i.e: \(E(c)\not\models c\), our task is to generate a corrected claim \(c^{\prime}\), such that \(c^{\prime}\) is supported by the claim : \(E(c^{\prime})=E(c)\models c^{\prime}\), and \(c^{\prime}\) makes claims about similar ideas, concepts objects, etc as \(c\).
In our work, we assume that we have access to a domain-specific claim verification dataset, as opposed to a correction dataset itself. Our system consists of three key components: i) an LLM Driven Data Generation, ii) Domain Adaptive Language Modeling, and iii) a Claim Aware Decoding Procedure.
**LLM Driven Data Generation:** First, we identify something fundamental about the nature of the claim correction problem. One direction, mapping incorrect claims to correct claims, is very difficult to do, requiring a deep understanding of the semantics of both the evidence and claim to perform. However, the reverse direction, mapping correct claims to incorrect claims, is much easier and requires only a partial understanding of the concepts and words in the correct claim, often not requiring any evidence at all. A concrete example of this is shown in Figure 2, where it is possible to generate an incorrect claim from a correct claim without seeing the evidence or understanding any of the medical concepts in the sentence. However, to recover the correct claim, one must comprehend the evidence, identify the error, and correct it. There is a similar pattern when it comes to generating explanations for why a claim is true or rewording a correct claim such that it maintains the same meaning overall. We exploit this property using a Pretrained LLM (GPT3.5): First, we take the evidence and supported claims from the verification datasets and then produce a correction dataset with annotations and explanations for why the correct claim is true. After these steps, we create an 'augmented correct claim' (an alternate correct claim with the exact same meaning) for each example. A sample from this dataset is shown in Figure 2 and more are provided in the Appendix Figure 7.
**Domain Adaptive Language Modeling:** We use Domain Adaptive Pretraining Gururangan et al. (2020) on the evidence passages to give the LLM a better understanding of the kinds of words and concepts that are unique to the specific domain we are interested in. Using this adapted LM we learn a 'claim correction model'. The claim correction model is trained to map the evidence and incorrect claim to a correct claim along with an explanation for why that is a correct claim. Note that, unlike previous methods which interpret error correction as a masking and infilling problem, we interpret it as a conditional generation task. This allows us to tackle a significantly more diverse set of incorrect claims because many incorrect claims can be corrected by swapping tokens and may even require the structure of the sentence to change considerably.
**Claim Aware Decoding Procedure:** Lu et al. (2021) introduced an effective way to perform constrained decoding for generative LLMs. During beam search decoding, instead of scoring a partial sequence as just the probability of generating that sequence of tokens, the constrained decoding method performs a lookahead search to make a greedy estimate of what the complete sequence is
Figure 1: Full Description of the idk system. During training a fact verification database is converted to a well annotated error correction database using Prompting with LLMs, this database is used to train a Seq2Seq correction LM and a Seq2Label Semantic Difference Model. During prediction the semantic difference model helps guide the generative model using claim aware decoding to generate the corrected claim
Figure 2: A sample from the generated dataset
likely to be and then incorporates the goodness of the full sequence into the score of the partial sequence. We utilize the fact that the corrected claim should not have the same meaning as the incorrect claim to guide the decoding process. Specifically, we use the same adapted LM from the previous component to learn a'semantic difference model'. The semantic difference model is a classifier that is trained to identify when two claims are semantically similar, giving a score of \(1\) when they have a different meaning and \(0\) when they are identical in meaning. We then use this as the scoring function for the lookahead estimate, incentivizing the decoding process to avoid solutions that have the same meaning as the incorrect claim.
In the following sections, we showcase the power of this method on scientific claim verification datasets.
## 4 Implementation and Evaluation Metric
To help the reproduction of our results, below we list the important details in our implementation.
**Datasets:** We use four different scientific domain claim verification datasets -- SciFact, SciFact-Open, CovidFact, and HealthVer (Wadden et al., 2020, 2020, 2021, 2020). These datasets come with evidence passages and claims, along with labels for whether the claim is supported or refuted. Samples can be found in the Appendix 7. Throughout the training process, we only use claims that are supported by these datasets. Our eventual test set is the set of claims that are refuted by this dataset. This test dataset contains only the incorrect claims and no provided labels are provided for the corrected claims.
During the training process, we use only the supported claims, ensuring that the test set claims are never seen by any stage of the language models which make up the system.
**Components:** For the LLM Driven Data Generation step we used FewShot Prompting on GPT3.5. For the Domain Adaptive Language Modeling, we trained our BART based model Lewis et al. (2019) (140m params) on the abstracts of SciFact and SciFact-Open. After these steps, we fine-tuned this model for the Claim Correction Model.
**Evaluation:** There is no automatic way to evaluate the goodness of a claim correction system. In our setting, we do not have access to ground truth labels, but even if we did there is no way to compute how good a candidate corrected claim is with reference to the gold standard correct claim. Other methods in claim correction have converged on the SARI metric (Xu et al., 2016), which was created for text summarization, as a proxy. This metric is defined as \((\mathrm{F1_{add}}+\mathrm{F1_{keep}}+\mathrm{P_{del}})/3\) where \(\mathrm{F1_{add}}\) is the n-gram F1 score for add operations, \(\mathrm{F1_{keep}}\) is the n-gram F1 score for keep operations, and \(\mathrm{P_{del}}\) is the n-gram precision score for delete operations. This is entirely based on using the number of token-based operations needed, i.e. edit distance between two claims, to give a measure of the semantic similarity between the two claims. We argue that this is a misleading way to measure the success of a correction system because this method will always lack the semantic understanding to truly reflect the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Unified & SciFact & SciFact-Open & CovidFact & HealthVer \\ \hline VENCE & 0.5 & 1.0 & 3.0 & 3.5 \\ GPT3.5 ZS & 84.5 & 50.5 & 66.5 & 55.0 \\ GPT3.5 FS & 89.5 & 60.0 & **70.5** & **68.5** \\ Our Best & **94.0** & **62.5** & 68.5 & 54.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Correction Accuracy % results on the Unified training task. Results show significant improvement over baseline methods, and competitive performance with GPT3.5-based prompting methods
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Split & SciFact & SciFact-Open & CovidFact & HealthVer \\ \hline VENCE & 0.5 & 1.0 & 3.0 & 3.5 \\ GPT3.5 ZS & 84.5 & 50.5 & 66.5 & 55.0 \\ GPT3.5 FS & 89.5 & 60.0 & **70.5** & **68.5** \\ Our Best & **92.5** & **67.5** & 46.5 & 41.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Correction Accuracy % results on Split training task. Results show clear decline in accuracy on out-of-distribution datasets (CovidFact and HealthVer). Results for baseline methods are same as Unified case in Table 1 as there is no training involved for baselines
correctness of a candidate. See an example in Figure 3, where we clearly show that sentences that have an extremely high SARI score with respect to the label can still be incorrect answers. Similarly, a candidate with a lower SARI score can be a more appropriate answer. Even if we had access to a powerful verification model and use this as an automated method, there is a concern of correlated errors between the correction and verification methods, leaving the error estimates dubious. We argue that with the current state of the field, we must use human evaluation to get an accurate reflection of the performance of a system. We use correction accuracy as our metric. This is simply defined as the number of examples in the test set that are marked to be 'correct' answers by human annotators who have been given the error correction description as presented in Section 3.
## 5 Experiments
**Methods:** We compare our method against the following approaches - i) VENCE [3], ii) Zero-Shot Prompting, and iii) Few-Shot Prompting on GPT3.5 (2 examples). For all approaches the metric considered is the Correction Accuracy through human evaluation on a sample of 200 random points from the datasets. We employ a single annotator for this evaluation. Note that automatic metrics (like SARI) for these results are impossible to compute because there is no ground truth provided. All of our software implementations and trained models will be publicly released after the review period.
**Unified Setting:** In our first experiment, we train our model on data from all four datasets and show results on the test sets of all of these datasets.
**Domain Shift Setting:** In this setting we separate our datasets into two distinct 'domains'. SciFact and SciFact-Open are biomedical datasets, which use abstracts as their evidence passages. Covid-Fact and HealthVer are datasets taken from news reporting on COVID-19, with selected sentences as evidence passages. We split these into the 'biomed' and 'covid news' domains respectively, and observe the performance when the training data is taken from the biomed domain and the prediction task is on the covid news domains. This tests the ability of the methods to generalize to shifting domains.
Given a pre-trained verifier, VENCE is a zero-shot system, similarly, Zero-Shot Prompting also does not require any retraining, and hence we repeat the baseline values from the Unified setting when reporting results for the Domain Shift setting. We did not observe a significant difference in the performance of FewShot prompting when including CovidFact and HealthVer examples (a difference of less than 5% on 75 examples), constrained by the expensiveness of human evaluation, we choose to report the same results for Few-Shot prompting as well.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Variations & SciFact & SciFact-Open & CovidFact & HealthVer \\ \hline All Components & **94.0** & 62.5 & **68.5** & 54.0 \\ No explanations & 92.0 & **63.5** & 68.0 & 54.5 \\ Beam Decoding & 90.0 & 59.0 & 65.0 & 50.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation over system components show that score-based Claim Aware Decoding is consistently helpful while the benefit of explanations varies with datasets.
Figure 3: SARI metric is higher for an incorrect claim than a correct claim. It is unable to identify semantic similarity
## 6 Results
The results in Table 1 show clearly that our method is able to outperform VENCE in the the unified task setting by a considerable margin, as well as match or exceed the performance of GPT3.5 in specific datasets. The highest performance is in SciFact, this is because the refuted claims in the SciFact dataset are mostly constructed by the alteration of a few semantically important words (e.g. positively correlated with negatively correlated), which matches the nature of our dataset very closely. The accuracy is not as high for any method on the more difficult datasets -- CovidFact and HealthVer. These datasets have refuted claims that are significantly more free-form, often in the form of a newspaper headline with a very different language from that of the evidence sentences. These datasets additionally have refuted claims which are extremely difficult to convert to a correct claim by altering the incorrect claim, often with a complete restatement of the evidence being more appropriate, as shown in Figure 4 and the Appendix 8. The domain shift dataset setting in Table 2 shows that the supervised training of the Covid datasets makes a considerable difference. The performance falls for the out-of-distribution covid domains while it remains high for the biomed domains. In conclusion, the method is able to handle different domains but suffers a significant penalty.
## 7 Ablation and Analysis
We next vary components of the pipeline to investigate their individual contribution to the success of the system. Specifically, we remove the explanations and the claim-aware decoding procedure from the pipeline. Table 3 shows mixed results. The setting 'all components' the most consistently the best option, however, there are datasets where not having an explanation gives marginally better performance. The change in the decoding strategy, however, seems to have a clear negative effect on the system. The setting with beam search decoding is worse on every dataset than the claim-aware decoding method. An inspection of the decoding output shows examples where the claim-aware decoding method fails because the semantic difference model is not robust to small meaningless perturbations. As seen in Figure 5, the prediction sentence is semantically the same as the incorrect claim, however, the semantic difference model outputs a high score, implying that the two sentences have different meanings.
The explanations show an interesting trend, with the coherence and faithfulness of the explanation being empirically quite correlated with the correctness of the predicted claim. This can be seen with an example in Figure 6, where an explanation is clearly incoherent, preempting an incorrect prediction. More examples of the explanations are provided in the Appendix Figure 9
## 8 Limitations
Our method is verifier free, however, this means it is unable to choose to abstain when the input claim is actually correct. Our observations show that when we try using our method on correct claims the model sometimes predicts claims with no alterations, but can also occasionally change words which may or may not change the meaning of the sentence. Additionally, our method relies on data generation from LLMs whose training sets are undisclosed to the public (Brown et al., 2020). This can lead to questions of data contamination. However, it is worth noting that since our compar
Figure 4: Complex example where it is unclear if there is a corrected claim that is faithful to the incorrect claim
Figure 5: Score Guided Claim Aware Decoding making trivial restatements of the incorrect claim instead of a meaningful change
isons are against GPT3.5, any data contamination concerns affect the baseline far more than it affects our method.
## 9 Future Work
There are several interesting directions for future work. The first would be to see if there are other NLP tasks where the newly available method of prompting-based Data Generation can be used to create synthetic datasets. There are potentially several problems where the key bottleneck has been the prohibitive cost of human annotation and data creation -- the success of this method on Factual Error Correction suggests there may be other subtasks where this method would perform well. Within error correction, there is an interesting question on whether the same system could be modified to handle both correct and incorrect claims, implicitly performing verification by choosing not to edit already true claims. Specifically, our method could be quickly improved by employing prompt engineering to optimize the diversity and quality of the datasets generated, as well as using more powerful Semantic Difference models or score signals during the claim-aware decoding procedure.
## 10 Conclusion
In this work, we put forward a new way to utilize existing Claim Verification datasets to effectively perform Claim Correction. We show that our method is not only more powerful than existing methods on Scientific Claim Correction but also competitive with Prompting on GPT3 -- one of the largest Language Models accessible today.
Factual Claim Correction and Scientific Claim Correction in specific are vital, yet nascent, subfields of NLP. It is our hope that this work helps expand the range of possible solutions by offering an effective way to perform correction without access to a powerful verification model.
|
2310.01802 | Unifying Safety Approaches for Stochastic Systems: From Barrier
Functions to Uncertain Abstractions via Dynamic Programming | Providing safety guarantees for stochastic dynamical systems has become a
central problem in many fields, including control theory, machine learning, and
robotics. Existing methods either employ Stochastic Barrier Functions (SBFs) or
rely on numerical approaches based on abstractions. While SBFs are analogous to
Lyapunov functions to prove (probabilistic) set invariance, abstraction-based
approaches approximate the stochastic system into a finite model for the
computation of safety probability bounds. In this paper, we offer a new
perspective on these seemingly different methods. We show that both these
approaches arise as approximations of a stochastic dynamic programming problem.
Such a new and unifying perspective allows us to formally show the correctness
of both approaches, characterize their convergence and optimality properties,
and examine advantages and disadvantages of each. Our analysis reveals that
abstraction-based methods can provide more accurate certificates of safety, but
generally at a higher computational cost. We conclude the article with a
discussion that highlights possible future directions. | Luca Laurenti, Morteza Lahijanian | 2023-10-03T05:19:55Z | http://arxiv.org/abs/2310.01802v2 | Unifying Safety Approaches for Stochastic Systems: From Barrier Functions to Uncertain Abstractions via Dynamic Programming
###### Abstract
Providing safety guarantees for stochastic dynamical systems has become a central problem in many fields, including control theory, machine learning, and robotics. Existing methods either employ Stochastic Barrier Functions (SBFs) or rely on numerical approaches based on abstractions. While SBFs are analogous to Lyapunov functions to prove (probabilistic) set invariance, abstraction-based approaches approximate the stochastic system into a finite model for the computation of safety probability bounds. In this paper, we offer a new perspective on these seemingly different methods. We show that both these approaches arise as approximations of a stochastic dynamic programming problem. Such a new and unifying perspective allows us to formally show the correctness of both approaches, characterise their convergence and optimality properties, and examine advantages and disadvantages of each. Our analysis reveals that abstraction-based methods can provide more accurate certificates of safety, but generally at a higher computational cost. We conclude the article with a discussion that highlights possible future directions.
Finite Abstraction, Barrier Function, Probabilistic Safety, Robustness, Stochastic Systems
## I Introduction
In the age of autonomous systems, control systems have become ubiquitous, playing a pivotal role in _safety-critical_ applications. Examples span from autonomous vehicles [1] to medical robots [2], where the consequences of bad decisions are not only costly but can also prove fatal. A common characteristic of these systems is the inherent complexity and nonlinearity of the dynamics that are subject to uncertainty due to physics (e.g., sensor or actuation noise) or algorithms (e.g., black-box controllers or perception). Consequently, the question, _how to ensure the safety of stochastic control systems?_ has emerged as a central research topic in various disciplines, including control theory, machine learning, formal methods, and robotics [3, 4, 5].
Safety for a stochastic system generalizes the standard notion of stochastic stability [5]. It is defined as the probability that the system does not exhibit unsafe behavior, i.e., that with high probability the first exit time from a safe set is greater than a given threshold or infinite. As exact computation of the safety probability is generally infeasible even in the case of stochastic systems with linear dynamics, existing formal approaches rely on under-approximations, with the two most-commonly employed approaches being _stochastic barrier functions_ (SBFs) [6, 4] and abstraction-based methods [7, 8]. Similar to Lyapunov functions for stability, stochastic barrier functions are energy-like functions that allow one to lower bound the probability of a stochastic system remaining within a given set. On the other hand, abstraction-based methods abstract the original system into a finite, discrete-space stochastic process, generally a variant of a Markov chain, for which efficient algorithms that compute safety probability exist [9, 10, 11, 12]. A fundamental component of the abstraction process is the computation of the abstraction error, which can be used to formally bound the safety probability for the original system.
This paper presents a unified treatment of safety for discrete-time continuous-space stochastic systems under the framework of _dynamic programming_ (DP), bridging the gap between SBFs and abstraction-based methods. While traditionally seen as distinct approaches, with SBFs typically derived from supermartingale conditions [5], this work seeks to harmonize these concepts to provide a more comprehensive perspective on safety verification of stochastic systems. We first show that, for this class of systems, the safety probability can be computed via DP and that there always exists a deterministic Markov policy (strategy) that optimizes this probability. Then, we show that both SBFs and abstraction-based methods arise as an approximation of this DP. In particular, for SBFs we show that existing bounds can be obtained by over-approximating the indicator function of the unsafe set with a barrier function, i.e., a non-negative function that is greater than one in the unsafe set. In contrast, existing approaches that perform abstractions to (uncertain) Markov process arise as piecewise-constant over- and under-approximations of the defined DP.
Viewing both SBF and abstraction-based methods as approximations for a DP problem has several advantages: (i) it gives a unified treatment of safety for stochastic systems, (ii) it allows us to establish formal bounds and guarantees on the precision and correctness of these approaches, and (iii) it enables us to fairly compare these methods and highlight their strengths and weaknesses. Specifically, we show that abstraction-based methods often return tighter bounds on the safety probability compared to SBFs, and it generally comes at the cost of an increased computation effort.
The paper is organized as follows. In Section II, we introduce the class of systems we consider and formally define probabilistic safety. In Section III, we define Stochastic Barrier Functions (SBFs). We present abstraction-based methods in Section IV. Finally, in Section V, we analyze strengths and weaknesses of each method. We conclude the paper with Section VI, where we highlight important open research questions. We provide all the proofs in Section VII.
## 2 Probabilistic Safety for Discrete-Time Stochastic Processes
Consider a discrete-time controlled stochastic system described by the following stochastic difference equation:
\[\mathbf{x}[k+1]=F(\mathbf{x}[k],\mathbf{u}[k],\mathbf{v}[k]) \tag{1}\]
where \(\mathbf{x}[k]\) is the state of the system at time \(k\) taking values in \(X\subseteq\mathbb{R}^{n_{s}}\), \(\mathbf{u}[k]\in U\subset\mathbb{R}^{n_{u}}\) denotes the control or action at time \(k\) for \(U\) being a compact set, and \(\mathbf{v}[k]\) is an independent random variable distributed according to a time-invariant distribution \(p(v)\) over an uncertainty space \(V\subseteq\mathbb{R}^{n_{v}}\). Sets \(X\), \(U\), and \(V\) are all assumed to be appropriately measurable sets. \(F:X\times U\times V\to X\) is a possibly nonlinear function representing the one-step dynamics. System (1) represents a general model of nonlinear controlled stochastic system. For instance, this model includes stochastic difference equations with additive or multiplicative noise as well as stochastic dynamical systems with neural networks in the loop [3, 13].
**Definition 1** (Policy).: _A feedback policy (or strategy) \(\pi=(\pi_{0},\pi_{1},...)\) for System (1) is a sequence of possibly random and history-dependent universally measurable functions \(\pi_{k}:X^{k+1}\rightarrow\mathcal{P}(U)\)1, where \(\mathcal{P}(U)\) is the set of probability measures over \(U\). Let \(p_{u_{k}}\) denote a universally measurable kernel such that \(p_{u_{k}}(\pi_{k}(x_{0},...,x_{k})\in U)=1\). Plicy \(\pi\) is called deterministic if for each \(k\) and \((x_{0},...x_{k})\), \(p_{u_{k}}\) assigns mass one to some \(u\in U\). If for each \(k\), \(\pi_{k}\) is only parametrized by \(x_{k},\,\pi\) is a Markov policy. A policy is stationary if, for every \(k_{1},k_{2}\in\mathbb{N}\), it holds that \(\pi_{k_{1}}=\pi_{k_{2}}\), in which case, with an abuse of notation, we use \(\pi\) to denote any of these functions. The set of all policies is denoted by \(\Pi\), while the set of deterministic Markov policies by \(\Pi^{M,D}\)._
Footnote 1: Note that we assume that \(\pi_{k}\) is independent of the value of the previous actions, but only depends on previous states. This is without lost of generality because probabilistic safety, as defined in Definition 2, only depend on the state values.
For a given initial condition \(x_{0}\), a time horizon \(H\in\mathbb{N}\), and a policy \(\pi=(\pi_{0},...,\pi_{H-1})\), \(\mathbf{x}[k]\) is a stochastic process with a well defined probability measure \(P\) generated by the noise distribution \(p(v)\)[14, Proposition 7.45] such that for measurable sets \(X_{0},X_{k+1}\subseteq X\), it holds that
\[P(\mathbf{x}[0]\in X_{0}\mid\mathbf{u}[0]=a)=\mathbb{1}\,(x_{0},X_{0}),\] \[P(\mathbf{x}[k+1]\in X_{k+1}\mid\mathbf{x}[k]=x,\mathbf{u}_{k} =a)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad:=T(X_{k+1} \mid x_{k},a)dx,\]
where
\[\mathbb{1}\,(x_{k},X_{k})=\begin{cases}1&\text{if }x_{k}\in X_{k}\\ 0&\text{otherwise}\end{cases}\]
is the indicator function. We refer to \(T(X_{k+1}\mid x_{k},a)\) as the _stochastic kernel_ of System (1), and we assume that for each measurable \(X_{j}\subseteq X\), \(x\in X,a\in U,\,T(X_{j}\mid x,a)\) is Lipschitz continuous in both \(x\) and \(a\).
### Probabilistic Safety
For a given policy \(\pi\) and a time horizon \(H\in\mathbb{N}\), probabilistic safety is defined as the probability that \(\mathbf{x}[k]\) stays within a measurable safe set \(X_{\mathrm{s}}\subseteq X\) for the next \(H\) time steps. That is, the first exit time from \(X_{s}\) is greater than \(H\).
**Definition 2** (Probabilistic Safety).: _Given a policy \(\pi,\) safe set \(X_{\mathrm{s}}\subset X\), time horizon \(H\in\mathbb{N}\), and initial set of states \(X_{0}\subseteq X_{\mathrm{s}}\), probabilistic safety is defined as_
\[P_{\mathrm{s}}(X_{\mathrm{s}},X_{0},H\mid\pi)=\\ \inf_{x_{0}\in X_{0}}P(\forall k\in[0,H],\mathbf{x}[k]\in X_{ \mathrm{s}}\mid\mathbf{x}[0]=x_{0},\pi).\]
Probabilistic safety and its equivalent dual probabilistic reachability2 are widely used to certify the safety of a dynamical system [15] and represent a generalization of the notion of invariance that is commonly employed for analysis of deterministic systems [16].
Footnote 2: Given a finite-time horizon \(H\in\mathbb{N}\), policy \(\pi\), initial point \(x_{0}\), and a target set \(X_{\mathrm{u}}\), probabilistic reachability is defined as \(P_{\mathrm{reach}}(X_{\mathrm{u}},x_{0},H\mid\pi)=P(\exists k\in[0,H],\mathbf{ x}[k]\in X_{\mathrm{u}}\mid x[0]=x_{0},\pi)\). Consequently, we have that \(P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi)=1-P_{\mathrm{reach}}(X\setminus X _{\mathrm{s}},x_{0},H\mid\pi)\).
In the rest of this section, we show that, to compute a policy that maximizes probabilistic safety, it is sufficient to restrict to deterministic Markov policies. To that end, we first show that \(P_{\mathrm{s}}\) can be characterized as the solution of a DP problem. In particular, for \(k\in\{H,H-1,\ldots,0\}\) and \(X_{\mathrm{u}}:=X\setminus X_{\mathrm{s}}\), consider value functions \(V_{k}^{*}:X\rightarrow[0,1]\) defined recursively (backwardly in time) as:
\[V_{H}^{*}(x) =\mathbb{1}\,(x,X_{\mathrm{u}}), \tag{2}\] \[V_{k}^{*}(x) =\inf_{a\in U}\Big{(}\mathbb{1}\,(x,X_{\mathrm{u}})+\mathbb{1} \,(x,X_{\mathrm{s}})\,\mathbb{E}_{x^{\prime}\sim T(\cdot\mid x,a)}[V_{k+1}^{*}( x^{\prime})]\Big{)}, \tag{3}\]
where notation \(x^{\prime}\sim T(\cdot\mid x,a)\) means that \(x^{\prime}\) is distributed according to \(T(\cdot\mid x,a)\). Intuitively, at each time step \(k\), \(V_{k}^{*}\) selects the (deterministic) action that minimizes the probability of reaching a state from which the system may reach \(X_{\mathrm{u}}\) in the next \(H-k\) time steps. Consequently, by propagating \(V_{k}^{*}\) backward over time, we compute the probability of reaching \(X_{\mathrm{u}}\) in the future. The following theorem guarantees that
\(\sup_{\pi\in\Pi}P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi)\) is equal to \(1-V_{0}^{*}(x_{0})\). Note that while \(\Pi\) also includes history-dependent random policies, in obtaining \(V_{0}^{*}\) via (2)-(3), we only consider deterministic Markov policies.
**Theorem 1**.: _For an initial state \(x_{0}\in X_{\mathrm{s}}\), it holds that_
\[\sup_{\pi\in\Pi}P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi)=1-V_{0}^{*}(x_{ 0}).\]
A straightforward consequence of Theorem 1 is Corollary 1, which guarantees that deterministic Markov deterministic policies are optimal.
**Corollary 1**.: _It holds that_
\[\sup_{\pi\in\Pi}P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi)=\sup_{\pi\in\Pi^ {M,D}}P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi).\]
_Furthermore, for every \(\pi\in\Pi^{M,D}\), it holds that_
\[P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi)=1-V_{0}^{\pi}(x_{0}),\]
_where \(V_{0}^{\pi}(x_{0})\) is defined recursively as_
\[V_{k}^{\pi}(x) =\mathbb{1}(x,X_{\mathrm{u}}), \tag{4}\] \[V_{k}^{\pi}(x) =\mathbb{1}(x,X_{\mathrm{u}})+\mathbb{1}(x,X_{\mathrm{s}})\, \mathbb{E}_{x^{\prime}\sim T(\cdot\mid x,\pi_{k}(x))}[V_{k+1}^{\pi}(x^{\prime} )]. \tag{5}\]
Theorem 1 and Corollary 1 guarantee that in order to synthesize optimal policies, it is enough to consider deterministic Markov policies, and these policies can be computed via DP.
We should stress that without the assumptions made in this paper (compactness of \(U\), continuity of \(T\), and measurability of the various sets), \(V_{k}^{*}\) may not be measurable and the infimum in \(U\) in (3) may not be attained. In that case, the integrals in the expectations in (3)-(5) have to be intended as outer integrals [14]. However, under the assumptions in this paper, the expectations in the above DP are well-defined and a universally measurable deterministic Markov optimal policy exists [14], i.e., the \(\inf\) is attainable for every point in the state space by a universally measurable function. However, even if \(V_{0}^{*}\) and \(V_{0}^{\pi}\) are well-defined, computation of (3) is infeasible in practice due to the need to solve uncountably many optimization problems. Thus, computing \(P_{\mathrm{s}}\) requires approximations.
In what follows, we consider the two dominant approaches in the literature that (with certified error bounds) compute probabilistic safety and synthesize policies for System (1), namely, _stochastic barrier functions_ (SBFs) and _abstraction-based methods_. We show that both approaches arise as over-approximations of the value functions \(V_{k}^{\pi}\). Such a unified framework provides a basis for a fair comparison of these approaches, which consequently reveals their advantages and disadvantages.
## 3 Stochastic Barrier Functions
We start with the setting, where a deterministic Markov policy \(\pi\) is given, and we aim to compute a (non-trivial) lower bound of \(P_{\mathrm{s}}(X_{\mathrm{s}},X_{0},H\mid\pi)\). We first show how one can use SBFs to compute an upper bound on \(V_{0}^{\pi}\) (hence a lower bound on \(P_{\mathrm{s}}\)) without the need to directly evolve the dynamics of System (1). Then, we focus on the key challenge with SBFs: how to find an SBF that allows one to bound \(P_{\mathrm{s}}\) without leading to overly conservative results. The control synthesis case is considered in Section 3.
An SBF [6, 17] is simply a continuous almost everywhere function \(B:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}_{\geq 0}\) that over-approximates \(V_{H}^{\pi}(x)\). In particular, we say function \(B\) is an SBF iff it over-approximates the indicator function for the unsafe set, i.e.,
\[\forall x\in X,\,B(x)\geq 0\quad\text{and}\quad\forall x\in X_{\mathrm{u}},\,B(x )\geq 1. \tag{6}\]
The intuition is that when \(B\) is propagated backwards over time in a DP fashion, it produces an over-approximation for \(V_{k}^{\pi}(x)\). That is, for the value functions \(\bar{V}_{k}^{\pi}:\mathbb{R}^{n_{x}}\rightarrow[0,1]\) with \(k\in\{0,\ldots,H\}\) defined recursively as
\[\bar{V}_{H}^{\pi}(x) =B(x), \tag{7}\] \[\bar{V}_{k}^{\pi}(x) =\mathbb{1}(x,X_{\mathrm{u}})+\mathbb{1}(x,X_{\mathrm{s}})\, \mathbb{E}_{x^{\prime}\sim T(\cdot\mid x,\pi_{k}(x))}[\bar{V}_{k+1}^{\pi}(x^{ \prime})], \tag{8}\]
the following Lemma holds.
**Lemma 1**.: _For every \(k\in\{0,\ldots,H\}\) and every \(x\in X\), it holds that \(\bar{V}_{k}^{\pi}(x)\geq V_{k}^{\pi}(x)\)._
Now, define constant \(\beta\geq 0\) as
\[\beta\geq\sup_{x\in X_{\mathrm{s}},k\in\{0,\ldots,H-1\}}\Big{(}\mathbb{E}_{x ^{\prime}\sim T(\cdot\mid x,\pi_{k}(x))}[B(x^{\prime})]-B(x)\Big{)}. \tag{9}\]
That is, \(\beta\) bounds how much the probability of reaching \(X_{\mathrm{u}}\) can grow in a single time step. Then, by rearranging terms in (8), we obtain
\[V_{k}^{\pi}(x)\;\leq\;\bar{V}_{k}^{\pi}(x)\;\leq\;(H-k)\beta+B(x). \tag{10}\]
This leads to Theorem 2 below.
**Theorem 2**.: _Let \(B:X\rightarrow\mathbb{R}_{\geq 0}\) be a function that satisfies (6). Call \(\eta=\sup_{x\in X_{0}}B(x)\). Then, it holds that_
\[\inf_{x_{0}\in X_{0}}P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi)\geq 1-( \eta+\beta H).\]
Theorem 2 guarantees that once constants \(\eta\) and \(\beta\) are computed, one can lower-bound \(P_{\mathrm{s}}(X_{\mathrm{s}},x_{0},H\mid\pi)\) without the need to propagate the system dynamics over time.
**Remark 2**.: _From (7) and (8), we can observe that the closer \(B(x)\) is to the indicator function, the closer \(\bar{V}_{k}^{\pi}\) gets to \(V_{k}^{\pi}\), and thus the tighter the bound computation becomes for \(P_{\mathrm{s}}\). This may lead the reader to wonder why we do not simply assume \(B(x)=\mathbb{1}(x,X_{\mathrm{u}})\). To clarify this, note that, in the derivation of Theorem 2, another source of conservatism comes from the choice of \(\beta\). In fact, \(\beta\) is the supremum expected change over all \(x\in X_{\mathrm{s}}\). Hence, setting \(B(x)=\mathbb{1}(x,X_{\mathrm{u}})\) may lead to overly conservative results, e.g., cases where there are only few regions from which the probability that the system transition to the unsafe set is not negligible. This is illustrated in Figure 1, where an indicator barrier function is compared against a SBF synthesized using Sum-of-Square (SoS) optimization as proposed in [17], showing the impact of the choice of \(B\) on \(\beta\) and \(\eta\)._
Following Remark 2, it is clear that the key challenge in the SBF approach is in finding a valid \(B\) that does not lead
to excessively conservative results. Let \(\mathcal{B}\subset\{f:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}_{\geq 0}\}\) be a class of non-negative functions, e.g., exponential or Sum or Square (SoS). Then, the problem of searching for a valid barrier can be formulated as the solution of the following optimization problem:
\[\max_{B\in\mathcal{B}}\;1-(\eta+\beta H)\] (11) subject to: \[\inf_{x\in X_{a}}B(x)\geq 1\] \[\eta=\sup_{x\in X_{b}}B(x)\] \[\beta=\sup_{x\in X_{s}}\big{(}\mathbb{E}_{x^{\prime}\sim T(x^{ \prime}|x,\pi(x))}[B(x^{\prime})]-B(x)\big{)}.\]
In the case, where \(\mathcal{B}\) is the class of exponential or SoS functions and dynamics function \(F\) is polynomial in \(\mathbf{x}\) and linear in \(\mathbf{v}\), the above optimization problem can be reformulated as a convex optimization problem [17, 18]. However, in the more general setting, the optimization problem in (11) is non-convex, and relaxations, which generally lead to partitioning of \(X_{s}\), or ad-hoc methods are required to solve it efficiently [19, 13, 20].
**Remark 3**.: _Note that Theorem 2 only requires \(B\) to be almost surely continuous. That is, \(B\) can be discontinuous in a set of (Lebesque) measure \(0\). Consequently, \(B\) can be taken to be a piecewise continuous functions. In practice, this can often be advantageous especially in the cases where \(F\) is nonlinear [21]._
### Control Synthesis with SBFs
We now study how to find a policy \(\pi\) that maximizes \(P_{\mathrm{s}}\) using SBFs. In particular, consider a stationary policy (feedback controller) \(\pi(\cdot\mid\theta):X\to U\) parameterized in some parameters \(\theta\in\mathbb{R}^{n_{\theta}}\). Then, control synthesis with SBFs can be performed by modifying the optimization problem in (11) to the following:
\[\max_{B\in\mathcal{B},\theta\in\mathbb{R}^{n_{\theta}}}1-(\eta+ \beta H)\] (12) subject to: \[\inf_{x\in X_{a}}B(x)\geq 1\] \[\eta=\sup_{x\in X_{0}}B(x)\] \[\beta=\sup_{x\in X_{s}}\big{(}\mathbb{E}_{x^{\prime}\sim T(x^{ \prime}|x,\pi(x|\theta))}[B(x^{\prime})]-B(x)\big{)}.\]
This optimization problem aims to simultaneously synthesize a barrier function \(B\) and a stationary policy (feedback controller) \(\pi\). Unfortunately, because the expectation in the \(\beta\) term depends on both \(\pi\) and \(B\), the resulting optimization problem is generally non-convex [22, 23]. To address this problem, recent approaches employ iterative methods [6, 8, 13]. They generally proceed by first finding a \(B\) for a fixed \(\pi\) and then updating \(\pi\) to maximize the lower bound on \(P_{\mathrm{s}}\), i.e., \(1-(\eta+\beta H)\), for the fixed \(B\). By repeating this process, the lower bound on \(P_{\mathrm{s}}\) is shown to improve. Additionally, approaches employing machine learning have been recently developed [24].
As discussed, there are two main sources of conservatism in bounding \(P_{\mathrm{s}}\) using SBFs: (i) the choice of the barrier, and (ii) the \(\beta\) term in (9), which is obtained as a uniform bound over the safe set. Both of these sources of errors can be mitigated by abstraction-based approaches, usually at the price of an increased computational effort.
## IV Discrete Abstraction
Another class of well-established approaches to computing (a lower bound) for \(P_{\mathrm{s}}\) and a policy \(\pi\) that maximizes \(P_{\mathrm{s}}\) is based on (discrete) abstraction. Intuitively, these approaches aim to numerically solve the continuous DP in (4)-(5) by discretizing \(X\), while accounting for the error induced by discretization. We again start with the case where a deterministic Markov policy \(\pi\) is given, and consider the control synthesis case in Section IV-B.
Abstraction-based methods first partition the safe set \(X_{\mathrm{s}}\) into \(n_{p}\) sets \(X_{1},\ldots,X_{n_{p}}\) and treat \(X_{\mathrm{u}}\) as another discrete region (i.e., \(X_{n_{p}+1}=X_{\mathrm{u}}\)), resulting in a total of \(n_{p}+1\) regions. For \(k\in\{0,...,H-1\}\) and a given policy \(\pi\), one can then define piecewise-constant functions
\[\gamma_{k}^{\pi}(x)=\begin{cases}\gamma_{k,1}^{\pi}&\text{if }x\in X_{1}\\ \vdots&\vdots\\ \gamma_{k,n_{p}}^{\pi}&\text{if }x\in X_{n_{p}}\\ \gamma_{k,n_{p}+1}^{\pi}&\text{otherwise}\end{cases}\]
recursively, for \(i\in\{1,\ldots,n_{p}+1\}\), as3:
Footnote 3: Note that under the assumption that \(\pi\) and \(T\) are locally continuous in \(X_{i}\), we can replace the supremum with the maximum in the DP in (13)-(14). In fact, a continuous function on a compact set attains its maximum and minimum on this set.
\[\gamma_{H}^{\pi}(x)=\gamma_{H,i}^{\pi}:=\max_{x\in X_{i}}\mathbb{1 }\left(x,X_{\text{u}}\right) \tag{13}\] \[\gamma_{k}^{\pi}(x)=\gamma_{k,i}^{\pi}:=\max_{x\in X_{i}}\Big{(} \mathbb{1}\left(x,X_{\text{u}}\right)+\] \[\mathbb{1}\left(x,X_{s}\right)\sum_{j=1}^{n_{p}+1}\gamma_{k+1,j}^ {\pi}T(X_{j}\mid x,\pi_{k}(x))\Big{)}. \tag{14}\]
Because of the maximum operator in (13)-(14), \(\gamma_{k}^{\pi}(x)\) overapproximates \(V_{k}^{\pi}(x)\), which is the probability of reaching the unsafe set starting from \(x\) in the next \(H-k\) time steps. This leads to the following theorem.
**Theorem 3**.: _Let \(\gamma_{k}^{\pi}\) be defined as in (13)-(14). Then, it holds that_
\[P_{\text{s}}(X_{\text{s}},x_{0},H\mid\pi)\geq 1-\gamma_{0}^{\pi}(x_{0}).\]
Proof.: Because of Theorem 1 it is enough to show that for each \(k\in\{0,...,H\}\) and \(x\in X\), it holds that \(V_{k}(x)\leq\gamma_{k}^{\pi}(x)\). The case \(x\in X_{i}\) for \(X_{i}\cap X_{u}\neq\emptyset\) is trivially verified. Consequently, in what follows we assume \(x\in X_{i}\subseteq X_{\text{s}}\). The proof is by induction. Base case is \(k=H-1\), in this case we have
\[V_{H-1}^{\pi}(x) =T(X_{\text{u}}\mid x,\pi_{H-1}(x))\] \[\leq\max_{x\in X_{i}}T(X_{\text{u}}\mid x,\pi_{H-1}(x))\] \[\leq\max_{x\in X_{i}}\sum_{j=1}^{n_{p}+1}\gamma_{H,j}^{\pi}T(X_{j }\mid x,\pi_{H-1}(x))\] \[=\gamma_{H-1}^{\pi}(x)\]
For the induction case, assume \(x\in X_{\text{s}}\). Then, under the assumption that \(V_{k+1}^{\pi}(x)\leq\gamma_{k+1}^{\pi}(x)\) (induction hypothesis), it follows that:
\[V_{k}^{\pi}(x) =\mathbb{E}_{x^{\prime}\sim T(\cdot\mid x,\pi_{k}(x))}[V_{k+1}^{ \pi}(x^{\prime})]\] \[\leq\mathbb{E}_{x^{\prime}\sim T(\cdot\mid x,\pi_{k}(x))}[\gamma_ {k+1}^{\pi}(x^{\prime})]\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad
and upper bound of the transition probability between each pair of states such that
\[\hat{P}(q_{i},q_{j}) =\begin{cases}\min_{x\in X_{i}}T(X_{j}\mid x,\pi_{k}(x))&\text{if }i \neq n_{p}+1\\ 1&\text{if }i,j=n_{p}+1\\ 0&\text{otherwise,}\\ \end{cases}\] \[\hat{P}(q_{i},q_{j}) =\begin{cases}\max_{x\in X_{i}}T(X_{j}\mid x,\pi_{k}(x))&\text{if }i \neq n_{p}+1\\ 1&\text{if }i,j=n_{p}+1\\ 0&\text{otherwise.}\end{cases}\]
Consequently, it follows that, in solving the LP in Proposition 1, we select the more conservative feasible distribution4 w.r.t. probabilistic safety within the set of feasible distributions.
Footnote 4: A feasible distribution \(t\) is a distribution that satisfies constraints of Proposition 1.
**Remark 5**.: _The optimization problem in Proposition 1 can be solved particularly efficiently due to the specific structure of the linear program. In particular, as shown in [26], one can simply order states based on the value of \(\gamma_{k+1}^{\pi}\) and then assign upper or lower bounds based on the ordering and on the fact that \(\sum_{j=1}^{n_{p}+1}t_{j}=1\). However, note that to solve the DP in (13)-(14), the resulting linear program problem needs to be solved \(H\) times for each of the \(n_{p}+1\) states in \(Q\)._
**Remark 6**.: _Abstraction based methods are numerical methods. As a consequence, their precision is dependent on the discretization of the state space. An example is illustrated in Figure 2, where we show how for a finer partition, the precision of approach increases (a proof of convergence to \(P_{s}\) is given in Section 4) and the resulting bound is greater than \(0.8375\), the value obtained in Figure 1 for stochastic barrier functions with SoS optimization for the same setting. How to optimally discretizing the safe set, while limiting the explosion of the partition size, is still an active area of research, e.g., [3, 27]._
**Remark 7**.: _Note that an alternative approach for approximately solving (14) would be to associate a representative point \(x_{i}\in X_{i}\) to each partition and then consider the following approximation_
\[\max_{x\in X_{i}}\sum_{j=1}^{n_{p}+1}\gamma_{k+1,j}^{\pi}T(X_{j} \mid x,\pi_{k}(x))\approx\\ \sum_{j=1}^{n_{p}+1}\gamma_{k+1,j}^{\pi}T(X_{j}\mid x_{i},\pi_{k} (x_{i})). \tag{16}\]
_The resulting abstraction leads to an MC with the same state space \(Q\), and the transition probabilities between each pair of states are computed using representative points in the starting region as shown in (16). Similar numerical approaches have been widely studied in the literature, including also for continuous-time systems [28], and approaches to quantify the resulting error have been developed [29, 30]. However, similar to SBF (see Theorem 2), the resulting error grows linearly with time and is generally more conservative compared to abstracting to an IMC (or IMDP)._
### Control Synthesis with IMDPs
For control policy computation, the set of controls or actions \(U\) should be included in the abstraction model. Hence, the abstraction becomes an Interval Markov Decision Process (IMDP) \(\mathcal{I}=(Q,U,\hat{P},\hat{P})\), where the transition probability bounds \(\hat{P},\hat{P}:Q\times U\times Q\to[0,1]\) are now also functions of \(U\), i.e., given \(q_{i},q_{j}\in Q\) and \(a\in U\), \(\hat{P}(q_{i},a,q_{j})\) and \(\hat{P}(q_{i},a,q_{j})\) are the lower- and upper-bound transition probability from \(q_{i}\) to \(q_{j}\) under action \(a\), respectively. An optimal policy via IMDP abstractions for region \(X_{i}\subseteq X_{s}\) can then be computed by solving the following value iteration, which combines Proposition 1 with (13)-(14), and where we iteratively seek for the action that minimizes the probability that \(\mathbf{x}_{k}\) enters the unsafe region \(X_{\text{u}}\).
\[\bar{\gamma}_{H,i}^{\pi} =\max_{x\in X_{i}}\mathbbm{1}(x,X_{\text{u}}) \tag{17}\] \[\bar{\gamma}_{k,i}^{*} =\min_{a\in U}\max_{t\in T_{\text{u}}^{2}}\sum_{j=1}^{n_{p}+1} \bar{\gamma}_{k+1,j}^{*}t_{j}, \tag{18}\]
where for \(i\leq n_{p}\) we define
\[T_{a}^{i}=\Big{\{}t\in[0,1]^{n_{p}+1}\mid t_{j}\in\big{[}\inf _{x\in X_{i}}T(X_{j}\mid x,a),\\ \sup_{x\in X_{i}}T(X_{j}\mid x,a)\big{]},\ \sum_{j=1}^{n_{p}+1}t_{j}=1 \Big{\}}.\]
and
\[T_{a}^{n_{p}+1}=\{t\in[0,1]^{n_{p}+1}\mid t_{j}=0\text{ for }j\leq n_{p},\ t_{n_{p}+1}=1\}.\]
Note that in the inner maximization problem in (18), we seek for the feasible distribution maximizing the problem posed in Proposition 1. Consequently, synthesizing a policy according to (17)-(18) boils down to recursively solving the following min-max optimization problem for each partition \(X_{i}\subseteq X_{s}\)
\[\min_{a\in U}\max_{t\in T_{\text{u}}^{2}}\sum_{j=1}^{n_{p}+1}\bar{\gamma}_{k+1,j}^{*}t_{j}. \tag{19}\]
Figure 2: We consider the same setting as in Figure 1 and for each state in the safe set, we plot upper and lower bound of the probability of remaining within the safe set \([-1,1]\) for \(\mathbf{10}\) time steps starting from that state using two IMC abstractions: one obtained by discretizing the safe set uniformly with discretization step \(\boldsymbol{dx}=\mathbf{0.1}\) and the other with \(\boldsymbol{dx}=\mathbf{0.02}\). For the initial set \(\boldsymbol{X_{0}}=[-\mathbf{0.25},\mathbf{0.25}]\) we obtain a lower bound on \(P_{\text{u}}\) of \(\mathbf{0.756}\) for the case \(\boldsymbol{dx}=\mathbf{0.1}\) and \(\mathbf{0.975}\) for \(\boldsymbol{dx}=\mathbf{0.02}\).
If \(U\) is discrete, then we simply need to apply Proposition 1\(|U|\) times, one for each action, and then take the action that minimizes the expression. If \(U\) is uncountable, then in the case where \(T\) is a convex or concave function of \(a\), a solution to (19) can be found efficiently via convex optimization [31]. For the more general cases, where one cannot rely on convex optimization, a sub-optimal policy can be found via heuristics or by discretizing \(U\)[12].
The following theorem guarantees that the solution of the DP in (17)-(18) returns a lower bound on probabilistic safety for System (1) and that the resulting policy converges to the optimal policy in the limit of a fine enough partition.
**Theorem 4**.: _Let \(\tilde{\gamma}_{k}^{*}\) be as defined in (17)-(18). Then, for any \(x\in X,\) it holds that \(\tilde{\gamma}_{0}^{*}(x)\geq V_{0}^{*}(x)\). Furthermore, assume that \(X_{\mathrm{s}}\) is a bounded hyper-rectangle that it is discretized uniformly. Then for every \(x\in X\), in the limit of \(\eta_{p}\to\infty\), we have that \(\tilde{\gamma}_{0}^{*}(x)\) converges uniformly to \(V_{0}^{*}(x)\)._
**Remark 8**.: _Note that in the proof of Theorem 4, reported in Section VII, we characterize the convergence rate of (17)-(18) to the optimal policy and to \(P_{s}\). Nevertheless, the resulting convergence rate is generally conservative and to bound the error of abstraction-based methods, a much less conservative approach is as follows. For a given policy \(\pi\), possibly obtained by solving (17)-(18), solve (13)-(14), to get a lower bound of \(P_{\mathrm{s}}(X_{\mathrm{s}},x,H\mid\pi)\). Then, an upper bound can be similarly obtained by solving the following DP:_
\[\gamma_{H}^{\pi,L}(x) =\gamma_{H,i}^{\pi,L}:=\min_{x\in X_{i}}\mathbb{1}(x,X_{u}), \tag{20}\] \[\gamma_{k}^{\pi,L}(x) =\gamma_{k,i}^{\pi,L}:=\min_{t\in T_{\pi_{k}(x)}^{i}}\sum_{j=1}^ {n_{p}+1}\tilde{\gamma}_{k+1,j}^{*}t_{j}, \tag{21}\]
_where \(T_{\pi_{k}(x)}^{i}\) is defined as in (18). Intuitively, in the above problem we iteratively seek for the feasible distribution that minimizes the probability of reaching the unsafe region, thus \(1-\gamma_{k}^{\pi,L}(x)\) returns an upper bound of \(1-P_{\mathrm{s}}(X_{\mathrm{s}},x,H\mid\pi)\), as also illustrated in Figure 2._
## 5 Abstractions vs Barriers: Pros and Cons
Since both IMDP abstractions and SBFs are introduced under the framework of DP, we can now discuss pros and cons of the two approaches in terms of: soundness, optimality, accuracy, computational effort, and scalability.
* **Soundness:** As we show in Theorem 2 and 4, both approaches are guaranteed to return a valid lower bound on \(P_{s}\).
* **Optimality Guarantees:** As the \(\beta\) term in Theorem 2 is obtained by the supremum expected change for all points in \(X_{\mathrm{s}}\), the policy synthesized with SBF-based approaches are necessarily sub-optimal. Similarly, there are no convergence guarantees of the safety certificate to \(P_{s}\). In contrast, for IMDP based approaches, as proved in Theorem 4, for a fine enough partition, both the policy and the certificate of safety converges to the optimal values for System (1).
* **Accuracy:** As illustrated in Figures 1 and 2, IMDP based approaches can generally provide less conservative bounds on \(P_{\mathrm{s}}\). However, this comes at the price of a fine enough partition.
* **Computational Effort:** IMDP abstraction methods always require to partition the state space and to solve a DP over the resulting discretized space. Consequently, IMDP approaches necessitate to solve a number of linear programs that depends linearly on both the number of states in the partition and of the time horizon. While, as explained in Remark 5, each of these problems can be solved efficiently and they are highly parallelizible, SBF-based approaches only require to solve a single (in general non-convex) optimization process. Furthermore, in the case of linear or polynomial dynamics, the resulting optimization problem can often be solved without any partitioning by employing existing tools from convex optimization.
* **Scalability:** In terms of scalability w.r.t. the dimensionality of \(X\), both methods generally suffer. Abstraction-based methods face the state-space explosion problem since the size of discretization grows exponentially with dimensionality. Similarly, as dimensionality grows, existing SBF methods face exponential increase in complexity. For instance, SoS-based approaches [17] experience a (exponential) blow up in the number of basis functions, e.g., number of monomials, leading to exponential growth in the optimization parameters. Alternative approaches that parameterise an SBF as a neural network [19, 20, 24] also experience exponential complexity in the dimension of \(X\) to prove that a neural network is a valid SBF and to compute \(\eta\) and \(\beta\) as defined in Theorem 2.
A summary of the pros and cons is reported in Table 1. In this table, computational effort is w.r.t. the number of optimization problems that each method is required to solve. As explained above, this is not necessarily equivalent to time complexity of the algorithms.
From the above discussion it becomes clear how abstraction-based methods and SBFs are complementary approaches, with a trade-off between computational demands and accuracy. Consequently, the choice of the method should depend on the
particular application. We should also mention that often one is interested in properties beyond safety and reachability, such as temporal logic properties. While this problem has been well studied for abstraction-based methods [10], only few works have recently considered this problem for SBFs [23].
## 6 Conclusion and Future Directions
Safety analysis of stochastic systems is a major problem in today's world, of which autonomous cyber-physical systems are becoming an integral part. Existing methods that allow such analysis are either based on stochastic barrier functions or discrete abstraction. Our analysis shows that both of these methods arise as approximations of a stochastic DP problem. This view unifies these approaches and allows a fair comparison, which in turn reveals that the methods are complementary with their own strengths and weaknesses. Hence, the choice of the approach should depends on the particular application under consideration.
Our demonstration of the effectiveness of these methods in solving the safety DP problem in this work gives rise to several open research questions. The first question is rooted in the connection that this paper establishes between the SBFs and abstraction-based methods and the revealed fact that they are complementary approaches with their own pros and cons. Specifically, the open question is, given that knowledge, is it possible to devise new approaches that integrate their strengths and mitigate their weaknesses and improve scalability of existing approaches? The second question comes from the observation that only a few approaches for controller synthesis algorithms exist, and they are generally limited to discrete or convex settings [17, 23, 31]. Therefore, there is a need for more research to tackle the control synthesis problem with better and more general algorithms. Another interesting question is related to what this paper does not consider. That is, in this paper, we only consider systems with uncertainty rooted in the noise in the dynamics of the system (i.e., aleatoric uncertainty); however, often the noise characteristics or the dynamics of the system are themselves uncertain. This setting is recently attracting interest from the control and machine learning communities [32]. However, how to provide guarantees of safety and synthesize controllers for this more general setting is still an open question.
This work poses a theoretical foundation for the computation of safety guarantees for stochastic systems. By developing a common ground from which both SBFs and abstraction-based methods are developed, we hope that this work stimulates new research at the intersection of these frameworks. We believe that these methods may finally allow one to obtain safety guarantees for real-world control systems.
## 7 Proofs
### Proof of Theorem 1
Define \(P_{\text{u}}(X_{\text{s}},x_{0},H\mid\pi):=1-P_{\text{s}}(X_{\text{s}},x_{0}, H\mid\pi)\). Then, to prove the result it is enough to show that
\[\inf_{\pi\in\Pi}P_{\text{u}}(X_{\text{s}},x_{0},H\mid\pi)=V_{0}^{*}(x_{0}).\]
In the proof we are going to require the following lemma, which, for \(\pi\in\Pi\) allows us to represent \(P_{\text{u}}(X_{\text{s}},x_{0},H\mid\pi)\) via dynamic programming for history-dependent, possibly random, policies.
Lemma 2.: _For any \(\pi=(\pi_{0},...,\pi_{H-1})\in\Pi\), \(k\in\{0,...,H\}\), let \(\bar{V}_{k}^{\pi}:X^{k+1}\to[0,1]\) be defined as:_
\[\bar{V}_{H}^{\pi}(x_{0},...,x_{H})=\mathbb{1}(x_{H},X_{\text{u}})\] \[\bar{V}_{k}^{\pi}(x_{0},...,x_{k})=\mathbb{1}(x_{k},X_{\text{u}})+\] \[\mathbb{1}(x_{k},X_{\text{s}})\mathbb{E}_{k^{\prime}\sim T(\cdot |x_{k},\pi_{k}(x_{0},...,x_{k}))}[\bar{V}_{k+1}^{\pi}(x_{0},...,x_{k},x^{ \prime})].\]
_Then, it holds that_
\[P_{\text{u}}(X_{\text{s}},x_{0},H\mid\pi)=\bar{V}_{0}^{\pi}(x_{0}).\]
Proof.: The proof is by induction over the time index \(k\). Define
\[P_{\text{u}}^{[x_{0},...,x_{k}]}(X_{\text{s}},[k,H]\mid\pi)=\] \[P[\exists k^{\prime}\in[k,H]\ s.t.\ \mathbf{x}_{k^{\prime}}\in X_{ \text{u}}\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi].\]
Then, base case is for \(k=H\), that is \(P_{\text{u}}^{[x_{0},...,x_{H}]}(X_{\text{s}},[H,H]\mid\pi)=\bar{V}_{H}(x_{0},...,x_{H})\), which follows trivially. Then, assuming that
\[P_{\text{u}}^{[x_{0},...,x_{k+1}]}(X_{\text{s}},[k+1,H]\mid\pi)=\bar{V}_{k+1} ^{\pi}(x_{0},...,x_{k+1}),\]
it holds that
\[P_{\text{u}}^{[x_{0},...,x_{k}]}(X_{\text{s}},[k,H]\mid\pi)\] \[= P[\exists k^{\prime}\in[k,H]\ s.t.\ \mathbf{x}_{k^{\prime}}\in X_{ \text{u}}\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[= P[\mathbf{x}_{k}\in X_{\text{u}}\vee\exists k^{\prime}\in[k+1,H] \ s.t.\ \mathbf{x}_{k^{\prime}}\in X_{\text{u}}\] \[\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[= P[\mathbf{x}_{k}\in X_{\text{u}}|\mathbf{x}_{k}=x_{k}]\] \[+P[\exists k^{\prime}\in[k+1,H]\ s.t.\ \mathbf{x}_{k^{\prime}}\in X_{ \text{u}}\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[-P[\mathbf{x}_{k}\in X_{\text{u}}\wedge\exists k^{\prime}\in[k+1,H ]\ s.t.\ \mathbf{x}_{k^{\prime}}\in X_{\text{u}}\] \[\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[= \mathbb{1}(x_{k},X_{\text{u}})+P[\exists k^{\prime}\in[k+1,H]\ s.t. \mathbf{x}_{k^{\prime}}\in X_{\text{u}}\] \[\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[-\mathbb{1}(x_{k},X_{\text{u}})P[\exists k^{\prime}\in[k+1,H]\ s.t. \mathbf{x}_{k^{\prime}}\in X_{\text{u}}\] \[\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[= \mathbb{1}(x_{k},X_{\text{u}})+\mathbb{1}(x_{k},X_{\text{s}})P[ \exists k^{\prime}\in[k+1,H]\ s.t.\ \mathbf{x}_{k^{\prime}}\in X_{\text{u}}\] \[\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[= \mathbb{1}(x_{k},X_{\text{u}})+\mathbb{1}(x_{k},X_{\text{s}})\int_{X} P[\exists k^{\prime}\in[k+1,H]\ s.t.\ \mathbf{x}_{k^{\prime}}\in X_{\text{u}}\] \[\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[\mid\mathbf{x}_{0}=x_{0},...,\mathbf{x}_{k}=x_{k},\pi]\] \[= \mathbb{1}(x_{k},X_{\text{u}})+\mathbb{1}(x_{k},X_{\text{s}})\int_{X} \bar{V}_{k+1}^{\pi}(x_{0},...,x_{k},x^{\prime})P[\mathbf{x}_{k+1}=x^{\prime}\] \[\mid\mathbf{x}_{0}=x_{0},..
We can now prove the main statement. As \(\Pi^{M,D}\subset\Pi\), it holds that
\[\inf_{\pi\in\Pi}P_{\mathrm{u}}(X_{\mathrm{s}},X_{0},H\mid\pi)\leq\inf_{\pi\in \Pi^{M,D}}P_{\mathrm{u}}(X_{\mathrm{s}},X_{0},H\mid\pi).\]
Consequently, the proof is concluded if we can show that also holds that \(\inf_{\pi\in\Pi}P_{\mathrm{u}}(X_{\mathrm{s}},X_{0},H\mid\pi)\geq\inf_{\pi\in \Pi^{M,D}}P_{\mathrm{u}}(X_{\mathrm{s}},X_{0},H\mid\pi)\). This can be done by induction over the time index \(k\in\{0,H\}\). The base case is \(k=H\), which follows trivially as \(\forall x_{0}\in X\) it holds that for any \(\pi\in\Pi,\)
\[\bar{V}_{H}^{\pi}(x_{0},...,x_{H})=V_{H}^{*}(x_{H})=1(x_{H},X_{u}),\]
which is independent of the action and of the previous states. Now assume that \(\forall(x_{0},...,x_{k+1})\in X^{k+1},\inf_{\pi\in\Pi}V_{k+1}^{\pi}(x_{0},..., x_{k+1})\geq V_{k+1}^{*}(x_{k+1})\), then it holds that
\[\inf_{\pi\in\Pi}\bar{V}_{k}^{\pi}(x_{0},...,x_{k})\] \[=\inf_{\pi\in\Pi}\mathbb{1}(x_{k},X_{u})+\] \[\mathbb{1}(x_{k},X_{\mathrm{s}})\mathbb{E}_{x^{\prime}\sim T( \cdot|x_{k},\pi_{k}(x_{0},...,x_{k}))}[\bar{V}_{k+1}^{*}(x_{0},...,x_{k},x^{ \prime})]\] \[\geq\mathbb{1}(x_{k},X_{u})+\] \[\mathbb{1}(x_{k},X_{\mathrm{s}})\inf_{\pi\in\Pi}\mathbb{E}_{x^{ \prime}\sim T(\cdot|x_{k},\pi_{k}(x_{0},...,x_{k}))}[V_{k+1}^{*}(x^{\prime})]\] \[=\mathbb{1}(x_{k},X_{u})+\mathbb{1}(x_{k},X_{\mathrm{s}})\inf_{a \in U}\mathbb{E}_{x^{\prime}\sim T(\cdot|x_{k},a)}[V_{k+1}^{*}(x^{\prime})]\] \[=V_{k}^{*}(x_{k})\]
### _Proof of Theorem 4_
We recall that \(\bar{\gamma}_{k}^{*}\) is the solution of the following dynamic programming problem
\[\bar{\gamma}_{H}^{*}(x)=\bar{\gamma}_{H,i}^{*}:=\max_{x\in X_{i} }\mathbb{1}(x,X_{\mathrm{u}})\] \[\bar{\gamma}_{k}^{*}(x)=\bar{\gamma}_{k,i}^{*}:=\min_{a\in U}\max_ {t\in T_{i}^{2}}\Big{(}\mathbb{1}(x,X_{u})+\] \[\mathbb{1}(x,X_{\mathrm{s}})\sum_{j=1}^{n_{p}+1}\bar{\gamma}_{k+ 1,j}^{*}t_{j}\Big{)},\]
where for \(i\leq n_{p}\)
\[T_{a}^{i}=\Big{\{}t\in[0,1]^{n_{p}+1}\mid t_{j}\in\big{[}\inf_{x \in X_{i}}T(X_{j}\mid x,a),\] \[\sup_{x\in X_{i}}T(X_{j}\mid x,a)\big{]},\;\sum_{j=1}^{n_{p}+1}t_{j }=1\Big{\}}\]
and for \(i=n_{p}+1\) we have \(T_{a}^{n_{p}+1}=\{t\in[0,1]^{n_{p}+1}\mid t_{j}=0\text{ for }j\leq n_{p}\, \wedge\,t_{n_{p}+1}=1\}\). Then, we first have to show that for any \(k\in\{0,...,H\}\) and \(x\in X\) in holds that \(\bar{\gamma}_{0}^{*}(x)\geq V_{0}^{*}(x)\). This can be easily proved by contradiction. In fact, assume that this statement is not true, then there must exist a strategy \(\pi\) such that for \(x\in X\) and \(k\in\{0,...,H\}\) it holds that \(\bar{\gamma}_{k}^{*}(x)\leq V_{k}^{\pi}(x)\). However, this contradicts Theorem 3 and Proposition 1, thus concluding the proof that \(\bar{\gamma}_{0}^{*}(x)\geq V_{0}^{*}(x)\). A consequence of this result is that for any \(\bar{x}\in X_{i}\), if we consider \(a^{*}\in\arg\min_{a\in U}\mathbb{E}_{x^{\prime}\sim T(\cdot|\bar{x},a)}[V_{k+1 }^{*}(x^{\prime})]\), it holds that
\[\leq\min_{a\in U}\max_{x\in X_{i}}\mathbb{E}_{x^{\prime}\sim T(\cdot|x,a)}[ \bar{\gamma}_{k+1}^{*}(x^{\prime})]-\mathbb{E}_{x^{\prime}\sim T(\cdot|\bar{x},a ^{*})}[V_{k+1}^{*}(x^{\prime})]\]
\[\leq\max_{x\in X_{i}}\mathbb{E}_{x^{\prime}\sim T(\cdot|x,a^{*})}[\bar{\gamma }_{k+1}^{*}(x^{\prime})]-\mathbb{E}_{x^{\prime}\sim T(\cdot|x,a^{*})}[V_{k+1}^{* }(x^{\prime})].\]
We will use this result to prove that for any \(\epsilon>0\) there exists a partition of \(X\) in \(n_{p}+1\) regions such that \(\forall x\in X\), \(\bar{\gamma}_{0}^{*}(x)-V_{0}^{*}(x)\leq\epsilon\). That, is \(\bar{\gamma}_{0}^{*}(x)\) converges uniformly to \(V_{0}^{*}(x)\), thus concluding the proof.
Without any lost of generality assume that \(X_{\mathrm{s}}\) is uniformly partitioned in \(n_{p}\) hyper-cubes with edge of size \(\delta\). Further, consider an additional set in the partition \(X_{n_{p}+1}=X_{\mathrm{u}}\). Then, the proof is as follows. We first show that for any \(\bar{\epsilon}>0\) such that \(\max_{x\in X}\big{(}\bar{\gamma}_{k+1}^{*}(x)-V_{k+1}^{*}(x)\big{)}\leq\bar{\epsilon}\), then for all \(x\in X\) it holds that \(\bar{\gamma}_{k}^{*}(x)-V_{k}^{*}(x)\leq\bar{\epsilon}+L\delta,\) where \(L\) is such that for any \(x,x^{\prime}\in X_{\mathrm{s}}\) and \(X_{j}\subseteq X\)
\[L|x-x^{\prime}|_{\infty}\geq\max_{a\in U}|T(X_{j}\mid x,a)-T(X_{j}\mid x^{ \prime},a)|,\]
which is guaranteed to exist because of the Lipschitz continuity of \(T(X_{j}\mid x,a^{*})\) wrt \(x\). Then, we derive a bound for \(\bar{\epsilon}\) by considering the case \(k=H-1\). We start with the first part of the proof.
As mentioned, we start by assuming that \(\forall x\in X,\bar{\gamma}_{k+1}^{*}(x)-V_{k+1}^{*}(x)\leq\bar{\epsilon}\). Under this assumption, for \(x\in X_{i}\) with \(i\leq n_{p}\) select \(a^{*}\in\arg\min_{a\in U}\mathbb{E}_{x^{\prime}\sim t(x^{\prime}|x,a)}[V_{k+1}^{* }(x^{\prime})]\), then the following holds
\[\bar{\gamma}_{k}^{*}(x)-V_{k}^{*}(x)\] \[\leq\max_{t\in T_{a^{*}}^{*}}\sum_{j=1}^{n_{p}+1}\bar{\gamma}_{k+ 1,j}^{*}t_{j}-\mathbb{E}_{x^{\prime}\sim T(\cdot|\bar{x},a^{*})}[V_{k+1}^{*}(x^{ \prime})]\] \[\leq\max_{t\in T_{a^{*}}^{*}}\sum_{j=1}^{n_{p}+1}\bar{\gamma}_{k+ 1,j}^{*}t_{j}-\sum_{j=1}^{n_{p}+1}\min_{x^{\prime}\in X_{j}}V_{k+1}^{*}(x^{ \prime})T(X_{j}\mid x,a^{*})\] \[\leq\max_{t\in T_{a^{*}}^{*}}\sum_{j=1}^{n_{p}+1}\big{(}\min_{x^{ \prime}\in X_{j}}V_{k+1}^{*}(x^{\prime})+\bar{\epsilon}\big{)}t_{j}-\] \[\sum_{j=1}^{n_{p}+1}\min_{x^{\prime}\in X_{j}}V_{k+1}^{*}(x^{ \prime})T(X_{j}\mid x,a^{*})\] \[\leq\bar{\epsilon}+\max_{t\in T_{a^{*}}^{*}}\sum_{j=1}^{n_{p}+1} \min_{x^{\prime}\in X_{j}}V_{k+1}^{*}(x^{\prime})\big{(}t_{j}-T(X_{j}\mid x,a^{*}) \big{)}\] \[\leq\bar{\epsilon}+\max_{t\in T_{a^{*}}^{*}}\max_{j\in\{1,\ldots, |
2305.04514 | Nash equilibria for total expected reward absorbing Markov games: the
constrained and unconstrained cases | We consider a nonzero-sum N-player Markov game on an abstract measurable
state space with compact metric action spaces. The payoff functions are bounded
Carath\'eodory functions and the transitions of the system are assumed to have
a density function satisfying some continuity conditions. The optimality
criterion of the players is given by a total expected payoff on an infinite
discrete-time horizon. Under the condition that the game model is absorbing, we
establish the existence of Markov strategies that are a noncooperative
equilibrium in the family of all history-dependent strategies of the players
for both the constrained and the unconstrained problems, We obtain, as a
particular case of results, the existence of Nash equilibria for discounted
constrained and unconstrained game models. | François Dufour, Tomás Prieto-Rumeau | 2023-05-08T07:15:33Z | http://arxiv.org/abs/2305.04514v1 | Nash equilibria for total expected reward absorbing Markov games: the constrained and unconstrained cases+
###### Abstract
We consider a nonzero-sum \(N\)-player Markov game on an abstract measurable state space with compact metric action spaces. The payoff functions are bounded Caratheodory functions and the transitions of the system are assumed to have a density function satisfying some continuity conditions. The optimality criterion of the players is given by a total expected payoff on an infinite discrete-time horizon. Under the condition that the game model is absorbing, we establish the existence of Markov strategies that are a noncooperative equilibrium in the family of all history-dependent strategies of the players for both the constrained and the unconstrained problems, We obtain, as a particular case of results, the existence of Nash equilibria for discounted constrained and unconstrained game models.
**Keywords:** Nonzero-sum Markov games; Nash equilibrium; Constrained and unconstrained games; Total expected payoff criterion; Absorbing game model.
**AMS 2020 Subject Classification:** 91A10, 91A15.
## 1 Introduction
The topic of noncooperative games has been extensively studied in the last decades and research on this subject has spread in many directions. Here, we are interested in nonzero-sum Markov games; namely, we deal with a stochastic dynamic game on an infinite time horizon: the state of the system evolves according to a stochastic Markov-like kernel, players take their actions after each transition of the system, and their goal is to maximize a given optimality criterion in a noncooperative way. The primary goal is to establish the existence of a Nash equilibrium: that is, a strategy for each of the players in such a way that none of the players can improve his payoff by using another strategy. The interested reader can consult, for instance, the survey [19] to have an overview of this topic.
When dealing with the discounted payoff optimality criterion, the usual technique is the dynamic programming approach. This consists in considering a one-shot game (with a single decision epoch) and then establish the existence of a fixed point for some suitably defined selectors, which will yield the Nash equilibrium. Such results have been obtained under various hypotheses in, e.g., [25, 26], and the most refined and elegant conditions have been proposed in [18] by introducing so-called "decomposable coarser transition kernels".
A natural generalization of the game models described so far is to consider games with _constraints_. In this case, the players try to maximize their payoff function subject to the condition
that some constraints (related to some other payoff functions) must be satisfied. A noncooperative equilibrium is then defined as a set of strategies of the players which satisfy simultaneously all the constraints and for which, in addition, no player can improve his payoff when unilaterally modifying his strategy while still satisfying his own constraints. Such constrained game models were studied first in [2] for a model with finite state and actions spaces, and then generalized in [3] for game models with countable state space and compact action spaces, under some conditions which make the countable state space model nearly finite in each transition of the system. It is not possible, in general, to solve these constrained game models by using the dynamic programming approach. The linear programming approach (which consists in considering the spaces of occupation measures associated to the strategies of the players) appears to be well suited to study constrained problems.
Extending this linear programming approach, in the context of obtaining Nash equilibria for game models, from the finite or countable state space cases to a more general state space entails, however, serious technical difficulties. This was achieved in [11] and, based on this reference, there has been a recent growing interest in the study of constrained games. In [11], a game model on a measurable state space with ARAT (additive reward, additive transition) structure under the discounted optimality criterion was studied. The approach consists in defining a correspondence on a space of measures, endowed with the weak-strong topology, which is shown to have a fixed point. From this fixed point, optimal constrained strategies of the players are obtained. The approach in [11] combines also the use of Young measures to identify Markov strategies of the players. With this approach, in the reference [21], a countable state space discounted game model is studied, dropping the ARAT condition. Also, in [22], the authors study a discounted constrained game on a general state space and consider the weaker notion of an approximate equilibria.
The above cited references are, generally, concerned with games under the _discounted payoff_ optimality criterion. In this paper, we shall consider games under the total expected payoff optimality criterion, for constrained and unconstrained games, both of which have received much smaller attention than the discounted payoff counterpart. Indeed, as far as we know, only a few references deal with that topic: in [17], the existence of an \(\epsilon\)-equilibrium for a countable state space game is established, while in the references [9, 10], a stopping zero-sum game (in which one player is allowed to stop the evolution of the system) with countable state space under the total expected payoff criterion is studied, and in [23] for a finite state game. Summarizing, there do not exist general existence results for Nash equilibria for the total expected payoff criterion, even in the case of a countable state space.
The extension from the discounted optimality criterion to the total expected payoff optimality criterion is far from being straightforward. First of all, the dynamic programming approach using the one-shot game does not work in our context: indeed, the usual technique --see, e.g., [26]--consists in showing that the Bellman operator maps the final payoff function, which typically belongs to a compact subset of the space of functions \(L^{\infty}\), into itself when computing the one-shot optimal payoff. To use this technique, however, the discount factor plays a crucial role and, for the total expected payoff criterion, the dynamic programming approach does not yield an operator mapping a compact set of functions into itself. Secondly, while, under the discounted optimality criterion, the occupation measures are probability or finite measures, in the context of the total expected payoff criterion, the occupation measures may be infinite. Ensuring finiteness of these occupation measures and establishing compactness properties of these spaces of measures becomes more technically demanding and some additional conditions must be imposed. More precisely, we will need to assume that the game model is uniformly absorbing, which means that the dynamic system enters into some subset \(\Delta\) of the state space (in which no further reward is earned) in a uniformly bounded expected time, with the queues of the hitting time of \(\Delta\) converging uniformly to zero as well (a precise definition will be given in the text). As already mentioned, our techniques
herein will allow to deal, at the same time, with constrained and unconstrained game models.
In this paper we will consider an \(N\)-player Markov Markov game under the total expected reward optimality criterion. The state space is an abstract measurable space and the action spaces are compact metric spaces. The payoff functions are assumed to be bounded Caratheodory functions and we impose that the transition probabilities have a density function with respect to some reference probability measure, and that they satisfy some suitable \(L^{1}\)-continuity properties. Some uniformly absorbing requirement must be imposed to ensure finiteness of the payoff criteria and compactness of the occupation measures, plus the usual Slater condition. It is worth stressing that we do not need the ARAT separation property and, instead, we impose some minimal sufficient conditions ensuring the continuity of the transition and cost functionals.
The rest of the paper is organized as follows. In the remaining of this section we introduce some notation and recall some standard results that will be useful in the sequel. Section 2 is devoted to define the constrained and unconstrained game models, and to propose some basic assumptions. Our main results in the paper are stated in Section 3. In Section 4 we study the occupation measures and introduce the spaces of Young measures, which shall be identified with Markov stationary strategies of the players. Some useful continuity results relating narrow convergence of Young measures and weak-strong convergence of measures are established. Our main results on the existence of constrained and unconstrained equilibria are proved in Section 5.
Notation and terminology.A metric space \(SS\) will be always endowed with its Borel \(\sigma\)-algebra \(\mathfrak{B}(SS)\). On the product of a finite number of metric spaces \(SS=SS^{1}\times\ldots\times SS^{N}\), we will consider the product topology which makes the product again a metric space. If the metric spaces \(SS^{i}\) are separable, then we have \(\mathfrak{B}(SS)=\mathfrak{B}(SS^{1})\otimes\ldots\otimes\mathfrak{B}(SS^{N})\).
On a measurable space \((\boldsymbol{\Omega},\mathcal{F})\) we will consider the set of finite signed measures \(\boldsymbol{\mathcal{M}}(\boldsymbol{\Omega})\), the set of finite nonnegative measures \(\boldsymbol{\mathcal{M}}^{+}(\boldsymbol{\Omega})\), and the set of probability measures \(\boldsymbol{\mathcal{P}}(\boldsymbol{\Omega})\). For a set \(\Gamma\in\mathcal{F}\), we denote by \(\mathbf{I}_{\Gamma}:\Omega\to\{0,1\}\) the indicator function of the set \(\Gamma\), that is, \(\mathbf{I}_{\Gamma}(\omega)=1\) if and only if \(\omega\in\Gamma\). For \(\omega\in\boldsymbol{\Omega}\), we write \(\delta_{\{\omega\}}\) for the Dirac probability measure at \(\omega\) defined on \((\boldsymbol{\Omega},\mathcal{F})\) by \(\delta_{\{\omega\}}(B)=\mathbf{I}_{B}(\omega)\) for any \(B\in\mathcal{F}\). If \(\mu\in\boldsymbol{\mathcal{M}}(\boldsymbol{\Omega})\) and \(\Gamma\in\mathcal{F}\), we denote by \(\mu_{\Gamma}\) the measure on \((\boldsymbol{\Omega},\mathcal{F})\) defined by \(\mu_{\Gamma}(B)=\mu(\Gamma\cap B)\) for \(B\in\mathcal{F}\). The trace \(\sigma\)-algebra of a set \(\Gamma\subseteq\boldsymbol{\Omega}\) is denoted by \(\mathcal{F}_{\Gamma}\). On \(\boldsymbol{\mathcal{P}}(\boldsymbol{\Omega})\), the \(s\)-topology is the coarsest topology that makes \(\mu\mapsto\mu(D)\) continuous for every \(D\in\mathcal{F}\).
Given a measurable space \((\boldsymbol{\Omega},\mathcal{F})\) and \(\lambda\in\boldsymbol{\mathcal{P}}(\boldsymbol{\Omega})\), we will denote by \(L^{1}(\boldsymbol{\Omega},\mathcal{F},\lambda)\) the family of measurable functions (identifying those which are \(\lambda\)-a.s. equal) \(f:\boldsymbol{\Omega}\to\mathbb{R}\) which are \(\lambda\)-integrable, i.e., \(\|f\|_{1}=\int_{\mathbf{X}}|f(x)|\lambda(dx)<\infty\). Also, let \(L^{\infty}(\boldsymbol{\Omega},\mathcal{F},\lambda)\) be the set of \(\lambda\)-essentially bounded measurable functions \(f:\boldsymbol{\Omega}\to\mathbb{R}\) (again, we identify functions that coincide \(\lambda\)-a.s.). We will denote by \(\|f\|_{\infty}\) the corresponding essential supremum. On \(L^{\infty}(\boldsymbol{\Omega},\mathcal{F},\lambda)\) we will consider the weak\({}^{*}\) topology, that is, we have \(f_{n}\stackrel{{*}}{{\rightharpoonup}}f\) whenever
\[\int_{\boldsymbol{\Omega}}f_{n}h\,d\lambda\to\int_{\boldsymbol{\Omega}}fh\,d \lambda\quad\text{for every }h\in L^{1}(\boldsymbol{\Omega},\mathcal{F},\lambda).\]
Let \((\boldsymbol{\Omega},\mathcal{F})\) and \((\widetilde{\boldsymbol{\Omega}},\widetilde{\mathcal{F}})\) be two measurable spaces. A kernel on \(\widetilde{\boldsymbol{\Omega}}\) given \(\boldsymbol{\Omega}\) is a mapping \(Q:\boldsymbol{\Omega}\times\widetilde{\mathcal{F}}\to\mathbb{R}^{+}\) such that \(\omega\mapsto Q(B|\omega)\) is measurable on \((\boldsymbol{\Omega},\mathcal{F})\) for every \(B\in\widetilde{\mathcal{F}}\), and \(B\mapsto Q(B|\omega)\) is in \(\boldsymbol{\mathcal{M}}^{+}(\widetilde{\boldsymbol{\Omega}})\) for every \(\omega\in\boldsymbol{\Omega}\). If \(Q(\widetilde{\boldsymbol{\Omega}}|\omega)=1\) for all \(\omega\in\boldsymbol{\Omega}\) then we say that \(Q\) is a _stochastic kernel_. We write \(\mathbb{I}_{\Gamma}\) for the kernel on \(\boldsymbol{\Omega}\) given \(\boldsymbol{\Omega}\) defined by \(\mathbb{I}_{\Gamma}(B|\omega)=\mathbf{I}_{\Gamma}(\omega)\delta_{\{\omega\}}(B)\) for \(\omega\in\boldsymbol{\Omega}\) and \(B\in\mathcal{F}\). Let \(Q\) be a stochastic kernel on \(\widetilde{\boldsymbol{\Omega}}\) given \(\boldsymbol{\Omega}\). For a bounded measurable function \(f:\widetilde{\boldsymbol{\Omega}}\to\mathbb{R}\), we will denote by \(Qf:\boldsymbol{\Omega}\to\mathbb{R}\) the measurable function
\[Qf(\omega)=\int_{\boldsymbol{\Omega}^{\prime}}f(\widetilde{\omega})Q(d\widetilde{ \omega}|\omega)\quad\text{for }\omega\in\boldsymbol{\Omega}.\]
For a measure \(\mu\in\mathbf{\cal M}^{+}(\mathbf{\Omega})\), we denote by \(\mu Q\) the finite measure on \((\widetilde{\mathbf{\Omega}},\widetilde{\mathbf{\cal F}})\) given by
\[B\mapsto\mu Q\left(B\right)=\int_{\mathbf{\Omega}}Q(B|\omega)\mu(d \omega)\quad\mbox{for $B\in\widetilde{\mathbf{\cal F}}$}.\]
The product of the \(\sigma\)-algebras \(\cal F\) and \(\widetilde{\mathbf{\cal F}}\) is denoted by \({\cal F}\otimes\widetilde{\mathbf{\cal F}}\) and consists of the \(\sigma\)-algebra generated by the measurable rectangles, that is, the sets of the form \(\Gamma\times\widetilde{\Gamma}\) for \(\Gamma\in{\cal F}\) and \(\widetilde{\Gamma}\in\widetilde{\mathbf{\cal F}}\). We denote by \(\mu\otimes Q\) the unique probability measure (or finite measure) on the product space \((\mathbf{\Omega}\times\widetilde{\mathbf{\Omega}},{\cal F }\otimes\widetilde{\mathbf{\cal F}})\) satisfying
\[(\mu\otimes Q)(\Gamma\times\widetilde{\Gamma})=\int_{\Gamma}Q(\widetilde{ \Gamma}|\omega)\mu(d\omega)\quad\mbox{for $\Gamma\in{\cal F}$ and $\widetilde{\Gamma}\in\widetilde{\mathbf{\cal F}}$},\]
see Proposition III-2-1 in [24] for a proof of existence and uniqueness of such measure. Let \((\overline{\mathbf{\Omega}},\overline{\mathbf{\cal F}})\) be a third measurable space and \(R\) a stochastic kernel on \(\overline{\mathbf{\Omega}}\) given \(\widetilde{\mathbf{\Omega}}\). Then we will denote by \(QR\) the stochastic kernel on \(\overline{\mathbf{\Omega}}\) given \(\Omega\) given by
\[QR(\Gamma|\omega)=\int_{\widetilde{\mathbf{\Omega}}}R(\Gamma|\tilde{ \omega})Q(d\tilde{\omega}|\omega)\quad\mbox{for $\Gamma\in\overline{\mathbf{\cal F}}$ and $\omega\in{\cal F}$}.\]
Given \(\mu\in\mathbf{\cal M}(\mathbf{\Omega}\times\widetilde{ \mathbf{\Omega}})\), the marginal measures are \(\mu^{\mathbf{\Omega}}\in\mathbf{\cal M}(\mathbf{ \Omega})\) and \(\mu^{\widetilde{\mathbf{\Omega}}}\in\mathbf{\cal M}( \widetilde{\mathbf{\Omega}})\) defined by \(\mu^{\mathbf{\Omega}}(\cdot)=\mu(\cdot\times\widetilde{\mathbf{\Omega}})\) and \(\mu^{\widetilde{\mathbf{\Omega}}}(\cdot)=\mu(\mathbf{\Omega$ }\times\cdot)\). If \(\pi\) is a kernel on \(\widetilde{\mbox{\boldmath$\Omega}},\times\overline{\mathbf{\Omega}}\) given \(\Omega\) the marginal kernels are \(\pi^{\widetilde{\mathbf{\Omega}}}\) and \(\pi^{\widetilde{\mathbf{\Omega}}}\), respectively defined by \(\pi^{\widetilde{\mathbf{\Omega}}}=\pi(\cdot\times\overline{\mathbf{\Omega}}|\omega)\) and \(\pi^{\overline{\mathbf{\Omega}}}=\pi(\widetilde{\mathbf{ \Omega}}\times\cdot|\omega)\) for \(\omega\in\mathbf{\Omega}\).
We say that \(f:\mathbf{\Omega}\times SS\to SS^{\prime}\), where \(SS^{\prime}\) is a metric space, is a _Caratheodory function_ if \(f(\cdot,s)\) is measurable on \(\Omega\) for every \(s\in SS\) and \(f(\omega,\cdot)\) is continuous on \(SS\) for every \(\omega\in\mathbf{\Omega}\). The family of the so-defined Caratheodory functions is denoted by \({\cal Car}(\mathbf{\Omega}\times SS,SS^{\prime})\). The family of Caratheodory functions which, in addition, are bounded is denoted by \({\cal Car}_{b}(\mathbf{\Omega}\times SS,SS^{\prime})\). When the metric space \(SS\) is separable then any \(f\in{\cal Car}(\mathbf{\Omega}\times SS,SS^{\prime})\) is a jointly measurable function on \((\mathbf{\Omega}\times{\bf S},{\cal F}\otimes\mathbf{\mathfrak{ B}}({\bf S}))\); see [1, Lemma 4.51].
Given \(\lambda\in\mathbf{\cal P}(\mathbf{\Omega})\), let \(\mathbf{\cal P}_{\lambda}(\mathbf{\Omega})=\{\eta\in\mathbf{\cal P}(\mathbf{\Omega}):\eta\ll\lambda\}\) be the family of probability of probability measures which are absolutely continuous with respect to \(\lambda\).
If \(SS\) is a Polish space (a complete and separable metric space), on \(\mathbf{\cal M}(\mathbf{\Omega}\times SS)\) we will consider the \(ws\)-topology (weak-strong topology) which is the coarsest topology for which the mappings
\[\mu\mapsto\int_{\mathbf{\Omega}\times SS}f(\omega,s)\mu(d\omega,ds)\]
for \(f\in{\cal Car}_{b}(\mathbf{\Omega}\times SS,\mathbb{R})\) are continuous. There are other equivalent definitions of this topology as discussed, for instance, in [15, Section 3.3].
Inequality \(\geq\) in \(\mathbb{R}^{p}\) means a componentwise inequality \(\geq\), while the inequality \(>\) in \(\mathbb{R}^{p}\) is a componentwise strict inequality \(>\). Let \(\mathbf{1}\in\mathbb{R}^{p}\) be the vector with all components equal to one.
The next disintegration lemma will be useful in the forthcoming (see Theorem 1 in [27]).
**Lemma 1.1** (Disintegration lemma): _Let \((\mathbf{\Omega},{\cal F$})\) be a measurable space and let \(SS\) be a Polish space. Let \(\varphi:\mathbf{\Omega}\twoheadrightarrow SS\) be a weakly measurable correspondence with nonempty closed values, and let \({\bf K}\) be the graph of the correspondence. For every \(\mu\in\mathbf{\cal M}^{+}(\mathbf{\Omega}\times SS)\) such that \(\mu({\bf K}^{c})=0\) there exists a stochastic kernel \(Q\) on \(SS\) given \(\Omega\) such that_
\[\mu=\mu^{\mathbf{\Omega}}\otimes Q \tag{1.1}\]
_and such that \(Q(\varphi(\omega)|\omega)=1\) for each \(\omega\in\mathbf{\Omega}\). Moreover, \(Q\) is unique \(\mu^{\mathbf{\Omega}}\)-almost surely, meaning that if \(Q\) and \(Q^{\prime}\) are two stochastic kernels that satisfy (1.1) then for all \(\omega\) in a set of \(\mu^{\mathbf{\Omega}}\)-probability one, the probability measures \(Q(\cdot|\omega)\) and \(Q^{\prime}(\cdot|\omega)\) coincide._
Definition of the game model
### Elements of the noncooperative game model
Next we give the primitive data of our \(N\)-person game model.
1. The state space is a measurable space \(\mathbf{X}\) endowed with a \(\sigma\)-algebra \(\boldsymbol{\mathfrak{X}}\).
2. The separable metric space \(\mathbf{A}^{i}\), with \(i\in\{1,\ldots,N\}\), stands for the action space of player \(i\). Given any \(x\in\mathbf{X}\), the nonempty measurable set \(\mathbf{A}^{i}(x)\subseteq\mathbf{A}^{i}\) is the set of actions available to player \(i\) at state \(x\). We will use the notations \[\mathbf{A}=\mathbf{A}^{1}\times\ldots\times\mathbf{A}^{N}\quad\text{and} \quad\mathbf{A}(x)=\mathbf{A}^{1}(x)\times\ldots\times\mathbf{A}^{N}(x) \subseteq\mathbf{A}.\] A typical element of \(\mathbf{A}\) will be written \(a=(a^{1},\ldots,a^{N})\).
3. Given \(i\in\{1,\ldots,N\}\), the bounded measurable functions \(r^{i}:\mathbf{X}\times\mathbf{A}\to\mathbb{R}\) and \(c^{i}:\mathbf{X}\times\mathbf{A}\to\mathbb{R}^{p}\) stand for the reward and constraint functions for player \(i\). The components of \(c^{i}\) will be denoted by \(c^{i,j}\) for \(1\leq j\leq p\). The corresponding constraint constant is \(\rho^{i}\in\mathbb{R}^{p}\). Here, \(p\geq 1\) is a fixed integer assumed to be the same for all the players. We write \(\rho=(\rho^{1},\ldots,\rho^{N})\in\mathbb{R}^{pN}\).
4. The transitions of the system are given by a stochastic kernel \(Q\) on \(\mathbf{X}\) given \(\mathbf{X}\times\mathbf{A}\).
5. The initial distribution of the system is the probability measure \(\eta\in\boldsymbol{\mathcal{P}}(\mathbf{X})\).
We will consider the sets
\[\mathbf{H}_{0}=\mathbf{X}\quad\text{and}\quad\mathbf{H}_{t}=(\mathbf{X}\times \mathbf{A})^{t}\times\mathbf{X}\ \text{ for }t\geq 1,\]
which are the sets of histories of the state-action process up to time \(t\geq 0\). An element of \(\mathbf{H}_{t}\) is denoted by \(h_{t}=(x_{0},a_{0},\ldots,x_{t-1},a_{t-1},x_{t})\). _We note that, throughout this paper, sub-indices will usually refer to the time component \(t\geq 0\), while super-indices will typically denote the players \(i\in\{1,\ldots,N\}\)._
For this model, it is assumed that, at time \(t\geq 0\), the players choose their actions independently of each other conditionally on the history \(h_{t}\) of the system; hence, its noncooperative nature. This game will be denoted by \(\mathcal{G}(\eta,\rho)\). We use this notation because, in the sequel, we will need to vary both the initial distribution and the constraint constants.
**Definition 2.1**: _Fix a player \(i\in\{1,\ldots,N\}\)._
1. _A policy for player_ \(i\) _is a sequence_ \(\{\pi_{t}^{i}\}_{t\geq 0}\) _of stochastic kernels on_ \(\mathbf{A}^{i}\) _given_ \(\mathbf{H}_{t}\) _that verify_ \(\pi_{t}^{i}(\mathbf{A}^{i}(x_{t})|x_{0},a_{0},\ldots,x_{t})=1\) _for every_ \(t\geq 0\) _and_ \(h_{t}=(x_{0},a_{0},\ldots,x_{t})\in\mathbf{H}_{t}\)_. Let_ \(\boldsymbol{\Pi}^{i}\) _be the family of all policies of player_ \(i\)_._
2. _Let_ \(\mathbf{M}^{i}\) _be the family of stochastic kernels_ \(\pi^{i}\) _on_ \(\mathbf{A}^{i}\) _given_ \(\mathbf{X}\) _for which_ \(\pi^{i}(\mathbf{A}^{i}(x)|x)=1\) _for every_ \(x\in\mathbf{X}\)_. The policy_ \(\{\pi_{t}^{i}\}_{t\geq 0}\in\boldsymbol{\Pi}_{i}\) _is said to be a stationary Markov policy for player_ \(i\) _if, for some_ \(\pi^{i}\in\mathbf{M}^{i}\)_, we have_ \[\pi_{t}^{i}(\cdot|x_{0},a_{0},\ldots,x_{t})=\pi^{i}(\cdot|x_{t})\quad\text{ for every }t\geq 0\text{ and }h_{t}=(x_{0},a_{0},\ldots,x_{t})\in\mathbf{H}_{t}.\]
Our Assumption (A\({}_{4}\)) below will ensure that these sets of policies are nonempty. We will usually refer to \(\boldsymbol{\Pi}^{i}\) as to the class of history-dependent policies of player \(i\). The family of history-dependent policies for the players is \(\boldsymbol{\Pi}=\boldsymbol{\Pi}^{1}\times\ldots\times\boldsymbol{\Pi}^{N}\). We will say that \(\pi\in\boldsymbol{\Pi}\) is a strategy profile. We can identify the class of stationary Markov policies for player \(i\) with \(\mathbf{M}^{i}\) itself, and so we will write \(\mathbf{M}^{i}\subseteq\boldsymbol{\Pi}^{i}\). Similarly, we introduce the notation \(\mathbf{M}=\mathbf{M}^{1}\times\ldots\times\mathbf{M}^{N}\) for the class of stationary Markov profiles of the players.
The \(-i\) notation.Given \(\pi=(\pi^{1},\ldots,\pi^{N})\) and some \(i\in\{1,\ldots,N\}\), let
\[\pi^{-i}=(\pi^{1},\ldots,\pi^{i-1},\pi^{i+1},\ldots,\pi^{N})\in\mathbf{\Pi}^{1} \times\ldots\times\mathbf{\Pi}^{i-1}\times\mathbf{\Pi}^{i+1}\times\ldots\times \mathbf{\Pi}^{N}.\]
In addition, given \(\sigma\in\mathbf{\Pi}^{i}\), we will use the notation \((\pi^{-i},\sigma)\) to denote the strategy profile in \(\mathbf{\Pi}\) for which player \(i\) uses the policy \(\sigma\) and the remaining players use the policies \(\pi^{j}\) for \(j\neq i\). Similarly, we will use notations such as \(\mathbf{\Pi}^{-i}\) and \(\mathbf{M}^{-i}\) to consider the product spaces of all the \(\mathbf{\Pi}^{j}\) and \(\mathbf{M}^{j}\) except \(\mathbf{\Pi}^{i}\) and \(\mathbf{M}^{i}\), respectively.
Construction of the state-action process.The canonical space \(\mathbf{H}_{\infty}=(\mathbf{X}\times\mathbf{A})^{\mathbb{N}}\) is endowed with the product \(\sigma\)-algebra \((\mathfrak{X}\otimes\mathfrak{B}(\mathbf{A}))^{\mathbb{N}}\). Let \((X_{t},A_{t})_{t\geq 0}\) be the corresponding coordinates mappings with \(A_{t}=(A_{t}^{1},\ldots,A_{t}^{N})\). We shall use the notation \(H_{t}=(X_{0},A_{0},\ldots,X_{t})\) for \(t\geq 1\) and \(H_{0}=X_{0}\). Let us consider an initial probability measure \(\eta\in\mathcal{P}(\mathbf{X})\) and a strategy profile \(\pi\in\mathbf{\Pi}\). There exists a unique probability measure \(\mathbb{P}_{\eta,\pi}\) on \(\mathbf{H}_{\infty}\) such that for every \(B\in\mathfrak{X}\), \(C^{i}\in\mathfrak{B}(\mathbf{A}^{i})\) for \(i=1,\ldots,N\), and \(t\geq 0\) we have: (i): \(\mathbb{P}_{\eta,\pi}\{X_{0}\in B\}=\eta(B)\);
(2.2)
and (iii): \(\mathbb{P}_{\eta,\pi}(X_{t+1}\in B|H_{t},A_{t})=Q(B|X_{t},A_{t})\). We denote by \(\mathbb{E}_{\eta,\pi}\) the expectation operator associated to \(\mathbb{P}_{\eta,\pi}\). If the initial distribution is the Dirac measure \(\delta_{x}\) concentrated at a given state \(x\in\mathbf{X}\) we will simply write \(\mathbb{P}_{x,\pi}\) and \(\mathbb{E}_{x,\pi}\).
### Correlated strategies, absorbing models, and Nash equilibria
In Definition 2.1 it is assumed that the players choose their actions independently of each other. It will be technically useful, however, to introduce _correlated_ strategies for which the players can take _dependent actions_. The notion of correlated strategies plays a very important role in the analysis of this type of game. In particular, it will allow to introduce the set of possible answers for each player (see Definition 4.7) which are defined from the occupation measures of the process generated precisely from these correlated strategies (see Definition 4.2). To show our main results, we will have to show that this set of occupation measures is bounded and compact, which leads to a notion of absorbing model also defined on the basis of correlated strategies.
**Definition 2.2**:
1. _Let_ \(\tilde{\mathbf{\Pi}}\) _be the set of correlated strategies defined as follows: we say that_ \(\pi=\{\pi_{t}\}_{t\geq 0}\) _is in_ \(\tilde{\mathbf{\Pi}}\) _if, for every_ \(t\geq 0\) _and_ \(h_{t}=(x_{0},a_{0},\ldots,x_{t})\in\mathbf{H}_{t}\)_, we have that_ \(\pi_{t}\) _is a stochastic kernel on_ \(\mathbf{A}\) _given_ \(\mathbf{H}_{t}\) _that verifies_ \(\pi_{t}(\mathbf{A}(x_{t})|x_{0},a_{0},\ldots,x_{t})=1\)_._
2. _The class of correlated Markov strategies of the players is_ \(\tilde{\mathbf{M}}\) _and it is defined as the set of stochastic kernels_ \(\pi\) _on_ \(\mathbf{A}\) _given_ \(\mathbf{X}\) _such that_ \(\pi(\mathbf{A}(x)|x)=1\) _for every_ \(x\in\mathbf{X}\)_. As usual, we will assume that_ \(\tilde{\mathbf{M}}\subseteq\tilde{\mathbf{\Pi}}\)_._
3. _Given an initial distribution_ \(\eta\in\mathcal{P}(\mathbf{X})\) _and a correlated strategy_ \(\pi\in\tilde{\mathbf{\Pi}}\)_, we can construct the state-action process as we did before, except that (_2.2_) is replaced with_ \(\mathbb{P}_{\eta,\pi}(A_{t}\in C\mid H_{t})=\pi_{t}(C|H_{t})\) _for any_ \(C\in\mathfrak{B}(\mathbf{A})\)_._
We have the obvious inclusion \(\mathbf{\Pi}\subseteq\tilde{\mathbf{\Pi}}\). Moreover, we can associate to each \(\pi=(\pi^{1},\ldots,\pi^{N})\in\mathbf{M}\) the transition kernel (denoted again by \(\pi\)) on \(\mathbf{A}\) given \(\mathbf{X}\) defined as
\[\pi(da|x)=\pi^{1}(da^{1}|x)\times\cdots\times\pi^{N}(da^{N}|x)\quad\text{for }x \in\mathbf{X}, \tag{2.3}\]
so that we also have \({\bf M}\subseteq{\bf M}\). For each \(\pi\in{\bf\tilde{M}}\), we denote by \(Q_{\pi}\) the stochastic kernel on \({\bf X}\) given \({\bf X}\) defined by
\[Q_{\pi}(D|x)=\int_{\bf A}Q(D|x,a)\pi(da|x)\quad\mbox{for $x\in{\bf X}$ and $D\in{\bf\mathfrak{X}}$.}\]
The compositions of \(Q_{\pi}\) with itself are denoted by \(Q_{\pi}^{t}\) for any \(t\geq 0\), with the convention that \(Q_{\pi}^{0}(\cdot|x)\) is the Dirac probability measure concentrated at \(x\).
Absorbing games.Given a subset of the state space \(\Delta\in{\bf\mathfrak{X}}\), we define the hitting time \(T_{\Delta}\) as the measurable function \(T_{\Delta}:{\bf H}_{\infty}\to\mathbb{N}\cup\{\infty\}\) given by
\[T_{\Delta}(x_{0},a_{0},x_{1},a_{1},\ldots)=\min\{n\geq 0:x_{n}\in\Delta\},\]
where the \(\min\) over the empty set is defined as \(+\infty\). Next we propose the some definitions related to the notion of an _absorbing_ game.
**Definition 2.3**: _Fix \(\Delta\in{\bf\mathfrak{X}}\) and an initial distribution \(\eta\in{\mathbf{\cal P}}({\bf X})\). We say that the game model \({\cal G}(\eta,\rho)\) is absorbing to \(\Delta\) if the conditions (a) and (b) below are satisfied, and we say that \({\cal G}(\eta,\rho)\) is uniformly absorbing to \(\Delta\) if it is absorbing and, in addition, it satisfies condition (c)._
1. _For every_ \((x,a)\in\Delta\times{\bf A}\) _we have_ \(Q(\Delta|x,a)=1\) _and, besides, for every_ \(1\leq i\leq N\) _and_ \(1\leq j\leq p\) _it is_ \(r^{i}(x,a)=0\) _and_ \(c^{i,j}(x,a)=0\)_;_
2. _For any_ \(\pi\in\tilde{\bf\Pi}\) _the expected hitting time_ \(\mathbb{E}_{\eta,\pi}[T_{\Delta}]\) _is finite._
3. _We have the following limit:_ \[\lim_{n\to\infty}\sup_{\pi\in{\bf M}}\sum_{t=n}^{\infty}\mathbb{P}_{\eta,\pi} \{T_{\Delta}>t\}=0.\]
The condition (a) means that, once the state process enters in \(\Delta\), it remains in \(\Delta\) thereafter at no further reward or cost (related to the functions \(r^{i}\) and \(c^{i}\)). The condition (c) can be written in several equivalent ways, for instance:
\[\lim_{n\to\infty}\sup_{\pi\in{\bf\tilde{M}}}\mathbb{E}_{\eta,\pi}[(T_{\Delta} -n)^{+}]=0\quad\mbox{or}\quad\sum_{t=0}^{n}\mathbb{P}_{\eta,\pi}\{T_{\Delta} >t\}\uparrow\mathbb{E}_{\eta,\pi}[T_{\Delta}]\mbox{ uniformly in $\pi\in{\bf\tilde{M}}$.}\]
Our next result summarizes some important properties. In particular, it is shown that the expected hitting time \(\mathbb{E}_{\eta,\pi}[T_{\Delta}]\) is uniformly bounded over all correlated strategies, which will imply that the set of occupation measures is bounded (see Remark 4.3(a)), a key element to show the compactness of this set.
**Proposition 2.4**: _Consider a set \(\Delta\in{\bf\mathfrak{X}}\) and an initial distribution \(\eta\in{\mathbf{\cal P}}({\bf X})\)._
1. _If the game model_ \({\cal G}(\eta,\rho)\) _is absorbing to_ \(\Delta\) _then_ \(\sup_{\pi\in\tilde{\bf\Pi}}\mathbb{E}_{\eta,\pi}[T_{\Delta}]<\infty\)_._
2. _The family of initial distributions_ \(\eta\in{\mathbf{\cal P}}({\bf X})\) _for which the game model_ \({\cal G}(\eta,\rho)\) _is absorbing (respectively, uniformly absorbing) to_ \(\Delta\) _is a convex subset of_ \({\mathbf{\cal P}}({\bf X})\)
**Proof.** (i). The proof of this item is partially based on some arguments used in [13, Sections 4.4 and 5.5] for the special case of a Borel state space. We will use the following characterization of the set \(\mathbf{\cal S}_{\eta}=\{\mathbb{P}_{\eta,\pi}\in\mathbf{\cal P$ }({\bf H}_{\infty}):\pi\in\tilde{\mbox{\boldmath$\Pi}}\}\) of strategic probability measures. A probability measure \(\mathbb{P}\in\mathbf{\cal P}({\bf H}_{\infty})\) is in \(\mathbf{\cal S}_{\eta}\) if and only if
\[\mathbb{P}(dx_{0})=\eta(dx_{0})\quad\mbox{and}\quad\mathbb{P}(dx_{0},\ldots, dx_{t},da_{t},dx_{t+1})=\mathbb{P}(dx_{0},\ldots,dx_{t},da_{t})Q(dx_{t+1}|x_{t},a_{t}) \tag{2.4}\]
for \(t\in\mathbb{N}\), where the above differential notation refers to the marginal of \(\mathbb{P}\) on the corresponding variables. Let us show the claim by contradiction. So, assume that there exists a sequence \(\{\pi_{k}\}_{k\in\mathbb{N}}\) in \(\tilde{\mathbf{\Pi}}\) satisfying \(\mathbb{E}_{\eta,\pi_{k}}[T_{\Delta}]\geq 2^{k}\) for any \(k\in\mathbb{N}\). Consider \(\mathbb{P}\in\mathbf{\cal P}({\bf H}_{\infty})\) defined as
\[\mathbb{P}=\sum_{k\in\mathbb{N}}\frac{1}{2^{k+1}}\mathbb{P}_{\eta,\pi_{k}}.\]
It is easily seen that \(\mathbb{P}\) satisfies (2.4). Therefore \(\mathbb{P}\in\mathbf{\cal S}_{\eta}\), so that there exists \(\pi\in\tilde{\mathbf{\Pi}}\) with \(\mathbb{P}=\mathbb{P}_{\eta,\pi}\). We have, however,
\[\mathbb{E}_{\eta,\pi}[T_{\Delta}]=\int_{{\bf H}_{\infty}}T_{\Delta}d\mathbb{ P}=\sum_{k\in\mathbb{N}}\frac{1}{2^{k+1}}\int_{{\bf H}_{\infty}}T_{\Delta}d \mathbb{P}_{\eta,\pi_{k}}=\sum_{k\in\mathbb{N}}\frac{1}{2^{k+1}}\mathbb{E}_{ \eta,\pi_{k}}[T_{\Delta}]=\infty,\]
leading to a contradiction with the condition (b) in Definition 2.3.
(ii). This result, for both the absorbing and the uniformly absorbing cases, is a direct consequence of the fact that \(\alpha\mathbb{P}_{\eta,\pi}+(1-\alpha)\mathbb{P}_{\eta^{\prime},\pi}=\mathbb{ P}_{\alpha\eta+(1-\alpha)\eta^{\prime},\pi}\) for any \(\eta,\eta^{\prime}\) in \(\mathbf{\cal P}({\bf X})\) and \(\pi\in\tilde{\mathbf{\Pi}}\). \(\Box\)
Equilibria of the game model.Given a strategy profile \(\pi\in\mathbf{\Pi}\), the total expected payoff of player \(i\in\{1,\ldots,N\}\) is
\[R^{i}(\eta,\pi)=\mathbb{E}_{\eta,\pi}\Big{[}\sum_{t=0}^{\infty}r^{i}(X_{t},A_ {t})\Big{]}=\mathbb{E}_{\eta,\pi}\Big{[}\sum_{0\leq t<T_{\Delta}}r^{i}(X_{t},A _{t})\Big{]}\in\mathbb{R}, \tag{2.5}\]
and the corresponding total expected cost (for the constraints) is
\[C^{i}(\eta,\pi)=\mathbb{E}_{\eta,\pi}\Big{[}\sum_{t=0}^{\infty}c^{i}(X_{t},A_ {t})\Big{]}=\mathbb{E}_{\eta,\pi}\Big{[}\sum_{0\leq t<T_{\Delta}}c^{i}(X_{t},A _{t})\Big{]}\in\mathbb{R}^{p}. \tag{2.6}\]
In Section 2.3, we will make assumptions ensuring that \(R^{i}(\eta,\pi)\) and \(C^{i}(\eta,\pi)\) are finite --see Remark 3.5(a)-- for any \(\pi\in\mathbf{\Pi}\). For the remainder of this section, we will assume that this is the case. We say that strategy profile \(\pi\in\mathbf{\Pi}\) satisfies the constraint of player \(i\) when \(C^{i}(\eta,\pi)\geq\rho_{i}\).
We propose now the definitions of constrained and unconstrained Nash equilibria.
**Definition 2.5**: _We say that the strategy profile \(\pi_{*}\in\mathbf{\Pi}\) is:_
1. _an unconstrained Nash equilibrium in the class of all strategy profiles if for every_ \(1\leq i\leq N\) _and_ \(\sigma\in\mathbf{\Pi}^{i}\) _we have_ \[R^{i}(\eta,(\pi_{*}^{-i},\sigma))\leq R^{i}(\eta,\pi_{*});\]
2. _a constrained Nash equilibrium in the class of all strategy profiles if for every_ \(1\leq i\leq N\) _we have_ \(C^{i}(\eta,\pi_{*})\geq\rho^{i}\) _and, in addition,_ \[\forall\sigma\in\mathbf{\Pi}^{i},\ C^{i}(\eta,(\pi_{*}^{-i},\sigma)) \geq\rho^{i}\ \Rightarrow\ R^{i}(\eta,(\pi_{*}^{-i},\sigma))\leq R^{i}(\eta,\pi_{*}).\]
Next we introduce the standard Slater condition. It states that whatever policies the other players use, player \(i\) can find a policy so as to satisfy his own constraints.
**Definition 2.6**: _We say that the game model \(\mathbf{\cal G}(\eta,\rho)\) satisfies the Slater condition when, for each strategy profile \(\pi\in\mathbf{\Pi}\) and any player \(1\leq i\leq N\), there exists \(\sigma^{i}\in\mathbf{\Pi}^{i}\) such that \(C^{i}(\eta,(\pi^{-i},\sigma^{i}))>\rho^{i}\)._
### Assumptions and Young measures
In this section we will formulate the assumptions we will need in the sequel. We will also introduce the notion of Young measure. We present three sets of basic assumptions that slightly differ from each other. It is easy to notice that assumption A is weaker than A\({}^{\prime}\), which is itself weaker than \(\mathcal{A}\). In what follows, many results will be proved for an arbitrary initial distribution in \(\boldsymbol{\mathcal{P}}_{\lambda}(\mathbf{X})\) which will be noted generically by \(\eta\).
**Assumption A** Consider the game model \(\mathcal{G}(\eta,\rho)\) with initial distribution \(\eta\in\boldsymbol{\mathcal{P}}(\mathbf{X})\) and constraint constants \(\rho\in\mathbb{R}^{pN}\). We say that \(\mathcal{G}(\eta,\rho)\) satisfies Assumption A when there exist a probability measure \(\lambda\in\boldsymbol{\mathcal{P}}(\mathbf{X})\) and a set \(\Delta\in\boldsymbol{\mathfrak{X}}\) for which the following conditions hold:
1. The game model \(\mathcal{G}(\eta,\rho)\) is absorbing to \(\Delta\).
2. The game model \(\mathcal{G}(\eta,\rho)\) satisfies the Slater condition.
3. The \(\sigma\)-algebra \(\boldsymbol{\mathfrak{X}}\) is countably generated.
4. For each player \(i\in\{1,\ldots,N\}\), the action set \(\mathbf{A}^{i}\) is compact and the correspondence from \(\mathbf{X}\) to \(\mathbf{A}^{i}\) given by \(x\mapsto\mathbf{A}^{i}(x)\) is weakly measurable with nonempty compact values.
5. For each player \(i\in\{1,\ldots,N\}\), we have that \(r^{i}\) and \(c^{i}\) are bounded Caratheodory functions, that is, \(r^{i}\in\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A},\mathbb{R})\) and \(c^{i}\in\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A},\mathbb{R}^{p})\). Let \(\mathbf{r}>0\) be a componentwise bound for all the \(r^{i}\) and \(c^{i}\).
6. There exists a measurable density function \(q:\mathbf{X}\times\mathbf{X}\times\mathbf{A}\to\mathbb{R}^{+}\) such that for each \(B\in\boldsymbol{\mathfrak{X}}\) and \((x,a)\in\mathbf{X}\times\mathbf{A}\) we have \[Q(B|x,a)=\int_{B}q(y,x,a)\lambda(dy)\quad\text{and}\quad\lim_{n\to\infty}\int_ {\mathbf{X}}|q(y,x,a_{n})-q(y,x,a)|\lambda(dy)=0\] whenever \(a_{n}\to a\) in \(\mathbf{A}\).
7. The game model \(\mathcal{G}(\lambda,\rho)\) is absorbing to \(\Delta\).
8. The initial distribution \(\eta\) satisfies \(\eta\ll\lambda\).
**Assumption A\({}^{\prime}\)** Consider the game model \(\mathcal{G}(\eta,\rho)\) with initial distribution \(\eta\in\boldsymbol{\mathcal{P}}(\mathbf{X})\) and constraint constants \(\rho\in\mathbb{R}^{pN}\). We say that \(\mathcal{G}(\eta,\rho)\) satisfies Assumptions A\({}^{\prime}\) when it satisfies Assumption A except for (A\({}_{1}\)) which is replaced by the following stronger condition:
1. The game model \(\mathcal{G}(\eta,\rho)\) is uniformly absorbing to \(\Delta\).
**Assumption \(\boldsymbol{\mathcal{A}}\)** Consider the game model \(\mathcal{G}(\eta,\rho)\) with initial distribution \(\eta\in\boldsymbol{\mathcal{P}}(\mathbf{X})\) and constraint constants \(\rho\in\mathbb{R}^{pN}\). We say that \(\mathcal{G}(\eta,\rho)\) satisfies Assumptions \(\mathcal{A}\) when it satisfies Assumption A\({}^{\prime}\) except for (A\({}_{7}\)) which is replaced by the following stronger condition:
1. The game model \(\mathcal{G}(\lambda,\rho)\) is uniformly absorbing to \(\Delta\).
The Assumptions A, A\({}^{\prime}\) and \(\mathcal{A}\) will be discussed in the next section.
The space of Young measures \(\boldsymbol{\mathcal{Y}}\).Now we introduce the notion of Young measure in order to endow the spaces \(\mathbf{M}\) and \(\hat{\mathbf{M}}\) of stationary Markov strategies with a suitable metric. Note also that the last assumption we will need in this paper will be formulated at the end of this section, in terms of continuity properties of functions defined on these Young measure spaces. To do this we will rely on the reference probability measure \(\lambda\) on the state space \(\mathbf{X}\) introduced in Assumption (A\({}_{6}\)).
Recall that, for the noncooperative game model, the class of stationary Markov profiles of the players is \(\mathbf{M}^{1}\times\ldots\times\mathbf{M}^{N}\). Given a player \(1\leq i\leq N\), we consider in \(\mathbf{M}^{i}\) the following equivalence relation: for \(\phi,\varphi\in\mathbf{M}^{i}\) we say that
\[\phi\sim\varphi\quad\text{when}\quad\phi(\cdot|x)=\varphi(\cdot|x)\quad\text{ for $\lambda$-almost every $x\in\mathbf{X}$}.\]
We will denote by \(\boldsymbol{\mathcal{Y}}^{i}\) the corresponding family of equivalence classes, which will be referred to as _Young measures_. The set \(\boldsymbol{\mathcal{Y}}^{i}\) of Young measures is equipped with the narrow (stable) topology: it is the coarsest topology that makes the mappings
\[\pi^{i}\mapsto\int_{\mathbf{X}}\int_{\mathbf{A}^{i}}f(x,a^{i})\pi^{i}(da^{i}| x)\lambda(dx)\]
continuous for any \(f\) which is a Caratheodory function on \(\mathbf{X}\times\mathbf{A}^{i}\) bounded in \(L^{1}\); more precisely, this means that \(f\in\mathcal{C}ar(\mathbf{X}\times\mathbf{A}^{i},\mathbb{R})\) is such that for some \(F\in L^{1}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\) we have \(|f(x,a^{i})|\leq F(x)\) for every \((x,a^{i})\in\mathbf{X}\times\mathbf{A}^{i}\); see, e.g., [4, Theorem 2.2]. Using [5, Lemma 1], it follows that \(\boldsymbol{\mathcal{Y}}^{i}\) is a compact metric space for the narrow topology. We also define
\[\boldsymbol{\mathcal{Y}}=\boldsymbol{\mathcal{Y}}^{1}\times\ldots\times \boldsymbol{\mathcal{Y}}^{N},\]
which is endowed with the product topology, and it is therefore a compact metric space as well.
Young measures \(\boldsymbol{\mathcal{Y}}\) and Markov strategies \(\mathbf{M}\).We will say that two Markov strategies of the noncooperative game model \(\pi=(\phi^{1},\ldots,\phi^{N})\) and \(\pi^{\prime}=(\varphi^{1},\ldots,\varphi^{N})\) in \(\mathbf{M}\) are in the same equivalence class of Young measures whenever \(\phi^{i}\sim\varphi^{i}\) for every \(1\leq i\leq N\), and we will write \(\pi\sim\pi^{\prime}\) as well. In this case, since the initial distribution \(\eta\) is absolutely continuous with respect to \(\lambda\), and since the transition of the system has a density with respect to \(\lambda\), it is easily seen [12, Lemma 2.2] that both strategies yield the same strategic probability measure, that is, \(\mathbb{P}_{\eta,\pi}=\mathbb{P}_{\eta,\pi^{\prime}}\). Therefore, \(\pi\) and \(\pi^{\prime}\) are indistinguishable since they are driven by the same strategic probability measures and, besides, they also have the same costs and rewards since those are defined from the corresponding strategic measures. Hence, in the sequel _we shall identify the set of Young measures \(\boldsymbol{\mathcal{Y}}\) with the family of Markov profiles \(\mathbf{M}\) of the players_.
Consistence of the notation.Given \(\pi\in\mathbf{M}\) and a function \(f\in\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A})\), define the measurable function \(f_{\pi}\) on \(\mathbf{X}\) as
\[f_{\pi}(x)=\int_{\mathbf{A}}f(x,a)\pi(da|x)\quad\text{for $x\in\mathbf{X}$}.\]
If \(\pi^{\prime}\in\mathbf{M}\) is such that \(\pi^{\prime}\sim\pi\) then \(f_{\pi^{\prime}}=f_{\pi}\) with \(\lambda\)-probability one, and so they both belong to the same equivalence class in \(L^{\infty}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\). Therefore, it is consistent to define the function \(f_{\pi}\in L^{\infty}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\) for \(\pi\in\boldsymbol{\mathcal{Y}}\).
Suppose that \(v\in L^{\infty}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\) and \(\pi\in\mathbf{M}\). We have
\[Q_{\pi}v(x)=\int_{\mathbf{A}}\int_{\mathbf{X}}v(y)q(y,x,a)\lambda(dy)\pi(da|x) \quad\text{for $x\in\mathbf{X}$}.\]
Hence, the above integral does not depend on the representative \(v\) chosen in \(L^{\infty}({\bf X},\mathfrak{X},\lambda)\); and, in addition, if \(\pi^{\prime}\in{\bf M}\) is in the same equivalence class of Young measures as \(\pi\), then \(Q_{\pi}v=Q_{\pi^{\prime}}v\) with \(\lambda\)-probability one, and so both \(\pi\) and \(\pi^{\prime}\) yield the same element in \(L^{\infty}({\bf X},\mathfrak{X},\lambda)\). Consequently, the notation \(Q_{\pi}v\in L^{\infty}({\bf X},\mathfrak{X},\lambda)\) for \(v\in L^{\infty}({\bf X},\mathfrak{X},\lambda)\) and \(\pi\in\mathbf{\mathcal{Y}}\) is consistent. The same applies for the successive compositions \(Q_{\pi}^{t}v\) of the stochastic kernels for \(t\geq 0\). Note also that Assumption (A\({}_{6}\)) implies, in particular, that \(Qv\) is well defined, meaning that \(Qv=Qv^{\prime}\) whenever \(v\) and \(v^{\prime}\) belong to the same equivalence class in \(L^{\infty}({\bf X},\mathfrak{X},\lambda)\).
Young measures \(\tilde{\mathbf{\mathcal{Y}}}\) and Markov strategies \(\tilde{\mbox{M}}\).We recall that a stationary correlated Markov strategy \(\pi\in\tilde{\mbox{M}}\) is given by a stochastic kernel \(\pi\) on \({\bf A}\) given \({\bf X}\) with \(\pi({\bf A}(x)|x)=1\) for every \(x\in{\bf X}\). As before, we can identify kernels \(\pi,\pi^{\prime}\in\tilde{\mbox{M}}\) such that \(\pi(\cdot|x)=\pi^{\prime}(\cdot|x)\)\(\lambda\)-a.s. on \({\bf X}\) (written \(\pi\sim\pi^{\prime}\)) and then define the set \(\tilde{\mathbf{\mathcal{Y}}}\) of Young measures as the corresponding equivalence classes. The associated narrow topology is the coarsest one that makes the mappings \(\pi\mapsto\int_{\bf X}\int_{\bf A}f(x,a)\pi(da|x)\lambda(dx)\) continuous for every \(f\in{\cal C}ar({\bf X}\times{\bf A},\mathbb{R})\) bounded by a \(\lambda\)-integrable function. Again, \(\tilde{\mathbf{\mathcal{Y}}}\) is a compact metric space with its narrow topology.
Observe that the equivalence relation of \(\mathcal{Y}\) is compatible with that of \(\tilde{\mathbf{\mathcal{Y}}}\) meaning that, given \(\pi,\pi^{\prime}\in{\bf M}\subseteq\tilde{\mbox{M}}\), we have \(\pi\sim\pi^{\prime}\) in \({\bf M}\) if and only if \(\pi\sim\pi^{\prime}\) in \(\tilde{\mbox{M}}\). Notations such as \(f_{\pi}\) and \(Q_{\pi}^{t}v\) for \(\pi\in\tilde{\mathbf{\mathcal{Y}}}\), \(f\in{\cal C}ar_{b}({\bf X}\times{\bf A})\), \(v\in L^{\infty}({\bf X},\mathfrak{X},\lambda)\), and \(t\geq 0\) are consistent as well. Invoking the same previous arguments, we shall hereafter identify the space \(\tilde{\mathbf{\mathcal{Y}}}\) of Young measures and the class \(\tilde{\mbox{M}}\) of correlated Markov strategies of the players.
We make the following very important remark.
**Remark 2.7**: _Although it is true that_
\[\mathbf{\mathcal{Y}}^{1}\times\ldots\times\mathbf{\mathcal{Y }}^{N}=\mathbf{\mathcal{Y}}\subseteq\tilde{\mathbf{\mathcal{ Y}}},\]
_it turns out that the trace of the narrow topology of \(\tilde{\mathbf{\mathcal{Y}}}\) on \(\mathcal{Y}\) does not coincide, in general, with the product topology of the \(\mathbf{\mathcal{Y}}^{i}\). Namely, suppose that \(\{\pi_{n}\}\) and \(\pi\) are in \(\mathbf{\mathcal{Y}}\). If \(\pi_{n}\to\pi\) in \(\tilde{\mathbf{\mathcal{Y}}}\) then it is easy to verify that \(\pi_{n}\to\pi\) in \(\mathcal{Y}\). The converse: \(\pi_{n}\to\pi\) in \(\mathcal{Y}\) implies \(\pi_{n}\to\pi\) in \(\tilde{\mathbf{\mathcal{Y}}}\), however, is not necessarily true. Therefore, to fix the terminology, by convergence in \(\mathcal{Y}\) we shall refer to convergence in the product topology of the \(\mathbf{\mathcal{Y}}^{i}\), whereas convergence in \(\tilde{\mathbf{\mathcal{Y}}}\) will mean convergence in the narrow topology of \(\tilde{\mathbf{\mathcal{Y}}}\)._
We now introduce an additional condition that will allow us to establish continuity results for the game model. These continuity conditions will be expressed in terms of functions defined on the set of Markov strategies \(\mathcal{Y}\). In the next section, we will propose sufficient conditions for Assumption B below.
**Assumption B** We say that the game model \({\cal G}(\eta,\rho)\) satisfies Assumption B when the following mappings, defined on \(\mathcal{Y}\) and taking values in \(L^{\infty}({\bf X},\mathfrak{X},\lambda)\),
\[\pi\mapsto r_{\pi}^{i}\,,\quad\pi\mapsto c_{\pi}^{i,j}\,,\quad\mbox{and}\quad \pi\mapsto Q_{\pi}v\]
are continuous for any \(1\leq i\leq N\), \(1\leq j\leq p\), and \(v\in L^{\infty}({\bf X},\mathfrak{X},\lambda)\).
## 3 Main results
In this section we will present our two main results. The first one shows the existence of a Markovian equilibrium in the case where \(\lambda\) is absolutely continuous with respect to the initial distribution \(\eta\) and under assumptions A\({}^{\prime}\) and B. The second result relaxes the condition \(\lambda\ll\eta\) but requires strengthening Assumption A\({}^{\prime}\) to \(\mathcal{A}\).
**Proposition 3.1**: _Suppose that we are given an initial distribution \(\eta\in\mathbf{\mathcal{P}}({\bf X})\) and constraint constants \(\rho\in\mathbb{R}^{Np}\) such that \(\lambda\ll\eta\), and that the game \(\mathcal{G}(\eta,\rho)\) satisfies Assumptions \({\rm A}^{\prime}\) and \({\rm B}\). Both the constrained and the unconstrained game have a stationary Markov profile which is a Nash equilibrium in the class of all strategy profiles._
**Proof.** See Section 5.1\(\Box\)
**Theorem 3.2**: _Suppose that we are given an initial distribution \(\eta\in\mathbf{\mathcal{P}}({\bf X})\) and constraint constants \(\rho\in\mathbb{R}^{Np}\) such that the game \(\mathcal{G}(\eta,\rho)\) satisfies Assumptions \(\mathcal{A}\) and \({\rm B}\). Both the constrained and the unconstrained game have a stationary Markov profile which is a Nash equilibrium in the class of all strategy profiles._
**Proof.** See Section 5.2\(\Box\)
**Remark 3.3**: _The proof of Theorem 3.2 will proceed in several steps. First, we need to consider the case where the reference probability measure \(\lambda\) is absolutely continuous with respect to the initial distribution \(\eta\) and show the existence of a Markovian equilibrium under assumptions \({\rm A}^{\prime}\) and \({\rm B}\), which is precisely Proposition 3.1. Then, in a second step, we will drop the hypothesis \(\lambda\ll\eta\) by considering a sequence of game models \(\mathcal{G}(\eta_{n},\rho_{n})\) that satisfies Assumption \({\rm A}^{\prime}\) and \({\rm B}\) for \(\eta_{n}\) given by a convex combinations of \(\eta\) and \(\lambda\):_
\[\eta_{n}=\frac{n}{n+1}\eta+\frac{1}{n+1}\lambda \tag{3.7}\]
_and for suitably defined constraint constants \(\rho_{n}\). On the basis of Proposition 3.1, this yields the existence of a constrained Nash equilibrium \(\hat{\pi}_{n}\in\mathbf{\mathcal{Y}}\) for the game model \(\mathcal{G}(\eta_{n},\rho_{n})\). Finally, we will prove Theorem 3.2 by showing that a limit point of the sequence \(\{\hat{\pi}_{n}\}\in\mathbf{\mathcal{Y}}\) provides a Markovian equilibrium for the game model \(\mathcal{G}(\eta,\rho)\)._
Some comments regarding the assumptions are in order now.
**Remark 3.4**:
1. _In the context of an absorbing game model_ \(\mathcal{G}(\eta,\rho)\)_, the assumptions_ \(({\rm A}_{1})\)_-_\(({\rm A}_{6})\) _and_ \(({\rm A}_{8})\) _are conditions classically met in the literature, see for example_ _[_11_]_ _for the special case of a discounted model. Assumption_ \(({\rm A}_{7})\) _is a key technical condition which will allow to show very important properties of the absorbing model by showing in particular that a measure in_ \(\mathbf{\mathcal{M}}^{+}({\bf X}\times{\bf A})\) _is an occupation measure if and only if it satisfies the characteristic equations (see items (i) and (iv) of Proposition_ 4.6_). From this point of view, all the conditions of Assumption_ \({\rm A}\) _are very natural._
2. _The proof of the existence of a Markovian noncooperative equilibrium relies on the fact that the set of the marginals on_ \({\bf X}\times{\bf A}^{i}\) _of the occupation measure is a compact space in order to use the Kakutani-Fan-Glicksberg fixed point Theorem. It will be shown in Proposition_ 4.10 _that, under Assumption_ \({\rm A}\)_, this set is compact if and only if the model is uniformly absorbing. This is why it is necessary to replace Assumption_ \({\rm A}\) _by Assumption_ \({\rm A}^{\prime}\) _to prove Proposition_ 3.1_. Now regarding the proof of Theorem_ 3.2_, a key step is to show_ \(\mathcal{G}(\eta_{n},\rho_{n})\) _is uniformly absorbing as explained in Remark_ 3.3_. It is, therefore, necessary to reinforce the absorbing condition of_ \(\mathcal{G}(\lambda,\rho)\) _by assuming that_ \(\mathcal{G}(\lambda,\rho)\) _is uniformly absorbing, which leads to replace Assumption_ \({\rm A}^{\prime}\) _by Assumption_ \(\mathcal{A}\) _in the statement of the theorem_ 3.2_._
Let us now describe some consequences of these assumptions.
**Remark 3.5**:
1. _The functions_ \(r^{i}\) _and_ \(c^{i}\) _being bounded under_ \((\mathrm{A}_{5})\) _and the game model_ \(\mathcal{G}(\eta,\rho)\) _being absorbing to_ \(\Delta\) _under_ \((\mathrm{A}_{1})\) _(and also a fortiori under_ \((\mathrm{A}^{\prime}_{1})\)_) we have that_ \(R^{i}(\eta,\pi)\leq\mathbf{r}\sup_{\pi\in\Pi}\mathbb{E}_{\eta,\pi}[T_{\Delta}]\) _and_ \(C^{i}(\eta,\pi)\leq\mathbf{r}\sup_{\pi\in\Pi}\mathbb{E}_{\eta,\pi}[T_{\Delta}] \mathbf{1}\) _(see equations (_2.5_) and (_2.6_)), which are therefore finite for any_ \(\pi\in\Pi\) _and each player_ \(i\) _by using Proposition_ 2.4_._
2. _The Slater condition_ \((\mathrm{A}_{2})\) _implies that_ \(\lambda(\Delta^{c})>0\)_. Indeed, otherwise we would have_ \(\eta(\Delta)=1\) _and so the process would always remain with probability one in_ \(\Delta\) _and the corresponding reward functions would be all zero. In this very particular case where_ \(\lambda(\Delta^{c})=0\)_, the problem is degenerate and all Markov strategies are noncooperative equilibria._
3. _By Assumption_ \((\mathrm{A}_{4})\)_, the correspondences_ \(x\mapsto\mathbf{A}^{i}(x)\) _are measurable_ _[_1_, Lemma 18.2]_ _and they have measurable graph_ _[_1_, Theorem 18.6]__. Therefore, the following sets are measurable:_ \[\mathbf{K}^{i}=\{(x,a^{i})\in\mathbf{X}\times\mathbf{A}^{i}:a^{i} \in\mathbf{A}^{i}(x)\}\in\mathfrak{X}\otimes\mathfrak{B}(\mathbf{A}^{i})\quad \text{for $1\leq i\leq N$, and}\] \[\mathbf{K}=\{(x,a)\in\mathbf{X}\times\mathbf{A}:a\in\mathbf{A}(x)\} \in\mathfrak{X}\otimes\mathfrak{B}(\mathbf{A}).\] _The Kuratowski-Ryll-Nardzewski selection theorem_ _[_1_, Theorem 18.13]_ _yields the existence of measurable selectors for_ \(x\mapsto\mathbf{A}_{i}(x)\) _for each_ \(1\leq i\leq N\)_. In particular,_ \(\mathbf{M}^{i}\) _is nonempty, and so are all the sets of strategies defined so far:_ \(\mathbf{\Pi}^{i}\)_,_ \(\mathbf{\Pi}\)_,_ \(\tilde{\mathbf{\Pi}}\)_,_ \(\mathbf{M}\)_, and_ \(\tilde{\mathbf{M}}\)_._
We conclude this section by proposing sufficient conditions yielding the continuity properties stated in Assumption B. We show that some game models classically studied in the literature satisfy our hypotheses, such as countable state space models or ARAT-type models. It is also important to note that the expected discounted models are a special case of the total expected absorbing models. In this way, our results here generalize those in [11, 21].
**Corollary 3.6**: _Suppose that the game model \(\mathcal{G}(\eta,\rho)\) satisfies Assumption A. Under any of the conditions (i) and (ii) below, Assumption B is satisfied._
1. _The state space_ \(\mathbf{X}\) _is countable._
2. _The game model has ARAT structure (additive reward, additive transition) meaning that:_ 1. _(Additive reward.) For any_ \(1\leq i\leq N\)_,_ \(1\leq j\leq p\)_, and_ \(1\leq l\leq N\)_, there exist functions_ \(r^{i}_{l}\) _and_ \(c^{i,j}_{l}\) _in_ \(\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A}^{l})\) _such that_ \[r^{i}(x,a^{1},\ldots,a^{N})=\sum_{l=1}^{N}r^{i}_{l}(x,a^{l})\quad\text{and} \quad c^{i,j}(x,a^{1},\ldots,a^{N})=\sum_{l=1}^{N}c^{i,j}_{l}(x,a^{l})\] _for any_ \((x,a)\in\mathbf{X}\times\mathbf{A}\)_._ 2. _(Additive transition.) There exist nonnegative measurable functions_ \(q^{l}:\mathbf{X}\times\mathbf{X}\times\mathbf{A}^{l}\to\mathbb{R}\) _such that_ \[Q(B|x,a^{1},\ldots,a^{N})=\sum_{l=1}^{N}\int_{B}q^{l}(y,x,a^{l})\lambda(dy) \quad\text{for $B\in\mathfrak{X}$ and $(x,a^{1},\ldots,a^{N})\in\mathbf{X}\times\mathbf{A}$}\] _with, in addition,_ \(\lim_{n\to\infty}\int_{\mathbf{X}}|q^{l}(y,x,a^{l}_{n})-q^{l}(y,x,a)|\lambda( dy)=0\) _for any_ \(x\in\mathbf{X}\) _whenever_ \(a^{l}_{n}\to a^{l}\) _as_ \(n\to\infty\) _in_ \(\mathbf{A}^{l}\)_._
**Proof.** See Section 5.3.
**Remark 3.7**: _The absolute continuity condition in Assumption \((\mathrm{A}_{8})\) is not restrictive with respect to Assumptions \(\mathrm{A}\), \(\mathrm{A}^{\prime}\) and \(\mathcal{A}\). Indeed, if it were not true that \(\eta\ll\lambda\), then we would consider the reference probability measure \(\bar{\lambda}=(\eta+\lambda)/2\). Clearly we have \(\eta\ll\bar{\lambda}\), while the function \(\bar{q}(y,x,a)=q(y,x,a)(d\lambda/d\bar{\lambda})(y)\) would satisfy Assumption \((\mathrm{A}_{6})\). Finally, regarding Assumptions \((\mathrm{A}_{7})\) or \((\mathcal{A}_{7})\), the convexity property in Proposition 2.4(ii) ensures that the game model \(\mathcal{G}(\bar{\lambda},\rho)\) is (respectively, uniformly) absorbing to \(\Delta\) if \(\mathcal{G}(\lambda,\rho)\) and \(\mathcal{G}(\eta,\rho)\) are (respectively, uniformly) absorbing. However, changing the measure \(\lambda\) to \(\bar{\lambda}\) may affect Assumption \(\mathrm{B}\). Nevertheless, it is important to emphasize that conditions (i) and (ii) of Corollary 3.6 implying Assumption \(\mathrm{B}\) are not affected by a change of the reference probability measure \(\lambda\)._
A note on discounted games.As mentioned in [14, p. 132], a \(\beta\)-discounted model can be transformed into an equivalent absorbing model just by adding an isolated absorbing cemetery state \(x_{\Delta}\) with a single available action \(a_{\Delta}\) at no reward or cost. In this way, the new state space is \(\mathbf{X}^{\prime}=\mathbf{X}\cup\{x_{\Delta}\}\) and the transitions of the system are given by
\[Q^{\prime}(B|x,a)=\begin{cases}\beta Q(B|x,a)\text{ when }B\subseteq\mathbf{X} \\ 1-\beta\text{ if }B=\{x_{\Delta}\}\end{cases}\]
for \((x,a)\in\mathbf{X}\times\mathbf{A}\) and \(Q^{\prime}(\{x_{\Delta}\}|x_{\Delta},a_{\Delta})=1\). The reference probability measure would be
\[\lambda^{\prime}(B)=\beta\lambda(B\cap\mathbf{X})+(1-\beta)\delta_{\{x_{ \Delta}\}}(B)\quad\text{for measurable }B\subseteq\mathbf{X}^{\prime}.\]
It is then easily seen that this game model is uniformly absorbing to \(\{x_{\Delta}\}\) for any initial distribution. As a consequence, Assumptions \((\mathrm{A}_{1})\), \((\mathrm{A}_{7})\), \((\mathrm{A}_{1}^{\prime})\) and \((\mathcal{A}_{7})\) can be dropped, which makes Assumptions \(\mathrm{A}\), \(\mathrm{A}^{\prime}\) and \(\mathcal{A}\) equivalent. This shows that under Assumptions \(\mathrm{A}\) and \(\mathrm{B}\), we obtain the existence of constrained and unconstrained equilibria for discounted games. As a consequence, using Corollary 3.6 we would obtain the results in [11] and [21].
## 4 Occupation measures
### Occupation measures and their topological properties
_Throughout this subsection we shall assume that we are given a game model \(\mathcal{G}(\eta,\rho)\) with initial distribution \(\eta\in\boldsymbol{\mathcal{P}}_{\lambda}(\mathbf{X})\) and constraint constant \(\rho\in\mathbb{R}^{pN}\) that satisfies Assumption \(\mathrm{A}\)._
First of all we state some useful properties of the kernel \(\mathbb{I}_{\Delta^{c}}\) on \(\mathbf{X}\) given \(\mathbf{X}\) which was defined in Section 1.
**Lemma 4.1**: _The kernel \(\mathbb{I}_{\Delta^{c}}\) on \(\mathbf{X}\) given \(\mathbf{X}\) satisfies the following properties. Given any \(\pi\in\tilde{\boldsymbol{\mathcal{Y}}}\), \(f\in L^{\infty}(\mathbf{X},\mathfrak{X},\lambda)\), and \(\mu\in\boldsymbol{\mathcal{M}}^{+}(\mathbf{X})\):_
1. \(Q_{\pi}\mathbb{I}_{\Delta^{c}}(B|x)=Q_{\pi}(B\cap\Delta^{c}|x)\) _for_ \(B\in\mathfrak{X}\) _and_ \(x\in\mathbf{X}\)_._
2. \(\mathbb{I}_{\Delta^{c}}Q_{\pi}(B|x)=Q_{\pi}(B|x)\mathbf{I}_{\Delta^{c}}(x)\) _for_ \(B\in\mathfrak{X}\) _and_ \(x\in\mathbf{X}\)_._
3. \(\mathbb{I}_{\Delta^{c}}Q_{\pi}\mathbb{I}_{\Delta^{c}}=Q_{\pi}\mathbb{I}_{ \Delta^{c}}\) _and, as a consequence,_ \(\mathbb{I}_{\Delta^{c}}(Q_{\pi}\mathbb{I}_{\Delta^{c}})^{t}=Q_{\pi}^{t} \mathbb{I}_{\Delta^{c}}=(Q_{\pi}\mathbb{I}_{\Delta^{c}})^{t}\) _for any_ \(t\geq 1\)_._
4. \((\mathbb{I}_{\Delta^{c}}f)(x)=f(x)\mathbf{I}_{\Delta^{c}}(x)\) _for_ \(x\in\mathbf{X}\)_._
5. \(\mu\mathbb{I}_{\Delta^{c}}(B)=\mu(B\cap\Delta^{c})\)_, which can be also written_ \(\mu\mathbb{I}_{\Delta^{c}}=\mu_{\Delta^{c}}\)_._
Next we propose the definition of the occupation measure induced by a correlated strategy of the players. This definition can be specialized to noncooperative strategy profiles. We recall that we are making the convention that the sum over an empty set is zero.
**Definition 4.2**: _Given any strategy profile \(\pi\in\tilde{\boldsymbol{\Pi}}\), the occupation measure \(\mu_{\eta,\pi}\in\boldsymbol{\mathcal{M}}^{+}(\mathbf{X}\times\mathbf{A})\) for the initial distribution \(\eta\in\boldsymbol{\mathcal{P}}_{\lambda}(\mathbf{X})\) is defined, for measurable sets \(B\in\boldsymbol{\mathfrak{X}}\) and \(D^{i}\in\mathfrak{B}(\mathbf{A}^{i})\) for \(1\leq i\leq N\), as_
\[\mu_{\eta,\pi}(B\times D^{1}\times\ldots\times D^{N}) = \mathbb{E}_{\eta,\pi}\Big{[}\sum_{t=0}^{\infty}\mathbf{I}_{\{T_{ \Delta}>t\}}\cdot\mathbf{I}_{\{X_{t}\in B,A_{t}^{1}\in D^{1},\ldots,A_{t}^{N} \in D^{N}\}}\Big{]}.\]
_We introduce the notations \(\tilde{\boldsymbol{\mathcal{O}}}_{\eta}=\{\mu_{\eta,\pi}:\pi\in\tilde{ \boldsymbol{\Pi}}\}\) and \(\boldsymbol{\mathcal{O}}_{\eta}=\{\mu_{\eta,\pi}:\pi\in\boldsymbol{\Pi}\}\), with \(\boldsymbol{\mathcal{O}}_{\eta}\subseteq\tilde{\boldsymbol{\mathcal{O}}}_{\eta}\)._
Some important comments concerning this definition are in order.
**Remark 4.3**:
1. _Note that_ \(\mu_{\eta,\pi}(\mathbf{X}\times\mathbf{A})=\mathbb{E}_{\eta,\pi}[T_{\Delta}]\) _for_ \(\pi\in\tilde{\boldsymbol{\Pi}}\)_, which is finite because_ \(\mathcal{G}(\eta,\rho)\) _is absorbing to_ \(\Delta\)_. Moreover, by Proposition_ 2.4_, we have_ \[\sup_{\pi\in\tilde{\boldsymbol{\Pi}}}\mu_{\eta,\pi}(\mathbf{X}\times\mathbf{A} )<\infty,\] _which is usually referred to as_ \(\tilde{\boldsymbol{\mathcal{O}}}_{\eta}\) _being bounded. Also observe that, by construction of the process,_ \(\mu_{\eta,\pi}(\mathbf{K}^{c})=0\)_. Clearly, the set_ \(\boldsymbol{\mathcal{O}}_{\eta}\) _of occupation measures of the noncooperative game inherits the same properties._
2. _Observe that, although the process will eventually visit the set_ \(\Delta\) _--it might even be_ \(\eta(\Delta)>0\) _-- we have_ \(\mu_{\eta,\pi}^{\mathbf{X}}(\Delta)=0\)_. This is because, by its definition, the occupation measure "does not count" visits to_ \(\Delta\)_. In fact, the_ \(\mathbf{X}\)_-marginal of the occupation measure is given by_ \[\mu_{\eta,\pi}^{\mathbf{X}}(B)=\mathbb{E}_{\eta,\pi}\big{[}\sum_{t=0}^{\infty }\mathbf{I}_{\{T_{\Delta}>t\}}\cdot\mathbf{I}_{\{X_{t}\in B\}}\big{]}=\sum_{t= 0}^{\infty}\mathbb{P}_{\eta,\pi}\{X_{t}\in B-\Delta\}\quad\text{for }B\in \boldsymbol{\mathfrak{X}}\] (4.8) _because we have_ \(\{X_{t}\in B,T_{\Delta}>t\}=\{X_{t}\in B-\Delta\}\)_._
3. _It follows directly from Definitions_ 2.3 _and_ 4.2_, and Assumption_ \((\mathrm{A}_{5})\) _that the total expected payoffs of the strategy profile_ \(\pi\in\boldsymbol{\Pi}\) _for the initial distribution_ \(\eta\in\boldsymbol{\mathcal{P}}_{\lambda}(\mathbf{X})\) _equal_ \[R^{i}(\eta,\pi)=\int_{\mathbf{X}\times\mathbf{A}}r^{i}d\mu_{\eta,\pi}\quad \text{and}\quad C^{i}(\eta,\pi)=\int_{\mathbf{X}\times\mathbf{A}}c^{i}d\mu_{ \eta,\pi}\quad\text{for }1\leq i\leq N.\] (4.9)
4. _Regarding Markov strategies in_ \(\mathbf{M}\) _or_ \(\tilde{\mathbf{M}}\)_, since their occupation measure is defined based on the corresponding strategic probability measures, if follows that two Markov strategies in the same equivalence class of_ \(\boldsymbol{\mathcal{Y}}\) _or_ \(\tilde{\boldsymbol{\mathcal{Y}}}\) _yield the same occupation measure. So, the notation_ \(\mu_{\eta,\pi}\) _for_ \(\pi\in\boldsymbol{\mathcal{Y}}\) _or_ \(\pi\in\tilde{\boldsymbol{\mathcal{Y}}}\) _is consistent._
Let us first show the following technical results before deriving properties on occupation measures.
**Lemma 4.4**: _Let \(\boldsymbol{\Gamma}\) be an arbitrary subset of \(\tilde{\mathbf{M}}\) and let \(\{h_{\pi}\}_{\pi\in\boldsymbol{\Gamma}}\) be a family of non-negative functions in \(L^{1}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\) which are uniformly \(\lambda\)-integrable. Under these conditions,_
\[\lim_{t\to\infty}\sup_{\pi\in\boldsymbol{\Gamma}}\int_{\mathbf{X}}Q_{\pi}^{t}( \Delta^{c}|x)h_{\pi}(x)\lambda(dx)=0.\]
**Proof.** Consider a fixed arbitrary \(\epsilon>0\). By the uniform integrability hypothesis, there exists \(c_{\epsilon}>0\) such that
\[\sup_{\pi\in\mathbf{\Gamma}}\int_{\{x\in\mathbf{X}:h_{\pi}(x)>c_{\epsilon}\}}h_{ \pi}(x)\lambda(dx)\leq\epsilon.\]
Therefore, for any \(\pi\in\mathbf{\Gamma}\) and \(t\geq 1\)
\[\int_{\mathbf{X}}Q_{\pi}^{t}(\Delta^{c}|x)h_{\pi}(x)\lambda(dx)\leq\epsilon+c_ {\epsilon}\mathbb{P}_{\lambda,\pi}\{T_{\Delta}>t\}\leq\epsilon+\frac{c_{ \epsilon}}{t}\cdot\mathbb{E}_{\lambda,\pi}[T_{\Delta}].\]
From Assumption (A\({}_{7}\)) and applying Proposition 2.4(i) we have that \(\sup_{\pi\in\mathbf{\Gamma}}\mathbb{E}_{\lambda,\pi}[T_{\Delta}]<\infty\). Hence, we choose \(t\) large enough so as to obtain that \(\sup_{\pi\in\mathbf{\Gamma}}\int_{\mathbf{X}}Q_{\pi}^{t}(\Delta^{c}|x)h_{\pi} (x)\lambda(dx)<2\epsilon\), and the result follows. \(\Box\)
**Lemma 4.5**: _Given any \(\pi\in\tilde{\mathbf{M}}\), the measure \(\gamma\in\boldsymbol{\mathcal{M}}^{+}(\mathbf{X})\) defined as_
\[\gamma=\eta\sum_{k=0}^{\infty}Q_{\pi}^{k}\mathbb{I}_{\Delta^{c}} \tag{4.10}\]
_satisfies \(\gamma\ll\lambda\) and it is the unique solution of the equation_
\[\xi=(\eta+\xi Q_{\pi})\mathbb{I}_{\Delta^{c}}\quad\text{for }\xi\in \boldsymbol{\mathcal{M}}^{+}(\mathbf{X}). \tag{4.11}\]
_Moreover, \(\gamma=\mu_{\eta,\pi}^{\mathbf{X}}\)._
**Proof.** First of all, observe that \(\gamma\) defined in (4.10) is indeed in \(\boldsymbol{\mathcal{M}}^{+}(\mathbf{X})\) because
\[\gamma(\mathbf{X})=\sum_{k=0}^{\infty}(\eta Q_{\pi}^{k}\mathbb{I}_{\Delta^{c} })(\mathbf{X})=\sum_{k=0}^{\infty}(\eta Q_{\pi}^{k})(\Delta^{c})=\sum_{k=0}^{ \infty}\mathbb{P}_{\eta,\pi}\{T_{\Delta}>k\}=\mathbb{E}_{\eta,\pi}[T_{\Delta}] <\infty.\]
Recalling that \(\eta\ll\lambda\) and using Assumption (A\({}_{6}\)), it easily follows that \(\gamma\ll\lambda\). Suppose now that \(\xi\in\boldsymbol{\mathcal{M}}^{+}(\mathbf{X})\) is a solution of (4.11). A first direct consequence is that \(\xi\ll\lambda\). Iterating this equation we obtain that
\[\xi=\eta\mathbb{I}_{\Delta^{c}}\sum_{k=0}^{t}(Q_{\pi}\mathbb{I}_{\Delta^{c}}) ^{k}+\xi(Q_{\pi}\mathbb{I}_{\Delta^{c}})^{t+1}\quad\text{for any }t\in\mathbb{N}. \tag{4.12}\]
Now, by Lemma 4.1(iii) we have \(\mathbb{I}_{\Delta^{c}}(Q_{\pi}\mathbb{I}_{\Delta^{c}})^{k}=Q_{\pi}^{k} \mathbb{I}_{\Delta^{c}}\) and \((Q_{\pi}\mathbb{I}_{\Delta^{c}})^{t+1}=Q_{\pi}^{t+1}\mathbb{I}_{\Delta^{c}}\). Therefore, the equation (4.12) becomes
\[\xi=\eta\sum_{k=0}^{t}Q_{\pi}^{k}\mathbb{I}_{\Delta^{c}}+\xi Q_{\pi}^{t+1} \mathbb{I}_{\Delta^{c}}\quad\text{for any }t\in\mathbb{N}. \tag{4.13}\]
Using Lemma 4.4 (here we make use the fact that \(\xi\ll\lambda\)) it follows that \(\xi Q_{\pi}^{t+1}\mathbb{I}_{\Delta^{c}}(\mathbf{X})=\xi Q_{\pi}^{t+1}(\Delta ^{c})\) converges to \(0\) as \(t\to\infty\). Therefore, taking the limit as \(t\to\infty\) in equation (4.12) we get that indeed \(\xi=\gamma\), which completes the proof of the uniqueness. For the last statement observe that, by (4.8), it follows that \(\gamma\) is precisely the \(\mathbf{X}\)-marginal measure of the occupation measure \(\mu_{\eta,\pi}\), that is, \(\gamma=\mu_{\eta,\pi}^{\mathbf{X}}\). \(\Box\)
**Proposition 4.6**: _The occupation measures satisfy the following properties._
1. _Given_ \(\pi\in\tilde{\bf\Pi}\)_, the occupation measure_ \(\mu_{\eta,\pi}\) _satisfies the so-called characteristic equations (written in the variable_ \(\mu\in{\mathbf{\cal M}}^{+}({\bf X}\times{\bf A})\)_):_ \[\mu({\bf K}^{c})=0\quad\mbox{and}\quad\mu^{\bf X}=(\eta+\mu Q)\mathbb{I}_{ \Delta^{c}}.\] (4.14)
2. _If_ \(\pi\in\tilde{\bf M}\) _is a correlated Markov strategy then_ \(\mu_{\eta,\pi}=\mu^{\bf X}_{\eta,\pi}\otimes\pi\)_. Moreover, if_ \(\pi\in{\bf M}\) _then_ \(\mu_{\eta,\pi}=\mu^{\bf X\times A}_{\eta,\pi}\otimes\pi^{-i}\) _for any_ \(i\in\{1,\ldots,N\}\)_._
3. _If_ \(\pi\in{\bf\Pi}\) _is such that_ \(\pi^{-i}\in{\bf M}^{-i}\) _then there exists_ \(\sigma\in{\bf M}^{i}\) _with_ \(\mu_{\eta,\pi}=\mu_{\eta,(\pi^{-i},\sigma)}\)_._
4. _If_ \(\mu\in{\mathbf{\cal M}}^{+}({\bf X}\times{\bf A})\) _is a solution of (_4.14_) then there exists_ \(\pi\in\tilde{\bf M}\) _such that_ \(\mu=\mu_{\eta,\pi}\) _and so,_ \[\tilde{\mathbf{\cal O}}_{\eta}=\big{\{}\mu\in{\mathbf{\cal M }}^{+}({\bf X}\times{\bf A}):\mu({\bf K}^{c})=0\mbox{ and }\mu^{\bf X}=(\eta+\mu Q)\mathbb{I}_{\Delta^{c}}\big{\}}.\] _Moreover, we have_ \(\mu^{\bf X}\ll\lambda\) _and if_ \(\lambda\ll\eta\) _then_ \(\mu^{\bf X}\sim\lambda_{\Delta^{c}}\)_._
**Proof.** (i). To prove the stated result, note that for any \(B\in{\mathbf{\cal X}}\) we have
\[\mu^{\bf X}(B)=\sum_{t=0}^{\infty}\mathbb{P}_{\eta,\pi}\{T_{\Delta}>t,X_{t}\in B \}=\eta(B-\Delta)+\sum_{t=1}^{\infty}\mathbb{E}_{\eta,\pi}\big{[}\mathbb{P}_{ \eta,\pi}\{T_{\Delta}>t,X_{t}\in B\mid H_{t-1},A_{t-1}\}\big{]}.\]
Observe now that for each \(t\geq 1\), on the set \(\{T_{\Delta}\leq t-1\}\), the conditional probability within brackets vanishes, and so
\[\mu^{\bf X}(B) = \eta(B-\Delta)+\sum_{t=1}^{\infty}\mathbb{E}_{\eta,\pi}\big{[}Q(B -\Delta\mid X_{t-1},A_{t-1})\cdot\mathbf{I}_{\{T_{\Delta}>t-1\}}\big{]}\] \[= \eta(B-\Delta)+\int_{{\bf X}\times{\bf A}}Q(B-\Delta|x,a)\mu(dx, da),\]
which can be equivalently written precisely as \(\mu^{\bf X}=(\eta+\mu Q)\mathbb{I}_{\Delta^{c}}\). By construction of the state-action process, it is clear that \(\mu({\bf K}^{c})=0\).
(ii). Given \(B\in{\mathbf{\cal X}}\) and \(D^{i}\in{\mathbf{\mathfrak{B}}}({\bf A}^{i})\) we can write
\[\mu_{\eta,\pi}(B\times D^{1}\times\ldots\times D^{N}) = \sum_{t=0}^{\infty}\mathbb{E}_{\eta,\pi}\Big{[}\mathbf{I}_{\{T_{ \Delta}>t\}}\mathbf{I}_{\{X_{t}\in B\}}\pi(D^{1}\times\ldots\times D^{N}|X_{t} )\Big{]}\] \[= \int_{B}\pi(D^{1}\times\ldots\times D^{N}|x)\mu^{\bf X}_{\eta, \pi}(dx)\]
because, precisely, \(\mu^{\bf X}_{\eta,\pi}(\Gamma)=\sum_{t\geq 0}\mathbb{P}_{\eta,\pi}\{T_{\Delta}>t,X_{t}\in\Gamma\}\) for \(\Gamma\in{\mathbf{\cal X}}\), and the stated result follows. The second part of the statement is an easy consequence of the first part and the fact that, this time, \(\pi\in{\bf M}\) is a noncooperative Markov profile.
(iii). The occupation measure of the strategy profile \(\pi\) satisfies, for \(B\in{\mathbf{\cal X}}\) and \(D^{j}\in{\mathbf{\mathfrak{B}}}({\bf A}^{j})\) for \(1\leq j\leq N\)
\[\mu_{\eta,\pi}(B\times D^{1}\times\ldots\times D^{N}) = \sum_{t=0}^{\infty}\mathbb{E}_{\eta,\pi}\Big{[}\mathbf{I}_{\{T_{ \Delta}>t\}}\mathbf{I}_{\{X_{t}\in B\}}\pi^{-i}(D^{-i}|X_{t})\pi^{i}(D^{i}|H_{ t})\Big{]} \tag{4.15}\] \[= \int_{B}\pi^{-i}(D^{-i}|x)\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,\pi }(dx\times D^{i}),\]
where \(D^{-i}\) denotes the product of all the sets \(D^{j}\) except \(D^{i}\), and where \(\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\pi}(dx\times D^{i})\) denotes integration with respect to the measure \(B\mapsto\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\pi}(B\times D^{i})\). By the disintegration result in Lemma 1.1, there exists some \(\sigma\in\mathbf{M}^{i}\) such that \(\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,\pi} \otimes\sigma\). It then follows from (4.15) that \(\mu_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,\pi}\otimes(\pi^{-i},\sigma)\) and so applying statement (i) in this proposition,
\[\mu^{\mathbf{X}}_{\eta,\pi} = (\eta+\mu_{\eta,\pi}Q)\mathbb{I}_{\Delta^{c}}\] \[= (\eta+\mu^{\mathbf{X}}_{\eta,\pi}Q_{(\pi^{-i},\sigma)})\mathbb{I} _{\Delta^{c}}.\]
By Lemma 4.5 we derive that \(\mu^{\mathbf{X}}_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,(\pi^{-i},\sigma)}\), and by item (ii) that \(\mu_{\eta,\pi}=\mu_{\eta,(\pi^{-i},\sigma)}\).
(iv). By the disintegration in Lemma 1.1, we obtain that \(\mu=\mu^{\mathbf{X}}\otimes\pi\) for some \(\pi\in\tilde{\mathbf{M}}\). Therefore, \(\mu^{\mathbf{X}}\) satisfies equation (4.11) and so \(\mu^{\mathbf{X}}=\mu^{\mathbf{X}}_{\eta,\pi}\ll\lambda\), while item (ii) yields \(\mu=\mu^{\mathbf{X}}_{\eta,\pi}\otimes\pi=\mu_{\eta,\pi}\). By using item (i), we get the characterization of \(\tilde{\boldsymbol{\mathcal{O}}}_{\eta}\). Now, let \(B\in\mathfrak{X}\) be such that \(B\subseteq\Delta^{c}\) and \(\mu^{\mathbf{X}}(B)=0\). Since (4.14) implies that \(\mu^{\mathbf{X}}(B)\geq\eta_{\Delta^{c}}(B)\) then necessarily \(\lambda_{\Delta^{c}}(B)=0\). We conclude that \(\mu^{\mathbf{X}}\sim\lambda_{\Delta^{c}}\). \(\Box\)
Now, we introduce \(\tilde{\boldsymbol{\mathcal{O}}}^{i}_{\eta}\) as the set of possible responses for each player \(1\leq i\leq N\).
**Definition 4.7**: _Given an initial distribution \(\eta\in\boldsymbol{\mathcal{P}}_{\lambda}(\mathbf{X})\) we define_
\[\tilde{\boldsymbol{\mathcal{O}}}^{i}_{\eta}=\{\mu^{\mathbf{X}\times\mathbf{A} ^{i}}:\mu\in\tilde{\boldsymbol{\mathcal{O}}}_{\eta}\}\subseteq\boldsymbol{ \mathcal{M}}^{+}(\mathbf{X}\times\mathbf{A}^{i}).\]
In our next result, we use the notion of a uniformly absorbing game model (see Definition 2.3). Recall that, by Assumption A, we are considering an initial distribution \(\eta\in\boldsymbol{\mathcal{P}}_{\lambda}(\mathbf{X})\) such that the game model \(\mathcal{G}(\eta,\rho)\) is absorbing to \(\Delta\). To obtain this important result, we need two preliminary technical Lemmas. A direct consequence of Assumption A is the following result.
**Lemma 4.8**: _If \(v\in L^{\infty}(\mathbf{X},\mathfrak{X},\lambda)\) then \(Qv\in\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A},\mathbb{R})\)._
In our next lemma, recall that \(\tilde{\boldsymbol{\mathcal{Y}}}\) is endowed with the narrow topology and that in \(L^{\infty}(\mathbf{X},\mathfrak{X},\lambda)\) we consider the weak\({}^{*}\) convergence.
**Lemma 4.9**: _The following continuity results hold._
1. _Given any_ \(f\in\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A})\) _and_ \(v\in L^{\infty}(\mathbf{X},\mathfrak{X},\lambda)\)_, the mappings_ \(\pi\mapsto f_{\pi}\) _and_ \(\pi\mapsto Q_{\pi}v\) _from_ \(\tilde{\boldsymbol{\mathcal{Y}}}\) _to_ \(L^{\infty}(\mathbf{X},\mathfrak{X},\lambda)\) _are continuous._
2. _If_ \(v_{n}\stackrel{{*}}{{\rightharpoonup}}v\) _in_ \(L^{\infty}(\mathbf{X},\mathfrak{X},\lambda)\) _and_ \(\pi_{n}\to\pi\) _in_ \(\tilde{\boldsymbol{\mathcal{Y}}}\) _then, for any_ \(t\geq 0\)_, we have_ \(Q^{t}_{\pi_{n}}v_{n}\stackrel{{*}}{{\rightharpoonup}}Q_{\pi}v\)_._
**Proof.** Part (i) is a direct consequence of the definition of the narrow topology and Lemma 4.8. For item (ii), the reader is referred to Lemma 4.1 in [20] or Lemmas 3.6 and 3.7 in [11]. \(\Box\)
The above result just concerns convergence in \(\tilde{\boldsymbol{\mathcal{Y}}}\) and it is not necessarily true for convergence in \(\boldsymbol{\mathcal{Y}}\). Indeed, the fact that \(\boldsymbol{\mathcal{Y}}\subseteq\tilde{\boldsymbol{\mathcal{Y}}}\) should not be misleading since, in \(\boldsymbol{\mathcal{Y}}\), we are considering the product topology of the \(\boldsymbol{\mathcal{Y}}^{i}\) which, as mentioned in Remark 2.7, does not coincide with the trace topology of \(\tilde{\boldsymbol{\mathcal{Y}}}\).
The next proposition shows that it is necessary to reinforce the hypothesis of an absorbing model by assuming that the model is uniformly absorbing: we will need this to show that the set \(\tilde{\boldsymbol{\mathcal{O}}}^{i}_{\eta}\) of possible responses of each player is compact in order to use the Kakutani-Fan-Glicksberg fixed point theorem leading to the existence of a Markovian noncooperative equilibrium.
**Proposition 4.10**: _The sets \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\) and \(\tilde{\mathbf{\mathcal{O}}}^{i}_{\eta}\) are convex and the following statements are equivalent._
* _The game model_ \({\cal G}(\eta,\rho)\) _is uniformly absorbing to_ \(\Delta\)_._
* _The set_ \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\) _is a compact metric space for the_ \(ws\)_-topology._
* _The set_ \(\tilde{\mathbf{\mathcal{O}}}^{i}_{\eta}\) _is a compact metric space for the_ \(ws\)_-topology with_ \(i\in\{1,\ldots,N\}\)_._
**Proof.** Regarding the first claim, observe that the convexity of \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\) is a direct consequence of Proposition 4.6(iv). Convexity of \(\tilde{\mathbf{\mathcal{O}}}^{i}_{\eta}\) is a straightforward consequence of convexity of \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\).
\((a)\Rightarrow(b)\) Let us first show that \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\) is relatively compact for the \(ws\)-topology. Applying Theorem 5.2.(ii) in [6], this is equivalent to show that the set of \({\bf X}\)-marginal measures of \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\), which we denote by \(\tilde{\mathbf{\mathcal{O}}}^{\bf X}_{\eta}\), is relatively \(s\)-compact and that the set of \({\bf A}\)-marginal measures of \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\), denoted by \(\tilde{\mathbf{\mathcal{O}}}^{\bf A}_{\eta}\), is relatively \(w\)-compact. Recalling Remark 4.3(a), we have that \(\tilde{\mathbf{\mathcal{O}}}_{\eta}\) is a bounded subset of \({\mathbf{\mathcal{M}}}^{+}({\bf X}\times{\bf A})\). Since \({\bf A}\) is compact, it is clear that \(\tilde{\mathbf{\mathcal{O}}}^{\bf A}_{\eta}\) is relatively \(w\)-compact by using [8, Theorem 8.6.7]. To prove that \(\tilde{\mathbf{\mathcal{O}}}^{\bf X}_{\eta}\) is relatively \(s\)-compact, let us show that
\[\lim_{n\rightarrow\infty}\sup_{\mu\in\tilde{\mathbf{\mathcal{O}}}_{ \eta}}\mu^{\bf X}(\Gamma_{n})=0 \tag{4.16}\]
for any decreasing sequence of sets \(\Gamma_{n}\in{\bf\mathfrak{X}}\) with \(\Gamma_{n}\downarrow\emptyset\). Indeed, from [8, Lemma 4.6.5] this implies that \(\tilde{\mathbf{\mathcal{O}}}^{\bf X}_{\eta}\) is uniformly countably additive and so, relatively compact for the \(s\)-topology; see [8, Theorem 4.7.25]. Since \(\mu^{\bf X}(\Delta)=0\), there is no loss of generality in assuming that the \(\Gamma_{n}\) are subsets of \(\Delta^{c}\). By Proposition 4.6(iv), for every \(\mu\in\tilde{\mathbf{\mathcal{O}}}_{\eta}\) there exists a correlated Markov strategy \(\pi_{\mu}\in\tilde{\mathbf{\mathcal{Y}}}\) such that \(\mu=\mu_{\eta,\pi_{\mu}}\) and so, for any fixed \(k\geq 0\),
\[\mu^{\bf X}(\Gamma_{n}) \leq \sum_{t=0}^{k}\mathbb{P}_{\eta,\pi_{\mu}}\{X_{t}\in\Gamma_{n}\}+ \sum_{t>k}\mathbb{P}_{\eta,\pi_{\mu}}\{X_{t}\in\Delta^{c}\}\] \[= \sum_{t=0}^{k}\mathbb{P}_{\eta,\pi_{\mu}}\{X_{t}\in\Gamma_{n}\}+ \sum_{t>k}\mathbb{P}_{\eta,\pi_{\mu}}\{T_{\Delta}>t\}\]
Therefore,
\[\sup_{\mu\in\tilde{\mathbf{\mathcal{O}}}_{\eta}}\mu^{\bf X}(\Gamma_{ n})\leq\sum_{t=0}^{k}\sup_{\mu\in\tilde{\mathbf{\mathcal{O}}}_{ \eta}}\mathbb{P}_{\eta,\pi_{\mu}}\{X_{t}\in\Gamma_{n}\}+\sup_{\mu\in\tilde{ \mathbf{\mathcal{O}}}_{\eta}}\sum_{t>k}\mathbb{P}_{\eta,\pi_{\mu}}\{ T_{\Delta}>t\}. \tag{4.17}\]
Let us now pay attention to first term in righthand of (4.17). Suppose first that \(0\leq t\leq k\) and \(n\in\mathbb{N}\) remain fixed. By Lemma 4.9 we have that the mapping \(\pi\mapsto Q^{t}_{\pi}(\Gamma_{n}|\cdot)\) from \(\tilde{\mathbf{\mathcal{Y}}}\) to \(L^{\infty}({\bf X},{\bf\mathfrak{X}},\lambda)\) is continuous. Since \(\eta\ll\lambda\), this implies that the mapping \(\pi\mapsto\mathbb{P}_{\eta,\pi_{\mu}}\{X_{t}\in\Gamma_{n}\}\) is continuous on \(\tilde{\mathbf{\mathcal{Y}}}\). Observe now that, by hypothesis, we have \({\bf I}_{\Gamma_{n}}\stackrel{{\sim}}{{\rightharpoonup}}0\) as \(n\rightarrow\infty\), and so by Lemma 4.9 again, for every \(\pi\in\tilde{\mathbf{\mathcal{Y}}}\) we have \(Q^{t}_{\pi}(\Gamma_{n}|\cdot)\stackrel{{\ast}}{{\rightharpoonup}}0\) and, therefore, \(\mathbb{P}_{\eta,\pi}\{X_{t}\in\Gamma_{n}\}\to 0\). Summarizing, the sequence (in \(n\in\mathbb{N}\)) of continuous mappings \(\pi\mapsto\mathbb{P}_{\eta,\pi}\{X_{t}\in\Gamma_{n}\}\) decreases to \(0\) and hence, by Dini's theorem, the convergence is uniform since \(\tilde{\mathbf{\mathcal{Y}}}\) is a compact metric space. So, for each fixed \(0\leq t\leq k\) we have
\[\lim_{n\rightarrow\infty}\sup_{\mu\in\tilde{\mathbf{\mathcal{O}}}_{ \eta}}\mathbb{P}_{\eta,\pi_{\mu}}\{X_{t}\in\Gamma_{n}\}=0.\]
Regarding the rightmost expression in (4.17), it converges to \(0\) as \(n\to\infty\) as a direct consequence of the fact that \({\cal G}(\eta,\rho)\) is uniformly absorbing to \(\Delta\). This completes the proof of (4.16). Therefore, once we know that \(\tilde{\mathbf{\cal O}}_{\eta}\) is relatively compact for the \(ws\)-topology, it follows that it is also metrizable by Proposition 2.3 in [6].
To prove the compactness of \(\tilde{\mathbf{\cal O}}_{\eta}\), the last step consists in showing that it is closed. To see this, let \(\{\mu_{n}\}_{n\geq 0}\) be a sequence in \(\tilde{\mathbf{\cal O}}_{\eta}\) converging in the \(ws\)-topology to some \(\mu\in\mathbf{\cal M}^{+}({\bf X}\times{\bf A})\). First of all, let us show that \(\mu({\bf K}^{c})=0\). The measurable function \((x,a)\mapsto{\bf I}_{{\bf K}^{c}}(x,a)\) is such that \(a\mapsto{\bf I}_{{\bf K}^{c}}(x,a)={\bf I}_{{\bf A}^{c}(x)}(a)\) is lower semicontinuous on \({\bf A}\) for any fixed \(x\in{\bf X}\) because \({\bf A}(x)\) is compact. Thus, \({\bf I}_{{\bf K}^{c}}\) is a nonnegative normal integrand and [6, Theorem 3.1.(c)] yields \(\underline{\lim}_{n}\mu_{n}({\bf K}^{c})\geq\mu({\bf K}^{c})\) and so \(\mu({\bf K}^{c})=0\). On the other hand, it is clear that \(\mu_{n}^{\bf X}(\Delta)=0\) for all \(n\geq 0\) implies that \(\mu^{\bf X}(\Delta)=0\). To conclude the proof, choose an arbitrary measurable subset \(B\) of \(\Delta^{c}\). For every \(n\geq 0\) we have
\[\mu_{n}^{\bf X}(B)=\eta(B)+\int_{{\bf X}\times{\bf A}}Q{\bf I}_{B}(x,a)\mu_{n }(dx,da)\]
By Lemma 4.8, the function \(Q{\bf I}_{B}(x,a)\) is in \({\cal C}ar_{b}({\bf X}\times{\bf A},\mathbb{R})\) so that we can take limits as \(n\to\infty\) to obtain that \(\mu^{\bf X}(B)=\eta(B)+\mu Q(B)\), thus completing the proof that \(\mu\in\tilde{\mathbf{\cal O}}_{\eta}\).
\((b)\Rightarrow(c)\) Since the mapping from \(\mathbf{\cal M}^{+}({\bf X}\times{\bf A})\) to \(\mathbf{\cal M}^{+}({\bf X}\times{\bf A}^{i})\) which associates to \(\mu\in\mathbf{\cal M}^{+}({\bf X}\times{\bf A})\) its marginal measure \(\mu^{{\bf X}\times{\bf A}^{i}}\) is continuous for the respective \(ws\)-topologies, it follows that \(\tilde{\mathbf{\cal O}}_{\eta}^{i}\) is compact. Again from [6, Proposition 2.3], noting that the set of \({\bf X}\)-marginal measures of \(\tilde{\mathbf{\cal O}}_{\eta}^{i}\) is precisely \(\tilde{\mathbf{\cal O}}_{\eta}^{\bf X}\), which has been shown to be relatively \(s\)-compact, we conclude that \(\tilde{\mathbf{\cal O}}_{\eta}^{i}\) is metrizable.
\((b)\Rightarrow(a)\) Since \(\tilde{\mathbf{\cal O}}_{\eta}\) is compact for the \(ws\)-topology, it follows from Theorem 5.2 in [6] that the set of \({\bf X}\)-marginal measures of \(\tilde{\mathbf{\cal O}}_{\eta}\) (denoted by \(\tilde{\mathbf{\cal O}}_{\eta}^{\bf X}\)) is relatively s-compact. By Proposition 4.6(iv), \(\tilde{\mathbf{\cal O}}_{\eta}^{\bf X}=\{\mu_{\pi}^{\bf X}:\pi\in \tilde{\mathbf{\cal Y}}\}\). Combining Proposition 2.2 in [6] and Corollary 2.7 in [16] we get that the family \(\{h_{\pi}\}_{\pi\in\tilde{\mathbf{\cal Y}}}\) of density functions \(h_{\pi}=d\mu_{\pi}^{\bf X}/d\lambda\) is uniformly \(\lambda\)-integrable. Now, observe that for \(\pi\in\tilde{\mathbf{\cal Y}}\),
\[\sum_{k=t}^{\infty}\mathbb{P}_{\eta,\pi}\{T_{\Delta}>k\}=\mu_{\pi}^{\bf X}Q_{ \pi}^{t}(\Delta^{c})=\int_{\bf X}Q_{\pi}^{t}(\Delta^{c}|x)h_{\pi}(x)\lambda(dx)\]
and by using Lemma 4.4 we can conclude that the rightmost term in the previous equation converges to zero uniformly in \(\pi\in\tilde{\mathbf{\cal Y}}\) as \(t\to\infty\). This establishes that \({\cal G}(\eta,\rho)\) is indeed uniformly absorbing to \(\Delta\).
\((c)\Rightarrow(a)\) Observe that for \(i\in\{1,\ldots,N\}\), the sets of \({\bf X}\)-marginal measures of \(\tilde{\mathbf{\cal O}}_{\eta}\) and \(\tilde{\mathbf{\cal O}}_{\eta}^{i}\) are the same. Consequently, by Theorem 5.2 in [6] the set of \({\bf X}\)-marginal measures of \(\tilde{\mathbf{\cal O}}_{\eta}\) is relatively s-compact and the rest of the proof is identical to that of \((b)\Rightarrow(a)\). \(\Box\)
### Continuity properties of the occupation measures
_In this subsection we shall assume that the game model \({\cal G}(\eta,\rho)\) satisfies Assumptions \({\rm A}^{\prime}\) and \({\rm B}\)._ In particular under condition \(({\rm A}^{\prime}_{1})\), the set \(\tilde{\mathbf{\cal O}}_{\eta}\) of occupation measures is compact by Proposition 4.10. Under condition \({\rm B}\), we can obtain the following result similar to Lemma 4.9(ii), whose proof is omitted.
**Lemma 4.11**: _If \(v_{n}\stackrel{{*}}{{\rightharpoonup}}v\) in \(L^{\infty}({\bf X},\mathbf{\cal X},\lambda)\) and \(\pi_{n}\to\pi\) in \(\mathbf{\cal Y}\) then \(Q_{\pi_{n}}^{t}v_{n}\stackrel{{*}}{{\rightharpoonup}}Q_{\pi}^{t}v\) for any \(t\in\mathbb{N}\)._
At this point, recall the notation \(\eta_{n}\) (see (3.7)) for the initial distributions which are a combination of \(\eta\) and \(\lambda\).
**Proposition 4.12**: _Under any of the conditions (i) and (ii) below:_
1. \(\pi_{n}\to\pi\) _in_ \(\boldsymbol{\mathcal{Y}}\) _and_ \(f\in\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A},\mathbb{R})\) _is such that_ \(\mathbb{I}_{\Delta^{c}}f_{\pi_{n}}(\cdot)\stackrel{{*}}{{\rightharpoonup}} \mathbb{I}_{\Delta^{c}}f_{\pi}(\cdot)\) _in_ \(L^{\infty}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\)_,_
2. \(\pi_{n}\to\pi\) _in_ \(\tilde{\boldsymbol{\mathcal{Y}}}\) _and_ \(f\in\mathcal{C}ar_{b}(\mathbf{X}\times\mathbf{A},\mathbb{R})\)_,_
_we have the following limits:_
\[\lim_{n\to\infty}\int_{\mathbf{X}\times\mathbf{A}}fd\mu_{\eta_{n},\pi_{n}}= \int_{\mathbf{X}\times\mathbf{A}}fd\mu_{\eta,\pi}\quad\text{and}\quad\lim_{n \to\infty}\int_{\mathbf{X}\times\mathbf{A}}fd\mu_{\eta,\pi_{n}}=\int_{\mathbf{ X}\times\mathbf{A}}fd\mu_{\eta,\pi}\]
**Proof:** We will only prove the first limit in case (i), the remaining cases being obtained by using similar arguments. Recalling (3.7), observe that
\[\mu_{\eta_{n},\pi_{n}}=\frac{n}{n+1}\mu_{\eta,\pi_{n}}+\frac{1}{n+1}\mu_{ \lambda,\pi_{n}}\quad\text{and}\quad\int_{\mathbf{X}\times\mathbf{A}}f(x,a)d \mu_{\lambda,\pi_{n}}\leq\mathbf{f}\sup_{\pi\in\tilde{\mathbf{H}}}\mu_{\lambda,\pi}(\mathbf{X}\times\mathbf{A})\]
for some constant \(\mathbf{f}\). From Remark 4.3(a), we only have to show that
\[\lim_{n\to\infty}\int_{\mathbf{X}\times\mathbf{A}}f(x,a)d\mu_{\eta,\pi_{n}}= \int_{\mathbf{X}\times\mathbf{A}}f(x,a)d\mu_{\eta,\pi}. \tag{4.18}\]
Equivalently, the above sequence being bounded, we will prove that any convergent subsequence has the desired limit. To simplify the notation, and without loss of generality, we will suppose that the whole sequence is converging and also that \(\{\mu_{\eta,\pi_{n}}\}_{n\in\mathbb{N}}\) is a convergent sequence in \(\tilde{\boldsymbol{\mathcal{O}}}_{\eta}\) (recall Assumption (A\({}_{1}^{\prime}\)) and Proposition 4.10). We have \(\mu_{\eta,\pi_{n}}=\mu_{\eta,\pi_{n}}^{\mathbf{X}}\otimes\pi_{n}\) with (by Lemma 4.5 and, in particular, (4.13))
\[\mu_{\eta,\pi_{n}}^{\mathbf{X}}=\sum_{k=0}^{t-1}\eta Q_{\pi_{n}}^{k}\mathbb{I }_{\Delta^{c}}+\mu_{\eta,\pi_{n}}^{\mathbf{X}}Q_{\pi_{n}}^{t}\mathbb{I}_{ \Delta^{c}}\]
for any \(t\in\mathbb{N}^{*}\). Consequently, integrating the function \(f_{\pi_{n}}\) with respect to the above measures, we can write
\[\int_{\mathbf{X}\times\mathbf{A}}fd\mu_{\eta,\pi_{n}} = \sum_{k=0}^{t-1}\int_{\mathbf{X}}f_{\pi_{n}}\,d\eta Q_{\pi_{n}}^{ k}\mathbb{I}_{\Delta^{c}}+\int_{\mathbf{X}}f_{\pi_{n}}\,d\mu_{\eta,\pi_{n}}^{ \mathbf{X}}Q_{\pi_{n}}^{t}\mathbb{I}_{\Delta^{c}} \tag{4.19}\] \[= \sum_{k=0}^{t-1}\int_{\mathbf{X}}Q_{\pi_{n}}^{k}\mathbb{I}_{ \Delta^{c}}f_{\pi_{n}}d\eta+\int_{\mathbf{X}}\mathbb{I}_{\Delta^{c}}f_{\pi_{n }}d\mu_{\eta,\pi_{n}}^{\mathbf{X}}Q_{\pi_{n}}^{t},\]
for any \(t\in\mathbb{N}^{*}\). Observe also that
\[\Big{|}\int_{\mathbf{X}}\mathbb{I}_{\Delta^{c}}f_{\pi_{n}}\,d\mu_{\eta,\pi_{n}}^ {\mathbf{X}}Q_{\pi_{n}}^{t}\Big{|}\leq\mathbf{f}\cdot\mu_{\eta,\pi_{n}}^{ \mathbf{X}}Q_{\pi_{n}}^{t}(\Delta^{c})=\mathbf{f}\sum_{k=t}^{\infty}\mathbb{P} _{\eta,\pi_{n}}\{T_{\Delta}>k\} \tag{4.20}\]
for any \(t\in\mathbb{N}^{*}\) and \(n\in\mathbb{N}\). By hypothesis we have \(\mathbb{I}_{\Delta^{c}}f_{\pi_{n}}(\cdot)\stackrel{{*}}{{\rightharpoonup}} \mathbb{I}_{\Delta^{c}}f_{\pi}(\cdot)\) in \(L^{\infty}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\) and since \(d\eta/d\lambda\) is in \(L^{1}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\), we have by Lemma 4.11 that
\[\int_{\mathbf{X}}Q_{\pi_{n}}^{k}\mathbb{I}_{\Delta^{c}}f_{\pi_{n}}\,\frac{d\eta }{d\lambda}\,d\lambda\to\int_{\mathbf{X}}Q_{\pi}^{k}\mathbb{I}_{\Delta^{c}}f_{ \pi}\,\frac{d\eta}{d\lambda}\,d\lambda. \tag{4.21}\]
Combining equations (4.19)-(4.21) we get that for any \(t\in\mathbb{N}^{*}\),
\[\Big{|}\lim_{n\to\infty}\int_{\mathbf{X}\times\mathbf{A}}fd\mu_{\eta,\pi_{n}}- \sum_{k=0}^{t-1}\int_{\mathbf{X}}f_{\pi}\,d\eta Q_{\pi}^{k}\mathbb{I}_{\Delta^{ c}}\Big{|}\leq\mathbf{f}\sup_{n\in\mathbb{N}}\sum_{k=t}^{\infty}\mathbb{P}_{ \eta,\pi_{n}}\{T_{\Delta}>k\}.\]
Finally, by Assumption B we get
\[\lim_{n\to\infty}\int_{{\bf X}\times{\bf A}}fd\mu_{\eta,\pi_{n}}=\sum_{k=0}^{ \infty}\int_{\bf X}f_{\pi}\,d\eta Q_{\pi}^{k}\mathbb{I}_{\Delta^{c}}=\int_{\bf X }f_{\pi}d\mu_{\eta,\pi}^{\bf X}=\int_{{\bf X}\times{\bf A}}fd\mu_{\eta,\pi},\]
this establishes the limit in (4.18). \(\Box\)
The next result will be useful in the forthcoming.
**Corollary 4.13**: _The following convergence results hold._
1. _If_ \(\{\pi_{n}\}\) _in_ \(\tilde{\bf{\cal Y}}\) _converges to_ \(\pi\in\tilde{\bf{\cal Y}}\) _then_ \(\mu_{\eta,\pi_{n}}\to\mu_{\eta,\pi}\) _and_ \(\mu_{\eta_{n},\pi_{n}}\to\mu_{\eta,\pi}\)_._
2. _Consider_ \(\pi^{-i}\in{\bf{\cal Y}}^{-i}\) _fixed for_ \(i\in\{1,\ldots,N\}\)_. If_ \(\sigma_{n}\to\sigma\) _in_ \({\bf{\cal Y}}^{i}\) _then_ \(\mu_{\eta,(\pi^{-i},\sigma_{n})}\to\mu_{\eta,(\pi^{-i},\sigma)}\) _and_ \(\mu_{\eta_{n},(\pi^{-i},\sigma)}\to\mu_{\eta,(\pi^{-i},\sigma)}\) _in the_ \(ws\)_-topology._
3. _Fix a player_ \(1\leq i\leq N\) _and suppose that_ \(\{\pi_{n}\}\) _in_ \({\bf{\cal Y}}\) _converges to_ \(\pi\in{\bf{\cal Y}}\)_. Then,_ \[\mu_{\eta,\pi_{n}}^{{\bf X}\times{\bf A}^{i}}\to\mu_{\eta,\pi}^{{\bf X}\times{ \bf A}^{i}}\quad\mbox{and}\quad\mu_{\eta_{n},\pi_{n}}^{{\bf X}\times{\bf A}^{i }}\to\mu_{\eta,\pi}^{{\bf X}\times{\bf A}^{i}}.\]
4. _Consider the initial distributions_ \(\eta_{n}\) _defined in (_3.7_) and an arbitrary sequence_ \(\pi_{n}\to\pi\) _in_ \({\bf{\cal Y}}\)_. Then,_ \[R^{i}(\eta,\pi_{n})\to R^{i}(\eta,\pi)\quad\mbox{and}\quad C^{i}(\eta,\pi_{n}) \to C^{i}(\eta,\pi).\] _and also_ \[R^{i}(\eta_{n},\pi_{n})\to R^{i}(\eta,\pi)\quad\mbox{and}\quad C^{i}(\eta_{n}, \pi_{n})\to C^{i}(\eta,\pi).\]
**Proof:** Part (i) follows directly from the second condition in Proposition 4.12. For the second part, observe that if \(g\in{\cal C}ar({\bf X}\times{\bf A})\) then
\[g^{i}(x,a^{i})=\int_{{\bf A}^{-i}}g(x,(a^{i},a^{-i}))\pi^{-i}(da^{-i}|x)\]
is in \({\cal C}ar({\bf X}\times{\bf A}^{i})\). With this in mind, it is easy to see that, letting \(\pi_{n}=(\pi^{-i},\sigma_{n})\) and \(\pi=(\pi^{-i},\sigma)\), we have that \(\pi_{n}\to\pi\) in \(\tilde{\bf{\cal Y}}\). Apply now Proposition 4.12(ii). For item (iii), consider a fixed integer \(i\in\{1,\ldots,N\}\) and an arbitrary \(g\in{\cal C}ar_{b}({\bf X}\times{\bf A}^{i},\mathbb{R})\). Define then the function \(f\) on \({\bf X}\times{\bf A}\) by \(f(x,a_{1},\ldots,a_{i},\ldots,a_{N})=g(x,a_{i})\) with \(f\in{\cal C}ar_{b}({\bf X}\times{\bf A},\mathbb{R})\), which satisfies \(\mathbb{I}_{\Delta^{c}}f_{\pi_{n}}(\cdot)\stackrel{{*}}{{\to}} \mathbb{I}_{\Delta^{c}}f_{\pi}(\cdot)\) in \(L^{\infty}({\bf X},{\bf X},\lambda)\). Applying Proposition 4.12 we conclude that \(\int_{{\bf X}\times{\bf A}^{i}}gd\mu_{\eta,\pi_{n}}^{{\bf X}\times{\bf A}^{i}} \to\int_{{\bf X}\times{\bf A}^{i}}gd\mu_{\eta,\pi}^{{\bf X}\times{\bf A}^{i}}\), and the result follows. The proof of item (iv) is a direct consequence of Assumption B, Remark 4.3(c), and Proposition 4.12. \(\Box\)
By Corollary 4.13(i), when considering the narrow convergence in \(\tilde{\bf{\cal Y}}\) we have that the mapping which associates to each \(\pi\in\tilde{\bf{\cal Y}}\) its occupation measure \(\mu_{\eta,\pi}\) is continuous. For the product narrow topology on \({\bf{\cal Y}}\), however, such a result is not true in general and, instead, we get weaker results as in Corollary 4.13(ii) and (iii) above. Notice that there is some kind of duality in these results: indeed, item (ii) shows that if the convergence \(\pi_{n}\to\pi\) takes place in _just one_ variable then we get convergence of the _whole_ occupation measures \(\mu_{\eta,\pi_{n}}\to\mu_{\eta,\pi}\), while in item (iii) if the _whole_ sequence \(\pi_{n}\) converges to \(\pi\) then we get convergence of, _individually_, each component of the occupation measures \(\mu_{\eta,\pi_{n}}^{{\bf X}\times{\bf A}^{i}}\to\mu_{\eta,\pi}^{{\bf X}\times{ \bf A}^{i}}\).
Proofs of the main results
We will show Proposition 3.1 and Theorem 3.2 in the constrained case. The unconstrained case is easily deduced from the constrained one. Indeed, by considering constraint constants satisfying \(\rho<-\mathbf{r}\sup_{\pi\in\Pi}\mathbb{E}_{\eta,\pi}[T_{\Delta}]\mathbf{1}\) we get that \(C^{i}(\eta,\pi)>\rho\) for any \(\pi\in\Pi\) and any player \(i\) (see item (a) of Remark 3.5) yielding that the constraints and and the Slater condition (\(\Lambda_{2}\)) are trivially satisfied.
### Proof of Proposition 3.1
_We will suppose in this subsection that we are given an initial distribution \(\eta\in\boldsymbol{\mathcal{P}}(\mathbf{X})\) satisfying \(\lambda\ll\eta\) and constraint constants \(\rho\in\mathbb{R}^{Np}\) such that the game \(\mathcal{G}(\eta,\rho)\) satisfies Assumptions \(\mathrm{A}^{\prime}\) and \(\mathrm{B}\)._ By recalling Assumption (\(\mathrm{A}_{8}\)) and Proposition 4.6(iv), this yields to an important property, namely, \(\mu^{\mathbf{X}}_{\eta,\pi}\sim\lambda_{\Delta^{c}}\) for any \(\pi\in\tilde{\mathbf{M}}\). The objective of this subsection is to introduce a correspondence defined as the composition of a function \(\mathcal{J}_{\eta}:\tilde{\boldsymbol{\mathcal{O}}}^{1}_{\eta}\times\ldots \times\tilde{\boldsymbol{\mathcal{O}}}^{N}_{\eta}\to\boldsymbol{\mathcal{Y}}^ {1}\times\ldots\times\boldsymbol{\mathcal{Y}}^{N}\) and a correspondence \(\mathcal{H}_{\eta,\rho}:\boldsymbol{\mathcal{Y}}^{1}\times\ldots\times \boldsymbol{\mathcal{Y}}^{N}\twoheadrightarrow\tilde{\boldsymbol{\mathcal{O}}}^ {1}_{\eta}\times\ldots\times\tilde{\boldsymbol{\mathcal{O}}}^{N}_{\eta}\) and to show that it has a fixed point, from which we will derive equilibrium stationary Markov policies in the special case where \(\lambda\ll\eta\)._
The function \(\mathcal{J}_{\eta}\).Consider a fixed \(\vartheta\in\mathbf{M}\). For any \(\pi\in\tilde{\mathbf{M}}\), let us define \(\gamma_{\pi}\in\tilde{\mathbf{M}}\) as
\[\gamma_{\pi}(B|x)=\pi(B|x)\mathbf{I}_{\Delta^{c}}(x)+\vartheta(B|x)\mathbf{I} _{\Delta}(x)\quad\text{for $B\in\boldsymbol{\mathfrak{B}}(\mathbf{A})$ and $x\in\mathbf{X}$.} \tag{5.1}\]
This definition ensures that \(\gamma_{\pi}\in\mathbf{M}\) if \(\pi\in\mathbf{M}\).
**Lemma 5.1**:
1. _Given any_ \(\pi\in\tilde{\mathbf{M}}\) _we have equality of the occupation measures_ \(\mu_{\eta,\pi}=\mu_{\eta,\gamma_{\pi}}\)_._
2. _Let_ \(i\in\{1,\ldots,N\}\) _be fixed. For any_ \(m\in\tilde{\boldsymbol{\mathcal{O}}}^{i}_{\eta}\) _consider_ \(\tilde{\mathbf{M}}^{i}_{m}=\{\pi\in\tilde{\mathbf{M}}:\mu^{\mathbf{X}\times \mathbf{A}^{i}}_{\eta,\pi}=m\}\)_. Then the set_ \[\{\gamma^{\mathbf{A}^{i}}_{\pi}:\pi\in\tilde{\mathbf{M}}^{i}_{m}\}\subseteq \mathbf{M}^{i}\] _is contained in a unique class of equivalence of_ \(\boldsymbol{\mathcal{Y}}^{i}\)_, that will be denoted by_ \(\mathcal{J}^{i}_{\eta}(m)\)_._
3. _Given_ \(\pi\in\mathbf{M}\) _and_ \(1\leq i\leq N\)_, let_ \(\sigma=\gamma^{\mathbf{A}^{i}}_{\pi}\) _and consider_ \(\pi^{\prime}=(\pi^{-i},\sigma)\)_. Then_ \(\mu_{\eta,\pi}=\mu_{\eta,\pi^{\prime}}\)_._
**Proof.** (i). By its definition, it is clear that \(Q_{\pi}\mathbb{I}_{\Delta^{c}}=Q_{\gamma_{\pi}}\mathbb{I}_{\Delta^{c}}\). Combining Lemma 4.1(iii) and Lemma 4.5 we conclude that the \(\mathbf{X}\)-marginals of the occupation measures of \(\pi\) and \(\gamma_{\pi}\) coincide: \(\mu^{\mathbf{X}}_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,\gamma_{\pi}}\). Since \(\mu^{\mathbf{X}}_{\eta,\gamma_{\pi}}(\Delta)=0\) and \(\pi(\cdot|x)=\gamma_{\pi}(\cdot|x)\) when \(x\in\Delta^{c}\), we conclude that \(\mu_{\eta,\gamma_{\pi}}\otimes\pi=\mu_{\eta,\gamma_{\pi}}\otimes\gamma_{\pi}\). Summarizing, we have
\[\mu_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,\pi}\otimes\pi=\mu^{\mathbf{X}}_{\eta, \gamma_{\pi}}\otimes\pi=\mu^{\mathbf{X}}_{\eta,\gamma_{\pi}}\otimes\gamma_{\pi }=\mu_{\eta,\gamma_{\pi}}\]
by using Proposition 4.6(ii).
(ii). We must show that if \(\pi,\bar{\pi}\in\tilde{\mathbf{M}}\) are such that \(\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\pi}=\mu^{\mathbf{X}\times\mathbf{ A}^{i}}_{\eta,\bar{\pi}}\) then \(\gamma^{\mathbf{A}^{i}}_{\pi}\) and \(\gamma^{\mathbf{A}^{i}}_{\bar{\pi}}\) belong to the same class of equivalence in \(\boldsymbol{\mathcal{Y}}^{i}\). Clearly, we have \(\mu^{\mathbf{X}}_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,\bar{\pi}}\). By Proposition 4.6(ii), it follows that \(\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,\pi} \otimes\pi^{\mathbf{A}^{i}}\) and \(\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\bar{\pi}}=\mu^{\mathbf{X}}_{\eta, \bar{\pi}}\otimes\bar{\pi}^{\mathbf{A}^{i}}\) implying that \(\mu^{\mathbf{X}}_{\eta,\pi}\otimes\pi^{\mathbf{A}^{i}}=\mu^{\mathbf{X}}_{\eta, \pi}\otimes\bar{\pi}^{\mathbf{A}^{i}}\). Now, recalling that \(\mu^{\mathbf{X}}_{\eta,\pi}\sim\lambda_{\Delta^{c}}\) we obtain by the disintegration Lemma that \(\pi^{\mathbf{A}^{i}}(\cdot|x)=\bar{\pi}^{\mathbf{A}^{i}}(\cdot|x)\) for \(\lambda_{\Delta^{c}}\)-almost all \(x\in\mathbf{X}\) which shows that \(\gamma^{\mathbf{A}^{i}}_{\pi}(\cdot|x)=\gamma^{\mathbf{A}^{i}}_{\pi}(\cdot|x)\) with \(\lambda\)-almost all \(x\in\mathbf{X}\).
(iii). Proceeding as in part (i), we can show that \(\mu^{\mathbf{X}}_{\eta,\pi}=\mu^{\mathbf{X}}_{\eta,\pi^{\prime}}\), which is a measure equivalent to \(\lambda_{\Delta^{c}}\). Since \(\mu^{\mathbf{X}}_{\eta,\pi}(\Delta)=\mu^{\mathbf{X}}_{\eta,\pi^{\prime}}(\Delta)=0\) and \(\pi^{\mathbf{A}^{i}}=\sigma\) on \(\Delta^{c}\) we conclude by Proposition 4.6(ii) that \(\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\pi}=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_ {\eta,\pi^{\prime}}\) and so,
\[\mu_{\eta,\pi}=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,\pi}\otimes\pi^{-i} \quad\text{and}\quad\mu_{\eta,\pi^{\prime}}=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_ {\eta,\pi}\otimes\pi^{-i},\]
and the result follows. \(\Box\)
Using the result in Lemma 5.1(ii), we indeed have defined a function \({\cal J}^{i}_{\eta}\) from \(\tilde{\mathbf{\cal G}}^{i}_{\eta}\) to \({\mathbf{\cal Y}}^{i}\). We can therefore consider the function
\[{\cal J}_{\eta}:\tilde{\mathbf{\cal G}}^{1}_{\eta}\times\ldots\times \tilde{\mathbf{\cal G}}^{N}_{\eta}\rightarrow{\mathbf{\cal Y }}^{1}\times\ldots\times{\mathbf{\cal Y}}^{N}={\mathbf{\cal Y }}\]
whose components are the \({\cal J}^{i}_{\eta}\). Based on this lemma and using Remark 4.3(d), without risk of confusion we can assume that \(\gamma_{\pi}\in\tilde{\mathbf{\cal Y}}\) and \(\gamma^{{\bf A}^{i}}_{\pi}\in{\mathbf{\cal Y}}^{i}\) for \(\pi\in\tilde{\mathbf{\cal Y}}\). Indeed, two Markov correlated strategies \(\pi,\pi^{\prime}\) in the same class of equivalence of \({\mathbf{\cal Y}}\) have the same occupation measure and they yield the same class of equivalence for \(\gamma^{{\bf A}^{i}}_{\pi}\) and \(\gamma^{{\bf A}^{i}}_{\pi^{\prime}}\)
**Proposition 5.2**: _The function \({\cal J}_{\eta}\) is continuous._
**Proof:** We make the proof of the continuity for the function \({\cal J}^{i}_{\eta}\) for any fixed \(1\leq i\leq N\). Suppose that \(\{m_{n}\}_{n\geq 0}\) is a sequence in \(\tilde{\mathbf{\cal G}}^{i}_{\eta}\) converging in the \(ws\)-topology to some \(m\in\tilde{\mathbf{\cal G}}^{i}_{\eta}\). There exist \(\pi_{n}\) and \(\pi\) in \(\tilde{\mathbf{\cal Y}}\) such that \(m_{n}=\mu^{{\bf X}\times{\bf A}_{i}}_{\eta,\pi_{n}}\) for any \(n\in{\mathbb{N}}\) and \(m=\mu^{{\bf X}\times{\bf A}_{i}}_{\eta,\pi}\). Our goal is to prove that \(\gamma^{{\bf A}_{i}}_{\pi_{n}}\rightarrow\gamma^{{\bf A}_{i}}_{\pi}\) in \({\mathbf{\cal Y}}^{i}\). Since \({\mathbf{\cal Y}}^{i}\) is compact, it suffices to show that this limit holds for any convergent subsequence of \(\{\gamma^{{\bf A}_{i}}_{\pi_{n}}\}\) (still denoted by \(\{\gamma^{{\bf A}_{i}}_{\pi_{n}}\}\)). There is no loss of generality in assuming that \(\pi_{n}\rightarrow\pi^{*}\) for some \(\pi^{*}\in\tilde{\mathbf{\cal Y}}\) and, therefore, as a direct consequence of the definition in (5.1) we also have \(\gamma_{\pi_{n}}\rightarrow\gamma_{\pi^{*}}\) and \(\gamma^{{\bf A}^{i}}_{\pi_{n}}\rightarrow\gamma^{{\bf A}^{i}}_{\pi^{*}}\). By Corollary 4.13(i) we obtain that
\[\mu_{\eta,\pi_{n}}\rightarrow\mu_{\eta,\pi^{*}}\quad\mbox{and so}\quad m_{n} =\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,\pi_{n}}\rightarrow\mu^{{\bf X}\times{ \bf A}^{i}}_{\eta,\pi^{*}}.\]
We deduce that \(\mu^{{\bf X}\times{\bf A}_{i}}_{\eta,\pi^{*}}=\mu^{{\bf X}\times{\bf A}_{i}}_{ \eta,\pi}=m\) and from Lemma 5.1(ii) we conclude that \(\gamma^{{\bf A}^{i}}_{\pi^{*}}=\gamma^{{\bf A}^{i}}_{\pi}\), which completes the proof. \(\Box\)
The correspondence \({\cal H}_{\eta,\rho}\).Fix a player \(i\) and an arbitrary \(\pi^{-i}\in{\mathbf{\cal Y}}^{-i}\). Define
\[{\cal L}^{i}_{\eta}(\pi^{-i})=\left\{\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,( \pi^{-i},\sigma)}:\sigma\in{\mathbf{\cal Y}}^{i}\right\}\subseteq \tilde{\mathbf{\cal G}}^{i}_{\eta}\subseteq{\mathbf{\cal M}} ^{+}({\bf X}\times{\bf A}^{i}),\]
which is the set of \(({\bf X}\times{\bf A}^{i})\)-marginals of the occupation measures for the initial distribution \(\eta\) and the strategy profiles \((\pi^{-i},\sigma)\) as the policy \(\pi^{-i}\) of all the players (but \(i\)) remain fixed and the policy of player \(i\) varies in \({\mathbf{\cal Y}}^{i}\).
**Proposition 5.3**: _Given any \(1\leq i\leq N\) and \(\pi^{-i}\in{\mathbf{\cal Y}}^{-i}\), the set \({\cal L}_{\eta}(\pi^{-i})\) is convex and compact for the \(ws\)-topology._
**Proof.** Let \(\gamma,\gamma^{\prime}\) in \({\cal L}^{i}_{\eta}(\pi^{-i})\) and fix some \(0\leq\alpha\leq 1\). We want to prove that \(\alpha\gamma+(1-\alpha)\gamma^{\prime}\in{\cal L}^{i}_{\eta}(\pi^{-i})\). There exist \(\sigma,\sigma^{\prime}\in{\mathbf{\cal Y}}_{i}\) satisfying \(\gamma=\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,(\pi^{-i},\sigma)}\) and \(\gamma^{\prime}=\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,(\pi^{-i},\sigma^{\prime})}\). Convexity of \(\tilde{\mathbf{\cal G}}_{\eta}\) implies that
\[\mu_{\eta,\hat{\pi}}=\alpha\mu_{\eta,(\pi^{-i},\sigma)}+(1-\alpha)\mu_{\eta,( \pi^{-i},\sigma^{\prime})} \tag{5.2}\]
for some \(\hat{\pi}\in\tilde{\mathbf{\cal Y}}\). To get the result, let us show that \(\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,\hat{\pi}}=\mu^{{\bf X}\times{\bf A}^{i} }_{\eta,(\pi^{-i},\tilde{\sigma})}\) for some \(\tilde{\sigma}\in{\mathbf{\cal Y}}^{i}\). Observe that, by (5.2) and Proposition 4.6(ii),
\[\mu_{\eta,\hat{\pi}}=\left[\alpha\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,(\pi^{-i},\sigma)}+(1-\alpha)\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,(\pi^{-i},\sigma^{ \prime})}\right]\otimes\pi^{-i}=\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,\hat{\pi}} \otimes\pi^{-i}.\]
Moreover, \(\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,\bar{\pi}}=\mu^{\bf X}_{\eta,\bar{\pi}} \otimes\hat{\pi}^{{\bf A}_{i}}\) and so, letting \(\tilde{\sigma}=\hat{\pi}^{{\bf A}^{i}}\in{\bf Y}^{i}\) we obtain \(\mu_{\eta,\hat{\pi}}=\mu^{\bf X}_{\eta,\hat{\pi}}\otimes(\pi^{-i},\tilde{ \sigma})\). Since \(\mu^{\bf X}_{\eta,\hat{\pi}}\) is equivalent to \(\lambda\) on \(\Delta^{c}\) (recall Proposition 4.6(iv)), it follows that \(\hat{\pi}\) and \((\pi^{-i},\tilde{\sigma})\) coincide \(\lambda\)-a.s. on \(\Delta^{c}\). In particular \(\gamma_{\hat{\pi}}=\gamma_{(\pi^{-i},\tilde{\sigma})}\) and thus, using Lemma 5.1(i),
\[\mu_{\eta,\hat{\pi}}=\mu_{\eta,\gamma_{\hat{\pi}}}=\mu_{\eta,\gamma_{(\pi^{-i},\tilde{\sigma})}}=\mu_{\eta,(\pi^{-i},\tilde{\sigma})},\]
as we wanted to prove. This establishes convexity of \({\cal L}^{i}_{\eta}(\pi^{-i})\). To prove compactness we will show that \({\cal L}^{i}_{\eta}(\pi^{-i})\) is closed. Suppose that \(\gamma_{n}\to\gamma\) where \(\gamma_{n}\in{\cal L}^{i}_{\eta}(\pi^{-i})\) and \(\gamma\in\tilde{\mathbf{\cal O}}^{i}_{\eta}\). For each \(n\) there is some \(\sigma^{i}_{n}\in{\bf Y}^{i}\) such that \(\gamma_{n}=\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,(\pi^{-i},\sigma^{i}_{n})}\). For some subsequence of \(\{\sigma^{i}_{n}\}\), still denoted by \(\{\sigma^{i}_{n}\}\), we have \(\sigma^{i}_{n}\to\sigma^{i}_{*}\) for some \(\sigma^{i}_{*}\in{\bf Y}^{i}\). By Corollary 4.13(iii), this shows that \(\gamma_{n}\to\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,(\pi^{-i},\sigma^{i}_{*})}\), which indeed belongs to \({\cal L}^{i}_{\eta}(\pi^{-i})\). \(\Box\)
Given a player \(1\leq i\leq N\) and a strategy profile \(\pi^{-i}\in{\bf Y}^{-i}\) for the remaining players, let
\[{\cal A}^{i}_{\eta,\rho^{i}}(\pi^{-i})=\left\{\mu^{{\bf X}\times{\bf A}^{i}}_ {\eta,(\pi^{-i},\sigma)}:\sigma\in{\bf Y}^{i}\mbox{ is such that }C^{i}(\eta,(\pi^{-i},\sigma))\geq\rho^{i}\right\}\subseteq{\cal L}^{i}(\pi^{-i}).\]
Thus, \({\cal A}^{i}_{\eta,\rho^{i}}(\pi^{-i})\) is the set of \(({\bf X}\times{\bf A}^{i})\)-marginals of the occupation measures of the Markov policies \(\sigma\in{\bf Y}^{i}\) of player \(i\) such that the Markov profile \((\pi^{-i},\sigma)\) satisfies player \(i\)'s constraint.
**Proposition 5.4**: _Consider a player \(1\leq i\leq N\) and a sequence \(\{\pi^{-i}_{n}\}\subseteq{\bf Y}^{-i}\) such that \(\pi^{-i}_{n}\to\pi^{-i}\) in the product topology of \({\bf Y}^{-i}\) for some \(\pi\in{\bf Y}\)._
1. _If_ \(C^{i}(\eta,\pi)\geq\rho^{i}\) _then there exists a sequence_ \(\{\gamma_{n}\}\) _in_ \({\bf M}^{+}({\bf X}\times{\bf A}^{i})\) _such that_ \(\gamma_{n}\to\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,\pi}\) _and for some_ \(K\in\mathbb{N}\)_, we have_ \(\gamma_{n}\in{\bf A}^{i}_{\eta,\rho^{i}}(\pi^{-i}_{n})\) _for_ \(n\geq K\)_._
2. _If_ \({\cal G}(\eta,\rho)\) _satisfies the Slater condition and_ \(C^{i}(\eta,\pi)\geq\rho^{i}\) _then there exists a sequence_ \(\{\gamma_{n}\}\) _in_ \({\bf M}^{+}({\bf X}\times{\bf A}^{i})\) _such that_ \(\gamma_{n}\to\mu^{{\bf X}\times{\bf A}^{i}}_{\eta,\pi}\) _and such that, for some_ \(K\in\mathbb{N}\)_, we have_ \(\gamma_{n}\in{\bf A}^{i}_{\eta_{n},\rho^{i}}(\pi^{-i}_{n})\) _for_ \(n\geq K\)_._
**Proof:** We will prove only item (ii) since item (i) can be easily obtained by using the same arguments. From Corollary 4.13(iv) we have \(\lim_{n\to\infty}C^{i}(\eta_{n},(\pi^{-i}_{n},\pi^{i}))=C^{i}(\eta,\pi)\geq\rho^ {i}\). So, there exist some sequence \(\{\epsilon_{n}\}_{n\geq 1}\) contained in \([0,1)\) with \(\epsilon_{n}\to 0\) and some index \(n_{0}\) for which
\[C^{i}(\eta_{n},(\pi^{-i}_{n},\pi^{i}))\geq\rho^{i}-\epsilon_{n}{\bf 1}\quad \mbox{for all }n\geq n_{0}.\]
By the Slater condition, we can find some \(\bar{\pi}^{i}\in{\bf Y}^{i}\) and \(\delta>0\) such that \(C^{i}(\eta,(\pi^{-i},\bar{\pi}^{i}))>\rho^{i}+\delta{\bf 1}\). Again from Corollary 4.13(iv), there is some \(n_{1}\geq n_{0}\) such that
\[C^{i}(\eta_{n},(\pi^{-i}_{n},\bar{\pi}^{i}))>\rho^{i}+\delta{\bf 1}\quad \mbox{for all }n\geq n_{1}.\]
Observe that for any \(n\geq 1\) both \(\mu^{{\bf X}\times{\bf A}^{i}}_{\eta_{n},(\pi^{-i}_{n},\pi^{i})}\) and \(\mu^{{\bf X}\times{\bf A}^{i}}_{\eta_{n},(\pi^{-i}_{n},\bar{\pi}^{i})}\) belong to \({\cal L}^{i}_{\eta_{n}}(\pi^{-i}_{n})\) which is a convex set by Proposition 5.3. Hence, there exists some \(\gamma_{n}\in{\cal L}^{i}_{\eta_{n}}(\pi^{-i}_{n})\) and \(\sigma^{i}_{n}\in{\bf Y}^{i}\) such that
\[\gamma_{n}=\mu^{{\bf X}\times{\bf A}^{i}}_{\eta_{n},(\pi^{-i}_{n},\sigma^{i}_{n}) }=(1-\sqrt{\epsilon_{n}})\mu^{{\bf X}\times{\bf A}^{i}}_{\eta_{n},(\pi^{-i}_{n}, \pi^{i})}+\sqrt{\epsilon_{n}}\mu^{{\bf X}\times{\bf A}^{i}}_{\eta_{n},(\pi^{-i}_{n },\bar{\pi}^{i})} \tag{5.3}\]
with, as a consequence of Proposition 4.6(ii),
\[\mu_{\eta_{n},(\pi^{-i}_{n},\sigma^{i}_{n})}=\mu^{{\bf X}\times{\bf A}^{i}}_{ \eta_{n},(\pi^{-i}_{n},\sigma^{i}_{n})}\otimes\pi^{-i}_{n}=(1-\sqrt{\epsilon_{ n}})\mu_{\eta_{n},(\pi^{-i}_{n},\pi^{i})}+\sqrt{\epsilon_{n}}\mu_{\eta_{n},(\pi^{-i}_{n}, \bar{\pi}^{i})}\]
and so for any \(n\geq n_{1}\)
\[C^{i}(\eta_{n},(\pi_{n}^{-i},\sigma_{n}^{i})) = (1-\sqrt{\epsilon_{n}})C^{i}(\eta_{n},(\pi_{n}^{-i},\pi^{i}))+\sqrt {\epsilon_{n}}C^{i}(\eta_{n},(\pi_{n}^{-i},\bar{\pi}^{i}))\] \[\geq \rho^{i}+\sqrt{\epsilon_{n}}\big{[}\delta-(1-\sqrt{\epsilon_{n}}) \sqrt{\epsilon_{n}}\big{]}\mathbf{1}.\]
Therefore, there exists some \(K\geq n_{1}\) such that \(n\geq K\) implies \(C^{i}(\eta_{n},(\pi_{n}^{-i},\sigma_{n}^{i}))\geq\rho^{i}\). Since \(\gamma_{n}=\mu_{\eta_{n},(\pi_{n}^{-i},\sigma_{n}^{i})}^{\mathbf{X}\times \mathbf{A}^{i}}\in\mathcal{L}_{\eta_{n}}^{i}(\pi_{n}^{-i})\), this establishes precisely that \(\gamma_{n}\in\mathcal{A}_{\eta_{n},\rho^{i}}^{i}(\pi_{n}^{-i})\) for all \(n\geq K\). Finally, from (5.3) and Corollary 4.13(iii) we have that \(\lim_{n\to\infty}\gamma_{n}=\lim_{n\to\infty}\mu_{\eta_{n},(\pi_{n}^{-i},\pi ^{i})}^{\mathbf{X}\times\mathbf{A}^{i}}=\mu_{\eta,\pi}^{\mathbf{X}\times \mathbf{A}^{i}}\). This completes the proof. \(\Box\)
**Proposition 5.5**: _Given any \(1\leq i\leq N\), the correspondence \(\mathcal{A}_{\eta,\rho^{i}}^{i}:\boldsymbol{\mathcal{Y}}^{-i}\twoheadrightarrow \tilde{\boldsymbol{\mathcal{O}}}_{\eta}^{i}\) defined by \(\pi^{-i}\mapsto\mathcal{A}_{\eta,\rho^{i}}^{i}(\pi^{-i})\) is continuous with nonempty convex and compact values._
**Proof.** The Slater condition implies that \(\mathcal{A}_{\eta,\rho^{i}}^{i}(\pi^{-i})\) is nonempty for any \(\pi^{-i}\in\boldsymbol{\mathcal{Y}}^{-i}\). To prove convexity, let \(\gamma,\gamma^{\prime}\in\mathcal{A}_{\eta,\rho^{i}}^{i}(\pi^{-i})\) and \(0\leq\alpha\leq 1\). Then, there exist \(\sigma,\sigma^{\prime}\in\boldsymbol{\mathcal{Y}}^{i}\) with such that \(\gamma=\mu_{\eta,(\pi^{-i},\sigma)}^{\mathbf{X}\times\mathbf{A}^{i}}\) and \(\gamma^{\prime}=\mu_{\eta,(\pi^{-i},\sigma^{\prime})}^{\mathbf{X}\times \mathbf{A}^{i}}\) and satisfying \(C^{i}(\eta,(\pi^{-i},\sigma))\geq\rho^{i}\) and \(C^{i}(\eta,(\pi^{-i},\sigma^{\prime}))\geq\rho^{i}\). By convexity of \(\mathcal{L}_{\eta}^{i}(\pi^{-i})\) in Proposition 5.3, there exists some \(\sigma^{*}\in\boldsymbol{\mathcal{Y}}^{i}\) such that \(\mu_{\eta,(\pi^{-i},\sigma^{*})}^{\mathbf{X}\times\mathbf{A}^{i}}=\alpha \gamma+(1-\alpha)\gamma^{\prime}\). But then
\[\mu_{\eta,(\pi^{-i},\sigma^{*})} = \mu_{\eta,(\pi^{-i},\sigma^{*})}^{\mathbf{X}\times\mathbf{A}^{i}} \otimes\pi^{-i}=\alpha(\gamma\otimes\pi^{-i})+(1-\alpha)(\gamma^{\prime} \otimes\pi^{-i})\] \[= \alpha\mu_{\eta,(\pi^{-i},\sigma)}+(1-\alpha)\mu_{\eta,(\pi^{-i}, \sigma^{\prime})},\]
and so by integration of the function \(c^{i}(x,a)\) with respect to these occupation measures,
\[C^{i}(\eta,(\pi^{-i},\sigma^{*}))=\alpha C^{i}(\eta,(\pi^{-i},\sigma))+(1- \alpha)C^{i}(\eta,(\pi^{-i},\sigma^{\prime}))\geq\rho^{i},\]
which establishes that \(\alpha\gamma+(1-\alpha)\gamma^{\prime}\) is in \(\mathcal{A}_{\eta,\rho^{i}}^{i}(\pi^{-i})\).
The correspondence \(\mathcal{A}_{\eta,\rho^{i}}^{i}\) takes values in the compact metric space \(\tilde{\boldsymbol{\mathcal{O}}}_{\eta}^{i}\) and thus, by the Closed Graph Theorem in [1, Theorem 17.11], it is upper semicontinuous and compact-valued if and only if its graph is closed. Suppose that we have a convergent sequence \((\pi_{n}^{-i},\gamma_{n})\) in the graph of \(\mathcal{A}_{\eta,\rho^{i}}^{i}\) converging to some \((\pi^{-i},\gamma)\in\boldsymbol{\mathcal{Y}}^{-i}\times\tilde{\boldsymbol{ \mathcal{O}}}_{\eta}^{i}\). We must show that \(\gamma\in\mathcal{A}_{\eta,\rho^{i}}^{i}(\pi^{-i})\). For each \(n\geq 1\) there exists \(\sigma_{n}\in\boldsymbol{\mathcal{Y}}^{i}\) such that
\[\gamma_{n}=\mu_{\eta,(\pi_{n}^{-i},\sigma_{n})}^{\mathbf{X}\times\mathbf{A}^{i }}\quad\text{and}\quad C^{i}(\eta,(\pi_{n}^{-i},\sigma_{n}))\geq\rho^{i}. \tag{5.4}\]
For some subsequence \(\{\sigma_{n^{\prime}}\}\) of \(\{\sigma_{n}\}\) we have \(\sigma_{n^{\prime}}\to\sigma\) for some \(\sigma\in\boldsymbol{\mathcal{Y}}^{i}\) and so using Corollary 4.13(iii) and (iv)
\[\gamma=\mu_{\eta,(\pi^{-i},\sigma)}^{\mathbf{X}\times\mathbf{A}^{i}}\quad \text{and}\quad C^{i}(\eta,(\pi^{-i},\sigma))\geq\rho^{i},\]
as we wanted to prove. Lower semicontinuity of the correspondence follows from Proposition 5.4(i) and the sequential characterization of lower semicontinuity in [1, Theorem 17.21]. \(\Box\)
Given some player \(1\leq i\leq N\) and a Markov profile \(\pi^{-i}\in\boldsymbol{\mathcal{Y}}^{-i}\) for the remaining players, if player \(i\) chooses the Markov policy \(\sigma\in\boldsymbol{\mathcal{Y}}^{i}\) then his payoff is (recall Proposition 4.6(ii))
\[R^{i}(\eta,(\pi^{-i},\sigma))=\int_{\mathbf{X}\times\mathbf{A}}r^{i}d\mu_{\eta,( \pi^{-i},\sigma)}=\int_{\mathbf{X}\times\mathbf{A}}r^{i}d(\mu_{\eta,(\pi^{-i}, \sigma)}^{\mathbf{X}\times\mathbf{A}^{i}}),\]
and his goal is to maximize this payoff over all \(\sigma\in\mathbf{\mathcal{Y}}^{i}\) such that \(C^{i}(\eta,(\pi^{-i},\sigma))\geq\rho^{i}\) or, which is the same, maximize
\[\int_{\mathbf{X}\times\mathbf{A}}r^{i}d(\gamma\otimes\pi^{-i})\]
over all \(\gamma\) belonging to \(\mathcal{A}^{i}_{\eta,\rho^{i}}(\pi^{-i})\). Based on this, we define the correspondence \(\mathcal{H}^{i}_{\eta,\rho^{i}}:\mathbf{\mathcal{Y}}^{-i}\twoheadrightarrow \tilde{\mathbf{\mathcal{O}}}^{i}_{\eta}\) given by
\[\mathcal{H}^{i}_{\eta,\rho^{i}}(\pi^{-i})=\underset{\gamma\in\mathcal{A}^{i}_{ \eta,\rho^{i}}(\pi^{-i})}{\arg\max}\Big{\{}\int_{\mathbf{X}\times\mathbf{A}}r^ {i}d(\gamma\otimes\pi^{-i})\Big{\}}. \tag{5.5}\]
**Proposition 5.6**: _For any \(i\in\{1,\ldots,N\}\), the correspondence \(\mathcal{H}^{i}_{\eta,\rho^{i}}:\mathbf{\mathcal{Y}}^{-i} \twoheadrightarrow\tilde{\mathbf{\mathcal{O}}}^{i}_{\eta}\) is upper semicontinuous with nonempty compact and convex values._
**Proof.** On the graph of the correspondence \(\mathcal{A}^{i}_{\eta,\rho^{i}}\) consider the function
\[f_{\eta}(\pi^{-i},\gamma)=\int_{\mathbf{X}\times\mathbf{A}}r^{i}d(\gamma \otimes\pi^{-i})\]
and let us prove that it is continuous. To this end, let \(\pi_{n}^{-i}\to\pi^{-i}\) in \(\mathbf{\mathcal{Y}}^{-i}\) and \(\gamma_{n}\to\gamma\) in \(\tilde{\mathbf{\mathcal{O}}}^{i}_{\eta}\) with \(\gamma_{n}\in\mathcal{A}^{i}_{\eta,\rho^{i}}(\pi_{n}^{-i})\) and \(\gamma\in\mathcal{A}^{i}_{\eta,\rho^{i}}(\pi^{-i})\). We must show that \(f_{\eta}(\pi_{n}^{-i},\gamma_{n})\to f_{\eta}(\pi,\gamma)\), and we will prove that this limit holds through any convergence subsequence, which will be denoted by \(\{n\}\) without loss of generality. There exist \(\sigma_{n},\sigma\in\mathbf{\mathcal{Y}}^{i}\) such that
\[\gamma_{n}=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,(\pi_{n}^{-i},\sigma_{n} )}\quad\mbox{and}\quad\gamma=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,(\pi^ {-i},\sigma)}\]
and we can also assume that \(\sigma_{n}\to\sigma^{*}\) for some \(\sigma^{*}\in\mathbf{\mathcal{Y}}^{i}\). Using Corollary 4.13(iv) we obtain that
\[R^{i}(\eta,(\pi_{n}^{-i},\sigma_{n}))\to R^{i}(\eta,(\pi^{-i},\sigma^{*})).\]
On the other hand, by Corollary 4.13(iii),
\[\gamma_{n}\to\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,(\pi^{-i},\sigma^{*} )}\quad\mbox{and so}\quad\gamma=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,(\pi ^{-i},\sigma^{*})}=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,(\pi^{-i},\sigma )}.\]
This shows that
\[\mu_{\eta,(\pi^{-i},\sigma)}=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,(\pi^ {-i},\sigma)}\otimes\pi^{-i}=\mu^{\mathbf{X}\times\mathbf{A}^{i}}_{\eta,(\pi^ {-i},\sigma^{*})}\otimes\pi^{-i}=\mu_{\eta,(\pi^{-i},\sigma^{*})}\]
and so \(R^{i}(\eta,(\pi_{n}^{-i},\sigma_{n}))\to R^{i}(\eta,(\pi^{-i},\sigma))\), which can be also written as \(f_{\eta}(\gamma_{n},\pi_{n}^{-i})\to f_{\eta}(\gamma,\pi^{-i})\). Once we know that \(f_{\eta}\) is continuous on the graph of \(\mathcal{A}^{i}_{\eta,\rho^{i}}\), we can apply the Berge Maximum Theorem [1, Theorem 17.31] and conclude that the arg max correspondence \(\mathcal{H}^{i}_{\eta,\rho^{i}}\) is upper semicontinuous with nonempty compact values.
Finally, observe that the function \(f_{\eta}\) is linear in \(\gamma\) for fixed \(\pi^{-i}\in\mathbf{\mathcal{Y}}^{-i}\) and so the set of maximizers \(\mathcal{H}^{i}_{\eta,\rho^{i}}(\pi^{-i})\) is convex. \(\Box\)
By considering the product of the correspondences \(\mathcal{H}^{i}_{\eta,\rho^{i}}\) we obtain the following result, which easily follows from [1, Theorem 17.28].
**Corollary 5.7**: _The correspondence \(\mathcal{H}_{\eta,\rho}:\mathbf{\mathcal{Y}}\twoheadrightarrow\tilde{ \mathbf{\mathcal{O}}}^{1}_{\eta}\times\ldots\times\tilde{\mathbf{\mathcal{O}}}^{N}_{\eta}\) defined by_
\[\pi\mapsto\prod_{i=1}^{N}\mathcal{H}^{i}_{\eta,\rho}(\pi^{-i}).\]
_is upper semicontinuous with nonempty compact and convex values._
Proof of Proposition 3.1:To get the result, let us show that the following results hold.
* The correspondence \[\mathcal{H}_{\eta,\rho}\circ\mathcal{J}_{\eta}:\tilde{\boldsymbol{\mathcal{O}}}_{ \eta}^{1}\times\ldots\times\tilde{\boldsymbol{\mathcal{O}}}_{\eta}^{N}\to \tilde{\boldsymbol{\mathcal{O}}}_{\eta}^{1}\times\ldots\times\tilde{\boldsymbol {\mathcal{O}}}_{\eta}^{N}\] has a fixed point \((\gamma_{*}^{1},\ldots,\gamma_{*}^{N})\).
* The Markov profile \(\pi_{*}\in\boldsymbol{\mathcal{Y}}\) given by \(\pi_{*}^{i}=\mathcal{J}_{\eta}^{i}(\gamma_{*}^{i})\) for \(1\leq i\leq N\) is a constrained equilibrium in the class of all strategy profiles \(\boldsymbol{\Pi}\) of the players for the game model \(\mathcal{G}(\eta,\rho)\).
Let us first proceed to the proof of item (i). By [1, Theorem 17.23], the composition \(\mathcal{H}_{\eta,\rho}\circ\mathcal{J}_{\eta}\) is an upper semicontinuous correspondence as it is the composition of a continuous function \(\mathcal{J}_{\eta}\) and an upper semicontinuous correspondence \(\mathcal{H}_{\eta,\rho}\). Besides, it has nonempty compact and convex values. By the Closed Graph Theorem [1, Theorem 17.11], it is also a closed correspondence.
Since \(\tilde{\boldsymbol{\mathcal{O}}}_{\eta}^{1}\times\ldots\times\tilde{\boldsymbol {\mathcal{O}}}_{\eta}^{N}\) is a nonempty compact convex subset of the locally convex Hausdorff space \(\boldsymbol{\mathcal{M}}(\mathbf{X}\times\mathbf{A}^{1})\times\ldots\times \boldsymbol{\mathcal{M}}(\mathbf{X}\times\mathbf{A}^{N})\) --Proposition 2.2 in [11]-- we can use the Kakutani-Fan-Glicksberg fixed point theorem [1, Corollary 17.55] to get the existence of a fixed point for the correspondence \(\mathcal{H}_{\eta,\rho}\circ\mathcal{J}_{\eta}\).
Let us now proceed to the proof of item (ii). If \((\gamma_{*}^{1},\ldots,\gamma_{*}^{N})\) is a fixed point of \(\mathcal{H}_{\eta,\rho}\circ\mathcal{J}_{\eta}\), consider the Markov policies \(\pi_{*}^{i}=\mathcal{J}_{\eta}^{i}(\gamma_{*}^{i})\in\boldsymbol{\mathcal{Y}}^ {i}\) for \(1\leq i\leq N\) and let \(\pi_{*}=(\pi_{*}^{1},\ldots,\pi_{*}^{N})\in\boldsymbol{\mathcal{Y}}\). Since \(\gamma_{*}^{i}\in\mathcal{L}_{\eta}^{i}(\pi_{*}^{-i})\) we have that for some \(\sigma\in\boldsymbol{\mathcal{Y}}^{i}\) it is \(\gamma_{*}^{i}=\mu_{\eta,(\pi_{*}^{-i},\sigma)}^{\mathbf{X}\times\mathbf{A}^ {i}}\). But now using Lemma 5.1(iii) we also have
\[\gamma_{*}^{i}=\mu_{\eta,(\pi_{*}^{-i},\sigma)}^{\mathbf{X}\times\mathbf{A}^ {i}}=\mu_{\eta,(\pi_{*}^{-i},\mathcal{J}_{\eta}^{i}(\gamma_{*}^{i}))}^{\mathbf{ X}\times\mathbf{A}^{i}}=\mu_{\eta,\pi^{*}}^{\mathbf{X}\times\mathbf{A}^{i}}.\]
In particular, for each \(1\leq i\leq N\) we have \(\gamma_{*}^{i}\otimes\pi_{*}^{-i}=\mu_{\eta,\pi_{*}}\). Moreover, since \(\gamma_{*}^{i}\in\mathcal{A}_{\eta,\rho^{i}}^{i}(\pi_{*}^{-i})\) it follows that \(C^{i}(\eta,\pi_{*})\geq\rho^{i}\). We conclude that the Markov profile \(\pi_{*}\in\boldsymbol{\mathcal{Y}}\) satisfies the constraints of all the players.
If player \(i\) varies his policy from \(\pi_{*}^{i}\in\boldsymbol{\mathcal{Y}}^{i}\) to some \(\pi^{i}\in\boldsymbol{\Pi}^{i}\) which satisfies his own constraint (i.e. \(C^{i}(\eta,(\pi_{*}^{-i},\pi^{i}))\geq\rho^{i}\)) we can use the result in Proposition 4.6(iii) to derive the existence of some \(\sigma\in\boldsymbol{\mathcal{Y}}^{i}\) such that
\[\mu_{\eta,(\pi_{*}^{-i},\pi^{i})}=\mu_{\eta,(\pi_{*}^{-i},\sigma)}\]
with, again, \(C^{i}(\eta,(\pi_{*}^{-i},\sigma))\geq\rho^{i}\). This implies that \(\mu_{\eta,(\pi_{*}^{-i},\sigma)}^{\mathbf{X}\times\mathbf{A}^{i}}\in\mathcal{ A}_{\eta,\rho^{i}}^{i}(\pi_{*}^{-i})\) and thus
\[\int_{\mathbf{X}\times\mathbf{A}}r^{i}d(\mu_{\eta,(\pi_{*}^{-i},\sigma)}^{ \mathbf{X}\times\mathbf{A}^{i}}\otimes\pi_{*}^{-i})\leq\int_{\mathbf{X}\times \mathbf{A}}r^{i}d(\gamma_{*}^{i}\otimes\pi_{*}^{-i})\]
or, equivalently,
\[R^{i}(\eta,(\pi_{*}^{-i},\pi))=R^{i}(\eta,(\pi_{*}^{-i},\sigma))\leq R^{i}( \eta,\pi_{*}).\]
This completes the proof. \(\Box\)
### Proof of Theorem 3.2
Clearly, for all \(n\geq 1\) we have \(\lambda\ll\eta_{n}\). We also have that \(\{\eta_{n}\}_{n\in\mathbb{N}}\) converges to \(\eta\) in total variation and that the corresponding density functions with respect to \(\lambda\) converge strongly (or, in norm) in \(L^{1}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\): \(\left\|d\eta_{n}/d\lambda-d\eta/d\lambda\right\|_{1}\to 0\). Since the constraint function \(c_{i}\) is bounded by \(\mathbf{r}\), we have \(\left|C^{i}(\eta_{n},\pi)-C^{i}(\eta,\pi)\right|\leq\mathbf{r}/(n+1)\). Also, for the constraint constants \(\rho_{n}^{i}=\rho^{i}-\frac{\mathbf{r}}{n+1}\mathbf{1}\) with \(1\leq i\leq N\), the game model \(\mathcal{G}(\eta_{n},\rho_{n})\) satisfies the Slater condition in Definition 2.6. Under assumptions (A\({}_{1}^{\prime}\)
and \((\mathcal{A}_{7})\), we obtain that the game model \(\mathcal{G}(\eta_{n},\rho_{n})\) is uniformly absorbing to \(\Delta\) by using item (ii) of Proposition 2.4. We can conclude that the game model \(\mathcal{G}(\eta_{n},\rho_{n})\) satisfies Assumptions A\({}^{\prime}\) and B and so, Proposition 3.1 yields the existence of a constrained Nash equilibrium \(\hat{\pi}_{n}\in\boldsymbol{\mathcal{Y}}\) for the game model \(\mathcal{G}(\eta_{n},\rho_{n})\) with \(n\geq 1\). This means that
\[C^{i}(\eta_{n},\hat{\pi}_{n})\geq\rho_{n}^{i}\quad\text{for }1\leq i\leq N \tag{5.6}\]
and that, for any \(1\leq i\leq N\) and \(\pi^{i}\in\boldsymbol{\Pi}^{i}\),
\[C^{i}(\eta_{n},(\hat{\pi}_{n}^{-i},\pi^{i}))\geq\rho_{n}^{i}\ \Rightarrow\ R^{i}(\eta_{n},(\hat{\pi}_{n}^{-i},\pi^{i}))\leq R^{i}(\eta_{n}, \hat{\pi}_{n}).\]
Without loss of generality, we assume that the sequence of so-defined equilibria converges to some \(\hat{\pi}\in\boldsymbol{\mathcal{Y}}\), that is, for each \(1\leq i\leq N\) we have \(\hat{\pi}_{n}^{i}\to\hat{\pi}^{i}\) in \(\boldsymbol{\mathcal{Y}}^{i}\) as \(n\to\infty\). We want to show that \(\hat{\pi}\) is a constrained equilibrium for the game model \(\mathcal{G}(\eta,\rho)\). To see this, note first that we can take the limit in (5.6) to obtain that \(C^{i}(\eta,\hat{\pi})\geq\rho^{i}\) for every \(1\leq i\leq N\) by using Corollary 4.13(iv). Secondly, fix \(i\in\{1,\ldots,N\}\) and choose any \(\pi^{i}\in\boldsymbol{\Pi}^{i}\) such that \(C^{i}(\eta,(\hat{\pi}^{-i},\pi^{i}))\geq\rho^{i}\). By Proposition 4.6(iii) it follows that there is some \(\sigma\in\boldsymbol{\mathcal{Y}}^{i}\) such that \((\hat{\pi}^{-i},\pi^{i})\) and \((\hat{\pi}^{-i},\sigma)\in\boldsymbol{\mathcal{Y}}\) yield the same payoffs \(C^{i}\) and \(R^{i}\). Hence we have \(C^{i}(\eta,(\hat{\pi}^{-i},\sigma))\geq\rho^{i}\) and we must show that
\[R^{i}(\eta,(\hat{\pi}^{-i},\sigma))\leq R^{i}(\eta,\hat{\pi}).\]
We will use Proposition 5.4(ii) for the Markov profile \((\pi_{*}^{-i},\sigma)\in\boldsymbol{\mathcal{Y}}\) and the sequence \(\{\hat{\pi}_{n}^{-i}\}\) to derive the existence of a sequence \(\gamma_{n}\to\mu_{\eta,(\hat{\pi}^{-i},\sigma)}^{\mathbf{X}\times\mathbf{A}^ {i}}\) such that \(\gamma_{n}\in\mathcal{A}^{i}_{\eta_{n},\rho^{i}}(\hat{\pi}_{n}^{-i})\) for large enough \(n\geq K\). So, for such \(n\geq K\), let \(\sigma_{n}\in\boldsymbol{\mathcal{Y}}^{i}\) be such that \(\gamma_{n}=\mu_{\eta_{n},(\hat{\pi}_{n}^{-i},\sigma_{n})}^{\mathbf{X}\times \mathbf{A}^{i}}\) which satisfies \(C^{i}(\eta_{n},(\hat{\pi}_{n}^{-i},\sigma_{n}))\geq\rho^{i}\geq\rho_{n}^{i}\). This implies that for any \(n\geq K\) we have
\[R^{i}(\eta_{n},(\hat{\pi}_{n}^{-i},\sigma_{n}))\leq R^{i}(\eta_{n},\hat{\pi}_ {n}).\]
There exists some \(\bar{\sigma}\in\boldsymbol{\mathcal{Y}}^{i}\) and a subsequence of \(\{\sigma_{n}\}\) (still denoted by \(\{\sigma_{n}\}\)) satisfying \(\sigma_{n}\to\bar{\sigma}\) in \(\boldsymbol{\mathcal{Y}}^{i}\) and then taking the limit we have
\[R^{i}(\eta,(\hat{\pi}^{-i},\bar{\sigma}))\leq R^{i}(\eta,\hat{\pi})\]
by using Corollary 4.13(iv). But then item (iii) of Corollary 4.13 implies that
\[\gamma_{n}=\mu_{\eta_{n},(\hat{\pi}_{n}^{-i},\sigma_{n})}^{\mathbf{X}\times \mathbf{A}^{i}}\to\mu_{\eta,(\hat{\pi}^{-i},\bar{\sigma})}^{\mathbf{X}\times \mathbf{A}^{i}}=\mu_{\eta,(\hat{\pi}^{-i},\sigma)}^{\mathbf{X}\times\mathbf{A} ^{i}}\]
so that
\[\mu_{\eta,(\hat{\pi}^{-i},\bar{\sigma})}=\mu_{\eta,(\hat{\pi}^{-i},\bar{\sigma })}^{\mathbf{X}\times\mathbf{A}^{i}}\otimes\hat{\pi}^{-i}=\mu_{\eta,(\hat{\pi}^ {-i},\sigma)}^{\mathbf{X}\times\mathbf{A}^{i}}\otimes\hat{\pi}^{-i}=\mu_{\eta,( \hat{\pi}^{-i},\sigma)}\]
and, hence, \(R^{i}(\eta,(\hat{\pi}^{-i},\bar{\sigma}))=R^{i}(\eta,(\hat{\pi}^{-i},\sigma))\) and \(R^{i}(\eta,(\hat{\pi}^{-i},\sigma))\leq R^{i}(\eta,\hat{\pi})\) follows.
### Proof of Corollary 3.6
(i). To check this result, we must show that the convergences \(\pi_{n}^{i}\to\pi^{i}\) in \(\boldsymbol{\mathcal{Y}}^{i}\) for each \(1\leq i\leq N\) imply that
\[\pi_{n}(da|x)=\pi_{n}^{1}(da^{1}|x)\times\cdots\times\pi_{n}^{N}(da^{N}|x)\to \pi^{1}(da^{1}|x)\times\cdots\times\pi^{N}(da^{N}|x)=\pi(da|x)\quad\text{in }\tilde{ \boldsymbol{\mathcal{Y}}}.\]
To avoid trivial cases, suppose that \(\lambda\{x\}>0\) for every \(x\in\mathbf{X}\). Then, \(\pi_{n}^{i}\to\pi^{i}\) means that \(\pi_{n}^{i}(da|x)\) converges in the weak topology of \(\boldsymbol{\mathcal{P}}(\mathbf{A}^{i})\) to \(\pi^{i}(da|x)\) for any \(x\in\mathbf{X}\). By [7, Theorem 2.8] it follows that
\[\pi_{n}^{1}(da^{1}|x)\times\cdots\times\pi_{n}^{N}(da^{N}|x)\to\pi^{1}(da^{1}|x) \times\cdots\times\pi^{N}(da^{N}|x)\]
in the weak topology of \(\boldsymbol{\mathcal{P}}(\mathbf{A})\) for any \(x\in\mathbf{X}\). Given arbitrary \(f\in\mathcal{C}ar(\mathbf{X}\times\mathbf{A})\) bounded by a function \(F\in L^{1}(\mathbf{X},\boldsymbol{\mathfrak{X}},\lambda)\), that is, with \(\sum_{x}F(x)\lambda\{x\}<\infty\), from the dominated convergence theorem we obtain that
\[\sum_{x\in\mathbf{X}}\int_{\mathbf{A}}f(x,a)\pi_{n}(da|x)\lambda\{x\}\to\sum_{ x\in\mathbf{X}}\int_{\mathbf{A}}f(x,a)\pi(da|x)\lambda\{x\}.\]
which shows that, indeed, \(\pi_{n}\to\pi\) in \(\tilde{\boldsymbol{\mathcal{Y}}}\). As a direct consequence of Lemma 4.9, we conclude that the continuity properties in Assumption B are satisfied.
Note that this proof establishes, in fact, that the trace topology of \(\tilde{\boldsymbol{\mathcal{Y}}}\) on \(\boldsymbol{\mathcal{Y}}\) coincides with the product topology of \(\boldsymbol{\mathcal{Y}}=\boldsymbol{\mathcal{Y}}^{1}\times\ldots\times \boldsymbol{\mathcal{Y}}^{N}\). Such a result is known as a _fiber product lemma_.
(ii). Under the additive reward condition, the continuity of \(\pi\mapsto r_{\pi}^{i}\) and \(\pi\mapsto c_{\pi}^{i,j}\) is trivial since those functions turn out to be the sum of continuous functions. Regarding the additive transition property, observe that the density function
\[(y,x,a^{1},\ldots,a^{N})\mapsto\sum_{l=1}^{N}q^{l}(y,x,a^{l})\]
satisfies the conditions in Assumption (A\({}_{6}\)). Checking the continuity of \(\pi\mapsto Q_{\pi}v\) on \(\boldsymbol{\mathcal{Y}}\) is again straightforward by using the additive property of the density function. \(\Box\)
|
2304.13341 | MacWilliams' Extension Theorem for rank-metric codes | The MacWilliams' Extension Theorem is a classical result by Florence Jessie
MacWilliams. It shows that every linear isometry between linear block-codes
endowed with the Hamming distance can be extended to a linear isometry of the
ambient space. Such an extension fails to exist in general for rank-metric
codes, that is, one can easily find examples of linear isometries between
rank-metric codes which cannot be extended to linear isometries of the ambient
space. In this paper, we explore to what extent a MacWilliams' Extension
Theorem may hold for rank-metric codes. We provide an extensive list of
examples of obstructions to the existence of an extension, as well as a
positive result. | Elisa Gorla, Flavio Salizzoni | 2023-04-26T07:22:37Z | http://arxiv.org/abs/2304.13341v1 | # MacWilliams' Extension Theorem for rank-metric codes
###### Abstract
The MacWilliams' Extension Theorem is a classical result by Florence Jessie MacWilliams. It shows that every linear isometry between linear block-codes endowed with the Hamming distance can be extended to a linear isometry of the ambient space. Such an extension fails to exist in general for rank-metric codes, that is, one can easily find examples of linear isometries between rank-metric codes which cannot be extended to linear isometries of the ambient space. In this paper, we explore to what extent a MacWilliams' Extension Theorem may hold for rank-metric codes. We provide an extensive list of examples of obstructions to the existence of an extension, as well as a positive result.
## Introduction and motivation
Coding theory provides tools for the transmission and storage of data over an imperfect channel, where the data may be altered or lost. One of the main goals is being able to automatically correct errors in a received message, without asking for a retransmission. This is done through the use of (error-correcting) codes: The data to be sent is encoded, i.e., transformed into a codeword by adding redundancy to it. The set of codewords is called a code. The codeword travels over the channel, where part of the information may be lost or corrupted. At the receiver's end, the received information is decoded, that is, the error is corrected and the redundancy eliminated. In the mathematical formulation of error-correcting codes, we usually ignore the step in which the redundancy is eliminated, since it does not present any theoretical or practical challenges.
In many scenarios, error correction is done via minimum distance decoding. A code is a subset of a finite metric space and a received message is decoded to the closest codeword. Mathematically, if \((S,d)\) is a finite metric space and \(C\subseteq S\) a code, a received \(r\in S\) is decoded to an \(x\in C\) which minimizes \(d(-,r)\). Under suitable assumptions, the \(x\) which minimizes \(d(-,r)\) is unique. One way to guarantee uniqueness is as follows: Define the minimum distance of a code \(C\) as
\[d_{\min}(C)=\min\{d(x,y)\mid x,y\in C,x\neq y\}.\]
It is easy to show that, given \(r\in S\), if there is an \(x\in C\) such that \(d(x,r)<(d_{\min}(C)-1)/2\), then \(x\) is the unique codeword which minimizes \(d(-,r)\). The quantity \((d_{\min(C)}-1)/2\) is often called the error-correction capability of the code.
This motivates the interest for isometries between codes, since these are the maps that preserve the pairwise distances of codewords, therefore the metric structure of the code, and in particular its error-correction capability. However, one could also look at isometries of the ambient space \(\varphi:S\to S\). Such an isometry does not only preserve the metric structure of the code, mapping \(C\) to an isometric code \(\varphi(C)\), but also the distance between any pair of elements of \(S\), that is \(d(x,r)=d(\varphi(x),\varphi(r))\) for any \(x,r\in S\). In particular, \(\varphi\) preserves the whole error correction procedure, in the sense that \(r\in S\) is decoded to \(x\in C\) if and only if \(\varphi(r)\in S\) is decoded to \(\varphi(x)\in\varphi(C)\). In some cases, we know that any isometry between codes is the restriction of an isometry of the ambient space \(S\), that is, any isometry between codes can be extended to an isometry of the ambient space. In this paper, we call this property the Extension Property.
Linear block codes endowed with the Hamming distance are used in point-to-point communication. These are linear subspaces of \(\mathbb{F}_{q}^{n}\), where \(\mathbb{F}_{q}\) denotes the finite field with \(q\) elements. In [10] Florence Jessie MacWilliams showed that every Hamming distance-preserving linear isomorphism \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\) between two codes in \(\mathbb{F}_{q}^{n}\) can be extended to a Hamming distance-preserving linear isomorphism \(\mu:\mathbb{F}_{q}^{n}\to\mathbb{F}_{q}^{n}\). An elementary proof of this fact was later given by Kenneth Bogart,
Don Goldberg and Jean Gordon in [2]. Nowadays, this theorem is known as the MacWilliams' Extension Theorem.
**MacWilliams' Extension Theorem.** Every linear Hamming weight isometry \(\varphi\) of linear codes over a finite field \(\mathbb{F}_{q}\) extends to a linear Hamming weight isometry \(\mu\) of the ambient space \(\mathbb{F}_{q}^{n}\).
In the last decades, there has been an increasing interest in understanding for which ambient spaces and for which weights a similar Extension Property holds. In [14, 15] Jay Wood studied the case of finite rings and established the Extension Property for codes over finite Frobenius rings with respect to the Hamming distance. Aleams Barra and Heide Gluesing-Luerssen investigated further the case of finite Frobenius rings with various distance functions in [1]. Friedrich Martin Schneider and Jens Zumbragel extended the work of Wood to Artinian rings in [12]. Recently, the Extension Property was proved in [5, 9] for codes over \(\mathbb{Z}_{m}\) endowed with the Lee distance.
In this paper, we explore the Extension Property in the setting of rank-metric codes. These are linear spaces of matrices inside \(\mathbb{F}_{q}^{m\times n}\), where \(\mathbb{F}_{q}\) is the finite field with \(q\) elements. The rank distance between two matrices is the rank of their difference. Rank-metric codes are useful for correcting errors and increasing the efficiency of data transmission over a network.
**Extension Property.** Let \(\mathcal{C}_{1},\mathcal{C}_{2}\) be two linear codes in \(\mathbb{F}_{q}^{m\times n}\). A linear isometry \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\) satisfies the Extension Property if and only if there exists a linear isometry \(\mu:\mathbb{F}_{q}^{m\times n}\to\mathbb{F}_{q}^{m\times n}\) such that \(\mu|_{\mathcal{C}_{1}}=\varphi\).
It is well known that there exist isometries of rank metric codes that do not satisfy the Extension Property (see [1] and [3, Section 7]). We are interested in understanding under which conditions it may be possible to extend an isometry to the whole ambient space and when instead the Extension Property fails. Very little is know in this direction. The results in [7] imply that isometries between two rank support spaces are extendable. The same result for \(\mathbb{F}_{q^{m}}\)-isometries between Galois closed linear subspaces of \(\mathbb{F}_{q^{m}}^{n}\) was proved by Umberto Martinez-Penas in [11, Theorem 5].
In Section 1, we recall some definitions and results on rank-metric codes. In Section 2 we present an extensive list of obstructions to the Extension Property, providing multiple examples, while in Section 4 we establish the Extension Property in a special case. Section 3 is dedicated to developing some tools that are used in Section 4. Our Main Theorem states that the Extension Property holds for certain isometries of codes generated by elementary matrices. In the appendix, we establish some mathematical facts connected to the proof of the Main Theorem in Section 4.
## 1 Preliminaries on rank-metric codes
Throughout this paper, \(q\) is a prime power and \(\mathbb{F}_{q}\) denotes the finite field with \(q\) elements. For positive integers \(m,n\), we denote by \(\mathbb{F}_{q}^{m\times n}\) the set of \(m\times n\) matrices with entries in \(\mathbb{F}_{q}\). We denote by \(\operatorname{rank}(M)\) the rank of a matrix \(M\in\mathbb{F}_{q}^{m\times n}\) and by \(\dim(V)\) the dimension of an \(\mathbb{F}_{q}\)-linear space \(V\).
**Definition 1.1**.: The **rank distance** of \(A,B\in\mathbb{F}_{q}^{m\times n}\) is defined as
\[d:\mathbb{F}_{q}^{m\times n}\times\mathbb{F}_{q}^{m\times n} \longrightarrow\mathbb{N}\] \[(A,B) \longmapsto\operatorname{rank}(A-B).\]
A **rank-metric code**\(\mathcal{C}\subseteq\mathbb{F}_{q}^{m\times n}\) is an \(\mathbb{F}_{q}\)-linear subspace endowed with the rank distance.
In order to properly state the Extension Property in the context of rank-metric codes, we briefly recall the notion of isometric and equivalent codes.
**Definition 1.2**.: Let \(\mathcal{C}_{1},\mathcal{C}_{2}\) be two linear codes in \(\mathbb{F}_{q}^{m\times n}\). An \(\mathbb{F}_{q}\)-linear isomorphism \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\) such that \(\operatorname{rank}(C)=\operatorname{rank}(\varphi(C))\) for all \(C\in\mathcal{C}_{1}\) is called **isometry** and \(\mathcal{C}_{1},\mathcal{C}_{2}\) are **isometric**.
The following classification of the linear isometries of \(\mathbb{F}_{q}^{m\times n}\) is due to Hua [8] for odd characteristic and to Wan [13] for characteristic \(2\). The statement can also be found in [6, Theorem 11.1.9].
**Theorem 1.3**.: Let \(\varphi:\mathbb{F}_{q}^{m\times n}\to\mathbb{F}_{q}^{m\times n}\) be an \(\mathbb{F}_{q}\)-linear isometry with respect to the rank metric.
1. If \(m\neq n\) then there exist matrices \(A\in\operatorname{GL}_{m}(\mathbb{F}_{q})\) and \(B\in\operatorname{GL}_{n}(\mathbb{F}_{q})\) such that \(\varphi(M)=AMB\) for all \(M\in\mathbb{F}_{q}^{m\times n}\).
2. If \(m=n\) then there exist matrices \(A,B\in\operatorname{GL}_{n}(\mathbb{F}_{q})\) such that either \(\varphi(M)=AMB\) for all \(M\in\mathbb{F}_{q}^{n\times n}\), or \(\varphi(M)=AM^{t}B\) for all \(M\in\mathbb{F}_{q}^{n\times n}\).
**Definition 1.4**.: Two codes \(\mathcal{C}_{1},\mathcal{C}_{2}\leq\mathbb{F}_{q}^{m\times n}\) are **equivalent** if there exists a linear rank-metric isometry \(\varphi:\mathbb{F}_{q}^{m\times n}\to\mathbb{F}_{q}^{m\times n}\) such that \(\phi(\mathcal{C}_{1})=\mathcal{C}_{2}\).
According to these definitions and Theorem 1.3, we can formulate the Extension Property for rank-metric linear codes as follows.
**Extension Property.** Let \(\mathcal{C}_{1},\mathcal{C}_{2}\) be two linear codes in \(\mathbb{F}_{q}^{m\times n}\). An isometry \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\) satisfies the Extension Property if and only if there exist two matrices \(A\in\operatorname{GL}_{m}(\mathbb{F}_{q})\) and \(B\in\operatorname{GL}_{n}(\mathbb{F}_{q})\) such that either \(\varphi(M)=AMB\) for all \(M\in\mathcal{C}_{1}\), or \(\varphi(M)=AM^{t}B\) for all \(M\in\mathcal{C}_{1}\), where the latter case can only happen if \(m=n\).
## 2 Obstructions to the Extension Property
In this section we discuss several obstructions to the Extension Property in the rank-metric case. A first problem arises from the fact that the transposition is an isometry of the ambient space only in the square case. This makes the composition of the transposition with the natural inclusion of \(\iota:\mathbb{F}_{q}^{m\times m}\hookrightarrow\mathbb{F}_{q}^{m\times n}\), \(m\leq n\), into an \(\mathbb{F}_{q}\)-linear isometry of \(\iota(\mathbb{F}_{q}^{m\times m})\subseteq\mathbb{F}_{q}^{m\times n}\) with itself, which cannot be extended to \(\mathbb{F}_{q}^{m\times n}\). This is a way of looking at the next example, due to Aleams Barra and Heide Gluesing-Luerssen.
**Example 2.1** ([1], Example 2.9).: Let \(\mathcal{C}=\{\left(\begin{matrix}A&0\end{matrix}\right):A\in\mathbb{F}_{q}^{ 2\times 2}\}\leq\mathbb{F}_{q}^{2\times 3}\) and let \(\varphi:\mathcal{C}\to\mathcal{C}\) be the isometry given by \(\varphi(\left(\begin{matrix}A&0\end{matrix}\right))=\left(\begin{matrix}A^{t}&0 \end{matrix}\right)\) for all \(A\in\mathbb{F}_{q}^{2\times 2}\). It is easy to see that it is not possible to extend \(\varphi\) to an isometry of the whole ambient space.
A similar phenomenon happens in the next example, also due to Barra and Gleusing-Luerssen.
**Example 2.2** ([1], Example 2.9).: Let \(\mathcal{C}\leq\mathbb{F}_{q}^{4\times 4}\) be the code given by
\[\mathcal{C}=\left\{\left(\begin{matrix}A&0\\ 0&B\end{matrix}\right):A,B\in\mathbb{F}_{q}^{2\times 2}\right\}\]
and consider the isometry \(\varphi:\mathcal{C}\to\mathcal{C}\) given by
\[\varphi\left(\left(\begin{matrix}A&0\\ 0&B\end{matrix}\right)\right)=\left(\begin{matrix}A&0\\ 0&B^{t}\end{matrix}\right)\]
As before, one can check that \(\varphi\) cannot be extended to an isometry of \(\mathbb{F}_{q}^{4\times 4}\).
In general, the natural inclusion \(\iota:\mathbb{F}_{q}^{m\times m}\times\mathbb{F}_{q}^{n\times n} \hookrightarrow\mathbb{F}_{q}^{(m+n)\times(m+n)}\) is an isometry with respect to the sum-rank metric in the domain and the rank metric in the codomain. When composed with the product of the identity on \(\mathbb{F}_{q}^{m\times m}\) and the transposition on \(\mathbb{F}_{q}^{n\times n}\), it yields an isometry of \(\iota(\mathbb{F}_{q}^{m\times m}\times\mathbb{F}_{q}^{n\times n})\subseteq \mathbb{F}_{q}^{(m+n)\times(m+n)}\) with itself, which does not extend to \(\mathbb{F}_{q}^{(m+n)\times(m+n)}\).
We stress that, in both examples there is a smaller, natural ambient space to which the isometry can be extended. In fact even more, in those specific examples the isometries are already defined on a smaller ambient space (on which therefore they can be trivially extended). In the first example, the isometry is defined on \(\mathbb{F}_{q}^{2\times 2}\) while in the second example it is defined on \(\mathbb{F}_{q}^{2\times 2}\times\mathbb{F}_{q}^{2\times 2}\), naturally endowed with the sum-rank metric. In order to avoid such problems, one may want to consider codes that cannot be contained in a smaller ambient space, that is, such that \(\operatorname{rowsp}(\mathcal{C})=\mathbb{F}_{q}^{n}\) and \(\operatorname{colsp}(\mathcal{C})=\mathbb{F}_{q}^{m}\).
We now discuss a different obstruction to the Extension Property. Let \(\varphi\) be an isometry of \(\mathbb{F}_{q}^{m\times n}\). Then for every \(\mathcal{C}\leq\mathbb{F}_{q}^{m\times n}\) we have that
\[\dim(\operatorname{rowsp}(\mathcal{C}))=\dim(\operatorname{rowsp}(\varphi( \mathcal{C})))\text{ and }\dim(\operatorname{colsp}(\mathcal{C}))=\dim( \operatorname{colsp}(\varphi(\mathcal{C}))). \tag{1}\]
Therefore, in order to be extendable, an isometry must satisfy this property. The next example shows that not all linear isometries do.
**Example 2.3**.: Let \(\mathcal{C}_{1},\mathcal{C}_{2}\in\mathbb{F}_{2}^{2\times 3}\) be the codes
\[\mathcal{C}_{1}=\left\langle\begin{pmatrix}1&1&0\\ 0&1&0\end{pmatrix},\begin{pmatrix}0&1&0\\ 1&0&0\end{pmatrix}\right\rangle\quad\mathcal{C}_{2}=\left\langle\begin{pmatrix} 0&0&1\\ 0&1&0\end{pmatrix},\begin{pmatrix}0&1&0\\ 1&0&0\end{pmatrix}\right\rangle\]
and let \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\) be the \(\mathbb{F}_{2}\)-linear map given by
\[\varphi\left(\begin{pmatrix}1&1&0\\ 0&1&0\end{pmatrix}\right)=\begin{pmatrix}0&0&1\\ 0&1&0\end{pmatrix}\quad\varphi\left(\begin{pmatrix}0&1&0\\ 1&0&0\end{pmatrix}\right)=\begin{pmatrix}0&1&0\\ 1&0&0\end{pmatrix}\,.\]
Since \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are codes of constant rank \(2\), then \(\varphi\) is an isometry. Notice that \(\dim(\operatorname{rowsp}(\mathcal{C}_{1}))=2\) while \(\dim(\operatorname{rowsp}(\mathcal{C}_{2}))=3\). In particular, \(\varphi\) cannot be extended to an isometry of \(\mathbb{F}_{2}^{2\times 3}\).
The last example motivates us to look at isometries \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\leq\mathbb{F}_{q}^{m\times n}\) with the following property, which implies (1).
**Property 1**.: There exist \(A\in\operatorname{GL}_{m}(\mathbb{F}_{q})\) and \(B\in\operatorname{GL}_{n}(\mathbb{F}_{q})\) such that
\[\operatorname{rowsp}(\varphi(C))=\operatorname{rowsp}(CB)\text{ and } \operatorname{colsp}(\varphi(C))=\operatorname{colsp}(AC)\]
for all \(C\in\mathcal{C}_{1}\).
Notice that none of the isometries considered in Examples 2.1, 2.2 and 2.3 satisfy Property 1. While Property 1 is necessary for the Extension Property to hold, it is not sufficient, as the next example shows.
**Example 2.4**.: In [4, Example 1] the authors exhibit three distinct equivalence classes of MRD codes in \(\mathbb{F}_{2}^{4\times 4}\) with minimum distance \(4\). Any \(\mathbb{F}_{2}\)-linear map between codes in different equivalent classes is an isometry, since each nonzero element has rank \(4\). Moreover, each of these maps satisfy Property 1 with \(A=B=\operatorname{Id}\). A proof that these codes do not satisfy the Extension Property appeared in the first arXiv version of the same paper as [3, Example 7.1].
The obstruction to the Extension Property in Example 2.4 can be seen as coming from the interaction between the linear structure of the code and the group structure of the code without the zero matrix. More precisely, if \(\mathcal{C}\) is a vector space of square matrices and \(\mathcal{C}\setminus\{0\}\) is a subgroup of the general linear group, then every \(\mathbb{F}_{q}\)-linear isomorphism from \(\mathcal{C}\) to itself is a linear isometry. Moreover, if it fixes the identity and it has the Extension Property, then it is a group homomorphism. Therefore, any \(\mathbb{F}_{q}\)-linear isomorphism from \(\mathcal{C}\) to itself which fixes the identity and is not a group homomorphism cannot have the Extension Property.
**Example 2.5**.: Let \(P\in\operatorname{GL}_{n}(\mathbb{F}_{q})\) of order \(q^{n}-1\), let \(Q=P^{q-1}\). Let \(\mathcal{C}=\mathbb{F}_{q}[P]=\langle P\rangle\cup\{0\}\subseteq\mathbb{F}_{q }^{n\times n}\). Every nonzero element of \(\mathcal{C}\) has rank \(n\), hence any injective \(\mathbb{F}_{q}\)-linear isomorphism of \(\mathcal{C}\) with itself is an isometry. Both \(P\) and \(Q\) are linearly independent from the identity matrix \(\operatorname{Id}\), so there is a linear isometry \(\varphi:\mathcal{C}\to\mathcal{C}\) with \(\varphi(\operatorname{Id})=\operatorname{Id}\) and \(\varphi(P)=Q\). If \(\varphi\) has the Extension Property, then either \(\varphi(M)=AMA^{-1}\) or \(\varphi(M)=AM^{t}A^{-1}\) for some \(A\in\operatorname{GL}_{n}(\mathbb{F}_{q})\). Therefore \(Q=\varphi(P)\in\{APA^{-1},AP^{t}A^{-1}\}\), however \(Q\) has order \(q^{n-1}+q^{n-2}+\ldots+1\), while \(APA^{-1}\) and \(AP^{t}A^{-1}\) have order \(q^{n}-1\).
Even when \(\mathcal{C}\setminus\{0\}\) is not a group, an isometry on a set of square matrices which fixes the identity and for which the Extension Property holds needs to be multiplicative. This constitutes an obstruction to the Extension Property, since not every linear isometry is multiplicative.
**Example 2.6**.: Let \(\mathcal{C}\in\mathbb{F}_{2}^{3\times 3}\) be the code given by
\[\mathcal{C}=\left\{0,\mathrm{Id},\begin{pmatrix}1&0&0\\ 1&1&0\\ 0&0&0\end{pmatrix},\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&0&1\end{pmatrix}\right\}\]
and let \(\varphi:\mathcal{C}\to\mathcal{C}\) be the isometry of \(\mathcal{C}\) with itself that fixes the identity matrix and swaps the other two matrices.
Suppose that \(\varphi\) can be extended to an isometry of the whole ambient space. Then, there are \(A,B\in\mathrm{GL}_{3}(\mathbb{F}_{2})\) such that either \(\varphi(C)=ACB\) for all \(C\in\mathcal{C}\) or \(\varphi(C)=AC^{t}B\) for all \(C\in\mathcal{C}\). Since \(\varphi(\mathrm{Id})=\mathrm{Id}\), we have that \(AB=\mathrm{Id}\) and so \(B=A^{-1}\). Therefore, we obtain that
\[\varphi\left(\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix}\right) =\varphi\left(\begin{pmatrix}1&0&0\\ 1&1&0\\ 0&0&0\end{pmatrix}\begin{pmatrix}1&0&0\\ 1&1&0\\ 0&0&0\end{pmatrix}\right)=\varphi\left(\begin{pmatrix}1&0&0\\ 1&1&0\\ 0&0&0\end{pmatrix}\right)\varphi\left(\begin{pmatrix}1&0&0\\ 1&1&0\\ 0&0&0\end{pmatrix}\right)=\] \[=\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&0&1\end{pmatrix}=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix}\,.\]
The map \(\varphi\) sends an element of rank \(2\) to an element of rank \(1\), contradicting the assumption that \(\varphi\) is an isometry. We conclude that \(\varphi\) does not have the Extension Property. Notice however that \(\varphi\) satisfies Property 1 with
\[A=\begin{pmatrix}0&0&1\\ 1&1&1\\ 1&0&0\end{pmatrix}\text{ and }B=\begin{pmatrix}1&0&0\\ 1&0&1\\ 1&1&0\end{pmatrix}\,.\]
Property 1 suggests to look at codes generated by rank-one elements. In fact, if \(C\) is a rank-one element with row space \(\langle u\rangle\) and column space \(\langle v\rangle\), then \(\varphi(C)\) is a rank-one element with row space \(\langle uB\rangle\) and column space \(\langle Av\rangle\). Therefore, \(\varphi\) determines \(Av\) and \(uB\) up to a scalar multiple. This simple observation allows us to prove the next result.
**Proposition 2.7**.: Let \(\mathcal{C}_{1},\mathcal{C}_{2}\leq\mathbb{F}_{2}^{m\times n}\) and let \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\) be an isometry which satisfies Property 1. If \(\mathcal{C}_{1}\) is generated by elements of rank \(1\), then \(\varphi\) is extendable.
Proof.: Since \(\varphi\) has Property 1, then \(\varphi(C)\) and \(ACB\) have the same row and column space for all \(C\in\mathcal{C}\). Over \(\mathbb{F}_{2}\) this give that \(A^{-1}\varphi(C)B^{-1}=C\) for every \(C\in\mathcal{C}_{1}\) of rank \(1\). If \(\mathcal{C}_{1}\) is generated by elements of rank \(1\), we conclude by linearity that \(A^{-1}\varphi(C)B^{-1}=C\) for all \(C\in\mathcal{C}_{1}\).
Even for \(\mathcal{C}\) generated by elements of rank \(1\), the Extension Property may fail if we do not require Property 1.
**Example 2.8**.: Let \(\mathcal{C}\subseteq\mathbb{F}_{2}^{2\times 3}\) be the linear code generated by
\[C_{1}=\begin{pmatrix}1&0&0\\ 0&0&0\end{pmatrix},\,\,\,C_{2}=\begin{pmatrix}0&0&0\\ 0&1&0\end{pmatrix},\,\,\,C_{3}=\begin{pmatrix}0&0&1\\ 0&0&1\end{pmatrix},\,\,\,C_{4}=\begin{pmatrix}1&1&0\\ 1&1&0\end{pmatrix}.\]
Let \(\varphi:\mathcal{C}\to\mathcal{C}\) be the linear map given by \(\varphi(C_{i})=C_{i}\) for \(i=1,2,3\) and \(\varphi(C_{4})=C_{4}+C_{3}\). One can verify that \(\varphi\) is an isometry that cannot be extended to the whole ambient space, since it does not satisfy Property 1.
One may wonder whether the failure of the Extension Property is due to the fact that the code is small compared to the ambient space. The next example shows that this is not the case.
**Example 2.9**.: Starting from the code \(\mathcal{C}\) from the previous example, for each \(n>3\) we construct a code \(\mathcal{C}_{n}\in\mathbb{F}_{2}^{2\times n}\) given by
\[\mathcal{C}=\left\{\begin{pmatrix}A&C\end{pmatrix}:A\in\mathbb{F}_{2}^{2\times (n-3)},\,C\in\mathcal{C}\right\}.\]
Let \(\varphi_{n}:\mathcal{C}_{n}\to\mathcal{C}_{n}\) be the linear map given by \(\varphi_{n}\begin{pmatrix}A&0\end{pmatrix}=A\) for \(A\in\mathbb{F}_{2}^{2\times(n-3)}\) and \(\varphi_{n}\begin{pmatrix}0&C\end{pmatrix}=\varphi(C)\). Again, \(\varphi_{n}\) is an isometry that cannot be extended to the whole ambient space. Moreover, notice that
\[\lim_{n\to\infty}\frac{\dim(\mathcal{C}_{n})}{\dim\left(\mathbb{F}_{2}^{2\times n }\right)}=\lim_{n\to\infty}\frac{2n-2}{2n}=1.\]
This show that there exist non-extendable isometries defined on codes, whose dimension comes arbitrarily close to that of the ambient space.
We state the analogous result of Proposition 2.7 for arbitrary \(q\) as an open question.
**Question 2.10**.: Let \(\mathcal{C}_{1},\mathcal{C}_{2}\leq\mathbb{F}_{q}^{m\times n}\) and let \(\varphi:\mathcal{C}_{1}\to\mathcal{C}_{2}\) be an isometry which satisfies Property 1. If \(\mathcal{C}_{1}\) is generated by elements of rank 1, then the same is true for \(\mathcal{C}_{2}\). If this is the case, does \(\varphi\) have the Extension Property?
Our Main Theorem provides a positive answer to Question 2.10, for codes which are generated by elementary matrices.
Let \(1\leq i\leq m\) and \(1\leq j\leq n\). We denote by \(E_{i,j}\) the matrix in \(\mathbb{F}_{q}^{m\times n}\) that has 1 in position \((i,j)\) and 0 everywhere else. We call these matrices elementary. We now state our main result, which we will prove in Section 4.
**Main Theorem**.: Let \(\mathcal{C}=\langle E_{i_{1},j_{1}},\ldots,E_{i_{k},j_{k}}\rangle\leq\mathbb{ F}_{q}^{m\times n}\) be a code generated by \(k\) elementary matrices. Let \(\varphi:\mathcal{C}\to\mathcal{C}\) be an isometry such that for all \(1\leq h\leq k\) one has \(\varphi(E_{i_{h},j_{h}})=\alpha_{h}E_{i_{h},j_{h}}\) for some \(\alpha_{h}\in\mathbb{F}_{q}^{*}\). Then \(\varphi\) satisfies the Extension Property.
The next example shows that the statement of the Main Theorem fails, if the code is generated by non-elementary, rank-one matrices.
**Example 2.11**.: Let \(q\neq 2\) and let \(\mathcal{C}\in\mathbb{F}_{q}^{2\times 4}\) the code generated by the following elements of rank 1:
\[C_{1}=\begin{pmatrix}1&0&0&0\\ 0&0&0&0\end{pmatrix}, C_{2}=\begin{pmatrix}0&0&0&0\\ 0&1&0&0\end{pmatrix},\] \[C_{3}=\begin{pmatrix}0&0&1&0\\ 0&0&2&0\end{pmatrix}, C_{4}=\begin{pmatrix}0&0&0&1\\ 0&0&0&1\end{pmatrix}, C_{5}=\begin{pmatrix}0&0&0&0\\ 1&1&1&1\end{pmatrix}.\]
Let \(\alpha\in\mathbb{F}_{q}\setminus\{0,1\}\) and let \(\varphi:\mathcal{C}\to\mathcal{C}\) be the linear map given by \(\varphi(C_{i})=C_{i}\) for \(1\leq i\leq 4\) and \(\varphi(C_{5})=\alpha C_{5}\). One can check that \(\varphi\) is an isometry and that it does not have the Extension Property. In fact, \(\varphi\) does not satisfies Property 1, since \(\operatorname{rowsp}(C_{5}-C_{2})\leq\operatorname{rowsp}(\sum_{i=1}^{5}C_{i})\) but \(\operatorname{rowsp}(\varphi(C_{5}-C_{2}))\cap\operatorname{rowsp}(\varphi( \sum_{i=1}^{5}C_{i}))=\{0\}\). Notice that, since \(\varphi\) does not satisfies Property 1, it does not yield a negative answer to Question 2.10. In addition, this example shows that it does not suffice in general to check Property 1 on a system of generators of the code.
## 3 Matrix paths
In this section we establish some preliminary result which we will use in the proof of the Main Theorem. We start by introducing the notion of path in a matrix. From here on, let \(m,n\geq 2\).
**Definition 3.1**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\) be a matrix. A **path**\(\pi\) of length \(k\in\mathbb{N}\) in \(M\) is a finite ordered sequence of positions of nonzero entries \(((i_{1},j_{1}),(i_{2},j_{2}),\ldots(i_{k},j_{k}))\) such that two consecutive elements share either the first or the second component and \((i_{h},j_{h})\neq(i_{s},j_{s})\) for \(h\neq s\).
A path \(\pi\) of length at least 4 is **closed** if the first and the last entries share a component. The **support**\(\operatorname{supp}(\pi)\) of a path \(\pi\) is the set of elements of \(\pi\). A path \(\pi\) is **simple** if no three entries of \(\pi\) share a component.
These definitions are borrowed from graph theory. Indeed, one can naturally associate to every \(M\in\mathbb{F}_{q}^{m\times n}\) a finite graph \(G_{M}=(V_{M},E_{M})\), such that \(V_{M}\) is the set of positions of the nonzero entries of \(M\) and two vertices in \(V_{M}\) are connected by an edge in \(E_{M}\) if and only if the corresponding entries lay on a common line (that is, a common row or column). The notions of path and closed
path from Definition 3.1 correspond to the usual definitions in graph theory. A path is simple if the subgraph of \(G_{M}\) induced by the set of vertices in the path does not contain any clique.
We are mainly interested in closed simple paths. We begin by establishing some of their basic properties. First notice that, up to a cyclic permutation and to reversing the order, every simple path is determined by its support. Moreover, in the next lemma we see that the entries corresponding to the elements of a closed simple path are contained in a square submatrix with exactly two nonzero elements in each row and column.
**Lemma 3.2**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\) be a matrix. The entries of \(M\) corresponding to the elements of a closed simple path are contained in a square submatrix with exactly two nonzero elements in each row and column.
Proof.: Let \(\pi=((i_{1},j_{1}),(i_{2},j_{2}),\ldots(i_{k},j_{k}))\) be a closed path in \(M\). By definition, each line of \(M\) contains at most two nonzero entries whose position belongs to the support of \(\pi\). Suppose by contradiction that there exists a line in \(M\) which contains exactly one nonzero entry in position \((i_{h},j_{h})\). If \(1<h<k\), then the three elements \((i_{h-1},j_{h-1}),(i_{h},j_{h}),(i_{h+1},j_{h+1})\) have either the first or the second coordinate in common. If \(h=1\), the same is true for \((i_{1},j_{1})\), \((i_{2},j_{2}),(i_{k},j_{k})\). If \(h=k\), the same holds for \((i_{1},j_{1}),(i_{k-1},j_{k-1}),(i_{k},j_{k})\). In each case, \(\pi\) is not simple. We conclude that the entries of \(M\) corresponding to the elements of a closed simple path are contained in a square submatrix with exactly two nonzero elements in each row and column. In particular, it must be that \(2m=2n\) and so \(m=n\).
The next proposition ensures that in every matrix with enough nonzero entries there is a closed simple path.
**Proposition 3.3**.: Let \(m,n\geq 2\) and let \(M\in\mathbb{F}_{q}^{m\times n}\) be a matrix with at least \(m+n\) nonzero entries. Then there is a closed simple path in \(M\).
Proof.: We proceed by induction on \(m+n\). If \(m+n=4\) then \(m=n=2\) and all the entries of the matrix are nonzero and so trivially we have a closed simple path.
Suppose now that \(m+n>4\). If there exists a row in which there is at most one nonzero entry, then \(m>2\). By Lemma 3.2 no close simple path can contain the position of that entry. Therefore, one may erase that row from \(M\) and obtain a matrix of size \((m-1)\times n\) which contains the same paths as \(M\). Similarly, one may erase any column of \(M\) which contain a single nonzero entry without affecting the paths contained in \(M\).
By eliminating all rows and columns of \(M\) which contain at most one nonzero entry, we reduce to a matrix which contains at least two nonzero entries in each row and column. Notice that the operation of canceling any rows and columns of \(M\) which contain at most one nonzero entry preserves the property that the matrix has at least as many nonzero entries as the sum of its number of rows and its number of columns. We can now build a closed simple path as follows. Starting from an arbitrary nonzero entry, move along the correspondent row and select another nonzero entry. Then move along the column of last nonzero entry picked and select another nonzero entry. Proceed in this way, alternating between rows and columns. At every step, we find a nonzero entry different from the last one that was picked, since we supposed that in each line we have at least two nonzero entries. Since the number of lines is finite, after \(k\) steps we must choose an entry on a line where there is already one entry which was picked at a step \(h\) with \(1\leq h<k-1\). As soon as that happens, we choose that entry. The positions of the entries that we have picked are the support of a closed simple path in \(M\).
**Remark 3.4**.: The result in Proposition 3.3 is optimal, in the sense that there are matrices in \(\mathbb{F}_{q}^{m\times n}\) with \(m+n-1\) nonzero entries that do not contain any closed simple path. An example is given by
\[M=\begin{pmatrix}1&1&\ldots&1\\ 1&0&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 1&0&\ldots&0\end{pmatrix}\in\mathbb{F}_{q}^{m\times n}.\]
**Definition 3.5**.: Let \(m,n\geq 2\) and \(M\in\mathbb{F}_{q}^{m\times n}\). We say that a matrix \(M^{\prime}\in\mathbb{F}_{q}^{m\times n}\) is a **path-reduction** - or just a **reduction** - of \(M\) if it is obtained from \(M\) by changing to zero a nonzero entry that belong to a closed simple path.
A matrix \(M\in\mathbb{F}_{q}^{m\times n}\)**is path-irreducible** - or just **irreducible** - if does not contain any closed simple path.
Let \(M_{1},\dots,M_{\ell}\in\mathbb{F}_{q}^{m\times n}\). We say that \((M_{1},\dots,M_{\ell})\) is a **path-reduction chain** if for every \(1\leq i<\ell\), \(M_{i+1}\) is a reduction of \(M_{i}\) and \(M_{\ell}\) is irreducible.
Since in a closed simple path there are at least four entries and a matrix may have more than one closed simple path, a matrix may have several path-reductions. We illustrate the situation in the next simple example.
**Example 3.6**.: Consider the matrix \(M\in\mathbb{F}_{2}^{3\times 5}\) given by
\[M=\begin{pmatrix}1&0&0&1&0\\ 0&1&0&1&0\\ 1&1&0&0&0\end{pmatrix}\,.\]
The path \(((1,1),(1,4),(2,4),(2,2),(3,2),(3,1))\) is closed and simple. Replacing any of the ones in \(M\) yields a reduction of \(M\). In particular
\[M^{\prime}=\begin{pmatrix}0&0&0&1&0\\ 0&1&0&1&0\\ 1&1&0&0&0\end{pmatrix}\qquad M^{\prime\prime}=\begin{pmatrix}1&0&0&0&0\\ 0&1&0&1&0\\ 1&1&0&0&0\end{pmatrix}\]
are reductions of \(M\). Notice that both \(M^{\prime}\) and \(M^{\prime\prime}\) are irreducible.
The next corollary is an immediate consequence of Proposition 3.3.
**Corollary 3.7**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\). If \(M\) is irreducible, than \(M\) has at most \(m+n-1\) nonzero entries.
Given a matrix \(M\in\mathbb{F}_{q}^{m\times n}\), it is always possible to find a path-reduction chain starting from \(M\). In fact, one can simply apply consecutive reductions. Since \(M\) has a finite number of nonzero entries, one obtains an irreducible matrix in a finite number of steps.
**Proposition 3.8**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\). Then there exists a path-reduction chain \((M_{1},\dots,M_{\ell})\) such that \(M_{1}=M\).
Notice that one can find more than one path-reduction chain starting with the same matrix \(M\). In Appendix A we prove that each path-reduction chain with \(M_{1}=M\) has the same length.
**Example 3.9**.: Let \(M\in\mathbb{F}_{2}^{3\times 3}\) be the matrix
\[M=\begin{pmatrix}1&1&0\\ 1&1&1\\ 0&1&1\end{pmatrix}\,.\]
Both
\[\left(\begin{pmatrix}1&1&0\\ 1&1&1\\ 0&1&1\end{pmatrix},\begin{pmatrix}0&1&0\\ 1&1&1\\ 0&1&1\end{pmatrix},\begin{pmatrix}0&1&0\\ 1&1&1\\ 0&1&0\end{pmatrix}\right)\,,\]
and
\[\left(\begin{pmatrix}1&1&0\\ 1&1&1\\ 0&1&1\end{pmatrix},\begin{pmatrix}1&1&0\\ 1&0&1\\ 0&1&1\end{pmatrix},\begin{pmatrix}1&1&0\\ 1&0&1\\ 0&1&0\end{pmatrix}\right)\]
are path-reduction chains starting with \(M\).
Proof the Main Theorem
In order to clarify the structure of the proof of the Main Theorem, we enclose part of it in two technical lemmas. The first one shows under which conditions two maps coincide on a closed simple path.
**Lemma 4.1**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\) and let \(((i_{1},j_{1}),\ldots,(i_{k},j_{k}))\) be a closed simple path in \(M\). Let \(\varphi,\psi:\langle E_{i_{1},j_{1}},\ldots,E_{i_{k},j_{k}}\rangle\to\langle E _{i_{1},j_{1}},\ldots,E_{i_{k},j_{k}}\rangle\) two rank-preserving linear maps such that \(\varphi(E_{i_{k},j_{k}})=s_{h}E_{i_{k},j_{k}}\) and \(\psi(E_{i_{k},j_{k}})=t_{h}E_{i_{k},j_{k}}\), where \(s_{1},\ldots,s_{k},t_{1},\ldots,t_{k}\in\mathbb{F}_{q}^{*}\). If \(s_{h}=t_{h}\) for \(1\leq h<k\), then \(s_{k}=t_{k}\).
Proof.: For \(a\in\mathbb{F}_{q}^{*}\), consider the matrix
\[M_{a}=\left(\sum_{h=1}^{k-1}E_{i_{h},j_{h}}\right)+aE_{i_{k},j_{k}}.\]
Since \(((i_{1},j_{1}),\ldots,(i_{k},j_{k}))\) is a closed simple path, by Lemma 3.2, \(k\) is even and the nonzero entries of \(M_{a}\) are contained in a square submatrix of size \(k/2\), whose determinant is a linear function of \(a\). Hence there exists \(\bar{a}\in\mathbb{F}_{q}^{*}\) such that \(\operatorname{rank}(M_{\bar{a}})=k/2-1\) and \(\operatorname{rank}(M_{a})=k/2\) for all \(a\in\mathbb{F}_{q}\setminus\{\bar{a}\}\).
Let \(M\) be the matrix given by
\[M=\left(\sum_{h=1}^{k-1}s_{h}^{-1}E_{i_{h},j_{h}}\right)+\bar{a}s_{k}^{-1}E_{ i_{k},j_{k}}.\]
By assumption \(\operatorname{rank}(\psi(M))=\operatorname{rank}(M)=\operatorname{rank}( \varphi(M))=k/2-1\). Moreover, if \(s_{h}=t_{h}\) for \(1\leq h<k\), then
\[\psi(M)=\left(\sum_{h=1}^{k-1}E_{i_{h},j_{h}}\right)+t_{h}\bar{a}s_{k}^{-1}E_ {i_{k},j_{k}}\,.\]
By the uniqueness of \(\bar{a}\) we conclude that \(\bar{a}=t_{k}\bar{a}s_{k}^{-1}\), hence \(t_{k}=s_{k}\).
The next lemma establish the Extension Property in a special case.
**Lemma 4.2**.: Let \(\varphi:\langle E_{i_{1},j_{1}},\ldots,E_{i_{k},j_{k}}\rangle\to\langle E_{i_ {1},j_{1}},\ldots,E_{i_{k},j_{k}}\rangle\subseteq\mathbb{F}_{q}^{m\times n}\) be a rank-preserving linear map such that \(\varphi(E_{i_{k},j_{h}})=s_{h}E_{i_{k},j_{h}}\), where \(s_{1},\ldots,s_{k}\in\mathbb{F}_{q}\). If the matrix \(M=\sum_{h=1}^{k}E_{i_{h},j_{h}}\) is irreducible, then there are two diagonal invertible matrices \(A\in\mathbb{F}_{q}^{m\times m}\) and \(B\in\mathbb{F}_{q}^{n\times n}\) such that
\[\varphi(C)=ACB\]
for all \(C\in\langle E_{i_{1},j_{1}},\ldots,E_{i_{k},j_{k}}\rangle\).
Proof.: We build the matrices \(A=(a_{i,j})\) and \(B=(b_{i,j})\) step by step. Let \(h=1\) and set \(a_{i_{1},i_{1}}=1\) and \(b_{j_{1},j_{1}}=s_{1}\). This guarantees that \(AE_{i_{1},j_{1}}B=s_{1}E_{i_{1},j_{1}}\). At each subsequent step, choose \(h\in\{1,\ldots,k\}\) among those that have not been previously chosen and such that either \(a_{i_{k},i_{h}}\) or \(b_{i_{k},i_{h}}\) has been assigned a value, if such an \(h\) exists. If \(a_{i_{k},i_{h}}\) was already assigned a value, set \(b_{j_{k},j_{k}}=a_{i_{k},i_{h}}^{-1}s_{h}\). If \(b_{j_{k},j_{h}}\) was already assigned a value, set \(a_{i_{k},i_{h}}=b_{i_{h},j_{h}}^{-1}s_{h}\).
Notice that at most one among \(a_{i_{k},i_{h}}\) and \(b_{j_{k},j_{h}}\) can already have an assigned value. Indeed, assume by contradiction that both \(a_{i_{k},i_{h}}\) and \(b_{j_{k},j_{h}}\) are fixed. Then there exist two simple paths \((\alpha_{1},\ldots,\alpha_{u})\) and \((\beta_{1},\ldots,\beta_{v})\) such that \(\alpha_{1}=\beta_{1}=(i_{1},j_{1})\), \(\alpha_{u}=\beta_{v}=(i_{h},j_{h})\) and \(\alpha_{u-1}\neq\beta_{v-1}\). Let \(z>1\) be the smallest index such that \(\alpha_{z}\neq\beta_{z}\). Let \(N\) be the inclusion-minimal submatrix of \(M\) whose support contains \(\{\alpha_{z-1},\ldots,\alpha_{u},\beta_{z},\ldots,\beta_{v-1}\}\). Let \(d,e\) be such that \(N\) has size \(d\times e\). Notice that \(d,e\geq 2\), since \(\alpha_{z-1},\alpha_{z}\), and \(\alpha_{u}\) are not aligned. If \(\beta_{z}\) and \(\alpha_{z}\) are not aligned, then every line of \(N\) contains at least two nonzero entries. Otherwise, \(\alpha_{z-1},\alpha_{z}\), and \(\beta_{z}\) are aligned, then any line that does not pass through the position \(\alpha_{z-1}\) contains at least two nonzero entries of \(N\). Therefore, in both cases, we have \(2\max\{d,e\}\) nonzero entries in a submatrix of size \(d\times e\). Since \(d+e\leq 2\max\{d,e\}\), by Proposition 3.3 there exists a closed simple path in \(N\), contradicting the irreducibility of \(M\).
If no such \(h\) exists, choose any \(h\) among those that have not been previously chosen and set \(a_{i_{k},i_{h}}=1\) and \(b_{j_{k},j_{h}}=s_{h}\). When all values of \(h\) have been considered, set to \(1\) all the entries on the diagonal of \(A\) and \(B\) which have not been assigned a value yet.
**Remark 4.3**.: The matrix \(M\) in Lemma 4.2 is irreducible, which by Corollary 3.7 implies that \(\dim(\langle E_{i_{1},j_{1}},\ldots,E_{i_{k},j_{k}}\rangle)\leq m+n-1\). Notice that \(m+n-1\) is the number of degree of freedom of the pair of matrices \(A,B\).
We conclude the section with the proof of the Main Theorem.
Proof of the Main Theorem.: If \(m=1\) or \(n=1\), any injective linear map is a linear isometry and the statement holds. Suppose therefore that \(m,n\geq 2\) and let \(M=\sum_{h=1}^{R}E_{i_{h},j_{h}}\). By Proposition 3.8 there exists a path-reduction chain \((M,M_{2},\ldots,M_{\ell})\) with \(M_{\ell}\) irreducible. Consider the subset \(R\subseteq\{1,\ldots,k\}\) such that \(M_{\ell}=\sum_{r\in R}E_{i_{r},j_{r}}\). By Lemma 4.2 there are two invertible matrices \(A,B\) such that
\[AE_{i_{r},j_{r}}B=\varphi(E_{i_{r},j_{r}}),\]
for all \(r\in R\). Following the path-reduction chain and applying \(\ell-1\) times Lemma 4.1, we have that \(AE_{i_{h},j_{h}}B=\varphi(E_{i_{h},j_{h}})\), for \(1\leq h\leq k\). By linearity we conclude that \(\varphi(C)=ACB\) for all \(C\in\mathcal{C}\).
## Appendix A Length of path-reduction chains
In this appendix, we prove that every path-reduction chain of a matrix \(M\in\mathbb{F}_{q}^{m\times n}\) has the same length.
**Remark A.1**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\) and let \(\sigma_{1}=((i_{1},j_{1}),\ldots,(i_{k},j_{k}))\) and \(\sigma_{2}=((i^{\prime}_{1},j^{\prime}_{1}),\ldots,(i^{\prime}_{h},j^{\prime }_{h}))\) be two closed simple paths. Notice that if \(\operatorname{supp}(\sigma_{1})\neq\operatorname{supp}(\sigma_{2})\), then \(\operatorname{supp}(\sigma_{1})\nsubseteq\operatorname{supp}(\sigma_{2})\) and vice versa.
In the next lemma, we prove that if \(M\) contains two distinct closed single paths, than a path-reduction chain of \(M\) has length at least \(3\).
**Lemma A.2**.: Let \(M=(m_{ij})\in\mathbb{F}_{q}^{m\times n}\), let \(\sigma_{1}=((i_{1},j_{1}),\ldots,(i_{k},j_{k}))\) and \(\sigma_{2}=((i^{\prime}_{1},j^{\prime}_{1}),\ldots,(i^{\prime}_{h},j^{\prime }_{h}))\) be two closed simple paths such that \(\operatorname{supp}(\sigma_{1})\neq\operatorname{supp}(\sigma_{2})\). If \((i_{1},j_{1})=(i^{\prime}_{1},j^{\prime}_{1})\), then for each \((i_{s},j_{s})\in\operatorname{supp}(\sigma_{1})\setminus\operatorname{supp}( \sigma_{2})\) there is a closed simple path in \(M-m_{i_{1},j_{1}}E_{i_{1},j_{1}}\) that contains \((i_{s},j_{s})\).
Proof.: Up to reversing the order of \(\sigma_{2}\) and to a transposition, we may suppose without loss of generality that \(j^{\prime}_{1}=j^{\prime}_{2}=j_{k}=j_{1}\). As a consequence, also \(i_{1}=i_{2}=i^{\prime}_{h}=i^{\prime}_{1}\). Consider the list of positions
\[\gamma=(\gamma_{1},\ldots,\gamma_{h+k-2})=((i_{2},j_{2}),\ldots,(i_{k},j_{k}), (i^{\prime}_{2},j^{\prime}_{2}),\ldots,(i^{\prime}_{h},j^{\prime}_{h})).\]
Notice that \(\gamma\) is not always a path, since it can contain more than two entries with the same first or second coordinate, as well as repeated entries. Fix an \(s\) such that \((i_{s},j_{s})\in\operatorname{supp}(\sigma_{1})\setminus\operatorname{supp}( \sigma_{2})\) and let \(\gamma_{x}=(i_{s},j_{s})\). We now recursively build a finite sequence of simple paths \(\pi_{n}\), whose support is contained in that of \(\gamma\) and which start with \(\gamma_{x}\). Let \(\pi_{1}=(\gamma_{x})\). Suppose that we have constructed \(\pi_{n-1}=(p_{1},\ldots,p_{\ell})\) with \(p_{1}=\gamma_{x}\) and \(p_{\ell}=\gamma_{y}\), with \(y=x+n-2\) mod. \(h+k-2\) and \(\ell\geq 2\). Let \(z=y+1\) mod. \(h+k-2\) and define \(\pi_{n}\) as follows:
* If no two entries of \(\pi_{n-1}\) have either the first or the second coordinate in common with \(\gamma_{z}\), then let \(\pi_{n}=(p_{1},\ldots,p_{\ell},\gamma_{z})\).
* If there exists \(1\leq r<t\leq\ell\) such that \(p_{r},p_{t}\) and \(\gamma_{z}\) share either the first or the second component, then let \(\pi_{n}=(p_{1},\ldots,p_{r},\gamma_{z})\) if \(t=r+1\). Notice that if \(t\neq r+1\), then \(\pi_{n-1}\) is a closed simple path.
For \(n\geq 2\), \(\pi_{n}\) is a simple path of length at least \(2\). If for some \(n\) we find a closed simple path, then we are done. Else, \(\pi_{h+k-2}\) is a closed simple path, since \(\gamma_{x-1}\) and \(\gamma_{x}\) lay on a common line and \(\gamma_{x-2}\) and \(\gamma_{x+1}\) do not.
The next lemma shows that the length of a path-reduction chain is independent of the order of the reductions.
**Lemma A.3**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\) and let \(M,M_{2},\ldots,M_{k+1}\) be a path-reduction chain for \(M\). Let \(\alpha_{1},\ldots,\alpha_{k}\) be the ordered list of positions of the entries that we set to zero during the path-reduction chain. Any permutation of the sequence \(\alpha_{1},\ldots,\alpha_{k}\) still yields a path-reduction chain for \(M\).
Proof.: Since the group of permutation of \(k\) elements is generated by the \(k-1\) transpositions \((1,2),(2,3),\ldots,(k-1,k)\), it suffices to prove that setting to zero the entries in position
\[\alpha_{1},\ldots,\alpha_{i-2},\alpha_{i},\alpha_{i-1},\alpha_{i+1},\ldots, \alpha_{k}\]
in the given order gives a path-reduction chain for \(M\), for \(i=2,\ldots,k\). This corresponds to the sequence of matrices
\[M_{1},M_{2},\ldots,M_{i-1},\bar{M}_{i},M_{i+1},M_{i+2},\ldots,M_{k+1}\]
where we let \(M_{1}=M\). By assumption, \(M_{k+1}\) is irreducible and \(M_{j}\) is a reduction of \(M_{j-1}\) for \(j=2,\ldots,i-1,i+2,\ldots,k\).
The matrix \(\bar{M}_{i}\) is obtained from \(M_{i-1}\) by setting to zero the entry in position \(\alpha_{i}\). Since \(\alpha_{i}\) belongs to a closed simple path \(\pi\) in \(M_{i}\) and every nonzero entry in \(M_{i}\) is also a nonzero entry in \(M_{i-1}\), then \(\pi\) is also a closed simple path in \(M_{i-1}\). Therefore, \(\bar{M}_{i}\) is a reduction of \(M_{i-1}\). In order to prove that \(M_{i+1}\) is a reduction of \(\bar{M}_{i}\), we need to show that there is a closed simple path in \(\bar{M}_{i}\) which contains \(\alpha_{i-1}\). Notice that \(\bar{M}_{i}\) is equal to \(M_{i}\), except for the entries in position \(\alpha_{i-1}\) and \(\alpha_{i}\). By assumption, there are closed simple paths \(\sigma_{1}\) and \(\sigma_{2}\) such that \(\sigma_{1}\) contains \(\alpha_{i-1}\) and \(\sigma_{2}\) contains \(\alpha_{i}\), but not \(\alpha_{i-1}\). If \(\sigma_{1}\) does not contain \(\alpha_{i}\), then it is a closed simple path in \(\bar{M}_{i}\) which contains \(\alpha_{i-1}\). If instead \(\sigma_{1}\) contains \(\alpha_{i}\), then by Lemma A.2 there is a closed simple path in \(M_{i}\) which contains \(\alpha_{i-1}\) but not \(\alpha_{i}\). This gives a closed simple path in \(\bar{M}_{i}\) which contains \(\alpha_{i-1}\).
We are now ready to prove that every path reduction chain of a given matrix has the same length.
**Theorem A.4**.: Let \(M\in\mathbb{F}_{q}^{m\times n}\) be a matrix. Every path-reduction chain of \(M\) has the same length.
Proof.: We proceed by induction on the maximum length \(\ell\) of a path-reduction chain of \(M\). Notice that \(\ell\geq 1\) and equality holds if and only if \(M\) is irreducible. If \(\ell=2\), then \(M\) needs to have at least one closed simple path. Moreover, there is an \(\alpha\) in the path such every closed simple path in \(M\) contains \(\alpha\). If \(M\) contains two distinct closed simple paths through \(\alpha\), then by Lemma A.2 it also contains a closed simple path that does not pass through \(\alpha\). It follows that \(M\) contains exactly one closed simple path and every path-reduction chain has length two and is obtained by replacing with zero one of the entries of \(M\) in one of the positions on the closed simple path.
Let \(M,M_{2},\ldots,M_{\ell}\) and \(M,M^{\prime}_{2},\ldots,M^{\prime}_{k}\) be two path-reduction chains for \(M\), \(\ell\geq k\). Let \(\alpha_{1},\ldots,\alpha_{k-1}\) and \(\beta_{1},\ldots,\beta_{\ell}\) be the positions of the entries of \(M\) that we replace with zero to obtain the path-reduction chains \(M,M^{\prime}_{2},\ldots,M^{\prime}_{k}\) and \(M,M_{2},\ldots,M_{\ell}\), respectively. Notice that \(M_{2},\ldots,M_{\ell}\) is a path-reduction chain for \(M_{2}\) and, by the induction hypothesis, every path reduction chain for \(M_{2}\) has length \(\ell-1\). Starting from \(M_{2}\), we construct a path-reduction chain \(M_{2},\bar{M}_{3},\ldots,\bar{M}_{\ell}\) as follows. At each step \(i=1,\ldots,k-1\), if there is a closed simple path that contains \(\alpha_{i}\), we replace the entry in position \(\alpha_{i}\) by zero. We claim that we delete at most \(k-2\) entries of \(M_{2}\). In fact, if setting to zero the entries in position \(\beta_{1},\alpha_{1},\ldots,\alpha_{k-1}\) in the prescribed order yields a path-reduction chain of \(M\), by Lemma A.3 so does setting to zero the entries in position \(\alpha_{1},\ldots,\alpha_{k-1},\beta_{1}\). This contradicts the assumption that \(M,M^{\prime}_{2},\ldots,M^{\prime}_{k}\) is a path-reduction chain. So we have obtained a path-reduction chain for \(M_{2}\) of length \(\ell-1\leq k-1\). It follows that \(\ell=k\).
|
2308.02199 | A Survey of Spanish Clinical Language Models | This survey focuses in encoder Language Models for solving tasks in the
clinical domain in the Spanish language. We review the contributions of 17
corpora focused mainly in clinical tasks, then list the most relevant Spanish
Language Models and Spanish Clinical Language models. We perform a thorough
comparison of these models by benchmarking them over a curated subset of the
available corpora, in order to find the best-performing ones; in total more
than 3000 models were fine-tuned for this study. All the tested corpora and the
best models are made publically available in an accessible way, so that the
results can be reproduced by independent teams or challenged in the future when
new Spanish Clinical Language models are created. | Guillem García Subies, Álvaro Barbero Jiménez, Paloma Martínez Fernández | 2023-08-04T08:33:07Z | http://arxiv.org/abs/2308.02199v1 | # A Survey of Spanish Clinical Language Models
###### Abstract
This survey focuses in encoder Language Models for solving tasks in the clinical domain in the Spanish language. We review the contributions of 17 corpora focused mainly in clinical tasks, then list the most relevant Spanish Language Models and Spanish Clinical Language models. We perform a thorough comparison of these models by benchmarking them over a curated subset of the available corpora, in order to find the best-performing ones; in total more than 3000 models were fine-tuned for this study. All the tested corpora and the best models are made publically available in an accessible way, so that the results can be reproduced by independent teams or challenged in the future when new Spanish Clinical Language models are created.
## 1 Introduction
The enormous wealth of data present in the Electronic Health Records (EHR) opens up multiple possibilities for its use not only in biomedical research (secondary use) but also in clinical practice (primary use). Transforming unstructured data (as clinical notes) into structured data of EHR offers researchers and healthcare professionals the possibility of working with clinical data of higher quality and accuracy because redundancy of information has been eliminated, data is validated, and also integrated in databases, thus allowing accessing using structured queries. The most frequent uses of EHR in research, in addition to coding systems for billing purposes (e.g., ICD-10 coding of diagnoses in discharge reports), range from exploring new diagnostic and therapeutic solutions, to evaluating patient outcomes, identifying populations at risk, developing databases and repositories, among many others.
Processing data coming from EHR has been one the most important challenges in natural language processing (NLP). The application of NLP techniques in analysing information contained in the unstructured fields (free text) of the EHR makes it possible to extract the relevant information (expressed in concepts, relationships and events) and its transformation into a structured format. In this way, the structured information obtained can be stored in repositories integrated within the EHR itself, which would open up the possibility for its automatic exploitation, facilitating the development of decision support tools, clinical practice guidelines, support tools for the development of epidemiological studies, among many other applications.
Apart from the inherent complexities of understanding text, clinical text has additional peculiarities. While documents published in scientific journals and books have undergone careful editing, clinical narrative texts are usually hastily written in a telegraphic style, with numerous ellipsis and spelling errors and with incorrect syntax in many cases, abuse of negations, jargon, abbreviations among others. For instance, incomplete sentences (predominance of phrases instead of complete sentences), new terms arising every day, others due to misspellings or even new terms that medical professionals invent to generate their notes. From the semantic point of view, there are words with uses in medical language that differ from the language of general use. From the contextual point of view, the abuse of ellipsis makes interpretation difficult (e.g., the paragraph: "Complaints of chest pain, increased frequency, especially with exertion.
Usually associated with short respiration and nausea and vomiting" represents the entity "chest pain due to chest angina").
### Spanish language in NLP
With \(559\) million speakers, Spanish is the fourth most spoken language in the world [1]. However, and despite its relevance, there are few relevant NLP resources in Spanish. For instance, at the date of the writing of this article there are \(16520\) English models and \(3466\) corpora in the Hugging Face Hub, but only \(1379\) and \(351\) respectively for Spanish [2] (a lot of those being multilingual models). This means that there are \(11.35\) English models and 2.38 corpora for every million English speakers while there are only \(2.30\) and \(0.59\) respectively for Spanish. Another example can be the French language, with 309 million speakers, which has \(4.64\) and \(1.24\) French corpora for every million speakers.
Compared to English, Spanish is a highly inflectional language with a richer morphology; morphemes signify many syntactic, semantic, and grammatical aspects of words (such as gender, number, etc.). From a syntactic perspective, Spanish texts feature more subordinate clauses and lengthy sentences with a high degree of word order flexibility; for example, the subject is not restricted to appearing solely before the verb in a sentence. Spanish clinical texts have particularities that are explained below.
There is clearly a lack of resources for non-English language models, which has become even more pressing since the revolution brought about by BERT [3] that has culminated with ChatGPT and GPT-4 [4, 5, 6]. This lack is also quantified with the big amount of biomedical and clinical language models in other languages. Just to name a few, in English we can find BioBERT [7], PubMedBERT [8], BioGPT [9] or ClinicalGPT [10] and DrBERT [11] or CamemBERT-bio [12] for French.
Therefore, a significant amount of research efforts are still required in the development of corpora and language models for the Spanish language, even more in the clinical domain [13].
#### 1.1.1 Spanish language in the Clinical domain
Compared to the general Spanish language, in the Clinical domain there are more anglicisations as a result of the translation of English words used in biomedicine. Some of them are freely altered, while others are perfect replicas of the originals; for example, "adipocytokine" is also rendered as "adipocytocin", "adipocitoquina", and "adipocittokina". Additionally, the Spanish language uses accent marks that do not exist in English language, and whether or not these accent marks are preferred results in lexical variants. For example, the word "ebola" may become "ebola" or "ebola."
When translated, adjectives ending in "-al" sometimes retain their original form and other times adopt Spanish morphological conventions. For instance, "retinal" may become "retinal" or "retiniano". Greco-latin prefixes exhibit variations such as "psi-" ("psiqiuatra" vs. "siquiatra"). In contrast to Spanish, where there are many variations, English uses hyphens between words more consistently. For example, "beta-alanine" is converted into "beta-alanine" or "beta alanina" or "alanine beta". The names of active ingredients are sometimes kept the same as they are in English and other times they are changed. For example, the name "acetaminophen" is changed to "acetaminophen". Some terminology that refer to gender (male/female) are ambiguous ("el enzima"/"la enzima") or both are acceptable, for example, "el triotides"(male)/ "la triotides" (female) for the hormone "thyroid".
Abbreviations are frequently used in clinical notes, and typically Spanish and English abbreviations coexist. The term "PSA" stands for "prostate-specific antigen," which is preferable to the term "APE" (antigen prostatico especifico). However, both languages use polysemic abbreviations frequently. Sentences in the two languages are syntactically highly similar (short sentences or phrases, abuse of negation particles and unconventional abbreviations, misspellings, speculation, and grammatical sentence anomalies, among other things). In conclusion, due to term replication or partial adaptation, there are more lexical variants of medical terms in Spanish than in English. Because of these factors, studying these texts requires more resources and normalization tools.
### Purpuse of the study
The main objective of this study is to gather all the resources available to deal with clinical textual data in Spanish and benchmark the best models in order to obtain a leaderboard. Thus, we fine-tune more than \(3000\) combinations for the best available models on \(12\) public corpora to achieve our goal.
In the rest of this paper we present a brief summary on the evolution of NLP in the latest years in Section 2. In section 3 we make an overview of some of the latest corpora released in the clinical domain for the Spanish language. Then, in Section 4, we list the best Spanish language models (both general and domain specific). In Section 5 we explain what
corpora, models and metrics we used for the benchmark. Finally, in Section 6, we present a detailed benchmark of a selection of the models and corpora in order to get insights of the situation of this research field.
**Contributions** We make the following contributions:
* We create the **first public benchmark for clinical Spanish language models**.
* We prove that there is a lack of good quality clinical Spanish language models.
* We make a survey with the most relevant corpora and models released in the recent years.
## 2 Previous work
This last decade has experienced an exponential growth in NLP. We went from word embedding models like Word2vec [14] and GloVe [15], to gigantic language models with billions of parameters like PaLM [16].
This growth can be explained with the, also exponential, development of specific hardware dedicated to speed up the matrix calculations made in neural networks like NVIDIA's H100 GPUs [17], Google's TPUs [18] or AWS's Trainium [19]. With such an evolution in hardware, the training of neural architectures for the processing of sequences such as Recurrent Neural Networks (RNNs) [20; 21], bidirectional RRNs [22] and Long Short-term Memory (LSTM) [23] (and modifications like GRUs [24]) became practical.
These networks were used along word embeddings in order to train models for specific downstream tasks such as translation, token classification (or NER), text generation, text classification, etc. Every vector model encoded just a word and did not take into account context. Then, some models started to improve the encoding with subword information like FastText [25]. The biggest breakthrough here came in 2017 with the Transformer architecture and the contextualized embeddings [26]. After that a lot of new architectures appeared such as ELMo [27] and FLAIR [28].
However, the one that definitely changed everything after the Transformer was BERT [3] in 2018, a bidirectional encoder model based in the transformer that obtained much better results than the state of the art by that time. After that, hundreds of BERT-inspired models (like RoBERTa [29] or DeBERTa [30], just to name a few) were published [31]. Huggingface's Transformers library [32] and subsequently their Datasets library [33] and Huggingface Hub [34] helped to democratize these new architectures.
In parallel other models based in the Transformer were being created like GPT [35] which was, opposite to BERT, just the generative part of the Transformers. After that, the architecture of GPT-2 [36] stacked more and more transformers layers one on the top of another. GPT-3 [4] proved a really big change also, although the main difference with the architecture of their predecessors was just the size. However, it obtained outstanding results even in few-show scenarios without the need of fine-tuning in demanding benchmarks like WinoGrande [37]. This set a precedent, after all the decade of Open Sourced models, GPT-3 would not be made available to the public, companies understood their real potential.
After GPT-3, in 2022 ChatGPT was trained using Reinforcement Learning from Human Feedback (RLHF) [38] and suddenly, the chatbot revolution began. Companies announced their models but did not release them like GPT-4 [5] or LLaMA [39].
In our specific context of domain specific clinical models, they tried to follow the main enhancements of this ever-growing trend. For the clinical domain in Spanish language, most of the improvements were made in shared tasks such as IberLEF [40; 41] and BioNLP open shared tasks, [42; 43].
## 3 Corpora
NLP requires a great amount of data in order to obtain good results [44], so it its very important to gather as much data as possible. When talking about NLP data we can classify it into two big groups depending whether the data is structured (labeled) or not.
### Unannotated corpora
Corpora are very valuable in NLP because they are often very big and can be used to pre-train large language models without the need of labels. Although most of the times these corpora can be crawled online [45], this is not always possible in the clinical domain because data such as Electronic Health Records are not publicly available and contain personal data.
Note that this is not the case with biomedical data where PubMed papers and public clinical trials can be used [46].
In this section we will review the most relevant corpora in Spanish for the clinical domain.
#### 3.1.1 Spaccc
The Spanish Clinical Case Corpus (SPACCC) [47] is a collection of \(1000\) clinical cases from SciELO. A clinician classified the original dump of clinical cases into realistic clinical cases and not suitable for this specific corpus.
The corpus has \(382470\) tokens, a cc-by-4.0 licence and is publicly available 1.
Footnote 1: [https://zenodo.org/record/2560316](https://zenodo.org/record/2560316)
#### 3.1.2 European Clinical Case Corpus
European Clinical Case Corpus (E3C) [48] is a multilingual collection of clinical cases. The corpus contains clinical cases in English, French, Basque and Spanish.
We will focus in the Spanish part, which is a collection of other well known corpora such as SPACCC, NUBes (detailed down in Section 3.2.13), IULA+ (detailed down in Section 3.2.9) and a small amount of newly crawled SciELO clinical cases.
Although the corpus is mainly unlabeled, it has some labeled sub-sets. Specifically, the corpus has 3 different parts:
* Gold Standard: Named entities of time and factuality in THYME standard and medical entities in SNOMED-CT and ICD-10 standards. Annotated manually. This corpus has \(20k\) tokens.
* Silver Standard: The same as above but with a bigger size and annotated automatically. This corpus has \(70k\) tokens.
* Unnanotated data: All the data collected, about \(1M\) tokens.
The corpus has a cc-by-nc-4.0 licence and is publicly available 2.
Footnote 2: [https://live.european-language-grid.eu/catalogue/corpus/7618](https://live.european-language-grid.eu/catalogue/corpus/7618)
### Labeled data
Labeled corpora are used to train and evaluate the models in downstream tasks such as text classification or NER.
Given the nature of the clinical data, there are few corpora and most of them have a relatively small size compared to corpora in other domains. Also, almost all of these corpora come from shared tasks.
A summary of all the corpora is presented in Table 2.
#### 3.2.1 Barr2
Biomedical Abbreviation Recognition and Resolution 2nd Edition (BARR2) [49] is a NER corpus of clinical cases from SciELO where the entities are abbreviations (Short Form, SF) and their respective definition (Long Form, LF).
The corpus consists of two sub-tasks:
* Detection of pairs SF/LF mentions in text. This is some kind of multi label classification given that there is no need to classify the tokens individually.
* Detection of SF mentions and their characters offset (NER) and generating their corresponding LF.
While this corpus has been annotated by different experts, there are no annotation guidelines nor consistency analysis so it may not be as curated as a proper Gold Standard corpus.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Corpus** & **Tokens** \\ \hline E3C & 1M \\ SPACCC & 0.38M \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Spanish Clinical Corpora
The corpus has a size of \(318\), \(146\) and \(220\) train, validation and test samples respectively. The test split comes from a bigger background unnanotated split of \(2879\) unnanotated samples. The corpus has a cc-by-4.0 licence and is publicly available 3.
Footnote 3: [https://temu.bsc.es/BARR2/datasets.html](https://temu.bsc.es/BARR2/datasets.html)
#### 3.2.2 Cantemist
CANTEMIST (CANcer TExt Mining Shared Task - tumor named entity recognition) [50] corpus is a Named Entity Recognition (NER) corpus that focuses in tumor morphology concepts in clinical texts written in Spanish. Specifically. CANTEMIST is focused in detecting mentions of tumor morphology terms and link them to their corresponding eCIE-O-3.1 codes (the Spanish branch of International Classification of Diseases for Oncology). The corpus includes diverse clinical cases covering different cancer types, both common and rare.
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus consists of three independent sub-tasks:
* CANTEMIST-NER track: In this track, the goal is to find tumor morphology mentions in medical documents.
* CANTEMIST-NORM track: This track focuses on clinical concept normalization or named entity normalization. The entities are tumor morphology entity mentions along with their corresponding eCIE-O-3.1 codes.
* CANTEMIST-CODING track: The objective of this track is to return a ranked list of ICD-O-3 codes for each document.
The corpus has a size of \(501\), \(500\) and \(300\) train, validation and test samples respectively. The test split comes from a bigger background unnanotated split of \(4932\) unnanotated samples. The corpus has a cc-by-4.0 licence and is publicly available 4.
Footnote 4: [https://zenodo.org/record/3978041](https://zenodo.org/record/3978041)
#### 3.2.3 Cares
CARES (Corpus of Anonymised Radiological Evidences in Spanish) [51] is a corpus composed of different radiology reports from several areas. These reports are annotated to hierarchically classify their ICD-10 codes. Specifically, each report has an annotation for the areas of the body that the report is talking about, their corresponding IDC-10 chapters, the ICD-10 codes and the sub-codes.
While this corpus has been annotated by different experts, there are no annotation guidelines nor consistency analysis so it may not be as curated as a proper Gold Standard corpus.
The corpus has a size of \(2250\), and \(966\) train and test samples respectively. The corpus has a afl-3.0 licence and is publically available 5.
Footnote 5: [https://huggingface.co/datasets/chizhikchi/CARES](https://huggingface.co/datasets/chizhikchi/CARES)
#### 3.2.4 Chilean Waiting List Corpus
The Chilean Waiting List Corpus (CWLC) [52] is a NER clinical corpus made up of anonymized referrals from waiting lists in Chilean public hospitals. 48% of the entities in the corpus are nested and the main entities are "Finding", "Procedure", "Family member", "Disease", "Body Part", "Medication" and "Abbreviation".
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(9000\) train samples. The corpus has a cc-by-nc-sa-4.0 licence and is publically available 6.
Footnote 6: [https://zenodo.org/record/7555181](https://zenodo.org/record/7555181)
#### 3.2.5 CodiEsp
CodiEsp (Clinical Case Coding in Spanish Shared Task) [53] is a corpus of clinical cases from a variety of medical topics, including oncology, urology, cardiology, pneumology or infectious diseases. The corpus has 3 different tasks:
* CodiEsp-D: Multi-label classification of ICD10-CM (CIE10 Diagnostic in Spanish).
* CodiEsp-P: Multi-label classification of ICD10-CM (CIE10 Procedimiento in Spanish).
* CodiEsp-X: NER task with the purpose of marking those segments in the text where evidences for the CIE10 labels can be found.
The corpus provides a manually annotated Gold Standard generated by a practicing physician and a clinical documentalist. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(500\), \(250\) and \(250\) train, development and test samples respectively. The test split comes from a bigger background unannotated split of \(2751\) unnanotated samples. The corpus has a cc-by-4.0 licence and is publically available 7.
Footnote 7: [https://zenodo.org/record/3837305](https://zenodo.org/record/3837305)
#### 3.2.6 Ct-Ebm-Sp
CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) [54] is a NER corpus made up of PubMed and SciELO abstracts and clinical trials announcements published in the European Clinical Trials Register and Repositorio Espanol de Estudios Clinicos. Although these are not strictly clinical sources, abstracts are rather general and simple descriptions and, given the size of the corpus, it can be very useful to enhance the generalization of the models.
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(12600\), \(4510\) and \(4550\) train, development and test samples respectively. The corpus has a cc-by-4.0 licence and is publically available 8.
Footnote 8: [https://huggingface.co/datasets/lcampillos/ctebmsp](https://huggingface.co/datasets/lcampillos/ctebmsp)
#### 3.2.7 DisTEMIST
The DisTEMIST (Disease Text Mining Shared Task) corpus [55] is a collection of clinical cases written in Spanish, covering various medical specialties. The corpus is annotated with disease mentions (i.e. NER corpus) that have been standardized using Snomed-CT terminology.
An expert filtered the gathered data so only relevant documents were included. After that, the data was preprocessed to delete redundant information such as figure references or citations. The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus consists of two independent sub-tasks:
* DISTEMIST-entities subtrack: NER to find diseases, without classifying them into any kind of taxonomy.
* DISTEMIST-linking subtrack: NER to find disease mentions and assign a Snomed-CT term.
There is also a multilingual version of the corpus containing data in English, Portuguese, Catalan, Italian, French, and Romanian.
The corpus has a size of \(750\) and \(250\) train and test samples respectively. The test split comes from a bigger background unnanotated split of \(3000\) unnanotated samples. The corpus has a cc-by-4.0 licence and is publically available 9.
Footnote 9: [https://zenodo.org/record/7614764](https://zenodo.org/record/7614764)
#### 3.2.8 eHealthKD
eHealthKD (eHealth Knowledge Discovery) [56] is a cross-domain NER corpus with samples gathered from MedlinePlus and Wikinews. These sources create a very heterogeneous corpus that can be very useful to prove the generalization of the models. The corpus has 2 different tasks:
* Subtask A: NER to recognize concepts, actions, predicates and references.
* Subtask B: Relation extraction from the previous NER task.
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(1500\), \(50\) and \(150\) train, development and test samples respectively. There are also \(50\) and \(150\) development and test samples in English, however they are not taken into account here because they are out of scope. The corpus has a cc-by-nc-sa-4.0 licence and is publically available 10.
Footnote 10: [https://github.com/ehealthkd/corpora/tree/master](https://github.com/ehealthkd/corpora/tree/master)
#### 3.2.9 Iula-Scrc
IULA Spanish Clinical Record Corpus (SCRC) [57] is a NER corpus with negation entities extracted from various anonymized clinical records. Besides IULA-SCRC, there is also another version of the corpus annotated with NUBes' annotation guidelines called IULA+.
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(3194\) train samples. The corpus has a cc-by-sa-3.0 licence and is publically available 11.
Footnote 11: [https://github.com/Vicomtech/NUBes-negation-uncertainty-biomedical-corpus](https://github.com/Vicomtech/NUBes-negation-uncertainty-biomedical-corpus)
#### 3.2.10 IxaMed-GS
IxaMed-GS [58] is a NER corpus composed of real (anonymized) Electronic Health Records. The entities annotated are related to diseases and drugs and there are also annotations of relationships between entities indicating adverse drug reaction events.
The corpus provides a manually annotated Gold Standard generated by experts in pharmacology and pharmacovigilance. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(1875\), \(1097\) and \(995\) train, development and test samples respectively. The corpus is not publically available and it can only be obtained through an agreement with its authors.
#### 3.2.11 LivingNER
LivingNER [59] is a corpus focused on species, pathogens and food in clinical case reports.
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus consists of three independent sub-tasks:
* Species NER): NER to identify either a species of a disease or human entities.
* Species Norm): NER to identify species of a disease and classifying them into NCBITax taxonomy.
* Clinical IMPACT): Multi-label classification. Classes: high impact (or not) and NCBI IDs.
The corpus has a size of \(1000\), \(500\) and \(485\) train, development and test samples respectively. The test split comes from a bigger background unnanotated split of \(~{}13000\) unnanotated samples. The corpus has a cc-by-4.0 licence and is publically available 12.
Footnote 12: [https://zenodo.org/record/7614764](https://zenodo.org/record/7614764)
Footnote 13: [https://huggingface.co/datasets/bigbio/meddocan](https://huggingface.co/datasets/bigbio/meddocan)
#### 3.2.12 MeddOCAN
MEDDOCAN (Medical Document Anonymization Track) [60] is a corpus of clinical cases sampled from the SPACCC corpus and enriched with synthetic personal information. The NER entities of the corpus range from the personal data of a patient to even the personal data of the doctors.
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(500\), \(250\) and \(250\) train, development and test samples respectively. The test split comes from a bigger background unnanotated split of \(2000\) unnanotated samples. The corpus has a cc-by-4.0 licence and is publically available 13.
#### 3.2.13 NUbes
The NUBes (Negation and Uncertainty annotations in Biomedical texts in Spanish) corpus [61] is a collection of \(29,682\) sentences from anonymised documents from a Spanish private hospital annotated with negation and uncertainty. The sentences come from documents from the following kinds: Chief Complaint, Present Illness, Physical Examination, Diagnostic Tests, Surgical History, Progress Notes, and Therapeutic Recommendations.
This is the biggest negation corpus for Spanish clinical domain. Samples have been annotated for a NER task with all the negations and the uncertainty that is reflected in the text. The corpus provides a manually annotated Gold Standard generated by expert linguists. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has no explicit testing splits so we can consider that the train split has \(29,682\) samples. The corpus has a cc-by-sa-3.0 licence and is publically available 14.
Footnote 14: [https://github.com/Vicomtech/NUBes-negation-uncertainty-biomedical-corpus](https://github.com/Vicomtech/NUBes-negation-uncertainty-biomedical-corpus)
#### 3.2.14 PharmaCoNER
The PharmaCoNER (Pharmacological Substances, Compounds and proteins and Named Entity Recognition) corpus [62] is a random sample of the SPACCC corpus that has been annotated manually for two different tasks.
* NER: The entities are drugs and chemicals that appear in the text. Depending on their nature, they can be "normalizable" using SNOMED-CT, "non normalizable", "proteins" or "unclear".
* Concept indexing: Task to identify all the SNOMED-CT IDs corresponding to each of the entities from the first task.
The corpus provides a manually annotated Gold Standard generated by medicinal chemistry experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(500\), \(250\) and \(250\) train, development and test samples respectively. The test split comes from a bigger background unnanotated split of \(2751\) unnanotated samples. The corpus has a cc-by-4.0 licence and is publically available 15.
Footnote 15: [https://zenodo.org/record/4270158](https://zenodo.org/record/4270158)
#### 3.2.15 SocialDisNER
SocialDisNER (Mining social media content for disease mentions) [63] is a collection of health-related tweets focused on disease mentions. Although it is not strictly a clinical corpus, its unique nature of informal language talking about health can be very useful to check the generalization of the models.
The corpus provides a manually annotated Gold Standard generated by experts. There are also annotation guidelines and consistency analysis to ensure the quality of the corpus.
The corpus has a size of \(6000\), \(2000\) and \(2000\) train, development and test samples respectively. The corpus has a cc-by-4.0 licence and is publically available 16.
Footnote 16: [https://zenodo.org/record/6803567](https://zenodo.org/record/6803567)
## 4 Models
Despite some recent attempts on creating static word embeddings like Spanish Clinical Embeddings [64], the Word embeddings for the Spanish clinical language [65] and even contextual embeddings like the Spanish Clinical Flair [66] we will not emphasize on those because it has been proven that transformer-based language models outperform them in most of the situations [67].
However, given that there are not that many Spanish clinical language models, we will also list general models with good results in the Spanish language.
### Beto
BETO [68] was the first Spanish language model with BERT architecture. The model was trained with the Spanish Unnanotated Corpora (SUC) [69] (4GB corpus) and has 110M parameters (the same as BERT). Although it was released in \(2020\), the model is still useful as a baseline given its simplicity and stability.
The model has cc-by-4.0 licence and is publically available 17.
Footnote 17: [https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased)
### MarIA
The next general Spanish language model is MarIA [70]. Released in \(2021\), MarIA is based in the RoBERTa [29] architecture. The base version of the model has 117M parameters and the large one has 355M, with the large version having the exact same architecture and size than the original RoBERTa-large. The corpus used to train this model is a web crawl performed by the Spanish National Library with a size of 570GB once clean.
Although both models have good results, this paper will focus on the large model given that the results are better and the model is still small enough to fit in a single GPU.
The model has apache-2.0 licence and is publicly available 18.
Footnote 18: [https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne)
### RigoBERTa
RigoBERTa is a DeBERTa-based model released in \(2022\) by the Instituto de Ingenieria del Conocimiento (IIC) [71]. The model has 134M parameters and was trained on a mix of various crawl sources (such as OSCAR [45] and mC4 [72]), Spanish news articles and the Spanish Unmanotated Corpora (SUC) [69].
Later, in \(2023\), the second version of the model has been released, RigoBERTa 2. This is the model that will be tested in the benchmark.
The model has all rights reserved and is not publically available, although it can be obtained through its autors.
### XLM-RoBERTa
Before jumping into the specific clinical models, it is important to note that there are some very good performing multilingual models that we should take into account given the small amount of purely Spanish language models publicly available. Otherwise we could leave very good models out [73].
XLM-RoBERTa [74] is a multilingual version of RoBERTa trained in 2019. The model was trained with the CC100 corpus [74]. This corpus has 2.5TB in size and has data of \(100\) different languages.
There are several models trained with this architecture and corpus: base, large, xl and xxl sizes with 250M, 560M, 3.5B and 10.7B parameters respectively. For this paper we will stick with the large model given that it fits in a GPU and is stable enough given the sizes of the corpora used in the benchmark.
The model has MIT licence and is publically available 19.
\begin{table}
\begin{tabular}{l c c c c} \hline
**corpus** & **Train samples** & **Val samples** & **Test Samples** & **Problem type** \\ \hline BARR2 & 318 & 146 & 220 & NER \\ CANTEMIST & 501 & 500 & 300 & NER \\ CARES & 2250 & 0 & 996 & Classification \\ CWLC & 9000 & 0 & 0 & NER \\ CodiEsp & 500 & 250 & 250 & Classification \\ CT-EBM-SP & 12600 & 4510 & 4550 & NER \\ DisTEMIST & 750 & 0 & 250 & NER \\ eHealthKD & 1500 & 50 & 150 & NER \\ IULA-SCRC & 3194 & 0 & 0 & NER \\ IxaMed-GS & 1875 & 1097 & 995 & NER \\ LivingNER & 1000 & 500 & 485 & NER \\ MEDDOCAN & 500 & 250 & 250 & NER \\ NUBes & 29,682 & 0 & 0 & NER \\ PharmaCoNER & 500 & 250 & 250 & NER \\ SocialDisNER & 6000 & 2000 & 2000 & NER \\ \hline \end{tabular}
\end{table}
Table 2: Summary of Spanish Clinical Corpora
### DeBERTaV3
DeBERTaV3 [75] is the latest DeBERTa [30] based model and it was released in 2021. Although DeBERTa is a family of English language models, DeBERTaV3 includes a multilingual model too, mDeBERTaV3.
The model was trained with the CC100 corpus, the same as XLM-RoBERTa. The resulting multilingual model has 276M parameters which starts to be in the big side for a single GPU.
The model has MIT licence and is publically available 20.
Footnote 20: [https://huggingface.co/microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base)
### GPT family
After the release of ChatGPT in late 2022 and GPT-4 in early 2023, this family of models brought a revolution both to the NLP research community and to the society in general [76]. Although these are generative models, they shine for their ability to tackle any kind of problem in a zero or few shot learning approach.
These models are not open source and can only be accessed via the OpenAI API 21. Therefore we can only make educated guesses about how they have been trained and with which corpora. There are also some ethical concerns regarding the closed nature of the models [6], even bigger in this medical domain where a lot of medical institutions have strict security protocols that forbid them to upload personal data to the internet.
Footnote 21: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference)
For instance, ChatGPT is an evolution of GPT-3 [4] fine-tuned with some kind of RLHF (Reinforcement Learning from Human Feedback) [77]. In the case of GPT-4 [5] we can assume that is an evolution of GPT-3 with a bigger corpora and more parameters.
There is evidence that GPT-4 has improved a lot the results in some clinical NLP benchmarks in English [78], however it has not been tried in Spanish or in more encoder-focused tasks like Named Entity Recognition. In the Section 6.1 we will try to use these models in some clinical corpora to check their viability in every aspect.
### Galen family
The Galen [79] models are domain adaptations [80] of preexisting Spanish capable language models like mBERT, BETO and XLM-RoBERTa published in 2021. This specific domain adaptation has been performed by further pretraining the above mentioned models with a clinical corpus extracted from the Galen Oncology Information System [81].
All models are publically available under a MIT licence 22, however they are not published in the Model Hub, only their weights are published which makes it harder to reuse them.
Footnote 22: [https://github.com/guilopgar/ClinicalCodingTransformerES](https://github.com/guilopgar/ClinicalCodingTransformerES)
### bsc-bio-ehr-es
bsc-bio-ehr-es [82] is a biomedical model based in the RoBERTa architecture published in 2022. The model has been trained from scratch (instead of a domain adaptation) using a large biomedical corpora and also a smaller corpus of electronic health records.
There are two models in these series, bsc-bio-es and bsc-bio-ehr-es where the first is trained only with the first corpora and the later is trained with all the gathered data. Both models have 125M parameters. In this paper we will only compare the bsc-bio-ehr-es model given that the authors report better results for this use case.
Most of the data sources for these models are not publically available so we cannot assert the claims of the authors.
The model has apache-2.0 licence and is publically available 23.
Footnote 23: [https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es)
### Other Generative Large Language Models
Since the release of ChatGPT, a lot of new generative Large Language Models have been created. Although they are way out of scope for this study and most of them are in English, we will mention the most relevant ones that may be featured in future work. Also we carry out a preliminary study on chat models (OpenAI models) to check their feasibility in Section 6.1.
Most of the models have a focus in English language, but due to their size, they can all perform well in Spanish. Most notoriously we can find Llama [39] and Llama2 [83] with some of their biggest versions being multilingual, Falcon [84] with a 40B multilingual version, Vicuna [85], BLOOM [86] and PaLM [16].
## 5 Method
Formally, we will evaluate results using the F1 score for the NER corpora and F1 with micro average classification tasks. However we will also take into account the availability of the models and the computational cost of running them.
The main evaluation has been designed for the encoder models and they are fine-tuned using the chosen corpora. For the metaparameters optimization we designed a grid of parameters for all the models, described in the Appendix A.
### Corpora
Most of the clinical corpora in Spanish are either NER corpora or non-labeled corpora so there is a relatively low variety of corpora in order to make a benchmark. Having this into account, we selected the following corpora for the benchmark:
* catememist: Good example of well annotated NER corpus (3.2.2).
* caresA: Multi-label classification of the body area mentioned on the radiological report (3.2.3).
* caresC: Multi-label classification of the ICD-10 chapters that correspond to the radiological report (3.2.3).
* ctebmsp: (3.2.6).
* distemist: Good example of well annotated NER corpus (3.2.7).
* ehealth_kd: Subtask A, NER with very different entities (3.2.8).
* livingner1: LivingNER task 1. Good example of well annotated NER corpus (3.2.11).
* livingner3: LivingNER task 3. Multi-label classification corpus.(3.2.11).
* meddocan: Anonymization corpus in clinical cases (3.2.12).
* nubes: Big NER corpus and with negation entities (3.2.13).
* pharmaconer: Good example of well annotated NER corpus (3.2.14).
* socialdisner: Informal language talking about health (3.2.15).
### Models
Due to the limited time and computational budget, not every available model has been tested in the benchmark. Next we will list the chosen models:
* BETO: We use BETO as a baseline for Spanish language models.
* BETO_Galen: A domain adaptation of BETO, it should be better for these experiments.
* bsc-bio-ehr-es: Smaller and trained only with relevant data.
* MarIA: The best open-sourced Spanish only encoder language model.
* mDeBERTaV3: Second-best performing multilingual model.
* RigoBERTa 2: The best Spanish only encoder language model.
* XLM-R_Galen: A domain adaptation of the best model for Spanish.
* XLM-RoBERTa-Large: The best results for Spanish language.
We will also show some experiments with GPT-3 and GPT-4 that we did not include in the final benchmark due to some inherent limitations to generative language models and the limited OpenAI API functionality.
### Public benchmark
For the sake openness, all the final fine-tuned models, as well as all the corpora used and the code for the benchmark will be publically available 24. Note that neither the RigoBERTa 2 fine-tuned models nor the base model can be publically available due to license constrains.
Footnote 24: [https://github.com/iconocimiento/survey-spanish-clinical-language-models](https://github.com/iconocimiento/survey-spanish-clinical-language-models)
Reuploading the corpora and gathering them all in the Hugging Face Hub allows us to use the Hugging Face Leaderboards 25 to create the first benchmark for Spanish Clinical Language Models.
Footnote 25: [https://huggingface.co/spaces/autoevaluate/leaderboards](https://huggingface.co/spaces/autoevaluate/leaderboards)
All the information about the resources will be displayed in Appendix B.
## 6 Evaluation and results
In Table 3 we can see the results of the benchmark. RigoBERTa 2 obtains the best results by far, winning in 6 out of the \(12\) corpora. Overall it was also the most stable model, consistently obtaining good results, not like mDeBERTaV3 that, despite having very good results, obtained an exceptionally bad result in the livingner3 corpus. XML-RoBERTa-Large obtains also very good results and is the best open-sourced model available.
Paradoxically, the more specific the model is, the worse that its results are. At a first glance it might look like the bigger model always wins, however we must remember that both XLM-R_Galen and BETO_Galen are domain adaptations of bigger models. Specifically XLM-R_Galen uses as base model XML-RoBERTa-Large, the best performing model in the benchmark. Consequently this means that it is not always worth to adapt a model to a certain domain if the corpus for the adaptation is not big or good enough.
bsc-bio-ehr-es, the only model trained from scratch in this benchmark obtains relatively good results given its size and the relatively small amount of data used during the training process. However, this model does not obtain any improvement against the general Spanish language models or the multilingual ones.
In the Figure 1 we can see a Nemenji plot with a classification of the models given the results of the benchmark. We can see that, although the first models are within the critical distance, both RigoBERTa 2 and XLM-RoBERTa-Large are statistically better than all the domain adaptations.
\begin{table}
\begin{tabular}{l|c c c c c c|c c} \hline \hline & \multicolumn{6}{c|}{_Spanish Only models_} & \multicolumn{3}{c}{_Multilingual_} \\ \cline{2-10} & \multicolumn{3}{c|}{_Clinical models_} & \multicolumn{3}{c|}{_General Models_} & \multicolumn{3}{c}{_Multilingual_} \\ \hline Corpus & XLM-RG & BETO\_G & bsc-ehr & BETO & MarIA & RigoB2 & mDeB3 & XLM-RL \\ \hline cantemist & 0.898 & 0.802 & 0.864 & 0.898 & 0.902 & 0.903 & 0.890 & **0.904** \\ caresA & 0.989 & 0.977 & 0.991 & 0.992 & 0.992 & **0.997** & 0.993 & 0.994 \\ caresC & 0.823 & 0.551 & **0.862** & 0.835 & 0.840 & 0.854 & 0.756 & 0.847 \\ ctebmsp & 0.881 & 0.726 & 0.876 & 0.880 & 0.877 & **0.907** & 0.902 & 0.906 \\ distemist & 0.759 & 0.346 & 0.759 & 0.801 & 0.793 & **0.832** & 0.808 & 0.817 \\ ehealth\_kd & 0.830 & 0.658 & 0.836 & 0.843 & 0.836 & 0.865 & 0.844 & **0.871** \\ livingner1 & 0.907 & 0.646 & 0.938 & 0.938 & 0.939 & 0.951 & **0.953** & 0.949 \\ livingner3 & 0.500 & 0.000 & 0.604 & 0.626 & **0.644** & 0.621 & 0.153 & 0.606 \\ meddocan & 0.947 & 0.682 & 0.967 & 0.957 & 0.977 & **0.979** & 0.974 & 0.978 \\ nubes & 0.908 & 0.762 & 0.903 & 0.908 & 0.911 & 0.915 & 0.919 & **0.920** \\ pharmaconer & 0.915 & 0.708 & 0.904 & 0.908 & 0.914 & **0.927** & 0.922 & 0.924 \\ socialdisner & 0.919 & 0.777 & 0.921 & 0.915 & 0.920 & **0.943** & 0.935 & 0.941 \\ \hline Average & 0.853 & 0.621 & 0.869 & 0.873 & 0.877 & **0.890** & 0.833 & 0.887 \\ Wins group & 0 & 0 & 1 & 0 & 1 & **10** & 1 & 11 \\ Wins total & 0 & 0 & 1 & 0 & 1 & **6** & 1 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results with models grouped into Spanish Only and Multilingual models. Spanish Only models are splitted into Clinical models and General models. Best results for every group are underlined and best results overall are in **bold**. We also report the sum of the scores for every model, wins in own group and wins overall. The metric is micro F1 for classification models and F1 for NER models. XLM-RF = XLM-R_Galén, BETO_G = BETO_Galén, bsc-ehr = bsc-bio-ehr-es, RigoB2 = RigoBERTa 2, mDeB3 = mDeBERTaV3, XLM-RL = XLM-RoBERTa-Large
### GPT family
We made two attempts to use this technology, first with the NER corpus CANTEMIST and then with the multi-label classification corpus CaresC. Generative models are not prepared for any of these tasks so we had to tune the prompts in order to achieve what we wanted. As we will see with the results of these experiments, it is not easy to use these models properly, but it is worth giving them a chance [87].
#### 6.1.1 Named Entity Recognition
First, in the NER corpus we observed that the models (both GPT-3 and GPT-4) are capable of finding some entities in the text. However there is no way to obtain a comparable score if those entities do not have their respective positions also labeled in the text. It is technically possible to perform token classification tasks such as NER with generative language models: we just need to add a classification head over each output embedding. However, the OpenAI API does not allow to do it straightforwardly and transforms the task into a text-to-text problem which turned out to be inefficient and error prone.
While the models more or less understood our queries, there were a lot of formatting errors and most of the times every entity was positioned wrong. In Appendix C.1 a more detailed description of both the prompts and parameters used can be found.
After a lot of attempts without a good solution we exhausted our budged for this experiment and we decided to stop experimenting and not try any fine-tuning or few shot evaluation due to the high risk of a bad result and the high cost of the platform. This budget, similarly with the one showed in Table 4, was higher that what would have costed a GPU machine to fine-tune an encoder model.
We also tried the edits endpoint of the API to insert the NER entity after its appearance, but it did not work well either.
#### 6.1.2 Classification
Next we tried with a multi-label classification corpus CaresC. Although it is relatively simple to correctly prompt these models in order to perform binary or multiclass classification, more precise prompting had to be done to achieve consistent and good results. All the details of these process can be found in the Appendix C.2.
Once we obtained good enough prompts, we evaluated the performance in a few-shot approach. Also a fine-tunning was attempted, taking into account that the fine-tuning API does not support multi-label classification, so the training process was a general text generation fine-tuning. In the Table 4 we can see a detailed summary of the results and costs.
Surprisingly, we can see that when comparing GPT models against a far simpler encoder-only model such as bsc-bio-ehr-es, the latter obtains the best results by far. Within GPT models, there is a clear increase of quality when a fine-tuning process is performed to the base models, up to a point that the few shot models appear as a poor-quality but expensive alternative. Even when using the most expensive GPT approach (fine-tune the Davinci model), we obtain a significantly poor result in comparison to a classic encoder fine-tuning. The only acceptable exception to all these bad results is the fine-tuning of the Ada model, which is \(4\) times cheaper than bsc-bio-ehr-es; however the result in terms of quality is considerably worse.
Figure 1: Nemenji plot of the results
There is also the concern that all the data used in the OpenAI API can be privately used by OpenAI to improve their models, which can make these solutions unusable for the clinical domain given the European data protection laws.
All in all, these results show that this Large Generative Language Models revolution does not apply to all fields, problem types and languages. There is also more evidence of this in more thorough studies like the one by Chen et al. [88]. Wang et al. [10] (similarly to Luo et al. [9] with BioGPT, instead trained a ClinicalGPT in order to greatly improve the results in clinical domain tasks). However it must be noted that all these efforts generated English language models so they can be seen as future work for the Spanish language.
## 7 Conclusion
Through this study we have observed that there is a fair amount of corpora and models for the application of NLP to clinical problems in Spanish. However a lot of that corpora is not publically available or does not have a proper Gold Standard so models can easily learn from it and be evaluated.
We benchmarked the most promising models for the clinical domain in Spanish in a variety of corpora to check their real performance, and we found out that none of them is good enough to beat a big general multilingual model or a closed-source commercial one. This holds true even if the clinical models are domain adaptations of the best model open-source model. This means that there is a lot of work to be done for the Spanish language in terms of publicly available language models, and that we have room to improve.
Following those experiments, our main contribution was the release of a standardized benchmark to test future Spanish language models for medical applications and the gathering of a series of models and corpora in an easy to access platform.
We also tried the revolutionary generative models and proved that they cannot be used for all use cases: at this time there is no such thing as an Artificial General Intelligence and there are more narrow and affordable solutions that perform better for certain applications.
It is clear then, that we need better Spanish language models, both general and domain specific. We also need a big corpora of Spanish clinical texts so these models can be trained. We are aware that this is a big claim, but we just cannot relegate ourselves to using multilingual models in this crucial domain where people's lives are at stake.
## Acknowledgments
This work was supported in part by the Instituto de Ingenieria del Conocimiento and R&D&i ACCESS2MEET (PID2020-116527RB-I0) project supported by MCIN AEI/10.13039/501100011033/. Additionally, this work has been supported by "Intelligent and interactive home care system for the mitigation of the COVID-19 pandemic" PRTR-REACT UE project.
|
2306.13538 | A brief introduction on latent variable based ordinal regression models
with an application to survey data | The analysis of survey data is a frequently arising issue in clinical trials,
particularly when capturing quantities which are difficult to measure. Typical
examples are questionnaires about patient's well-being, pain, or consent to an
intervention. In these, data is captured on a discrete scale containing only a
limited number of possible answers, from which the respondent has to pick the
answer which fits best his/her personal opinion. This data is generally located
on an ordinal scale as answers can usually be arranged in an ascending order,
e.g., "bad", "neutral", "good" for well-being. Since responses are usually
stored numerically for data processing purposes, analysis of survey data using
ordinary linear regression models are commonly applied. However, assumptions of
these models are often not met as linear regression requires a constant
variability of the response variable and can yield predictions out of the range
of response categories. By using linear models, one only gains insights about
the mean response which may affect representativeness. In contrast, ordinal
regression models can provide probability estimates for all response categories
and yield information about the full response scale beyond the mean. In this
work, we provide a concise overview of the fundamentals of latent variable
based ordinal models, applications to a real data set, and outline the use of
state-of-the-art-software for this purpose. Moreover, we discuss strengths,
limitations and typical pitfalls. This is a companion work to a current
vignette-based structured interview study in paediatric anaesthesia. | Johannes Wieditz, Clemens Miller, Jan Scholand, Marcus Nemeth | 2023-06-23T14:58:50Z | http://arxiv.org/abs/2306.13538v2 | # A primer on computational statistics for ordinal models with applications to survey data
###### Abstract
The analysis of survey data is a frequently arising issue in clinical trials, particularly when capturing quantities which are difficult to measure using, e.g., a technical device or a biochemical procedure. Typical examples are questionnaires about patient's well-being, pain, anxiety, quality of life or consent to an intervention. Data is captured on a discrete scale containing only a limited (usually three to ten) number of possible answers, of which the respondent has to pick the answer which fits best his personal opinion to the question. This data is generally located on an ordinal scale as answers can usually be arranged in an increasing order, e.g., "bad", "neutral", "good" for well-being or "none", "mild", "moderate", "severe" for pain.
Since responses are often stored numerically for data processing purposes, analysis of survey data using ordinary linear regression (OLR) models seems to be natural. However, OLR assumptions are often not met as linear regression requires a constant variability of the response variable and can yield predictions out of the range of response categories. Moreover, in doing so, one only gains insights about the mean response which might, depending on the response distribution, not be very representative.
In contrast, ordinal regression models are able to provide probability estimates for all response categories and thus yield information about the full response scale rather than just the mean. Although these methods are well described in the literature, they seem to be rarely applied to biomedical or survey data. In this paper, we give a concise overview about fundamentals of ordinal models, applications to a real data set, outline usage of state-of-the-art-software to do so and point out strengths, limitations and typical pitfalls. This article is a companion work to a current vignette-based structured interview study in paediatric anaesthesia (German Clinical Trials Register DRKS00027090).
_Keywords:_ Ordinal regression, cumulative link models, logistic regression, response distribution, factors influencing willingness to consent to participation
Introduction
Data derived from surveys or patient interviews are often subject of research in medicine. As answers are usually encoded numerically, e.g., using the numeric rating scale for pain, data analysis using linear models often seems reasonable. Whereas for finely graduated response scales summary statistics such as mean or median response are often of interest, these are often not very meaningful or representative for response scales with only a few categories, e.g., "On a scale of 1 (absolutely yes) to 5 (absolutely no), do you consent in the participation in the following study?". In this case, probabilities for the individual response levels or the proportion of responders who at least "rather consent" for participation would be more revealing. The application of this statistical methodology is illustrated using a vignette-based interview study from paediatric anaesthesia.
### Motivation
The setting of the study is as follows: Parents or legal representatives of inpatient children were asked for their willingness to consent to participation in three _fictional_ studies containing potential objectives for clinical studies in paediatric anaesthesia that widely differed in terms of invasiveness. Aim of this investigation was to identify factors influencing the willingness to participate (acronym Filippa) in the following studies:
1. a prospective _observational study_ on a non-invasive temperature measurement sensor,
2. a _randomised controlled trial_ (RCT) of inducing anaesthesia by intravenous vs. inhalative agents, and
3. a _pharmacological study_ on intravenous ibuprofen (painkiller) designed to collect data for regular approval in children, corresponding to an open label phase-II pharmacological study.
Interviews were conducted by the same investigator to ensure consistent wording and in the same order (defined by increasing invasiveness as outlined above). Legal representatives were asked if they were willing to have their child participating in the corresponding studies. Responses were captured on a five-point Likert-scale of "_absolutely consent_", "_rather consent_", "_unsurve_", "_rather decline_" and "_absolutely decline_" participation. For further details on experimental design and the exact descriptions of the studies, we refer to Miller et al. (2023).
Figure 1 portrays a descriptive statistic of the response distribution stratified by the three studies with increasing levels of invasiveness (bar charts, left to right) in form of an alluvial diagram. The streams between the bar charts show the migration between the answers from one question to the following one. It appears that as the study becomes more invasive,
willingness to participate generally decreases. Note that this behaviour is, however, not present in all participants. This might be due to the fact that a certain understanding of research in medicine is beneficial to understand the different degrees of invasiveness between the studies, particularly the RCT and the pharmacological study.
Statistical inference, particularly identification of significantly influencing factors or testing, however, is only possible within a statistical model. For this purpose, we perform an _ordinal regression_ on the response depending on the study type and including various covariates such as child's sex, age, preceding participation in studies, as well as age and professional medical degree of the legal representatives.
The applied statistical approach presented here is based on the literature briefly summarised below. Agresti (2013) provides a comprehensive introduction to categorial data analysis but early work can already be found in Hildebrand et al. (1977). Agresti (2012) and McCullagh (1980) focus particularly on ordinal models, and Fullerton (2009) provides a concise comparison of different types of models. Goodness of fit and model selection problems as well as summary measures of the model's predictive power are addressed in Fagerland and Hosmer (2016); Agresti and Kateri (2017); Agresti and Tarantola (2018). Most ordinal regression fall into the category of _generalised linear models_. To this end, Agresti
Figure 1: Alluvial diagram of the migration of the level of consent for three studies with increasing level of invasiveness (left to right) indicated by the streams between the bars. The bar charts show the frequency distribution of the corresponding responses for a given study.
(2015) provide an detailed derivation. A software implementation based on these generalised linear models (so-called _cumulative link models_) as well as a comprehensive how-to-apply-it tutorial is given in Christensen (2015, 2018). A clear online introduction using software examples can also be found under Dunn (2020, 2020); the latter also covers issues from Bayesian ordinal regression. More recent articles demonstrate the applications of cumulative link models in deep learning classification algorithms, see e.g. Vargas et al. (2020). Regarding applications in medical research, Norris et al. (2006) present a concise comparison between different approaches. For more finely graduated responses scales, as it is often the case for e.g. verbal, visual analogue or numeric scales, Heller et al. (2016) present an approach for data analysis. Manuguerra et al. (2020) provide corresponding software to do so. This article particularly focusses on the analysis of data on response scales with only few levels and provides practical recommendations, enriched with corresponding R code sections.
### R package ordinal
For the analyses within this tutorial we employ the R package ordinal by Christensen (2015) which is available on CRAN, or via [https://github.com/runehaubo/ordinal](https://github.com/runehaubo/ordinal). Code sections are presented at suitable positions and were executed using R version 4.3.0, R Core Team (2023). The package supports fitting of ordinal fixed effects models (clm) as well as mixed effects models (clmm). To use the package, no extraordinary data preprocessing is required. An overview of the data set is provided below:
data %>% glimpse()
## Rows: 318
## Columns: 8
## $ id <fct> 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, ^
## $ age <dbl> 1.6438356, 1.6438356, 1.6438356, 13.3315068, 13^
## $ sex <fct> Female, Female, Female, Female, Female, Female, ^
## $ partner <chr> "Mother", "Mother", "Mother", "Father"
## $ partner_age <dbl> 35, 35, 35, 53, 53, 53, 38, 38, 38, 34, 34, ^
## $ prev_studies <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE
## $ study <fct> Observational study, Randomised controlled tria^
## $ response <ord> Absolutely consent, Rather consent, Rather cons^
Presented results are based moreover on the packages emmeans, generalhoslem, ggstream and tidyverse, Lenth (2022); Jay (2019); Sjoberg (2021); Wickham et al. (2019). Several graphical parameters have been outsourced; code is provided in Appendix A.
### Outline
This article is structured as follows: Section 2 puts ordinary linear regression and ordinal models in juxtaposition and points out each strengths and limitations. Section 3 introduces the fundamental ideas of ordinal regression models. Section 4 considers the influence of discrete and continuous covariates on the response outcome and examine different aspects on estimated parameters and confidence intervals, prediction, goodness of fit and outline numerical issues that may arise. We allude to advanced topics in Section 5 and summarise our findings in Section 6. All computations are stated along with corresponding R code sections. An interactive version of this manuscript can be found at [https://jwieditz.github.io/FILIPPA](https://jwieditz.github.io/FILIPPA).
## 2 Ordinal models: strengths, limitations and alternatives
The question of how to evaluate ordinal response data appropriately has been widely discussed in the literature, cf. Agresti (2013, Chapter 1). So far, however, no broad consensus has been found as the chosen approach often depends on the original question to be answered.
Approaches considering ordinal responses only as mere categorical data do not exploit the additional structure. In contrast to these nominal approaches, ordinal models can provide descriptive statistics similar to ordinary linear regression, such as means, slopes or correlations. Furthermore, ordinal analysis can use a greater variety of models. These models are usually more efficient yielding higher power for detecting trends or location alternatives using fewer parameters. Moreover, these parameters are often simpler in their interpretation than parameters in standard models for nominal variables, cf. Agresti (2013, Section 1.2).
For ordinal data with many response levels, as for instance for visual analogue/ numeric scales, ordinal models are often inappropriate as individual response levels are not that meaningful or fitting might even be infeasible if the number of parameters to be estimated grows too large, cf. Heller et al. (2016). For this kind of data, responses are commonly already encoded numerically on the questionnaire. As a result, ordinary least squares analysis is usually less problematic and provides more interpretable insights than an ordinal analysis, cf. Agresti (2012, Section 1.2).
In contrast, for ordinal data consisting of only a few response levels, ordinal regression analysis is often the better choice even though a linear regression analysis can be useful for identifying variables that clearly affect the response variable. Agresti (2012, Section 1.3.2) points out a number of reasons why ordinary linear regression is in this case often inappropriate:
1. There is usually _not_ a clear-cut choice for the scores, i.e. a particular response outcome might be consistent with a range of values of some underlying latent score, modelling an abstract quantity causing the response, see Section 3. Ordinary regression analysis, however, does not allow for such an error.
2. An ordinary regression approach does _not_ provide probability estimates for the response levels but only a prediction of the estimated value given covariates (possibly with corresponding confidence interval to quantify uncertainty).
3. Linear regression may yield predictions beyond the original response scale (i.e. above the highest or below the lowest level).
4. Linear regression ignores different variabilities in the response categories: usually there is only a small variability at predictor values for which observations fall mainly in the highest (or lowest) category, but there is a considerable variability at predictor values for which observations tend to spread among the categories.
As for the presented application, see Section 1.1, the distribution of consent and the number of people responding "rather consent" was of particular interest, we argue that an ordinal analysis is the most appropriate in this case. For further applications and discussions about model choices we refer to Agresti (2012, Section 1.3) and the references therein.
## 3 Ordinal regression models
In ordinal statistics, the quantity of interest is typically an ordinal response (e.g., of a survey), modelled by a random variable \(Y\) which is assumed to take value on an _ordinal scale_ with \(L\geq 2\) levels, \(1\leq 2\leq\cdots\leq L\), (e.g., the levels of consent). Note that although the levels are encoded numerically, it is neither assumed that we can interpret between-level distances (e.g., \(2-1\)) nor that the distances between two levels are equidistant--there might be a large difference between "unsure" and "rather consent" but only a small one between "rather consent" and "absolutely consent".
For ordinal regression, it is assumed that \(Y\) can be acquired as the discretisation of an unobserved, continuous _latent score_\(S\) as
\[Y=\ell\quad\text{ if and only if }\quad\theta_{\ell-1}<S\leq\theta_{\ell} \tag{1}\]
for all levels \(\ell=1,2,\ldots,L\) where the \(\theta_{\ell}\)'s are _cut-points_ (also _threshold coefficients_) corresponding to the level boundaries on the scale of the latent variable, see Figure 2. The cut-points \(\theta_{\ell}\) are assumed to be ordered in a strictly increasing manner \(-\infty=\theta_{0}<\theta_{1}<\cdots<\theta_{L-1}<\theta_{L}=+\infty\) and have to be estimated within a regression framework. For an ordinal variable with \(L\) levels we have to estimate \(L-1\) cut-points \(\theta_{1},\theta_{2},\ldots,\theta_{L-1}\).
The latent variable includes all external parameters which can influence the response behaviour of the responder and is often thought of as an abstract quantity, e.g., consent, quality of life or pain, of which \(Y=1,2,\ldots,L\) represent the ordinal levels e.g., "absolutely consent", "rather consent", "unsure", "rather decline", "absolutely decline" (for \(L=5\)).
From Equation (1) follows that the probability distributions of \(Y\) and \(S\) are related as \(\mathbb{P}(Y\leq\ell)=\mathbb{P}(S\leq\theta_{\ell})\) and in particular \(\mathbb{P}(Y=\ell)=\mathbb{P}(\theta_{\ell-1}<S\leq\theta_{\ell})=\mathbb{P}( S\leq\theta_{\ell})-\mathbb{P}(S\leq\theta_{\ell-1})\). Within an ordinal regression framework, the influence of the covariates \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{K})^{\top}\in\mathbb{R}^{K}\) on the response \(Y\) is typically modelled on the latent scale as
\[S=\mathbf{x}^{\top}\boldsymbol{\beta}+\varepsilon=\sum_{k=1}^{K}x_{k}\beta_{k }+\varepsilon \tag{2}\]
where \(\boldsymbol{\beta}=(\beta_{1},\beta_{2},\ldots,\beta_{K})^{\top}\in\mathbb{R}^ {K}\) contains the regression parameters and \(\varepsilon\) is a random variable whose distribution needs to be specified, typically with zero mean and known variance. Some common choices are stated below in Section 3.1.1.
### Methodological details
#### 3.1.1 Link functions
* Assume \(\varepsilon\sim\mathcal{N}(0,1)\) to be standard normally distributed. Then, we obtain the probability distribution of the survey response \(Y\) given covariates \(\mathbf{x}\) via \(S\) as \[\mathbb{P}(Y\leq\ell\mid\mathbf{x})=\mathbb{P}\left(S\leq\theta_{\ell}\mid \mathbf{x}\right)=\mathbb{P}\big{(}S-\mathbf{x}^{\top}\boldsymbol{\beta}\leq \theta_{\ell}-\mathbf{x}^{\top}\boldsymbol{\beta}\mid\mathbf{x}\big{)}\stackrel{{ (2)}}{{=}}\Phi\left(\theta_{\ell}-\mathbf{x}^{\top}\boldsymbol{\beta}\right),\] (3)
Figure 2: Probability distribution of response \(Y\) (hatched regions) to a questionnaire with \(L=5\) possible answers (encoded as 1 to 5). The response probabilities are acquired as the area (coloured regions) under the density curve of \(S\) between two consecutive cut points \(\theta_{\ell},\theta_{\ell+1}\), \(\ell=0,1,2,3,4\).
where \(\Phi\) is the distribution function of the standard normal distribution. Particularly, the probability for a response answer can be obtained as the area under a normal curve between two consecutive cut points (possibly shifted by the term including information about covariates), see Figure 2. This model is, in relation to the probit model for binary responses, also called _ordered probit_ model.
2. Another popular choice is to assume \(\varepsilon\) to follow a standard logistic distribution, i.e., \(\varepsilon\) has cumulative distribution function \(F(t)=\mathbb{P}(\varepsilon\leq t)=1/(1+\mathrm{e}^{-t})\). Then, similarly to Equation (3), we obtain \[\mathbb{P}(Y\leq\ell\mid\mathbf{x})=F\left(\theta_{\ell}-\mathbf{x}^{\top} \boldsymbol{\beta}\right)=\frac{1}{1+\mathrm{e}^{-\left(\theta_{\ell}- \mathbf{x}^{\top}\boldsymbol{\beta}\right)}}\] and for the log-odds holds \[\log\left(\frac{\mathbb{P}(Y\leq\ell\mid\mathbf{x})}{1-\mathbb{P}(Y\leq\ell \mid\mathbf{x})}\right)=\theta_{\ell}-\mathbf{x}^{\top}\boldsymbol{\beta}, \quad\ell=1,2,\ldots,L.\] (4) Thus this model is also called _proportional odds_ or _ordered logistic regression_ model, McCullagh (1980); Fullerton (2009). Note that for \(L=2\), i.e. for binary responses, this model reduces to the ordinary logistic regression model.
3. More generally, \(S\) can be modelled to follow an arbitrary distribution. Christensen (2018, Section 2.2) provide a comprehensive overview about the choice of common link functions beyond the ones presented above, depending on the assumptions of the rating behaviour of the responder, e.g., choose a Cauchy distribution if extreme ratings are assumed to be more likely. Moreover, Agresti (2012, Chapter 5) yield a differentiated view on the field of ordinal models with various examples and highlight aspects about practical and theoretical issues.
#### 3.1.2 Cumulative link models
From Equation (3) follows (for an arbitrary choice of \(\varepsilon\)) that we can write the cumulative distribution function (CDF) of \(Y\) given the covariates \(\mathbf{x}\) as
\[\mathbb{P}(Y\leq\ell\mid\mathbf{x})=F(\theta_{\ell}-\mathbf{x}^{\top} \boldsymbol{\beta}),\quad\ell=1,2,\ldots,L, \tag{5}\]
where \(F\) is the distribution function of \(\varepsilon\). As the function \(F\) links the CDFs of \(Y\) and \(S\), ordinal models of the form (5) are called _cumulative link models_ and \(F\) is called _inverse link function_ (as it takes the linear predictor \(\theta_{\ell}-\mathbf{x}^{\top}\boldsymbol{\beta}\) from the latent space back and maps it to predicted probabilities for \(Y\)), cf. Christensen (2018).
#### 3.1.3 Intercept
Note, that in contrast to ordinary linear regression, the model from Equation (2) deliberately does _not_ include an intercept term, as a model with \(S=\alpha+\mathbf{x}^{\top}\boldsymbol{\beta}+\varepsilon\) and shifted cut points \(\tilde{\theta}_{\ell}=\theta_{\ell}-\alpha\) would result in the same distribution for the response \(Y\). Thus, the cut-points \(\theta_{\ell}\) would not be identifiable in a model including a fixed _unknown_ intercept. A model including a fixed _known_ intercept, however, would result in the same parameter estimates \(\boldsymbol{\beta}\) as for the model (2). For this reason, without loss of generality \(\alpha=0\) is assumed.
### Interpretation of model parameters
To conclude this section, let us briefly address the influence of the regression coefficients. An interpretation of this influence on the response variable \(Y\) is, in general, difficult and depends on the chosen link function, cf. Agresti (2012, Section 5.1.3). For the logit-link, there is the following relation for the difference in log-odds (i.e., the logarithm of the odds ratio). Note that here, the odds are the ratio of responding at most \(\ell\) vs. responding at least \(\ell+1\), \(\ell=1,2,\ldots,L\). Denote the vector of covariates by \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{K})\) and let \(\tilde{\mathbf{x}}=(x_{1},\ldots,x_{k-1},x_{k}+1,x_{k+1},\ldots,x_{K})\) be differing from \(\mathbf{x}\) only at the \(k\)-th component by one, then
\[\log\left(\frac{\mathbb{P}(Y\leq\ell\mid\mathbf{x})}{1-\mathbb{P }(Y\leq\ell\mid\mathbf{x})}\right)-\log\left(\frac{\mathbb{P}(Y\leq\ell\mid \tilde{\mathbf{x}})}{1-\mathbb{P}(Y\leq\ell\mid\tilde{\mathbf{x}})}\right) \stackrel{{(\ref{eq:
Identification of influencing factors and effect quantification
Let us now investigate the influences of covariates on the response outcome. To this end, we consider at first discrete covariates, say, a binary group variable having two levels \(0/1\) (e.g., two different studies with one question each having the same possible answers). Then, the distribution of the latent variable \(S\) for group \(1\) is the one for group \(0\) shifted by the regression coefficient \(\beta\), see Figure 3. This results in a shift of the probability masses for all possible responses. In the example, this means that positive values for \(\beta\) make responses encoded with high values more likely (and responses encoded with low values less likely), and vice versa for negative values of \(\beta\).
Using the framework presented above, we are moreover able to test for significant differences in the rating behaviour between two groups. Two rating behaviours are considered to differ significantly if the corresponding regression coefficient \(\beta\) on the latent scale differs significantly from zero, i.e., there is a significant shift in the distribution of the latent
Figure 3: Graphical representation of the influence of a binary group variable to the distribution of the latent variable \(S\) and effects on the probabilities of the survey responses \(Y\): The entire distribution of the latent variable, particularly its mean \(\alpha\) (dashed lines), is shifted by the regression coefficient \(\beta\) to the right (left) if \(\beta>0\) (\(\beta<0\)). The probabilities (coloured areas) for all response levels change accordingly.
variable \(S\). Testing procedures are of asymptotic Wald-type, cf. Christensen (2015, Section 4.1).
### Model application
We consider the data from the Filippa study from Section 1.1. We are interested whether and how the invasiveness of the study (observational/ RCT/ pharmacological study) and the child's sex influence the response behaviour of the legal representatives. Therefore, we fit an ordinal regression model for the response with covariates study type and child's sex with logit-link using the clm function from the ordinal package:
clmodel <- clm(response ~ study + sex, data = data, link = 'logit') summary(clmodel)
#> formula: response ~ study + sex
#> data: data
#>
#> link threshold nobs logLik AIC inter max.grad cond.H
#> logit flexible 318 -359.80 733.60 6(1) 5.47e-12 2.3e+02
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> studyRCT 2.2127 0.3723 5.943 2.79e-09 ***
#> studyPharma 2.8428 0.3682 7.721 1.16e-14 ***
#> sexMale -0.4778 0.2350 -2.033 0.042 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1'1
#>
#> Threshold coefficients:
#> Estimate Std. Error z value
#> Absolutely consent|Rather consent 1.9020 0.3362 5.658
#> Rather consent|Unsure 2.3924 0.3446 6.942
#> Unsur|Rather decline 2.9745 0.3541 8.401
#> Rather decline|Absolutely decline 3.6105 0.3670 9.837
This model provides:
* \(L-1=4\) threshold coefficients (cut-points), corresponding to the \(\theta_{\ell}\)'s in Figure 3;
* two coefficients for the levels "RCT" (studyRCT) and "pharmacological study" (studyPharma) of invasiveness (in comparison to the "observational study");
* and one coefficient sexMale for the influence of the child's sex (here: boys in comparison to girls).
The estimated coefficients for study invasiveness and sex are statistically significant (according to a Wald test, \(p<0.001\) and \(p=0.042\), respectively) at the two-sided 5% significance level. As the coefficient for study type "RCT" is positive, the latent score distribution of \(Y\) for an RCT study is shifted to the right by 2.2127 in comparison to the observational study. See Figure 3 for an illustration where 1 denotes the highest level of consent ("absolutely consent") and 5 the least ("absolutely decline"). As the cut-points do not depend on the covariates, by this shift we predict levels with lesser consent with higher probability.
A similar argument holds for the pharmacological study compared to the observational study (here, the shift is 2.8428, i.e. another 0.6301 units w.r.t. "RCT"). Moreover, as the coefficient for sex is negative (\(-0.4778\)), legal representatives of boys were significantly more likely to consent to participate in the studies than those of girls (\(p=0.042\)).
To assess whether there is also a significantly different response behaviour between the RCT and the pharmacological study, we have to conduct one additional test. Note that to this, we have to adjust for multiple testing. For this purpose, the emmeans method from the package of the same name emmeans by Lenth (2022) provides several possibilities using the adjust parameter. By default, \(p\)-value adjustment using Tukey's method, cf. Tukey (1953), for multiple comparisons is applied:
emmeans(clmodel, specs = list(pairwise ~ study), mode = 'latent')[[2]]
#> 1 estimate SE df z.ratio p.value
#> Observational - RCT -2.21 0.372 Inf -5.943 <.0001
#> Observational - Pharma -2.84 0.368 Inf -7.721 <.0001
#> RCT - Pharma -0.63 0.252 Inf -2.496 0.0336
#>
#> Results are averaged over the levels of: sex
#> P value adjustment: Tukey method for comparing a family of 3 estimates
See also Hsu (1996, Section 5) for the theoretical background and a comprehensive overview about multiple testing problems. From the last column it follows that there are significant differences in the response behaviour of voters between all three study types at the significance level 5%.
Note that in the model summary there are no \(p\)-values provided for the threshold coefficients as testing against zero would not make much sense--the actual position of the cut-points has no meaning but only the distances between each other and the relative position to the mean of the latent variable, cf. Section 3.1.3.
For this study, the odds for a legal representative of a girl to respond "absolutely consent" for the observational study are about \(\exp(2.2127)=9.1\) times the odds for the RCT. This estimate is, however, relatively imprecise which can probably be reduced to the small number of legal representatives of a girl absolutely willing to participate in the RCT or the small sample size of the study overall. The 95%-confidence interval is \([1.5154,2.9858]\), corresponding to \([4.551,19.803]\) for the odds ratio. As descriptive statistics of effects (here: probabilities) can compare the cumulative probabilities more suitably and is easier to interpret, we recommend to describe effects for ordinal regression quantitatively by presenting comparisons of probabilities e.g. at their extreme values, see also Agresti (2013, Section 8.2.4). The procedure how to do so is described as follows.
### Predicting probabilities, confidence intervals
The package emmeans provides moreover a method for computing predicted marginal response probabilities for each possible answer (columns response and prob), possibly stratified by given covariates (first line in each paragraph below):
emmeans(clmodel, response | study / sex, mode = 'prob')
#> study = Observational, sex = Female:
#> response prob SE df asymp.LCL asymp.UCL
#> Absolutely consent 0.8701 0.03799 Inf 0.79565 0.9446
#> Rather consent 0.0461 0.01408 Inf 0.01853 0.0737
#> Unsure 0.0352 0.01162 Inf 0.01239 0.0579
#> Rather decline 0.0223 0.00799 Inf 0.00660 0.0379
#> Absolutely decline 0.0263 0.00941 Inf 0.00789 0.0448
...
Note, that as the regression parameters and cut-points are subject to uncertainties (the data is random), so are the _estimated probabilities_ for the possible responses. Corresponding asymptotic (indicated by the infinite number of degrees of freedom df=Inf) confidence intervals, given as [asymp.LCL, asymp.UCL], can be derived using the delta method and are provided in addition to the predictions in the emmeans method above (last two columns). We refer to Christensen (2018, Section 4.7) for further details on the derivation of standard errors (column SE) to compute these confidence intervals.
The emmeans package contains moreover a function to show the regression output graphically using emmip for a clear and compact presentation:
emmip(clmodel, response | study / sex, mode = 'prob', CI = TRUE, style = 'factor',
CIarg = list(colour=response_colours, size=5, alpha=.5 )) + labs(y = 'Estimated probability', x = 'Level of consent') + scale_x_discrete() + scale_y_continuous(labels = scales::percent) + facet_grid(sex ~ study) + theme
Figure 4 clearly shows differences in the response behaviour of legal representatives depending on the study invasiveness. More precisely, the statement that willingness to consent for participation decreases with increasing invasiveness from Section 4.1 is confirmed. Further, we discern smaller differences between the willingness to consent between boys and girls. Particularly for the RCT and the pharmacological study, more participants were responding "absolutely consent" and less "absolutely decline" in the boys' strata than in girls'. Differences were about 13% and 6% for the RCT and about 11% and 10% for the pharmacological study, respectively.
Figure 4: Estimated response probabilities stratified by study type (horizontally) and child’s sex (vertically) within the ordinal regression model of Section 4.1.
Finally we like to remark that emmeans provides for the possibility to compute cumulative and exceedance (i.e. \(1\) - cumulative) probabilities using mode = 'cum.prob' and mode = 'exc.prob', respectively. For instance, the probability for a legal representative of at least "rather consenting" is:
emmeans(clmodel, ~ study / sex, mode = 'cum.prob', at = list(cut = "Rather consent|Unsure"))
#> study sex cumprob SE df asymp.LCL
#> Observational study Female 0.916 0.0264 Inf 0.864
#> Randomised controlled trial Female 0.545 0.0576 Inf 0.432
#> Pharmacological study Female 0.389 0.0533 Inf 0.285
#> Observational study Male 0.946 0.0180 Inf 0.911
#> Randomised controlled trial Male 0.659 0.0504 Inf 0.560
#> Pharmacological study Male 0.507 0.0510 Inf 0.407
#> asymp.UCL
#> 0.968
#> 0.658
#> 0.494
#> 0.982
#> 0.757
#> 0.607
#> Confidence level used: 0.95
Note, that the cumulative and exceedance probablities can be obtained from the individual probability estimates above. This, however, does not hold for the confidence intervals.
### Numerics and convergence
The model coefficients are estimated numerically using a maximum likelihood approach by calculation of zeroes of the gradient of the negative log-likelihood. Parameter estimates are output after a convergence criterion is satisfied (typically small gradient) or a maximum number of iterations has been reached. Numerics and control parameters can be passed using clm.control.
To oversee the convergence of the approach, the summary of the clm method provides three parameters, see Section 4.1: niter (the number of Newton-Raphson iterations needed with the number of step-halvings in paranthesis), max.grad (the maximum absolute gradient of log-likelihood) and cond.H (the condition of the Hessian at the maximum). More detailed information can be obtained using the convergence method applied to the model (cf. also Section 5.1 below). Christensen (2015) states that large cond.H values (like \(>\)1e4) might
indicate that the model is ill-defined. As in the case of Section 4.1 max.grad is small and cond.H is reasonably sized, we can conclude that the algorithm seems to have converged properly.
### Interaction effects
One might wonder, whether it is appropriate to include also an interaction term study:sex in the model. To this end, we can fit a model including this interaction and compare it with the model from Section 4.1 (not containing factor interaction). For this purpose, the ordinal package provides the anova method conducting a likelihood ratio test between both models:
clmodel_inter <- clm(response ~ study * sex, data = data, link = 'logit') anova(clmodel, clmodel_inter)
#> Likelihood ratio tests of cumulative link models:
#>
#> formula: link: threshold:
#> clmodel response ~ study + sex logit flexible
#> clmodel_inter response ~ study * sex logit flexible
#>
#> no.par AIC logLik LR.stat df Pr(>Chisq)
#> clmodel 7 733.60 -359.80
#> clmodel_inter 9 737.29 -359.65 0.307 2 0.8577
This implies that there is no statistical evidence for including interaction study:sex between study-type and child's sex into the model (\(p=0.8429\)).
### Random effects
To conclude this first example, we consider the question whether there are substantial differences in the individual response behaviours, i.e. whether there are parents responding systematically e.g. particularly low or high values or answering always "unsure" etc. To this end, we fit an ordinal model as in Section 4.1 but with additional individual random effect using the clmm function from the ordinal package and compare this model, analogously to Section 4.4 with the model from Section 4.1:
clmmodel <- clmm(response ~ study + sex + (1 | id), data = data, link = 'logit') anova(clmodel, clmmodel)
#> Likelihood ratio tests of cumulative link models:
#>
#> formula: link: threshold:
#> clmodel response ~ study + sex logit flexible
#> clmmodel response ~ study + sex + (1 | id) logit flexible
#>
#> no.par AIC logLik LR.stat df Pr(>Chisq)
#> clmodel 7 733.60 -359.80
#> clmmodel 8 733.47 -358.74 2.1299 1 0.1444
Again, there is no strong evidence (\(p=0.1183\)) for substantial subject specific response behaviours; a test at the 5% significance level would not reject the model from Section 4.1 in favour to a model including additional individual random effects.
Note that, as of now, the clmm function does not support predicting probabilities of a model containing random effects. In contrast, the former implementation clmm2 of clmm does support prediction, however, provides only fitted values for a random effect of zero, cf. Christensen (2015). Beyond this, we present an approximate approach using the emmeans method at the end of the following Section 4.6.
### Continuous covariates
Considering the influence of a continuous covariate on the response outcome, fitting of models is done analogously as described in Section 4.1 above. The influence of a continuous covariate is portrayed schematically in Figure 5 for a univariate covariate \(x\in\mathbb{R}^{1}\). The covariate \(x\) is allowed to vary along the horizontal axis whereas the latent score is plotted on the vertical axis. This corresponds to Figure 3 reflected at the \(45^{\circ}\) line. A change in \(x\) by one unit results in a shift of the latent score distribution by the corresponding regression coefficient, i.e. here \(\beta\) units. More generally, the mean of the latent score between two responders with covariates \(x^{\prime}\) and \(x^{\prime\prime}\) is shifted by \(\pm\beta(x^{\prime\prime}-x^{\prime})\) (depending on which direction one considers). The probabilities of the response categories shift accordingly, given unchanged cut-points, see the coloured areas in Figure 5.
Below you can find the result for an ordinal model for the response depending on study type and child's sex, and additionally the age of the legal representative for the data from the Filippa study. Note that whenever there was more than one legal representative present at the interview, we chose the age of the oldest one for this analysis.
clmodel <- clm(response ~ study + sex + partner_age, data = data, link = 'logit') summary(clmodel)
#> formula: response ~ study + sex + partner_age
Figure 5: Graphical representation of the influence of a univariate continuous covariate \(\mathbf{x}\in\mathbb{R}^{1}\) to the distribution of the latent variable \(S\) and effects on the probabilities of the survey responses \(Y\): A change of the covariate \(\mathbf{x}\) causes a change of the mean of the latent score \(\mathbb{E}\,S\) (cf. \(\alpha\) in Figure 3) along the regression line (dash-dotted \(-\cdot\)), i.e. if the line’s slope is \(\beta\), a change of \(\mathbf{x}\) by one unit results in a change of \(\mathbb{E}\,S\) by \(\beta\) units. Particularly, assuming fixed cut-points, if the regression coefficient \(\beta\) is positive (negative), a higher value of \(\mathbf{x}\) results in tendentially higher (smaller) response values.
#>data:data
#>
#>linkthresholdnobslogLikAICnitermax.gradcond.H
#>logitflexible306-342.41700.826(1)6.29e-101.7e+05
#>
#>Coefficients:
#>EstimateStd.ErrorzvaluePr(>|z|)
#>studyRCT2.285470.388635.8814.08e-09***
#>studyPharma3.009520.385507.8075.87e-15***
#>sexMale-0.527530.24277-2.1730.0298*
#>partner_age-0.037130.01546-2.4020.0163*
#>---
#>Signif.codes:
#>0'***'0.001'**'0.01'*'0.05'.'0.1'''1
#>
#>
#>Thresholdcoefficients:
#>
#>EstimateStd.Error
#>Absolutelyconsent|Ratherconsent0.60980.6546
#>Ratherconsent|Unsure1.09610.6564
#>Unsure|Ratherdecline1.71150.6573
#>Ratherdecline|Absolutelydecline2.35680.6602
#>zvalue
#>Absolutelyconsent|Ratherconsent0.931
#>Ratherconsent|Unsure1.670
#>Unsure|Ratherdecline2.604
#>Ratherdecline|Absolutelydecline3.570
#>(12observationsdeletedduetomissingness)
We observe that there are only minor changes in the coefficients studyRCT, studyPharma and sex in comparison to Section 4.1. The coefficient partner_age states that for every increase in the age of the legal representative by one year, the mean latent score decreases by about \(-0.03713\) units. This means that the older the interview partners are, the more likely they are responding smaller response levels, i.e. the more likely they are consenting to study participation.
The resulting matrix of estimates can then be visualised as a stream plot as follows:
ggplot(clm_fit,aes(x=partner_age,y=fit,fill=response))+ geom_stream(type='proportional',alpha=0.8)+ labs(y='Estimatedprobability',x='Ageoflegalrepresentative[years]',fill='Levelofconsent')+ scale_x_continuous(expand=c(0,0),limits=c(20,60))+
scale_y_continuous(expand=c(0,0), labels=scales::percent_format())+ scale_fill_manual(values=response_colours,labels=responses)+ theme+ facet_grid(sex~study)
In all three studies the level of consent is increasing for an increasing age of the legal representative even though this is a bit more pronounced in male than in female inpatients, see Figure 6. If we included an interaction term into the regression model, e.g., study:sex between study type and child's sex, it might occur that this behaviour differs among the different study types. However, as neither interaction between study, sex and partner_age was significant we did not include parameter interaction into the model.
Note that Figure 6, in contrast to Figure 4 does not contain any information about the estimated uncertainty of estimates (e.g., in form of confidence intervals). This is, however,
Figure 6: Estimated response probabilities stratified by study type (horizontally) within the ordinal regression model of Section 4.6.
rather a limitation of the graphical representation than of the applied method. We can still obtain asymptotic Wald-type confidence intervals for the probabilities of each response level given the covariates (e.g., here for all studies and a legal representative of age 42) using:
emmeans(clmodel, ~ response | study / partner_age, mode = 'prob', at = list(partner_age=42))
#> study = Observational study, partner_age = 42:
#> response prob SE df asymp. LCL asymp.UCL
#> Absolutely consent 0.9171 0.02658 Inf 0.86504 0.9692
#> Rather consent 0.0301 0.01016 Inf 0.01022 0.0500
#> Unsure 0.0235 0.00833 Inf 0.00716 0.0398
#> Rather decline 0.0137 0.00524 Inf 0.00341 0.0239
#> Absolutely decline 0.0156 0.00598 Inf 0.00386 0.0273...
### Invariance to choice of response categories
Agresti (2012, Section 3.3.3) points out that the regression parameters \(\boldsymbol{\beta}\) in the latent variable model do _not_ depend on the particular way the continuous latent scale is cut by the cut-points \(\theta_{\ell}\). Thus, the effect of the parameters \(\boldsymbol{\beta}\) are independent of the choice of the categories of \(Y\). For instance, the same effect parameters apply for a variable with five consent levels (as in Section 4.1 and Section 4.6 above) or to a response variable with e.g. ten or only three consent levels. This makes it possible to compare model parameters from studies using different scales.
## 5 Advanced topics
To conclude we like to allude to a number of advanced topics which we deem to be important in the context fitting of ordinal models.
### Convergence check
As already mentioned in Section 4.3, the model coefficients are determined numerically. Whereas the summary command shows only a brief outline about model coefficients and convergence, the convergence command from the ordinal packages yields more comprehensive overview and provides additional information about the number of correctly estimated decimals.
### Goodness of fit, model selection
In applications after model fitting typically the question for _goodness of fit_ arises, i.e. "How well does our model predict the present data?". For ordinary linear regression, quantities such as an \(R^{2}\) coefficient of determination, quantifying the amount of data variance is explained by the model, are often stated. As a result, goodness of fit testing is enabled. In a nutshell, a non-significant \(p\)-value of a goodness of fit test indicates that there is no evidence that observed and fitted values/ frequencies do differ in a statistically significant way (thus indicating a reasonable fit). Comparisons of several nested models are possible using criteria such as the Akaike (AIC) or the Bayesian information criterion (BIC) to account for increasing goodness of fit for an increasing number of included parameters, cf. Agresti (2015, Section 4.6).
For ordinal models, similar methods are available, even though the test statistics are often hardly interpretable unlike e.g. an \(R^{2}\) coefficient. Goodness of fit testing in ordinal models is considered by Pulkstenis & Robinson (2004); Fagerland & Hosmer (2016). Corresponding algorithms are implemented in R in the generalhoslem package by Jay (2019). For practical applications, Fagerland & Hosmer (2013) recommend calculation of three different methods, the Lipsitz test, an ordinal version of the Hosmer-Lemeshow test, and the Pulkstenis-Robinson test, to assess goodness of fit, each covering slightly different aspects of the problem. The procedure is presented as follows:
lipsitz.test(clmodel)
#>
#> Lipsitz goodness of fit test for ordinal response
#> models
#>
#>
#> data: formula: response ~ study + sex + partner_age
#> LR statistic = 5.4161, df = 9, p-value = 0.7966
We observe that this test does not show strong evidence against the null hypothesis that the data was generated from the model of Section 4.6, thus indicating that there is no pronounced lack-of-fit.
For the Hosmer-Lemeshow and the Pulkstein-Robinson tests, the assumption of a \(\chi^{2}\) approximation for the frequency distributions in the categories defined by the parameters \(g\) (numbers of response quantiles) or catvars (encoding categorical covariates by which to stratify), respectively, have to be reasonable. The methods shown below throw a warning that the \(\chi^{2}\) approximation might be incorrect due to inbalanced data, i.e. here, not for all combinations of study type and child's sex all response possibilities were observed. Results from these tests have thus to be interpreted with care. For an elaborate discussion about this issue we refer to Fagerland & Hosmer (2013, Section 6) or Hosmer Jr et al. (2013,
Chapter 5).
fit <- predict(clmodel, data %>% dplyr::select(study, sex, partner_age), type='prob')$fit logitof(data$response, fit, g=10, ord = TRUE)
#> Warning in logitof(data$response, fit, g = 10, ord = TRUE): At least one cell in the expected frequencies table is < 1. Chi-square approximation may be incorrect.
#>
#> Hosmer and Lemeshow test (ordinal model)
#>
#> data: data$response, fit
#> X-squared = 22.965, df = 35, p-value = 0.9412
pulkrob.chisq(clmodel, catvars=c('study','sex'))
#>
#> Pulkstenis-Robinson chi-squared test
#>
#> data: formula: response ~ study + sex + partner_age
#> X-squared = 36.129, df = 41, p-value = 0.6866
pulkrob.deviance(clmodel, catvars=c('study','sex'))
#>
#> Pulkstenis-Robinson deviance test
#>
#> data: formula: response ~ study + sex + partner_age
#> Deviance-squared = 39.301, df = 41, p-value = 0.5463
### Model generalisations
Finally, the ordinal package provides a number of generalisations of the ordinal model presented in Section 3. The general form of a cumulative link model (see Equation (5))
can be written as
\[\mathbb{P}(Y\leq\ell\mid\mathbf{x},\mathbf{w},\mathbf{z})=F_{\lambda}\left(\frac{g _{\boldsymbol{\alpha}}(\theta_{\ell})-\mathbf{x}^{\top}\boldsymbol{\beta}- \mathbf{w}^{\top}\widetilde{\boldsymbol{\beta}}_{\ell}}{\exp(\mathbf{z}^{\top} \boldsymbol{\zeta})}\right) \tag{6}\]
where the parameters affect the model as follows:
* \(F_{\lambda}\) is the inverse link function which may be parametrised by a parameter \(\lambda\in\mathbb{R}\). Its inverse \(F_{\lambda}^{-1}\) is also referred to as _flexible link function_.
* Cut-points may be transformed via \(g_{\boldsymbol{\alpha}}(\theta_{\ell})\) to be more _structured_ to reduce the number of model parameters, thus increasing efficiency of the estimators. Typical choices are assumptions of symmetric distribution of the cut-points of around the mean or having equal distances between each other (equidistant cut-points). Unrestricted cut-points (corresponding to \(g_{\boldsymbol{\alpha}}\) to be the identity) are also referred to as _flexible_.
* \(\mathbf{x}^{\top}\boldsymbol{\beta}\) are the ordinary regression effects as in Equation (5).
* Regression effects (or the cut-points, respectively) might be allowed to depend on covariates to include _nominal effects_\(\mathbf{w}^{\top}\widetilde{\boldsymbol{\beta}}_{\ell}\), see Figure 6(a). This allows for more flexibility in the modelling of rating behaviours. Computationally, to include nominal effects one has to pass the corresponding variable to the nominal parameter of clm. Note, however, that parameters included in nominal effects cannot be included as covariates due to identifiability reasons. Models including these nominal effects with logit-link are also called _partial_ or _non-proportional odds_ models, see Peterson & Harrell Jr (1990).
* The variance of the latent variable might be depending on covariates via \(\exp(\mathbf{z}^{\top}\boldsymbol{\zeta})\) (_scale effects_, see Figure 6(b), e.g., reflecting different variances in the rating behaviour depending on group membership. To use scale effects in the clm method, pass variable names to the scale parameter while fitting the model.
A more comprehensive overview is provided in Christensen (2018, Section 2.3ff.) with corresponding remarks concerning implementations in Christensen (2018, Section 4). Whether it is reasonable to include any of these modifications into the model is usually up to the analysing statistician. Models can again be compared using likelihood ratio tests as described in Section 4.4 and Section 4.5, respectively.
Figure 7: Effects of including (a) nominal and (b) scale effects to an ordinal regression model. (a) For every nominal effect we have to estimate another \(L\) (number of response levels) regression parameters, or equivalently \(L\) threshold coefficients \(\tilde{\theta}_{\ell}=\theta_{\ell}-\tilde{\beta}_{\ell}\). (b) Including scale effects changes the variance of the underlying latent variable \(S\).
Conclusion
The tools of the ordinal regression framework provide a method to appropriately analyse ordinally scaled data, particularly with only a few number of response categories. By applying ordinal regression we yield probabilities for each response category. This offers a more differentiated view, e.g., regarding proportions of participants consenting in contrast to considering just a mean score. Moreover, considering data on a latent scale, we have the possibility of testing group differences and marginal effects such as cumulative odds. If survey data is very finely graduated, such as for numeric rating scales or comprehensive quality-of-life questionnaires, the usage of ordinal methods is often limited as the estimated parameters are not that meaningful anymore in contrast to e.g. a mean or median score or, are even not feasible due to a too large number of parameters to be estimated. For the Filippa study, we demonstrated that an ordinal regression analysis has the potential to identify factors influencing the willingness to study participation and to quantify the probability of consenting to the participation in fictional studies with differing levels of invasiveness given a range of demographic variables from the children and their legal representatives.
## Data availability statement
The raw data underlying the presented analyses will be made available by the authors upon reasonable request.
## Conflicts of interest and financial disclosures
The authors have no conflicts of interest to declare. We conducted this research with institutional resources only. Open access charges were funded by the Open Access Publication Funds of the Gottingen University, who had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
|
2305.12376 | Measuring Intersectional Biases in Historical Documents | Data-driven analyses of biases in historical texts can help illuminate the
origin and development of biases prevailing in modern society.
However, digitised historical documents pose a challenge for NLP
practitioners as these corpora suffer from errors introduced by optical
character recognition (OCR) and are written in an archaic language. In this
paper, we investigate the continuities and transformations of bias in
historical newspapers published in the Caribbean during the colonial era (18th
to 19th centuries). Our analyses are performed along the axes of gender, race,
and their intersection. We examine these biases by conducting a temporal study
in which we measure the development of lexical associations using
distributional semantics models and word embeddings. Further, we evaluate the
effectiveness of techniques designed to process OCR-generated data and assess
their stability when trained on and applied to the noisy historical newspapers.
We find that there is a trade-off between the stability of the word embeddings
and their compatibility with the historical dataset. We provide evidence that
gender and racial biases are interdependent, and their intersection triggers
distinct effects. These findings align with the theory of intersectionality,
which stresses that biases affecting people with multiple marginalised
identities compound to more than the sum of their constituents. | Nadav Borenstein, Karolina Stańczak, Thea Rolskov, Natália da Silva Perez, Natacha Klein Käfer, Isabelle Augenstein | 2023-05-21T07:10:31Z | http://arxiv.org/abs/2305.12376v1 | # Measuring Intersectional Biases in Historical Documents
###### Abstract
Data-driven analyses of biases in historical texts can help illuminate the origin and development of biases prevailing in modern society. However, digitised historical documents pose a challenge for NLP practitioners as these corpora suffer from errors introduced by optical character recognition (OCR) and are written in an archaic language. In this paper, we investigate the continuities and transformations of bias in historical newspapers published in the Caribbean during the colonial era (18th to 19th centuries). Our analyses are performed along the axes of gender, race, and their intersection. We examine these biases by conducting a temporal study in which we measure the development of lexical associations using distributional semantics models and word embeddings. Further, we evaluate the effectiveness of techniques designed to process OCR-generated data and assess their stability when trained on and applied to the noisy historical newspapers. We find that there is a trade-off between the stability of the word embeddings and their compatibility with the historical dataset. We provide evidence that gender and racial biases are interdependent, and their intersection triggers distinct effects. These findings align with the theory of intersectionality, which stresses that biases affecting people with multiple marginalised identities compound to more than the sum of their constituents.
+
Footnote †: * Equal contribution.
[https://github.com/copenlu/i](https://github.com/copenlu/i)
ntersectional-bias-pbw
## 1 Introduction
The availability of large-scale digitised archives and modern NLP tools has enabled a number of sociological studies of historical trends and cultures Garg et al. (2018); Kozlowski et al. (2019); Michel et al. (2011). Analyses of historical biases and stereotypes, in particular, can shed light on past societal dynamics and circumstances Levis Sullam et al. (2022) and link them to contemporary challenges and biases prevalent in modern societies Payne et al. (2019). For instance, Payne et al. (2019) consider implicit bias as the cognitive residue of past and present structural inequalities and highlight the critical role of history in shaping modern forms of prejudice.
Thus far, previous research on bias in historical documents focused either on gender Rios et al. (2020); Wevers (2019) or ethnic biases Levis Sullam et al. (2022). While Garg et al. (2018) separately analyse both, their work does not engage with their intersection. Yet, in the words of Crenshaw (1995), intersectional perspective is important because "the intersection of racism and sexism factors into black women's lives in ways that cannot be captured wholly by looking separately at the race or gender dimensions of those experiences."
Analysing historical documents poses particular challenges for modern NLP tools Borenstein et al. (
Figure 1: PMI analysis of our historical corpora. Words are placed on the intersectional gender/race plane.
2023; Ehrmann et al., 2020). Misspelt words due to wrongly recognised characters in the digitisation process, and archaic language unknown to modern NLP models, i.e. historical variant spellings and words that became obsolete in the current language, increase the task's complexity (Bollmann, 2019; Linhares Pontes et al., 2019; Piotrowski, 2012). However, while most previous work on historical NLP acknowledges the unique nature of the task, only a few address them within their experimental setup.
In this paper, we address the shortcomings of previous work and make the following contributions: (1) To the best of our knowledge, this paper presents the first study of historical language associated with entities at the intersections of two axes of oppression: race and gender. We study biases associated with identified entities on a word level, and to this end, employ distributional models and analyse semantics extracted from word embeddings trained on our historical corpora. (2) We conduct a temporal case study on historical newspapers from the Caribbean in the colonial period between 1770-1870. During this time, the region suffered both the consequences of European wars and political turmoil, as well as several uprisings of the local enslaved populations, which had a significant impact on the Caribbean social relationships and cultures (Migge and Muehleisen, 2010). (3) To address the challenges of analysing historical documents, we probe the applied methods for their stability and ability to comprehend the noisy, archaic corpora.
We find that there is a trade-off between the stability of word embeddings and their compatibility with the historical dataset. Further, our temporal analysis connects changes in biased word associations to historical shifts taking place in the period. For instance, we couple the high association between _Caribbean countries_ and "manual labour" prevalent mostly in the earlier time periods to waves of white labour migrants coming to the Caribbean from 1750 onward. Finally, we provide evidence supporting the intersectionality theory by observing conventional manifestations of gender bias solely for white people. While unsurprising, this finding necessitates intersectional bias analysis for historical documents.
## 2 Related Work
Intersectional Biases.Most prior work has analysed bias along one axis, e.g. race or gender, but not both simultaneously (Field et al., 2021; Stanczak and Augenstein, 2021). There, research on racial biases is generally centred around the gender majority group, such as Black men, while research on gender bias emphasises the experience of individuals who hold racial privilege, such as white women. Therefore, discrimination towards people with multiple minority identities, such as Black women, remains understudied. Addressing this, the intersectionality framework (Crenshaw, 1989) investigates how different forms of inequality, e.g. gender and race, intersect with and reinforce each other. Drawing on this framework, Tan and Celis (2019); May et al. (2019); Lepori (2020); Maronikolakis et al. (2022); Guo and Caliskan (2021) analyse the compounding effects of race and gender encoded in contextualised word representations and downstream tasks. Recently, Lalor et al. (2022); Jiang and Fellbaum (2020) show the harmful implications of intersectionality effects in pre-trained language models. Less interest has been dedicated to unveiling intersectional biases prevalent in natural language, with a notable exception of Kim et al. (2020) which provide evidence on intersectional bias in datasets of hate speech and abusive language on social media. As far as we know, this is the first paper on intersectional biases in historical documents.
Bias in Historical Documents.Historical corpora have been employed to study societal phenomena such as language change (Kutuzov et al., 2018; Hamilton et al., 2016) and societal biases. Gender bias has been analysed in biomedical research over a span of 60 years (Rios et al., 2020), in English-language books published between 1520 and 2008 (Hoyle et al., 2019), and in Dutch newspapers from the second half of the 20th century (Wevers, 2019). Levis Sullam et al. (2022) investigate the evolution of the discourse on Jews in France during the 19th century. Garg et al. (2018) study the temporal change in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the US. However, they neglect the emergent intersectionality bias.
When analysing the transformations of biases in historical texts, researchers rely on conventional tools developed for modern language. However, historical texts can be viewed as a separate domain due to their unique challenges of small and idiosyncratic corpora and noisy, archaic text (Piotrowski, 2012). Prior work has attempted to over
come the challenges such documents pose for modern tools, including recognition of spelling variations Bollmann (2019) and misspelt words Boros et al. (2020), and ensuring the stability of the applied methods Antoniak and Mimno (2018).
We study the dynamics of intersectional biases and their manifestations in language while addressing the challenges of historical data.
## 3 Datasets
Newspapers are considered an excellent source for the study of societal phenomena since they function as transceivers - both producing and demonstrating public discourse Wevers (2019). As part of this study, we collect newspapers written in English from the "Caribbean Newspapers, 1718-1876" database,1 the largest collection of Caribbean newspapers from the 18th-19th century available online. We extend this dataset with English-Danish newspapers published between 1770-1850 in the Danish colony of Santa Cruz (Saint Croix) downloaded from Danish Royal Library's website.2 See Tab 1 and Fig 8 (in App A.1) for details.
Footnote 1: [https://www.readex.com/products/caribbean-newspapers-series-1-1718-1876-american-antiquarian-society](https://www.readex.com/products/caribbean-newspapers-series-1-1718-1876-american-antiquarian-society)
As mentioned in SS1, the Caribbean islands experienced significant changes and turmoils during the 18th-19th century. Although chronologies can change from island to island, key moments in Caribbean history can be divided into roughly four periods Higman (2021); Heuman (2018): 1) colonial trade and plantation system (1718 to 1750); 2) international conflicts and slave rebellions (1751 to 1790); 3) revolutions and nation building (1791 to 1825); 4) end of slavery and decline of European dominance (1826 to 1876). In our experimental setup, we conduct a temporal study on data split into these periods (see Tab 2 for the number of articles in each period). As the resulting number of newspapers for the first period is very small (\(<10\)), we focus on the three latter periods.
Data Preprocessing.Starting with scans of entire newspaper issues (Fig 2.a), we first OCR them using the popular software Tesseract3 with default parameters and settings. We then clean the dataset by applying the DataMunging package,4 which uses a simple rule-based approach to fix basic OCR errors (e.g. long s' being OCRed as f', (Fig 2.b)). As some of the newspapers downloaded from the Danish royal library contain Danish text, we use spaCy5 to tokenise the OCRed newspapers into sentences and the python package langdetect6 to filter out non-English sentences.
Footnote 3: [https://github.com/tesseract-ocr/teseract](https://github.com/tesseract-ocr/teseract)
Footnote 4: [https://github.com/tedunderwood/DataMunging](https://github.com/tedunderwood/DataMunging)
Footnote 5: [https://spacy.io/](https://spacy.io/)
Footnote 6: [https://github.com/Mimino666/langdetect](https://github.com/Mimino666/langdetect)
## 4 Bias and its Measures
Biases can manifest themselves in natural language in many ways (see the surveys by Stanczak and Augenstein (2021); Field et al. (2021); Lalor et al. (2022)). In the following, we state the definition of bias we follow and describe the measures we use to quantify it.
\begin{table}
\begin{tabular}{l r r} \hline \hline Source & \#Files & \#Sentences \\ \hline Caribbean Project & \(7\,487\) & \(5\,224\,591\) \\ Danish Royal Library & \(5\,661\) & \(657\,618\) \\ \hline Total & \(13\,148\) & \(5\,882\,209\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the newspapers dataset.
\begin{table}
\begin{tabular}{l l r r} \hline \hline Period & Decade & \(\#\)Issues & Total \\ \hline International & 1710–1770 & 15 & \\ conflicts & 1770s & 747 & \\ and slave & 1780s & 283 & \\ rebellions & 1790s & 841 & \\ \hline Revolutions & 1800s & 604 & \\ and nation & 1810s & \(1\,347\) & \(3\,790\) \\ building & 1820s & \(1\,839\) & \\ \hline \multirow{3}{*}{Abolishment of slavery} & 1830s & \(1\,838\) & \\ & 1840s & \(1\,197\) & \\ \cline{1-1} of slavery & 1850s & \(1\,111\) & \(7\,453\) \\ \cline{1-1} & 1860s & \(1\,521\) & \\ \cline{1-1} & 1870s & \(1\,786\) & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Total number of articles in each period and decade.
Figure 2: An example of a scanned newspaper (a) and the output of the OCR tool Tesseract (b). We fix simple OCR errors (highlighted) using a rule-based approach.
### Definition
Language is known to reflect common perceptions of the world (Hitti et al., 2019) and differences in its usage have been shown to reflect societal biases (Hoyle et al., 2019; Marjanovic et al., 2022). In this paper, we define bias in a text as the use of words or syntactic constructs that connote or imply an inclination or prejudice against a certain sensitive group, following the bias definition as in Hitti et al. (2019). To quantify bias under this definition, we analyse word embeddings trained on our historical corpora. These representations are assumed to carry lexical semantic meaning signals from the data and encode information about language usage in the proximity of entities. However, even words that are not used as direct descriptors of an entity influence its embedding, and thus its learnt meaning. Therefore, we further conduct an analysis focusing exclusively on words that describe identified entities.
### Measures
**WEAT** The Word Embedding Association Test (Caliskan et al., 2017) is arguably the most popular benchmark to assess bias in word embeddings and has been adapted in numerous research (May et al., 2019; Rios et al., 2020). WEAT employs cosine similarity to measure the association between two sets of attribute words and two sets of target concepts. Here, the attribute words relate to a sensitive attribute (e.g. male and female), whereas the target concepts are composed of words in a category of a specific domain of bias (e.g. career- and family-related words). For instance, the WEAT statistic informs us whether the learned embeddings representing the concept of \(family\) are more associated with females compared to males. According to Caliskan et al. (2017), the differential association between two sets of target concept embeddings, denoted \(X\) and \(Y\), with two sets of attribute embeddings, denoted as \(A\) and \(B\), can be calculated as:
\[s(X,Y,A,B)=\sum_{x\in X}\text{s}(x,A,B)-\sum_{y\in Y}\text{s}(y,A,B)\]
where \(s(w,A,B)\) measures the embedding association between one target word \(w\) and each of the sensitive attributes:
\[s(w,A,B)=\underset{a\in A}{\text{mean}}[\text{cos}(w,a)]-\underset{b\in B}{ \text{mean}}[\text{cos}(w,b)]\]
The resulting effect size is then a normalised measure of association:
\[d=\frac{\underset{x\in X}{\text{mean}}[\text{s}(x,A,B)]-\underset{y\in Y}{ \text{mean}}[\text{s}(y,A,B)]}{\underset{w\in X\cup Y}{\text{std}}[\text{s}( w,A,B)]}\]
As a result, larger effect sizes imply a more biased word embedding. Furthermore, concept-related words should be equally associated with either sensitive attribute group assuming an unbiased word embedding.
**PMI** We use point-wise mutual information (PMI; Church and Hanks 1990) as a measure of association between a descriptive word and a sensitive attribute (gender or race). In particular, PMI measures the difference between the probability of the co-occurrence of a word and an attribute, and their joint probability if they were independent as:
\[\text{PMI}(a,w)=\log\frac{p(a,w)}{p(a)p(w)} \tag{1}\]
A strong association with a specific gender or race leads to a high PMI. For example, a high value for \(\text{PMI}(female,wife)\) is expected due to their co-occurrence probability being higher than the independent probabilities of \(female\) and \(wife\). Accordingly, in an ideal unbiased world, words such as \(honourable\) would have a PMI of approximately zero for all gender and racial identities.
## 5 Experimental Setup
We perform two sets of experiments on our historical newspaper corpus. First, before we employ word embeddings to measure bias, we investigate the stability of the word embeddings trained on our dataset and evaluate their understanding of the noisy nature of the corpora. Second, we assess gender and racial biases using tools defined in SS4.2.
### Embedding Stability Evaluation
We use word embeddings as a tool to quantify historical trends and word associations in our data. However, prior work has called attention to the lack of stability of word embeddings trained on small and potentially idiosyncratic corpora (Antoniak and Mimno, 2018; Gonen et al., 2020). We compare these different embeddings setups by testing them with regard to their stability and capturing meaning while controlling for the tokenisation algorithm, embedding size and the minimum number of occurrences.
We construct the word embeddings employing the continuous skip-gram negative sampling model from Word2vec Mikolov et al. (2013) using gensim.7 Following prior work Antoniak and Mimno (2018); Gonen et al. (2020), we test two common vector dimension sizes of 100 and 300, and two minimum numbers of occurrences of 20 and 100. The rest of the hyperparameters are set to their default value. We use two different methods for tokenising documents, the spaCy tokeniser and a subword-based tokeniser, Byte-Pair Encoding BPE Gage (1994)). We train the BPE tokeniser on our dataset using the Hugging Face tokeniser implementation.8
Footnote 7: [https://radimrehurek.com/gensim/mode](https://radimrehurek.com/gensim/mode) ls/word2vec.html
Footnote 8: [https://huggingface.co/docs/tokenize](https://huggingface.co/docs/tokenize) rs
For each word in the vocabulary, we identify its 20 nearest neighbours and calculate the Jaccard similarity across five algorithm runs. Next, we test how well the word embeddings deal with the noisy nature of our documents. We create a list of 110 frequently misspelt words (See App A.2). We construct the list by first tokenising our dataset using spaCy and filtering out proper nouns and tokens that appear in the English dictionary. We then order the remaining tokens by frequency and manually scan the top \(1\,000\) tokens for misspelt words. We calculate the percentage of words (averaged across 5 runs) for which the misspelt word is in immediate proximity to the correct word (top 5 nearest neighbours in terms of cosine similarity).
Based on the results of the stability and compatibility study, we select the most suitable model with which we conduct the following bias evaluation.
### Bias Estimation
#### 5.2.1 WEAT Evaluation
As discussed in SS4.2, WEAT is used to evaluate how two attributes are associated with two target concepts in an embedding space, here of the model that was selected by the method described in SS5.1.
In this work, we focus on the attribute pairs (_female_, _male_)9 and (_white_, _non-white_). Usually, comparing the sensitive attributes (_white_, _non-white_) is done by collecting the embedding of popular white names and popular non-white names Tan and Celis (2019). However, this approach can introduce noise when applied to our dataset Handler and Jacoby (1996). First, non-whites are less likely to be mentioned by name in historical newspapers compared to whites. Second, popular non-white names of the 18th and 19th centuries differ substantially from popular non-white names of modern times, and, to the best of our knowledge, there is no list of common historical non-white names. For these reasons, instead of comparing the pair (_white_, _non-white_), we compare the pairs (_African countries_, _European countries_) and (_Caribbean countries_, _European countries_).
Footnote 9: As we deal with historical documents from the 18th-19th centuries, other genders are unlikely to be found in the data.
Following Rios et al. (2020), we analyse the association of the above-mentioned attributes to the target concepts (_career_, _family_), (_strong_, _weak_), (_intelligence_, _appearance_), and (_physical illness_, _mental illness_). Following a consultation with a historian, we add further target concepts relevant to this period (_manual labour_, _non-manual labour_) and (_crime_, _lawfulness_). Tab 6 (in App A.3) lists the target and attribute words we use for our analysis.
We also train a separate word embedding model on each of the dataset splits defined in SS3 and run WEAT on the resulting three models. Comparing the obtained WEAT scores allows us to visualise temporal changes in the bias associated with the attributes and understand its dynamics.
#### 5.2.2 PMI Evaluation
Different from WEAT, calculating PMI requires first identifying entities in the OCRed historical newspapers and then classifying them into predefined attribute groups. The next step is collecting descriptors, i.e. words that are used to describe the entities. Finally, we use PMI to measure the association strength of the collected descriptors with each attribute group.
Entity Extraction.We apply F-coref Otmazgin et al. (2022), a model for English coreference resolution that simultaneously performs entity extraction and coreference resolution on the extracted entities. The model's output is a set of entities, each represented as a list of all the references to that entity in the text. We filter out non-human entities by using nltk's WordNet package,10 retaining only entities for which the synset "person.n1" is a hypernym of one of their references.
Footnote 10: [https://www.nltk.org/howto/wordnet.html](https://www.nltk.org/howto/wordnet.html)
Entity Classification.We use a keyword-based approach Lepori (2020) to classify the entities into groups corresponding to the gender and race axes
and their intersection. Specifically, we classify each entity as being a member of _male_ vs _female_, and _white_ vs _non-white_. Additionally, entities are classified into intersectional groups (e.g. we classify an entity into the group _non-white females_ if it belongs to both _female_ and _non-white_).
Formally, we classify an entity \(e\) with references \(\{r_{e}^{1},...,r_{e}^{m}\}\) to attribute group \(G\) with keyword-set \(K_{G}=\{k_{1},...,k_{n}\}\) if \(\exists i\) such that \(r_{e}^{i}\in K_{G}\). See App A.3 for listing the keyword sets of the different groups. In Tab 3, we present the number of entities classified into each group. We note here the unbalanced representation of the groups in the dataset. Further, it is important to state, that because it is highly unlikely that an entity in our dataset would be explicitly described as white, we classify an entity into the _whites_ group if it was not classified as _non-white_. See Limitations for a discussion of the limitations of using a keyword-based classification approach.
To evaluate our classification scheme, an author of this paper manually labelled a random sample of 56 entities. The keyword-based approach assigned the correct gender and race label for \(\sim 80\%\) of the entities. See additional details in Tab 7 in App B. From a preliminary inspection, it appears that many of the entities that were wrongly classified as _female_ were actually ships or other vessels (traditionally "ship" has been referred to using female gender). As F-coref was developed and trained using modern corpora, we evaluate its accuracy on the same set of 56 entities. Two authors of this paper validated its performance on the historical data to be satisfactory, with especially impressive results on shorter texts with fewer amount of OCR errors.
Descriptors Collection.Finally, we use spaCy to collect descriptors for each classified entity. Here, we define the descriptors as the lemmatised form of tokens that share a dependency arc labelled "amod" (i.e. adjectives that describe the tokens) to one of the entity's references. Every target group \(G_{j}\) is then assigned with descriptors list \(D_{j}=[d_{1},...,d_{k}]\).
To calculate PMI according to Eq (1), we estimate the joint distribution of a target group and a descriptor using a simple plug-in estimator:
\[\widehat{p}(G_{j},d_{i})\propto\mathrm{count}(G_{j},d_{i}) \tag{2}\]
Now, we can assign every word \(d_{i}\) two continuous values representing its bias in the gender and race dimensions by calculating \(\mathrm{PMI}(\textit{female},d_{i})-\mathrm{PMI}(\textit{males},d_{i})\) and \(\mathrm{PMI}(\textit{non-white},d_{i})-\mathrm{PMI}(\textit{white},d_{i})\). These two continuous values can be seen as \(d_{i}\)'s coordinates on the intersectional gender/race plane.
#### 5.2.3 Lexicon Evaluation
Another popular approach for quantifying different aspects of bias is the application of specialised lexica Stanczak and Augenstein (2021). These lexica assign words a continuous value that represents how well the word aligns with a specific dimension of bias. We use NRC-VAD lexicon Mohammad (2018) to compare word usage associated with the sensitive attributes _race_ and _gender_ in three dimensions: _dominance_ (strength/weakness), _valence_ (goodness/badness), and _arousal_ (activeness/passiveness of an identity). Specifically, given a bias dimension \(\mathcal{B}\) with lexicon
\(\{(w_{1},a_{1}),...,(w_{n},a_{n})\}\), where \((w_{i},a_{i})\) are word-value pairs, we calculate the association of \(\mathcal{B}\) with a sensitive attribute \(G_{j}\) using:
\[A(\mathcal{B},G_{j})=\frac{\sum_{i}^{n}a_{i}\cdot\text{count}(w_{i},D_{j})}{\sum _{i}^{n}\text{count}(w_{i},D_{j})} \tag{3}\]
where count\((w_{i},D_{j})\) is the number of times the word \(w_{i}\) appears in the descriptors list \(D_{j}\).
## 6 Results
First, we investigate which training strategies of word embeddings optimise their stability and compatibility on historical corpora (SS6.1). Next, we analyse how bias is manifested along the gender and racial axes and whether there are any noticeable differences in bias across different periods of the Caribbean history (SS6.2).
### Embedding Stability Evaluation
In Tab 4, we present the results of the study on the influence of training strategies of word embeddings. We find that there is a trade-off between the stability of word embeddings and their compatibility with the dataset. While BPE achieves a higher Jaccard similarity across the top 20 nearest neighbours for each word across all runs, it loses the meaning of misspelt words. Interestingly, this phenomenon arises, despite the misspelt words occurring frequently enough to be included in the BPE model's vocabulary.
For the remainder of the experiments, we aim to select a model which effectively manages this
Figure 4: Temporal WEAT analysis conducted for the periods 1751–1790 (rebellions), 1791–1825 (revolutions) and 1826–1876 (abolishment). Similar to Fig 3, the height of each bar represents how strong the association of the attribute is with each concept.
Figure 3: a) WEAT results of _females_ vs _males_. The location of a marker measures the association strength of _females_ with the concept (compared to _males_). For example, according to the modern model, _females_ are associated with “weak” and _non-manual labour_ while _males_ are associated with “strong” and _manual labour_. b) WEAT results of _Caribbean countries_ vs _European countries_. The location of a marker measures the association strength of _Caribbean countries_ with the concept (compared to _European countries_).
trade-off achieving both high stability and captures meaning despite the noisy nature of the underlying data. Thus, we opt to use a spaCy-based embedding with a minimum number of occurrences of 20 and an embedding size of 100 which achieves competitive results in both of these aspects. Finally, we note that our results remain stable across different algorithm runs and do not suffer from substantial variations which corroborates the reliability of the findings we make henceforth.
### Bias Estimation
#### 6.2.1 WEAT Analysis
Fig 3 displays the results of performing a WEAT analysis for measuring the association of the six targets described in SS5.2 with the attributes (_females_, _males_) and (_Caribbean countries_, _European countries_), respectively.11 We calculate the WEAT score using the embedding model from SS6.1 and compare it with an embedding model trained on modern news corpora (word2vec-google-news-300, Mikolov et al. (2013a)). We notice interesting differences between the historical and modern embeddings. For example, while in our dataset _females_ are associated with the target concept of _manual labour_, this notion is more aligned with _males_ in the modern corpora. A likely cause is that during this period, womens' intellectual and administrative work was not commonly recognised (Wayne, 2020). It is also interesting to note that the attribute _Caribbean countries_ has a much stronger association in the historical embedding with the target _career_ (as opposed to _family_) compared to the modern embeddings. A possible explanation is that Caribbean newspapers referred to locals by profession or similar titles, while Europeans were referred to as relatives of the Caribbean population.
Footnote 11: See Fig 9 in App B for analysis of the attributes (_African countries_, _European countries_).
In Fig 4 and Fig 10 (in App B), we present a dynamic WEAT analysis that unveils trends on a temporal axis. In particular, we see an increase in the magnitude of association between the target of _family_ vs _career_ and the attributes (_females_, _males_) and (_Caribbean countries_, _European countries_) over time. It is especially interesting to compare Fig 3 with Fig 4. One intriguing result is that the high association between _Caribbean countries_ and _manual labour_ can be attributed to the earlier periods. This finding is potentially related to several historical shifts taking place in the period. For instance, while in the earlier years, it was normal for plantation owners to be absentees and continue to live in Europe, from 1750 onward, waves of white migrants with varied professional backgrounds came to the Caribbean.
#### 6.2.2 PMI Analysis
We report the results of the intersectional PMI analysis in Fig 1. As can be seen, an intersectional analysis can shed a unique light on the biased nature of some words in a way that single-dimensional analysis cannot. _White males_ are "brave" and "ingenious", and _non-white males_ are described as "active" and "tall". Interestingly, while words such as "pretty" and "beautiful" (and peculiarly, "murdered") are biased towards _white_ as opposed to _non-white females_, the word "lovely" is not, whereas "elderly" is strongly aligned with _non-white females_. Another intriguing dichotomy is the word pair "sick" and "blind" which are both independent
Figure 5: Intersectional PMI analysis of “free”, “celebrated”, “deceased” and “poor” across the periods.
Figure 6: Association of attributes with the lexicon of dominance, valence, and arousal.
along the gender axis but manifest a polar racial bias. In Tab 8 in App B, we list some examples from our dataset featuring those words.
Similarly to SS6.2.1, we perform a temporal PMI analysis by comparing results obtained from separately analysing the three dataset splits. In Fig 5, we follow the trajectory over time of the biased words "free", "celebrated", "deceased" and "poor". Each word displays different temporal dynamics. For example, while the word "free" moved towards the _male_ attribute, "poor" transitioned to become more associated with the attributes _female_ and _non-white_ over time (potentially due to its meaning change from an association with poverty to a pity).
These results provide evidence for the claims of the intersectionality theory. We observe conventional manifestations of gender bias, i.e. "beautiful" and "pretty" for _white females_, and "ingenious" and "brave" for _white males_. While unsurprising due to the societal status of non-white people in that period, this finding necessitates intersectional bias analysis for historical documents in particular.
#### 6.2.3 Lexicon Evaluation
Finally, we report the lexicon-based evaluation results in Fig 6 and Fig 7. Unsurprisingly, we observe lower dominance levels for the _non-white_ and _female_ attributes compared to _white_ and _male_, a finding previously uncovered in modern texts (Field and Tsvetkov, 2019; Rabinovich et al., 2020). While Fig 7 indicates that the level of dominance associated with these attributes raised over time, a noticeable disparity to white males remains. Perhaps more surprising is the valence dimension. We see the highest and lowest levels of associations with the intersectional attributes _non-white female_ and _non-white male_, respectively. We hypothesise that this connects to the nature of advertisements for lending the services of or selling non-white women where being agreeable is a valuable asset.
## 7 Conclusions
In this paper, we examine biases present in historical newspapers published in the Caribbean during the colonial era by conducting a temporal analysis of biases along the axes of gender, race, and their intersection. We evaluate the effectiveness of different embedding strategies and find a trade-off between the stability and compatibility of word representations on historical data. We link changes in biased word usage to historical shifts, coupling the development of the association between _manual labour_ and _Caribbean countries_ to waves of white labour migrants coming to the Caribbean from 1750 onward. Finally, we provide evidence to corroborate the intersectionality theory by observing conventional manifestations of gender bias solely for white people.
## Limitations
We see several limitations regarding our work. First, we focus on documents in the English language only, neglecting many Caribbean newspapers and islands with other official languages. While some of our methods can be easily extended to non-English material (e.g. WEAT analysis), methods that rely on the pre-trained English model F-coref (i.e. PMI, lexicon-based analysis) can not.
On the same note, F-coref and spaCy were developed and trained using modern corpora, and their capabilities when applied to the noisy historical newspapers dataset, are noticeably lower compared to modern texts. Contributing to this issue is the unique, sometimes archaic language in
Figure 7: Association of attributes with the lexicon of dominance, valence, and value done on the periods 1751–1790 (rebellions), 1791–1825 (revolutions) and 1826–1876 (abolishment).
which the newspapers were written. While we validate F-coref performance on a random sample (SS5.2), this is a significant limitation of our work. Similarly, increased attention is required to adapt the keyword sets used by our methods to historical settings.
Moreover, our historical newspaper dataset is inherently imbalanced and skewed. As can be seen in Tab 2 and Fig 8, there is an over-representation of a handful of specific islands and time periods. While it is likely that in different regions and periods, less source material survived to modern times, part of the imbalance (e.g. the prevalence of the US Virgin Islands) can also be attributed to current research funding and policies.12 Compounding this further, minority groups are traditionally under-represented in news sources. This introduces noise and imbalance into our results, which rely on a large amount of textual material referring to each attribute on the gender/race plane that we analyse.
Footnote 12: The Danish government has recently funded a campaign for the digitisation of historical newspapers published in the Danish colonies; [https://stcroixsource.com/2017/03/01/](https://stcroixsource.com/2017/03/01/).
Relating to that, our keyword-based method of classifying entities into groups corresponding to the gender and race axes is limited. While we devise a specialised keyword set targeting the attributes _female_, _male_ and _non-white_, we classify an entity into the _white_ group if it was not classified as _non-white_. This discrepancy is likely to introduce noise into our evaluation, as can also be observed in Tab 7. This tendency may be intensified by the NLP systems that we use, as many tend to perform worse on gender- and race-minority groups (Field et al., 2021).
Finally, in this work, we explore intersectional bias only along the race and gender axes. Thus, we neglect the effects of other confounding factors (e.g. societal position, occupation) that affect asymmetries in language.
## Ethical Considerations
Studying historical texts from the era of colonisation and slavery poses ethical issues to historians and computer scientists alike since vulnerable groups still suffer the consequences of this history in the present. Indeed, racist and sexist language is not only a historical artefact of bygone days but has a real impact on people's lives (Alim et al., 2020).
We note that the newspapers we consider for this analysis were written foremost by the European suppressors. Moreover, only a limited number of affluent people (white males) could afford to place advertisements in those newspapers (which constitute a large portion of the raw material). This skews our study toward language used by privileged individuals and their perceptions.
This work aims to investigate racial and gender biases, as well as their intersection. Both race and gender are considered social constructs and can encompass a range of perspectives, including one's reflected, observed, or self-perceived identity. In this paper, we classify entities as observed by the author of an article and infer their gender and race based on the pronouns and descriptors used in relation to this entity. We follow this approach in an absence of explicit demographic information. However, we warn that this method poses a risk of misclassification. Although the people referred to in the newspapers are no longer among the living, we should be considerate when conducting studies addressing vulnerable groups.
Finally, we use the mutually exclusive _white_ and _non-white_ race categories as well as _male_ and _female_ gender categories. We acknowledge that these groupings do not fully capture the nuanced nature of bias. This decision was made due to limited data discussing minorities in our corpus. While gender identities beyond the binary are unlikely to be found in the historical newspapers from the 18th-19th century, future work will aim to explore a wider range of racial identities.
|
2310.19095 | On a class of algebro-geometric solutions to the Ernst equation | We discuss a class of solutions to the Ernst equation in terms of theta
functions with characteristics. We show that it is necessary to take into
account a phase factor, which arises from a shift by a lattice vector, and
impose conditions on the characteristics of the theta functions in order for
the presented function to solve the Ernst equation for all the considered
parameters. | Eddy B. de Leon | 2023-10-29T17:53:15Z | http://arxiv.org/abs/2310.19095v1 | # On a class of algebro-geometric solutions to the Ernst equation
###### Abstract.
We discuss a class of solutions to the Ernst equation in terms of theta functions with characteristics. We show that it is necessary to take into account a phase factor, which arises from a shift by a lattice vector, and impose conditions on the characteristics of the theta functions in order for the presented function to solve the Ernst equation for all the considered parameters.
This work was partially supported by the EIPHI Graduate School (contract ANR-17-EURE-0002)the Bourgogne Franche-Comte Region, the European fund FEDER, and by the European Union Horizon 2020 research and innovation program under the Marie Sklodowska-Curie RISE 2017 grant agreement no. 778010 IPaDEGAN. The author thanks D. Korotkin for helpful discussions and hints.
It is the purpose of this note to show that in terms of the complex variable \(\xi=\zeta+\mathrm{i}\rho\) (where \(\zeta\) and \(\rho\) are the physical coordinates), the potential
\[\mathcal{E}(\xi,\bar{\xi})=e^{-\pi\mathrm{i}\sum_{j}p_{j}}\frac{\Theta_{\mathbf{ pq}}\left(\int_{\xi}^{\infty^{+}},\mathbb{B}_{\xi}\right)}{\Theta_{\mathbf{ pq}}\left(\int_{\xi}^{\infty^{-}},\mathbb{B}_{\xi}\right)}, \tag{2}\]
solves the Ernst equation (thus an Ernst potential), where \(\int_{\xi}^{\infty^{\pm}}\) and \(\mathbb{B}_{\xi}\) are quantities parametrized by \(\xi\) and \(\Theta_{\mathbf{pq}}\) is the multi-dimensional theta function (10) with fixed arbitrary characteristics \(\mathbf{p}\in\mathbb{R}^{g}\) and \(\mathbf{q}\in\mathbb{C}^{g}\), whose components satisfy the reality conditions
\[\Re(q_{j})=\left\{\begin{array}{ll}\frac{1}{2}\sum_{k\neq j}p_{k},&\text{if }E_ {j}=\bar{F}_{j},\\ -\frac{1}{4}+\frac{1}{2}\sum_{k}p_{k},&\text{if }E_{j},F_{j}\in\mathbb{R}, \end{array}\right. \tag{3}\]
where \(E_{j}\), \(F_{j}\) are the branch points of the defining hyperelliptic curves (7). These terms are properly defined in Section 2. There are two differences with respect to the potential presented in [4]. First, it is necessary to include a phase factor, which arises upon shifting the argument of the theta functions by a lattice vector when the characteristic \(\mathbf{p}\) is different from zero. Second, the reality conditions on the characteristics had to be modified to (3). However, the class of solutions (2) coincides with the one presented in [4] when \(\sum_{j}p_{j}\) is an integer and all the branch cuts are of the form \(E_{j}=\bar{F}_{j}\).
This note is organized as follows. In Section 2, we recall some basic facts regarding the Ernst equation, hyperelliptic curves and theta functions, as well as Fay's identity. In Section 3, we show that the potential (2) solves the Ernst equation by using the approach presented in [4].
## 2. Preliminaries
### Ernst equation
The line element of a stationary axisymmetric vacuum spacetime in the Weyl-Lewis-Papapetrou form reads
\[ds^{2}=-f(dt+Ad\varphi)^{2}+f^{-1}\left[e^{2k}(d\zeta^{2}+d\rho^{2})+\rho^{2}d \varphi^{2}\right], \tag{4}\]
where \(k=k(\rho,\zeta)\), \(A=A(\rho,\zeta)\), \(f=f(\rho,\zeta)\). The Einstein equations in these coordinates are equivalent to the Ernst equation together with the relation \(f=\Re(\mathcal{E})\) and the quadratures of
\[A_{\xi}=2\rho\frac{(\mathcal{E}-\overline{\mathcal{E}})_{\xi}}{(\mathcal{E}+ \overline{\mathcal{E}})^{2}},\quad k_{\xi}=2\mathrm{i}\rho\frac{\mathcal{E}_{ \xi}\mathcal{E}_{\bar{\xi}}}{(\mathcal{E}+\overline{\mathcal{E}})^{2}}. \tag{5}\]
It is convenient to consider the Ernst equation in the complex coordinates, in which it takes the form
\[\left(\mathcal{E}+\overline{\mathcal{E}}\right)\left(\mathcal{E}_{\xi\bar{ \xi}}-\frac{1}{2(\xi-\xi)}\left(\mathcal{E}_{\bar{\xi}}-\mathcal{E}_{\xi} \right)\right)=2\mathcal{E}_{\xi}\mathcal{E}_{\bar{\xi}}, \tag{6}\]
### Hyperelliptic curves
Recall that the period matrix \(\mathbb{B}\) of any smooth algebraic curve \(\mathcal{L}\) is defined as the matrix with components \(\mathbb{B}_{jk}=\int_{b_{j}}\omega_{j}\), where \(\{a_{1},b_{1},...,a_{g},b_{g}\}\) is a canonical basis of the first homology group and \(\{\omega_{1},...,\omega_{g}\}\) is a basis of holomorphic differentials normalized with respect to the \(a_{j}\) cycles, i.e., \(\int_{a_{j}}\omega_{k}=\delta_{jk}\). The period matrix \(\mathbb{B}\) is known to be complex symmetric and with positive definite imaginary part. This matrix defines the full-rank lattice \(\Lambda_{\mathbb{B}}=\mathbb{Z}^{g}+\mathbb{B}\mathbb{Z}^{g}\) in \(\mathbb{C}^{g}\).
An algebraic curve \(\mathcal{L}\) defines the Abel map \(\mathcal{A}:p\mapsto\int_{p_{0}}^{p}\omega\), for a base point \(p_{0}\in\mathcal{L}\). Although \(\mathcal{A}(p)\) depends on the path, it is unique in \(\mathbb{C}^{g}\) modulo \(\Lambda_{\mathbb{B}}\). For simplicity, we use the notation \(\int_{p_{1}}^{p_{2}}:=\mathcal{A}(p_{2})-\mathcal{A}(p_{1})\). Notice that the difference is independent of the base point \(p_{0}\).
Consider the family of hyperelliptic curves
\[\mathcal{L}_{\xi}=\{(x,y)\in\mathbb{C}^{2}|y^{2}=(x-\xi)(x-\bar{\xi})\prod_{j= 1}^{g}(x-E_{j})(x-F_{j})\}, \tag{7}\]
parametrized by \(\xi\in\mathbb{C}\), with distinct branch points \(E_{j},F_{j}\in\mathbb{C}\) and the pairwise condition that either \(E_{j}=\bar{F}_{j}\) or \(E_{j},F_{j}\in\mathbb{R}\).
The basis cycles \(\{a_{j},b_{j}\}\) of the first homology group are chosen such that the action of the holomorphic involution \(\tau(x,y)=(\bar{x},\bar{y})\) on them is
\[\begin{split}\tau(a_{j})&=-a_{j},\quad\text{for }j=1, \ldots,g,\\ \tau(b_{j})&=\left\{\begin{array}{ll}b_{j}+\sum_{k \neq j}a_{k},&\text{if }E_{j}=\bar{F}_{j},\\ b_{j}+\sum_{k}a_{k},&\text{if }E_{j},F_{j}\in\mathbb{R}.\end{array}\right.\end{split} \tag{8}\]
This guarantees that the real part of the period matrix \(\mathbb{B}_{\xi}\) of \(\mathcal{L}_{\xi}\) will have half-integer coefficients independent of \(\xi\), as shown in Appendix B. Figure 1 shows cycles satisfying such conditions, where \(a_{j}\) are the cycles encircling the branch cuts \([E_{j},F_{j}]\) in counterclockwise direction and \(b_{j}\) are those going from \([\xi,\bar{\xi}]\) to \([E_{j},F_{j}]\) on the \(+\)-sheet. In the following, the \(\pm\) scripts indicate whether we are considering the \(+\) or \(-\) covering sheet of \(\mathcal{L}_{\xi}\).
The path \(\gamma_{\infty}\) from the point \(\infty^{-}\) to \(\infty^{+}\) is chosen such that
\[\tau(\gamma_{\infty})=\gamma_{\infty}-\sum_{k}a_{k}. \tag{9}\]
### Theta functions
The multi-dimensional theta function with characteristics \(\mathbf{p}\in\mathbb{R}^{g}\) and \(\mathbf{q}\in\mathbb{C}^{g}\) is defined by the series
\[\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z},\mathbb{B}):=\sum_{\mathbf{n}\in \mathbb{Z}^{g}}\exp\left(\pi\mathrm{i}\langle\mathbf{n}+\mathbf{p},\mathbb{B}( \mathbf{n}+\mathbf{p})\rangle+2\pi\mathrm{i}\langle\mathbf{n}+\mathbf{p}, \mathbf{z}+\mathbf{q}\rangle\right), \tag{10}\]
for any \(\mathbf{z}\in\mathbb{C}^{g}\) and \(\mathbb{B}\in\mathbb{H}^{g}\), where \(\mathbb{H}^{g}\) is the space of \(g\times g\) complex symmetric matrices with positive definite imaginary part. The latter implies that \(\Theta_{\mathbf{p}\mathbf{q}}\) is an entire function in \(\mathbb{C}^{g}\). This function is quasi-periodic with respect to the lattice \(\Lambda_{\mathbb{B}}\) and its translation by the lattice vector \(\mathbf{m}\in\mathbb{Z}^{g}\) is given by
\[\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z}+\mathbf{m},\mathbb{B})=e^{2\pi \mathrm{i}\langle\mathbf{p},\mathbf{m}\rangle}\Theta_{\mathbf{p}\mathbf{q}}( \mathbf{z},\mathbb{B}). \tag{11}\]
Moreover, theta functions can be defined on a curve \(\mathcal{L}\) via its period matrix \(\mathbb{B}\) and the Abel map. Namely,
\[\Theta:\mathcal{L}\to\mathbb{C},\quad p\mapsto\Theta_{\mathbf{p}\mathbf{q}}( \mathcal{A}(p)+v,\mathbb{B}),\]
for any \(v\in\mathbb{C}^{g}\).
Summarizing, to every point \(\xi\in\mathbb{C}\) we associate the period matrix \(\mathbb{B}_{\xi}\) of the curve \(\mathcal{L}_{\xi}\) as well as the Abel maps \(\int_{\xi}^{\infty^{\pm}}\), which enter as the arguments of the theta functions.
### Fay identity
It is a relation between points on a compact Riemann surface \(\mathcal{R}\), e.g., the hyperelliptic curves (7). This functional relation is given in terms of the theta functions with characteristics defined by the period matrix of \(\mathcal{R}\) and it holds on arbitrary points \(a,b,c,d\in\mathcal{R}\), for all \(\mathbf{z}\in\mathbb{C}^{g}\).
\[\begin{split} E(c,a)E(d,b)&\Theta_{\mathbf{p} \mathbf{q}}(\mathbf{z}+\int_{b}^{c})\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z}+ \int_{a}^{d})+E(c,b)E(a,d)\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z}+\int_{a}^{ c})\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z}+\int_{b}^{d})\\ &=E(c,d)E(a,b)\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z})\Theta_{ \mathbf{p}\mathbf{q}}(\mathbf{z}+\int_{a}^{d}+\int_{b}^{c}),\end{split} \tag{12}\]
Figure 1. Choice of cycles.
where \(E(x,y)=\Theta_{\mathbf{p}^{*}\mathbf{q}^{*}}(\int_{y}^{x})/[h_{\Delta}(x)h_{\Delta} (y)]\) is the prime form, with the spinor \(h_{\Delta}(x)\) satisfying
\[h_{\Delta}^{2}(x)=\sum_{j=1}^{g}\frac{\partial\Theta_{\mathbf{p}^{*}\mathbf{q}^ {*}}(0)}{\partial z_{j}}\omega_{j}(x),\]
see [7, 3] for further discussions. The theta function \(\Theta_{\mathbf{p}^{*}\mathbf{q}^{*}}\) is required to have a non-singular odd half-integer characteristic, i.e., \(2\mathbf{p}^{*},2\mathbf{q}^{*}\in\mathbb{Z}^{g}/(2\mathbb{Z}^{g})\) such that \(\Theta_{\mathbf{p}^{*}\mathbf{q}^{*}}(0)=0\) and \(\nabla\Theta_{\mathbf{p}^{*}\mathbf{q}^{*}}(0)\neq 0\).
## 3. Solution to the Ernst equation
In this section, we show that the Ernst potential (2) solves the Ernst equation (6) with fixed arbitrary characteristics \(\mathbf{p}\in\mathbb{R}^{g}\), \(\mathbf{q}\in\mathbb{C}^{g}\) satisfying the reality conditions (3). The proof is divided in three steps. First, we show that the complex conjugate of (2) can be expressed in terms of theta functions corresponding to the same period matrix \(\mathbb{B}_{\xi}\). Second, we show that the real part of the Ernst potential can be simplified via the Fay identity. Third, we show that the proof presented in [4], which holds for \(\mathbf{p}=0\), can be extended to any \(\mathbf{p}\in\mathbb{R}^{g}\).
We are interested in expressing the complex conjugate of the Ernst potential in terms of theta functions of the same matrix \(\mathbb{B}_{\xi}\), whose arguments must be represented as the Abel maps of points on \(\mathcal{L}_{\xi}\), in order to use the Fay identity (12). This is given by the following proposition.
**Proposition 3.1**.: _Let \(\mathcal{E}(\xi,\bar{\xi})\) be the potential defined by (2) with characteristics \(\mathbf{p}\in\mathbb{R}^{g}\), \(\mathbf{q}\in\mathbb{C}^{g}\) satisfying the reality conditions (3). Then, its complex conjugate is_
\[\overline{\mathcal{E}(\xi,\xi)}=e^{-\pi\mathrm{i}\sum_{j}p_{j}}\frac{\Theta_{ \mathbf{p}\mathbf{q}}\left(\int_{\bar{\xi}}^{\infty^{+}},\mathbb{B}_{\xi} \right)}{\Theta_{\mathbf{p}\mathbf{q}}\left(\int_{\bar{\xi}}^{\infty^{-}}, \mathbb{B}_{\xi}\right)}. \tag{13}\]
Proof.: From the definition of the multi-dimensional theta function (10), it can be observed that
\[\overline{\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z},\mathbb{B})}=\alpha\cdot \Theta_{\mathbf{p}\mathbf{q}}(-\bar{\mathbf{z}}-2\Re(\mathbf{q}+\mathbb{B} \mathbf{p})+\mathrm{diag}(\Re(\mathbb{B})),\mathbb{B}),\]
for all \(\mathbf{z}\in\mathbb{C}^{g}\) and for any matrix \(\mathbb{B}\in\mathbb{H}^{g}\) satisfying the condition \(2\Re(\mathbb{B})\in M_{g\times g}(\mathbb{Z})\), where \(\alpha\in\mathbb{C}\) is a constant independent of \(\mathbf{z}\) (see Appendix A). This condition is satisfied by the period matrices \(\mathbb{B}_{\xi}\) of hyperelliptic curves of the form (7) with the choice of cycles shown in Figure 1, as shown in Appendix B.
Moreover, the conditions (3) on the \(\mathbf{p}\), \(\mathbf{q}\) characteristics are equivalent to the vanishing of the term \(-2\Re(\mathbf{q}+\mathbb{B}\mathbf{p})+\mathrm{diag}(\Re(\mathbb{B}))\). Then,
\[\overline{\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{z},\mathbb{B}_{\xi})}=\alpha \cdot\Theta_{\mathbf{p}\mathbf{q}}(-\bar{\mathbf{z}},\mathbb{B}_{\xi}),\]
which implies
\[\overline{\mathcal{E}(\xi,\bar{\xi})}=e^{\pi\mathrm{i}\sum_{j}p_{j}}\frac{ \Theta_{\mathbf{p}\mathbf{q}}\left(-\overline{\int_{\xi}^{\infty^{+}}}, \mathbb{B}_{\xi}\right)}{\Theta_{\mathbf{p}\mathbf{q}}\left(-\overline{\int_{ \xi}^{\infty^{-}}},\mathbb{B}_{\xi}\right)}. \tag{14}\]
The next step is expressing the complex conjugates \(\overline{\int_{\xi}^{\infty^{\pm}}}\) in terms of the Abel maps of \(\bar{\xi}\). Notice that \(\bar{\mathbf{z}}=2\Re(\mathbf{z})-\mathbf{z}\) for any \(\mathbf{z}\in\mathbb{C}^{g}\) and in particular,
\[\overline{\int_{\xi}^{\infty^{\pm}}}=2\Re\left(\int_{\xi}^{\infty^{\pm}} \right)-\int_{\xi}^{\infty^{\pm}}.\]
On the other hand, the integrals \(\int_{\xi}^{\infty^{\pm}}\) can be written in terms of \(\int_{\infty^{-}}^{\infty^{+}}\), whose real part is given explicitly by \(\Re(\int_{\infty^{-}}^{\infty^{+}})=(\frac{1}{2},\ldots,\frac{1}{2})\) in Appendix B. Notice that \(\int_{\xi}^{\infty^{+}}=-\int_{\xi}^{\infty^{-}}\), since the paths of the integrals \(\int_{\xi}^{\infty^{+}}\) and \(\int_{\xi}^{\infty^{-}}\) have the same projection on \(\mathbb{C}\mathbb{P}^{1}\). Thus,
\[\int_{\infty^{-}}^{\infty^{+}}=\int_{\infty^{-}}^{\xi}+\int_{\xi}^{\infty^{+}}=- \int_{\xi}^{\infty^{-}}+\int_{\xi}^{\infty^{+}}=\pm 2\int_{\xi}^{\infty^{\pm}}.\]
Implying,
\[\overline{\int_{\xi}^{\infty^{+}}}=-{\int_{\xi}^{\infty^{+}}}\pm\Re({\int_{ \infty^{-}}^{\infty^{+}}}). \tag{15}\]
The integral \({\int_{\xi}^{\xi}}\) is one half of the integral along the cycle encircling the cut \([\bar{\xi},\xi]\) in clockwise direction, which is equivalent to \(\sum_{k}a_{k}\). Then,
\[{\int_{\bar{\xi}}^{\xi}}=\tfrac{1}{2}\sum_{k}\int_{a_{k}}=\Re({\int_{\infty^{ -}}^{\infty^{+}}})=(\tfrac{1}{2},\ldots,\tfrac{1}{2}).\]
Thus, substituting this value and (15) into (14) we obtain
\[\overline{\mathcal{E}(\xi,\bar{\xi})}=e^{\pi\mathrm{i}\sum_{j}p_{j}}\,\frac{ \Theta_{\mathbf{pq}}\left({\int_{\bar{\xi}}^{\infty^{+}}}-2{\int_{\bar{\xi}}^{ \xi}},{\mathbb{B}}_{\xi}\right)}{\Theta_{\mathbf{pq}}\left({\int_{\bar{\xi}}^ {\infty^{-}}},{\mathbb{B}}_{\xi}\right)}. \tag{16}\]
Finally, the form (13) of the complex conjugate of the Ernst potential follows by considering the translation
\[\Theta_{\mathbf{pq}}\left({\int_{\bar{\xi}}^{\infty^{+}}}-2{\int_{\bar{\xi}}^ {\xi}},{\mathbb{B}}_{\xi}\right)=e^{-2\pi\mathrm{i}(\mathbf{p},2{\int_{\bar{ \xi}}^{\xi}})}\Theta_{\mathbf{pq}}\left({\int_{\bar{\xi}}^{\infty^{+}}},{ \mathbb{B}}_{\xi}\right)\]
by the lattice vector \(2{\int_{\bar{\xi}}^{\xi}}=(1,\ldots,1)\), which is obtained with the property (11) of theta functions.
This proposition implies that the real part of the Ernst potential (2) can be expressed in a simple form, since all the involved theta functions are now in terms of the same period matrix \({\mathbb{B}}_{\xi}\), which allows us to use Fay's identity (12). For ease of readability, we omit the second argument \({\mathbb{B}}_{\xi}\) in the sequel. Thus, using (13) from Proposition 3.1, we obtain
\[\mathcal{E}(\xi,\bar{\xi})+\overline{\mathcal{E}(\xi,\bar{\xi})}=e^{-\pi \mathrm{i}\sum_{j}p_{j}}\left[\frac{\Theta_{\mathbf{pq}}({\int_{\xi}^{\infty^{ +}}})\Theta_{\mathbf{pq}}({\int_{\bar{\xi}}^{\infty^{-}}})+\Theta_{\mathbf{ pq}}({\int_{\bar{\xi}}^{\infty^{+}}})\Theta_{\mathbf{pq}}({\int_{\xi}^{ \infty^{-}}})}{\Theta_{\mathbf{pq}}({\int_{\xi}^{\infty^{-}}})\Theta_{ \mathbf{pq}}({\int_{\bar{\xi}}^{\infty^{-}}})}\right],\]
and considering Fay's identity (12) with \(\mathbf{z}=0\), \(a=\xi\), \(b=\bar{\xi}\), \(c=\infty^{-}\), \(d=\infty^{+}\); the lemma
\[\frac{E(\infty^{-},\xi)E(\infty^{+},\bar{\xi})}{E(\infty^{-},\bar{\xi})E( \xi,\infty^{+})}=1,\]
which is proven in [4]; and the property \(E(x,y)=-E(y,x)\) of the prime form, we observe that
\[\Theta_{\mathbf{pq}}({\int_{\xi}^{\infty^{+}}})\Theta_{\mathbf{ pq}}({\int_{\bar{\xi}}^{\infty^{-}}})+ \Theta_{\mathbf{pq}}({\int_{\bar{\xi}}^{\infty^{+}}})\Theta_{ \mathbf{pq}}({\int_{\bar{\xi}}^{\infty^{-}}})=\] \[\frac{E(\infty^{-},\infty^{+})E(\xi,\bar{\xi})}{E(\xi,\infty^{-} )E(\bar{\xi},\infty^{+})}\Theta_{\mathbf{pq}}(0)\Theta_{\mathbf{pq}}({\int_{ \bar{\xi}}^{\infty^{+}}}+{\int_{\bar{\xi}}^{\infty^{-}}}).\]
Therefore, the real part of the Ernst potential is
\[\Re(\mathcal{E}(\xi,\bar{\xi}))=Q\cdot e^{-\pi\mathrm{i}\sum_{j}p_{j}}\frac{ \Theta_{\mathbf{pq}}(0)\Theta_{\mathbf{pq}}({\int_{\bar{\xi}}^{\xi}})}{ \Theta_{\mathbf{pq}}({\int_{\xi}^{\infty^{-}}})\Theta_{\mathbf{pq}}({\int_{\bar {\xi}}^{\infty^{-}}})}, \tag{17}\]
where
\[Q=Q(\xi,\bar{\xi})=\frac{1}{2}\frac{E(\infty^{-},\infty^{+})E(\xi,\bar{\xi})}{ E(\xi,\infty^{-})E(\bar{\xi},\infty^{+})}=\frac{\Theta({\int_{\xi}^{\infty^{-}}}) \Theta({\int_{\bar{\xi}}^{\infty^{-}}})}{\Theta(0)\Theta({\int_{\bar{\xi}}^{ \xi}})}.\]
The latter equality is obtained from the fact that \(\mathcal{E}\equiv 1\) if \(\mathbf{p},\mathbf{q}=0\).
Finally, with formula (17), we show that the potential (2) solves the Ernst equation.
**Theorem 3.2**.: _Let \(\mathcal{E}(\xi,\bar{\xi})\) be the potential defined by (2) with fixed arbitrary characteristics \(\mathbf{p}\in{\mathbb{R}}^{g}\), \(\mathbf{q}\in{\mathbb{C}}^{g}\) satisfying the reality conditions (3). Then, it solves the Ernst equation (6) in complex coordinates, which can be written as_
\[(\mathcal{E}+\overline{\mathcal{E}})\Delta\mathcal{E}=8\xi_{\xi}\mathcal{E}_{ \bar{\xi}}. \tag{18}\]
Proof.: Since the phase factor in (2) is independent of \(\xi\) if the basis cycles of the first homology group are chosen such that they satisfy conditions (8), the derivatives \(\mathcal{E}_{\xi}\), \(\mathcal{E}_{\bar{\xi}}\) and the Laplacian \(\Delta\mathcal{E}\) are just those shown in [4] multiplied by this phase factor. Namely,
\[\mathcal{E}_{\xi}=\frac{1}{2}c_{2}(\infty^{-},\xi,\infty^{+})e^{-\pi\mathrm{i} \sum_{j}p_{j}}\frac{\Theta_{\mathbf{pq}}(0)}{\Theta_{\mathbf{pq}}^{2}(\int_{ \xi}^{\infty^{-}})}D_{\bar{\xi}}\Theta_{\mathbf{pq}}(0),\]
\[\mathcal{E}_{\bar{\xi}}=\frac{1}{2}c_{2}(\infty^{-},\bar{\xi},\infty^{+})e^{- \pi\mathrm{i}\sum_{j}p_{j}}\frac{\Theta_{\mathbf{pq}}(\int_{\bar{\xi}}^{\xi})} {\Theta_{\mathbf{pq}}^{2}(\int_{\xi}^{\infty^{-}})}D_{\bar{\xi}}\Theta_{ \mathbf{pq}}(\int_{\bar{\xi}}^{\bar{\xi}}),\]
\[\Delta\mathcal{E}=\frac{1}{Q}c_{2}(\infty^{-},\xi,\infty^{+})c_{2}(\infty^{-},\bar{\xi},\infty^{+})e^{-\pi\mathrm{i}\sum_{j}p_{j}}\frac{\Theta_{\mathbf{ pq}}(\int_{\bar{\xi}}^{\infty^{-}})}{\Theta_{\mathbf{pq}}^{3}(\int_{\xi}^{ \infty^{-}})}D_{\bar{\xi}}\Theta_{\mathbf{pq}}(\int_{\bar{\xi}}^{\bar{\xi}})D _{\xi}\Theta_{\mathbf{pq}}(0),\]
where the coefficients \(c_{2}(\infty^{-},\xi,\infty^{+})\), \(c_{2}(\infty^{-},\bar{\xi},\infty^{+})\) are functions defined in terms of prime forms [4]. Therefore, with these values for \(\mathcal{E}_{\xi}\), \(\mathcal{E}_{\bar{\xi}}\) and \(\Delta\mathcal{E}\), and considering the form (17) for \(\Re(\mathcal{E})\), it follows that the potential (2) solves the Ernst equation.
**Remark 3.3**.: _If the exponential factor in (2) is not taken into account, this class of solutions is only valid if \(\sum_{j}p_{j}\) is an integer, e.g., \(\mathbf{p}=0\). Otherwise, formula (17) does not hold._
**Remark 3.4**.: _An equivalent form of the class of solutions (2) is_
\[\mathcal{E}(\xi,\bar{\xi})=e^{-\pi\mathrm{i}\sum_{j}p_{j}}\frac{\Theta_{ \mathbf{pq}}\left(\int_{\xi}^{\infty^{+}}+\frac{1}{2}\Delta,\mathbb{B}_{\xi} \right)}{\Theta_{\mathbf{pq}}\left(\int_{\xi}^{\infty^{-}}+\frac{1}{2}\Delta, \mathbb{B}_{\xi}\right)}, \tag{19}\]
_with fixed arbitrary \(\mathbf{p}\in\mathbb{R}^{g}\), \(\mathbf{q}\in\mathbb{C}^{g}\) subject to the reality condition \(\Re(\mathbf{q})+R\mathbf{p}=0\), where \(R\) is given by (21) and \(\Delta:=\mathrm{diag}(R)\)._
## Appendix A Complex conjugate of theta functions
**Proposition A.1**.: _Let \(\mathbb{B}\) be any Riemann matrix \(\mathbb{B}\in\mathbb{H}^{g}\) whose real part \(R:=\Re(\mathbb{B})\) has half-integer coefficients. Then, the complex conjugate of its associated theta function with characteristics (10) can be written as_
\[\overline{\Theta_{\mathbf{pq}}(\mathbf{z},\mathbb{B})}=\alpha\cdot\Theta_{ \mathbf{pq}}(-\bar{\mathbf{z}}-2\Re(\mathbf{q}+\mathbb{B}\mathbf{p})+\mathrm{ diag}(R),\mathbb{B}), \tag{20}\]
_where \(\alpha\in\mathbb{C}\) is a constant that does not depend on \(\mathbf{z}\)._
Proof.: In general, from the definition (10) of multi-dimensional theta functions with characteristics \(\mathbf{p}\in\mathbb{R}^{g}\) and \(\mathbf{q}\in\mathbb{C}^{g}\), it can be observed that their complex conjugate can be written as
\[\overline{\Theta_{\mathbf{pq}}(\mathbf{z},\mathbb{B})}=\Theta_{\mathbf{pq}}(- \bar{\mathbf{z}}-\bar{\mathbf{q}}-\mathbf{q},-\bar{\mathbb{B}})=\Theta_{ \mathbf{pq}}(-\bar{\mathbf{z}}-2\Re(\mathbf{q}),\mathbb{B}-2R),\]
for all \(\mathbf{z}\in\mathbb{C}^{g}\) and \(\mathbb{B}\in\mathbb{H}^{g}\). Moreover, since \(R\) is assumed to have half-integer coefficients,
\[\begin{pmatrix}\mathbb{I}_{g}&-2R\\ 0&\mathbb{I}_{g}\end{pmatrix}\in\mathrm{Sp}(2g,\mathbb{Z}),\]
implying that \(\mathbb{B}-2R\) is symplectically equivalent to \(\mathbb{B}\). Thus, using the modular transformation of theta functions we obtain (20).
## Appendix B Choice of cycles
We choose cycles such that the action of the holomorphic involution \(\tau(x,y)=(\bar{x},\bar{y})\) on them satisfies (8). Figure 1 shows an example of such cycles. These conditions imply that the real part of the period matrices of \(\mathcal{L}_{\xi}\) are half-integers and that they do not depend on the parameter \(\xi\in\mathbb{C}\). Indeed, due to the condition \(\tau(a_{j})=-a_{j}\), the action \(\tau^{*}\) of \(\tau\) on the
normalized differentials is \(\tau^{*}(\omega_{k})=-\bar{\omega}_{k}\) (see [1]). Therefore, the complex conjugate of the period matrices are of the form
\[\bar{\mathbb{B}}_{jk} =\int_{b_{j}}\bar{\omega}_{k}=-\int_{b_{j}}\tau^{*}(\omega_{k})=- \int_{\tau(b_{j})}\omega_{k},\] \[=-\int_{b_{j}}\omega_{k}-\sum_{l}\int_{a_{l}}\omega_{k}+\sigma_{j }\int_{a_{j}}\omega_{k},\] \[=-\mathbb{B}_{jk}-(1-\sigma_{j}\cdot\delta_{jk}),\]
where \(\sigma_{j}=1\) if \(E_{j}=\bar{F}_{j}\) and \(\sigma_{j}=0\) if \(E_{j},F_{j}\in\mathbb{R}\). Thus, the components of the real part of the period matrices \(\mathbb{B}_{\xi}\), which we denote \(R=\Re(\mathbb{B}_{\xi})\), are independent of \(\xi\) and their explicit values are
\[R_{ij}=\left\{\begin{array}{ll}0,&\mbox{if $i=j$ and $E_{j}=\bar{F}_{j}$,}\\ -\frac{1}{2},&\mbox{otherwise.}\end{array}\right. \tag{21}\]
Notice that the reality conditions (3) can be equivalently expressed as
\[\Re(\mathbf{q})+R\cdot\mathbf{p}=\frac{1}{2}\mbox{diag}(R).\]
On the other hand, since the path \(\gamma_{p}\) for the integral \(\int_{\infty^{-}}^{\infty^{+}}\omega\) is chosen such that the action of the holomorphic involution \(\tau\) is given by (9), the complex conjugate of \(\int_{\infty^{-}}^{\infty^{+}}\omega_{j}\) is
\[\overline{\int_{\infty^{-}}^{\infty^{+}}\omega_{j}}=\int_{\gamma _{\infty}}\bar{\omega}_{j} =-\int_{\gamma_{\infty}}\tau^{*}(\omega_{j})=-\int_{\tau(\gamma_{ \infty})}\omega_{j},\] \[=-\int_{\gamma_{\infty}}\omega_{j}+\sum_{k}\int_{a_{k}}\omega_{j }=-\int_{\infty^{-}}^{\infty^{+}}\omega_{j}+1,\]
which implies
\[\Re\left(\int_{\infty^{-}}^{\infty^{+}}\omega\right)=\frac{1}{2}\sum_{k}\int _{a_{k}}\omega=\left(\tfrac{1}{2},\ldots,\tfrac{1}{2}\right). \tag{22}\]
|
2308.03287 | Equivalent-Time-Active-Cavitation-Imaging Enables Vascular-Resolution
Blood-Brain-Barrier-Opening-Therapy Planning | Linking cavitation and anatomy was found to be important for predictable
outcomes in Focused-Ultrasound Blood-Brain-Barrier-Opening and requires high
resolution cavitation mapping. However, cavitation mapping techniques for
planning and monitoring of therapeutic procedures either 1) do not leverage the
full resolution capabilities of ultrasound imaging or 2) place strong
constraints on the length of the therapeutic pulse. This study aimed to develop
a high-resolution technique that could resolve vascular anatomy in the
cavitation map. Herein, we develop BP-ETACI, derived from bandpass sampling and
dual-frequency contrast imaging at 12.5 MHz to produce cavitation maps prior
and during blood-brain barrier opening with long therapeutic bursts using a
1.5-MHz focused transducer in the brain of C57BL/6 mice. The BP-ETACI
cavitation maps were found to correlate with the vascular anatomy in ultrasound
localization microscopy vascular maps and in histological sections. Cavitation
maps produced from non-blood-brain-barrier disrupting doses showed the same
cavitation-bearing vasculature as maps produced over entire blood-brain-barrier
opening procedures, allowing use for 1) monitoring FUS-BBBO, but also for 2)
therapy planning and target verification. BP-ETACI is versatile, created high
resolution cavitation maps in the mouse brain and is easily translatable to
existing FUS-BBBO experiments. As such, it provides a means to further study
cavitation phenomena in FUS-BBBO. | Samuel Desmarais, Gerardo Ramos-Palacios, Jonathan Poree, Stephen A. Lee, Alexis Leconte, Abbas F. Sadikot, Jean Provost | 2023-08-07T04:06:15Z | http://arxiv.org/abs/2308.03287v1 | Equivalent-Time-Active-Cavitation-Imaging Enables Vascular-Resolution Blood-Brain-Barrier-Opening-Therapy Planning
## Abstract
Linking cavitation and anatomy was found to be important for predictable outcomes in Focused-Ultrasound Blood-Brain-Barrier-Opening and requires high resolution cavitation mapping. However, cavitation mapping techniques for planning and monitoring of therapeutic procedures either 1) do not leverage the full resolution capabilities of ultrasound imaging or 2) place strong constraints on the length of the therapeutic pulse. This study aimed to develop a high-resolution technique that could resolve vascular anatomy in the cavitation map.
Herein, we develop BP-ETACI, derived from bandpass sampling and dual-frequency contrast imaging at 12.5 MHz to produce cavitation maps prior and during blood-brain barrier opening with long therapeutic bursts using a 1.5-MHz focused transducer in the brain of C57BL/6 mice.
The BP-ETACI cavitation maps were found to correlate with the vascular anatomy in ultrasound localization microscopy vascular maps and in histological sections. Cavitation maps produced from non-blood-brain-barrier disrupting doses showed the same cavitation-bearing vasculature as maps produced over entire blood-brain-barrier opening procedures, allowing use for 1) monitoring FUS-BBBO, but also for 2) therapy planning and target verification.
BP-ETACI is versatile, created high resolution cavitation maps in the mouse brain and is easily translatable to existing FUS-BBBO experiments. As such, it provides a means to further study cavitation phenomena in FUS-BBBO.
## Introduction
Focused-Ultrasound Blood-Brain-Barrier-Opening (FUS-BBBO) with microbubbles (\(\upmu\)B) was first observed in rabbits [1] and is an emerging therapeutic tool under clinical trial that has the potential to lessen the impact of neurological diseases such as Alzheimer's disease, Parkinson's disease, and brain cancer [2]. The technique is promising because it allows known large molecular weight therapeutic agents to cross the BBB and be effective in the brain.
Other techniques have been studied for promoting the action of diverse pharmacological agents in the central nervous system across the BBB. These techniques include intrathecal injections [3, 4] pharmacological agents that promote BBB permeability [5], and delivery of vectors that readily cross the BBB [6, 7]. FUS-BBBO is of particular interest because focused ultrasound allows the technique to be less invasive and its effect limited to the region of highest intensity in the emitted pressure field.
The BBB disrupting effect of FUS comes from the action of ultrasound on contrast agents composed of \(\upmu\)B administered intravenously. As \(\upmu\)B circulate and are exposed to the rapidly oscillating pressure in the acoustic field, they undergo rapid changes in size without bursting, a phenomenon termed non-inertial cavitation, which induces intense mechanical and fluidic processes in the immediate surroundings and triggers transport across the BBB [8].
The localized effect of FUS-BBBO requires targeting (1) to ensure that the brain regions requiring treatment receive the therapeutic agent and (2) to minimize off-target delivery. This is especially important because interaction between different tissue structures and FUS leads to different outcomes depending on the target [9, 10]. Current methods of visualizing the opening include post-therapeutic magnetic resonance imaging (MRI) and concurrent acoustic mapping. MRI of gadolinium in the brain parenchyma is used [1] to verify the extent of BBBO because the gadolinium compounds used as contrast agents in MRI do not
cross the undisturbed BBB. Acoustic monitoring is used in FUS-BBBO to minimize damage because inertial and non-inertial cavitation have particular backscatter signatures which are used to distinguish and quantify them [11]. Acoustic mapping approaches use a multi-element imaging transducer to receive scattered signals during FUS sonication and beamform maps of the acoustic activity. Passive Acoustic Mapping (PAM) is one such approach but its axial resolution is limited by the use of continuous mode therapeutic sonication [12]. Imaging transducers can also be used as the receiving component in a pulse-echo ultrasound system where the pulse is provided by the focused transducer. Paired with the use of few-cycles therapeutic pulses comparable to those used in ultrasound imaging, the technique produced B-mode images of the cavitation activity [13]. Therapy and cavitation mapping was also conducted together from the imaging probe [14]. However, to our knowledge, no cavitation mapping technique allows for in vivo vascular-resolution cavitation imaging without requiring very short therapeutic pulses.
Recently, it has been shown that it is possible to map non-inertial cavitation at high resolution using Equivalent-Time Active-Cavitation Imaging (ETACI), Phantom studies demonstrated the correlation between the ETACI signal and pressure levels and showed initial results in vivo [15]. ETACI is a method based on Radial Modulation (RM) where Doppler spectral replicates are generated in the pixels undergoing cavitation. Briefly, RM methods typically use a two-frequency approach where a low-frequency wave is used to modulate the size of \(\upmu\)B and thus their scattering characteristics, and a high-frequency pulse is used to image the \(\upmu\)B at different phases of the RM cycle.
RM has been used mostly as a \(\upmu\)B-contrast imaging technique because the scattering-modulation effect is specific to \(\upmu\)B. Work on RM first showed the change in amplitude of a high-frequency wave scattered by microbubbles through RM cycles induced by a low-frequency wave [16] which has been used to measure the size of individual \(\upmu\)B [17]. \(\upmu\)B were found to produce different backscattered signal when high-frequency imaging pulses imaged them in compression and expansion phases of the RM [18]. Second order ultrasound field (SURF) imaging used dual-band pulses emitted by the imaging probe to image \(\upmu\)B in compression and expansion in vivo to conduct contrast imaging [19].
Recently, ultrafast imaging was used to develop contrast RM techniques in phantoms. Ultrafast radial modulation imaging (uRMI) [20] increased the number of sample phases at which the RM cycle is sampled by leveraging ultrafast image formation used demodulation to produce contrast images. Further work identified that RM could be induced by low frequency ultrasound (100 kHz) at depth and that Singular Value Decomposition (SVD)-based extraction of the RM signal was possible in phantoms, making RM more robust to tissue motion [21]. [20], [21] and [15] found artifacts in the form of additional peaks in the Doppler spectrum, which remained unexplained.
The objective of this study was to establish the potential of ETACI to reliably map the non-inertial cavitation produced by a therapeutic transducer at high resolution in the mouse brain in vivo and support BBBO therapy planning. We developed and validated a novel framework for ETACI based on bandpass sampling (BP-ETACI), that solved the spectral artifacts, enabled further uncoupling of the framerate and the modulation frequency, simplifying the method, increasing adaptability by allowing the selection of the aliasing frequency in the received signal, and improving image quality by eliminating spectral artifacts. In vitro, we validated the selectivity of BP-ETACI towards cavitating \(\upmu\)B and explored how modulation and Doppler mix. We then illustrated the potential of BP-ETACI by demonstrating that cavitation can be mapped in vivo in mouse models transcranially in the context of FUS-BBBO.
## Methods
### Theoretical Framework
First, we develop the equations that describe the framework. Then, we use these equations to trace the shape of the spectrum that would be obtained with the imaging and modulation parameters selected for this study.
### Pulsed-Doppler
For pulsed-Doppler (PD), the signal of interest comes from the movement of scatterers between pulse-echo events. As such, the received signal in slow time can be modeled with (1) for a scatterer moving towards the probe at speed \(v_{z}\), in a medium where the speed of sound is \(c\), and observed with \(M\) cycle pulses of center frequency \(f_{0}\).
\[d(t)=h(t)\exp(-i2\pi f_{d}t) \tag{1}\]
Where \(h(t)\) is a window of length \(|Mc/f_{0}v_{z}|\), which corresponds to the transit time of the scatterer as observed by the system at a fixed depth. The motion of the scatterer is observed as a frequency \(f_{d}=2v_{z}f_{0}/c\) in slow time. This frequency shift is shown in Fig. 1. A where the spread of the frequency band is attributed to scatterer motion. [22]
### BP-ETACI Modulation
In BP-ETACI, the scatterers are \(\mu\)B and their backscatter amplitude is modulated externally. The received signal can thus be modeled with (2) in which \(A\) and \(B\) parametrize this modulation.
\[s(t)=d(t)[A+B\,\cos(2\pi f_{m}t)] \tag{2}\]
Combining, (1) and (2), we obtain
\[s(t)=h(t)\exp(-i2\pi f_{d}t)\,[A+B\,\cos(2\pi f_{m}t)] \tag{3}\]
The received signal \(s(t)\) develops to a sum of three scaled and gated complex exponentials.
\[s(t)=\ A\,h(t)\exp(-i2\pi f_{d}t)+\tfrac{B}{2}h(t)\exp(-i2\pi(f_{m}+f_{d})t)+ \tfrac{B}{2}h(t)\exp(-i2\pi(-f_{m}+f_{d})t) \tag{4}\]
In the frequency domain, these complex exponentials yield three signal bands around \(f=0\), \(f=f_{m}\) and \(f=-f_{m}\).
\[S(f)=\ A\pi\,H(f)*\delta(f-f_{d})+\tfrac{B\pi}{2}H(f)*\delta\big{(}(f-f_{m})-f_ {d}\big{)}+\tfrac{B\pi}{2}H(f)*\delta\big{(}(f+f_{m})-f_{d}\big{)} \tag{5}\]
The modulating pressure field is not spatially uniform, as it originates from a focused transducer in FUS-BBBO. We can rewrite the received signal as
\[S(f)=A\pi\,H(f)*\delta(f-f_{d})+\tfrac{\pi}{2}B(r)\,H(f)*\delta\big{(}(f-f_{m} )-f_{d}\big{)}+\tfrac{\pi}{2}B(r)\,H(f)*\delta\big{(}(f+f_{m})-f_{d}\big{)} \tag{6}\]
With \(B(r)\) the spatial distribution of the focused transducer's pressure field. Thus, the last two bands of \(S(f)\) contain information about the spatial distribution of the focused transducer's pressure field. Fig. 1. B shows the two bands added by modulating the scatterers amplitude over time. The central band (orange) does not contain \(B(r)\) in its expression whereas the two side-bands (blue) do contain \(B(r)\) in their expression. Thus, the central band, which contains unmodulated PD signal from all moving structures, not only from modulated \(\mu\)B, is spurious when mapping cavitation.
### Sampling
Using bandpass sampling, one can extract \(B(r)\) even when using a framerate \(f_{s}\) much smaller than the modulation frequency. Indeed, for a signal sampled at a sampling frequency \(f_{s}\), the baseband spectrum contained between \(-f_{s}/2\leq f<f_{s}/2\), is the superposition of all bands \(-f_{s}/2\leq f-kf_{s}<f_{s}/2\) for all \(k\in\mathbb{Z}\). Thus, the observed spectrum \(S_{o}(f)\) is
\[S_{o}(f)\ =\sum_{k=-\infty}^{\infty}S(f)*\delta(f-kf_{s}) \tag{7}\]
Applying a change of variables to obtain the spectrum between \(-1/2\leq f/f_{s}<1/2\) gives
\[S_{o}\left(\tfrac{f}{f_{s}}\right)\ =\sum_{k=-\infty}^{\infty}S\left(\tfrac{f}{f_{s}} \right)*\delta\left(\tfrac{f}{f_{s}}-k\right) \tag{8}\]
As such, the received signal contains the three signal bands of (6) between \(-1/2\leq f/f_{s}<1/2\)
\[S_{o}\left(\tfrac{f}{f_{s}}\right)=S\left(\tfrac{f}{f_{s}}\right)*\delta \left(\tfrac{f}{f_{s}}-k_{1}\right)+\ S\left(\tfrac{f}{f_{s}}\right)*\delta \left(\tfrac{f}{f_{s}}-k_{2}\right)+\ S\left(\tfrac{f}{f_{s}}\right)*\delta \left(\tfrac{f}{f_{s}}-k_{3}\right) \tag{9}\]
With \(k_{1}=0\), \(k_{2}\) and \(k_{3}\) obeying these constraints:
\[\left\{1/2\leq 0/f_{s}-k_{1}<1/2\right\};\left\{1/2\leq f_{m}/f_{s}-k_{2}<1/2 \right\};\left\{1/2\leq-f_{m}/f_{s}-k_{3}<1/2\right\} \tag{10}\]
Thus, choosing \(f_{m}\) and \(f_{s}\) so their ratio is not whole separates the signal bands that contain \(B(r)\) from the central band, which is fixed at \(0\) Hz. Fig. 1. illustrates the possibility of generating separated bands (C) or superimposed bands at \(0\) Hz (D).
_BP-ETACI Demodulation_
Information on \(B(r)\) is only present in the spectral bands created by the modulation. Thus, to isolate this information, the slow time-sampled data \(s(t)\) must be demodulated at the modulation frequency. The width of the demodulation low-pass (LP) filter is chosen so that only one band of the received signal is preserved. Demodulating around the \(f=f_{m}\) band gives
\[g(t)=\mathrm{LP}\left[s(t)\exp\left(i2\pi\left(\frac{f_{m}}{f_{s}}-k_{2} \right)t\right)\right]\]
\[=\tfrac{1}{2}B(r)h(t)\exp\left(-i2\pi\left(\frac{f_{d}}{f_{s}}\right)t\right) \tag{11}\]
When sufficient data is acquired, averaging temporally reduces (11) to
\[\hat{B}(r)=q(r)B(r) \tag{12}\]
Where \(q(r)\) is the spatial distribution of \(\mu\)B. \(\hat{B}(r)\) is thus a quantity that reflects the number of microbubbles and the amplitude of their non-inertial cavitation, which is directly related to the local pressure.
_Validation of the Framework_
The spectrum obtained from (4) was traced to validate the separation between modulated and unmodulated bands and to understand how both the motion signal (PD) and the modulation signal interacted to produce the resulting signal.
Specifically, equation (4) was sampled at a single location and a rate of \(f_{s}=4\,\mathrm{kHz}\) with the following parameters: \(c=1540\,\mathrm{\ m/s}\), \(M=3,\ f_{0}=12.5\,\mathrm{MHz}\), \(f_{m}=1.50132\,\mathrm{MHz}\), \(A=1\) and \(B=0.1\), \(v_{x}=5;10;15\,\mathrm{\ mm/s}\) and \(h(t)\) a Hann window. Doing so, a 1D temporally sampled signal vector was obtained. The spectrum of this signal was then computed. In this situation, \(\pm\,f_{m}/f_{s}=\pm 375.33\), \(k_{2}=375\) and \(k_{3}=-375\) thus we expect the modulated bands to be centered at \(f/f_{s}=\pm 0.33\) to satisfy (10). To model multiple scatterers, the process was repeated and averaged while maintaining all the parameters constant, except for the velocity, which was uniformly selected along the radius according to Poiseuille flow (radius \(=2.5\mathrm{mm}\), \(v_{x\,max}=5;10;15\,\mathrm{\ mm/s}\)).
Fig. 1: Spectral View of BP-ETACI Signal Formation. (A) Slow-time-wise spectrum of moving \(\mu\)B. (B) Slow-time spectrum containing the pulsed-Doppler band at \(f=0\) (orange) and modulated bands at \(f=-f_{m};f_{m}\) (blue). (C) Slow-time spectrum that contains non-overlapping bands between \(-f_{s}/2\) and \(f_{s}/2\). (D) Slow-time spectrum with overlapping bands between \(-f_{s}/2\) and \(f_{s}/2\).
### Experimental Setup
For imaging, a programmable ultrasound scanner (Vantage 256, Verasonics) was used to control a linear imaging probe (L8-18i, GE). For RM and BBBO therapy, a focused monoelement transducer (H-234, Sonic Concepts) was driven by an RF amplifier (Empower RF) with a signal generated by a function generator (SDG6000, Siglent).
The focused transducer was mounted transducing-face down in a 3D-printed conical water reservoir. A silicone cast of the imaging transducer was used to mount it in a way that placed the focal zone of the focused transducer in the imaging plane of the probe. Both the focused transducer and the imaging probe were mounted to a 3D-printed bracket. The reservoir was filled with room-temperature water and a latex membrane was used to seal the bottom. Fig. 2 shows a cross-section view of the assembly.
### Imaging
The imaging sequences used for this study are detailed here. The ultrasound scanner emitted 3-cycle pulses centered at 12.5 MHz in a plane wave imaging scheme. Transmission and reception occurred only on the 128 middle elements of the imaging probe. For all acquisitions, the reception bandwidth was 100%, frames were acquired in ensembles of 1200 with pauses between each ensemble. Frames were DAS beamformed and plane wave angles were coherently compounded.
For ULM acquisitions, the frame rate was 1.6 kHz with 5 angles per frame. The focused transducer was kept off. An SVD-based clutter filter was applied to each ensemble to highlight circulating \(\upmu\)B [23] and a spatiotemporal localization algorithm [24] was used to extract super-resolved \(\upmu\)B trajectories. These trajectories were accumulated on a high-resolution grid to obtain maps of \(\upmu\)B density.
For BP-ETACI acquisitions, the frame rate was \(f_{\text{s}}=4\) kHz with 4 planewave angles per frame. The focused transducer emitted 300 ms tone bursts at \(f_{m}=1.50132\) MHz with a PRF of 1 Hz. Here, \(f_{m}/f_{\text{s}}=375.33\) thus modulated bands are expected at \(f/f_{\text{s}}=\pm 0.33\) so, after coherent compounding, each ensemble was demodulated in slow time according to (11) at a normalized frequency of \(f/f_{\text{s}}=0.33\). The demodulation filter used in (11) was a frequency-domain Blackman-Harris window of width 1.6 kHz. The recording of each ensemble was triggered on the start of a tone burst. Some strong clutter remained at the skull and was removed by removing the first two singular values of the Casorati matrix formed with the middle 1000 frames of the ensemble [23]. Then, the magnitude of each pixel was averaged temporally to get a spatial map of the cavitation induced by the focused transducer.
### In Vitro Experiments
#### Free-Field Experiment
BP-ETACI was used to map the focal zone of the focused transducer. The experimental assembly was mounted above a tank filled with a 14 \(\upmu\)L/L dilution of contrast agent (Definity, Lantheus) in room temperature water agitated with a magnetic stirrer (Fig. 4. A). 60s of BP-ETACI data were acquired. The total acquisition time was chosen to ensure \(\upmu\)B visited the field of view uniformly, under which conditions the resulting map is proportional to the spatial distribution of the pressure induced by the focused transducer. The focused transducer was driven at very low amplitude (0.01 MPa) for this experiment to limit the radiation force and avoid pushing the \(\upmu\)B, as significant PD frequencies can be achieved from this push and could be confused with modulation.
#### Peak Negative Pressure Field Map
The focal zone was mapped with a hydrophone to verify the BP-ETACI cavitation map. The experimental assembly was mounted on a bracket and submerged in a tank filled with room-temperature water. A hydrophone (Y-104, Sonic Concepts) was mounted
Figure 2: Cut view of the experimental assembly used for all acquisitions.
on a 3-axis robot (X-LSM200A-E03, Zaber) and connected to a data acquisition device (5242D, Picoscope). The minimum value of each pressure waveform was extracted and converted to a peak negative pressure map.
### Channel Phantom Experiment
We verified that BP-ETACI only detects cavitating uB, not tissue and uB outside the focal zone. The experimental assembly was mounted and aligned above a channel phantom, with the probe imaging along the length of the channel (Fig. 5. A). The channel was angled at 8\({}^{\circ}\) from the horizontal to allow for an axial component in uB speed. A solution of 56 uL/L Definity contrast agent was prepared by dilution with water. The channel diameter was 2.5 mm. The focused transducer was driven at 0.02 MPa. The flow rate was set to 5 mL/min and 6s of BP-ETACI data were acquired. The slow-time spectrum of each pixel was computed and then averaged over three subregions of the field of view: 1) inside the channel and the focal spot, 2) inside the channel, but outside the focal spot, and 3) inside the focal spot but outside the channel in the tissue portion of the phantom.
With the same setup, we verified that BP-ETACI produces similar cavitation maps under various flow rates. BP-ETACI cavitation maps were acquired at flow rates of 5, 10, 20 mL/min, and at a fourth, very fast, flow rate achieved by pushing the uB solution manually. This fourth flow rate was selected because it made the central PD band overlap the filter passband in [(11)]. For each experimental condition, 6 s of BP-ETACI data were acquired. Before demodulating according to [(11)], the spectrum of each pixel was computed and averaged over a group of pixels spanning the width of the channel inside the focal spot for each flow rate.
### In Vivo Experiments
We used BP-ETACI to map the cavitation transcranially in the mouse brain prior to and during FUS-BBBO treatments. The study group was composed of 6 wild type C57BL/6 female mice and was split into two groups of 3 animals. BP-ETACI cavitation maps were produced at low cavitation doses for each animal in the first group, termed the mapping-only-group. The second group, termed the BBBO-group, underwent a BBBO therapy procedure and BP-ETACI data was acquired during the whole procedure. This served two purposes: 1) establishing that BP-ETACI can map the cavitation transcranially in the mouse brain at doses that do not open the BBB, and 2) establishing that BP-ETACI reliably maps regions where the BBB is opened by FUS-BBBO therapy.
The mice were deeply anesthetized using a mix of 1L/min of oxygen and isoflurane 5% for induction and maintained at 1.5-2.5% for the duration of the procedure. Their heads were immobilized in a stereotactic frame (Model 940, Kopf), were shaved, and were depilated. Their temperature was maintained with a warm-water-circulation blanket. Sterile saline solution 0.9 % was administered intra-peritoneumally at 0.5 mL per 10g of body weight to maintain adequate intravascular volume during the procedure. Room-temperature deionized water was applied to the skin and to the outside of the experimental assembly's water retaining membrane. Ultrasound coupling gel was then applied to the mouse head, the experimental assembly was mounted to the stereotactic system, and lowered above the head to provide coronal imaging planes (Fig. 9. A). The acoustic coupling between the assembly and the animal was verified with B-mode imaging.
All mice underwent an ULM imaging protocol prior to BP-ETACI cavitation mapping. For ULM imaging, 5 minutes of data was recorded over 15 minutes. A bolus consisting of a 4 uL/g of bodyweight dose of 1:20 phosphate buffered saline (PBS)-diluted Definity solution was injected in the tail vein of the animal following the start of the recording.
BP-ETACI was conducted after the previously injected uB were allowed to decay. For all in vivo BP-ETACI experiments, one bolus consisting of a 4 uL/g dose of 1:20 PBS-diluted Definity solution and of a 2 uL/g dose of 1% Evans blue dye in PBS was used. The focused transducer was set to a peak-negative-pressure of 0.1 MPa. For animals in the mapping-only-group, the focused transducer was set to emit 5 bursts and thus 5 ensembles were recorded. The bolus was injected in the tail vein of the animal 10 seconds before the start of the sonication. For the animals in the BBBO-group, real time BP-ETACI cavitation mapping and focused transducer sonication were initiated, and bolus injection occurred within 30 seconds from the start of the sonication. The arrival of the bolus was observed in BP-ETACI. The real-time BP-ETACI processing was then done for 1 therapeutic burst every 20 to monitor that the focal zone placement was maintained. Recording and focused transducer sonication were continued for a total of 4 minutes after bolus injection. The total time of BP-ETACI data recorded during BBBO varied between animals depending on the save and process time of ensembles. Some of the focused transducer bursts that occurred late into each BBBO procedure were not recorded because their trigger signal overlapped with the save time of the previous burst. From all the ensembles acquired during BBBO, two cavitation maps were produced. One from the first 5 focused transducer bursts acquired later than 10s after the bolus injection and one from all ensembles acquired later than 10s after the bolus injection.
The mice were transcardially perfused with 4% paraformaldehyde at 7.4 pH 24h after the procedure. The brains were cryoprotected, frozen, and cut into 40 um coronal sections. The section that best aligned with the focal zone for each animal was
identified and tile scanned with a fluorescence microscope (Aperio Versa 200, Leica). The same exposure settings were used for each slide.
## Results
### Illustration of the Theoretical Framework
Simulations of the signal framework were conducted to evaluate the capability of BP-ETACI to separate the BP-ETACI signal, originating from the modulation induced by the focused transducer, from the PD signal, originating from scatterer motion. This separation was evaluated in the frequency domain. The signal spectrum was simulated for single scatterers and Poiseuille flow. We show that BP-ETACI can separate modulation from flow in the frequency domain. For the single scatterer model, the frame rate and the modulation frequency used in the simulation were chosen because at those frequencies the signal framework in (6) places the center of the modulated spectral bands at \(0\neq f/f_{s}=0.33,-0.33\), away from the PD signal at \(f/f_{s}=0\). Fig. 3. A shows simulated spectra for single scatterers moving at different speeds. The curves show modulated sidebands around \(f/f_{s}=0.33\) and \(-0.33\), as predicted. Scatterer movement shifts all signal bands off their \(v=0\) m/s frequency, shown with dotted lines. Increasingly fast movement increases this shift and broadens the bands, which was predicted by (6). Fig. 3. B shows simulated spectra for an ensemble of 1000 particles in a simulation of Poiseuille flow. The signal is concentrated in the same three spectral bands as in Fig. 3. A. A Poiseuille flow produced spectral bands that hold two peaks: one at the \(v=0\) m/s frequency corresponding to scatterers in no-slip-condition along the walls, and one corresponding to flowing scatterers, increasingly offset from the first one as the flow rate increases.
### In Vitro Study
#### Shape of the Focal Zone
Prior to demodulation according to (11), the B-mode frames showed uB in the field of view (Fig. 4. B). After demodulation, only the uB inside the focal zone were distinguishable (Fig. 4. C). The temporal accumulation of 60s of demodulated BP-ETACI frames showed the shape of the focal zone (Fig. 4. D) and presented strong similarities with a map of the focused transducer's peak negative pressure as measured with a hydrophone (Fig. 4. E) that was registered to the same field of view as Fig. 4. D. Both the BP-ETACI map and the hydrophone map showed a clearly defined focal zone in the same location and with similar sizes.
Fig. 4: Mapping the acoustic amplitude of the focused transducer’s emission in free field. (A) Plane wave acquisition of agitated dilute microbubbles. (B) Beamforming and coherent compounding. (C) Demodulation and filtering as described in (11) highlights microbubbles inside the focal. (D) Cavitation map obtained by accumulating BP-ETACI images over a long timeframe to ensure the agitated microbubbles visited every point of the field of view uniformly. (E) Peak Negative Pressure map over the imaging plane obtained by raster-scanning the focal spot with a hydrophone. Both the raster scan and the BP-ETACI-obtained map give similar results.
Fig. 3: Slow-time spectra predicted by the developed signal framework. (A) Slow-time spectrum of scatterers moving at different speeds. The parameters used are realistically attainable with ultrasound equipment. (B) Slow-time Spectrum averaged over 1000 scatterers with speeds distributed according to a Poiseuille Flow. All other parameters are the same as in (A).
_Selectivity of BP-ETACI Towards Cavitating \(\mu\)B_
While the B-mode frames showed the whole channel phantom (Fig. 5. B), the BP-ETACI cavitation map only held signal in the part of the \(\upmu\)B-containing-phantom's channel intersecting the focal zone (Fig. 5. C). Evaluating the average spectrum in sections of the field of view (Fig. 5. D-E-F) revealed that modulated bands appeared strongest at the overlap of the focused transducer's focal zone and of the phantom's channel. Both the signal spectrum in the overlap between the phantom's tissue mimicking material and the focused transducer's focal and the signal spectrum in the phantom's channel outside the focused transducer's focal contained almost no modulated bands.
Fig. 5: Cavitation mapping with BP-ETACI when part of the focal spot is obstructed by tissue. (A) Ultrafast plane wave acquisition of dilute microbubbles flowing in a flow phantom. (B) beamforming and coherent compounding. (C) BP-ETACI cavitation map showing the focal spot interrupted by the phantom walls (D-E-F) Double-sided-slow-time-DFTs averaged over subregions of the field of view.
Fig. 6: Cavitation mapping with ETACI at increasingly fast flow rates. (A) Compounded frames of dilute microbubbles flowing in a flow phantom. (B) Double-sided-slow-time-DFTs of increasingly fast flow. (C) Zoom over the BP-ETACI demodulation signal band. (D-E-F) Similar cavitation maps obtained at increasingly fast flow rates. (G) Cavitation map when the flow is fast enough to mix the signal bands.
_Effect of Flow on BP-ETACI_
In Fig. 6., the influence of flow on BP-ETACI cavitation maps was evaluated in a flow phantom. The spectrum evaluated on B-mode frames (Fig. 6. A) showed BP-ETACI bands for all flow rates (Fig. 6. B-C). The BP-ETACI bands were composed of 1) A tall narrow peak at \(f/f_{\mathrm{s}}=0.33\) which corresponded to \(\upmu\)B confined to the phantom's walls. 2) As the flow rate was increased the signal bands corresponding to moving \(\upmu\)B shifted in frequency and widened. After demodulation according to (11) and time accumulation, cavitation maps contained the signature of both immobile and flowing \(\upmu\)B (Fig. 6. D-E-F-G). The immobile \(\upmu\)B were confined by the slow flow near the walls and most likely by radiation forces on the wall away from the focused transducer. Cavitation maps produced with significantly different flow rates were virtually identical (Fig. 6. D-E-F). When a threshold flow rate was reached, the central PD band was wide enough to be included in the filter passband in (11) and was visible in the cavitation map (Fig. 6. G). Bright streaks associated with fast flow appeared over the whole channel.
### In Vivo Study
_BP-ETACI Produces Cavitation Maps In Vivo_
In vivo, focused-transducer-induced cavitation added a shape that matched the extent of the expected focal zone to BP-ETACI-produced images (Fig. 7. A C) and produced modulated bands in the spectrum (Fig. 7. B D). Additionally, there were noticeable details in the cavitation map along the focal zone, with bright blood vessels and dark low-vascular density regions like the corpus callosum and the ventricular system. The BP-ETACI signal was strongest in the cortex and in subcortical regions. Signal associated with high blood velocities that overlap the passband of the demodulation filter in (11) (Fig. 7. B D) were present bilaterally, whether the focused transducer was turned ON or OFF.
_BP-ETACI Cavitation Maps can be Obtained Without BBBO_
For the animals in the in the BBBO-group, all the anatomical structures that were visible in the cavitation maps created from data acquired over the full BBBO were also visible in the cavitation maps created from only the first 5 focused transducer bursts that occurred 10 s after \(\upmu\)B injection, a non-BBB opening dose, even if noise was reduced when more data was used (Fig. 8. A-F). The cavitation map pixel values (Fig. 8. G-I) obtained at the no-BBBO dose tended to be higher than the pixel values obtained over
Fig. 7: BP-ETACI cavitation mapping. (A) BP-ETACI cavitation map created when the focused transducer is OFF. Signal is only visible in vessels where the PD spectrum overlaps with the BP-ETACI spectrum. (B) Slow-time spectrum averaged over selected subregions in the field of view. (C) BP-ETACI cavitation map created from data when the focused transducer is ON. The focal zone becomes visible. (D) Slow-time spectrum averaged over selected subregions in the field of view.
the BBBO procedure, which was expected for noisy pixels, but this trend was even stronger for pixels in the vasculature. The pixels that deviated the most from the \(y=x\) line, like those inside the cyan ellipse in Fig. 8. I, corresponded to cavitation bearing pixels in the cortical and subcortical regions of the corresponding cavitation map. The intensity of the cavitation signal was reduced over time as the \(\upmu\)B were eliminated by the mice's metabolism (Fig. 8. J-L). Thus, 5 focused transducers bursts were sufficient to produce cavitation maps and integrating over the whole BBBO-procedure reduced noise.
### BP-ETACI Cavitation Mapping Predicts BBBO Location In Vivo
BP-ETACI cavitation mapping showed cavitation regions in the ipsi-lateral side of the brain (Fig. 9. B-G). Superimposing ULM vascular maps on the BP-ETACI cavitation maps helped identify which parts of the underlying vascular anatomy generated BP-ETACI signal. There was strong cavitation in cortical vessels and in large subcortical vessels along the path of the focal zone (Fig. 9. H-M).
Evans blue histology of coronal sections that matched the imaging and therapeutic plane of the animals in the BBBO-group revealed one-sided BBB openings (Fig. 9. N-P). The BBBO region was always situated on the ipsi-lateral side. Overall, the fluorescence was most intense in the top half of the brain, in the cortex and in some subcortical regions. Fluorescence was more intense around vessels, especially in the cortex and subcortical regions. Histology of the animals in the mapping-only-group showed no fluorescence (Fig. 9. Q-S).
The BP-ETACI cavitation maps matched the most intense regions of fluorescence on the histological sections in the cortical and subcortical regions. Low-vascular regions like the corpus callosum and the ventricular system did not generate signal in the cavitation maps and did not show fluorescence in histology. Regions with BP-ETACI signal on the contra-lateral side of the cavitation maps were void of fluorescence in histology and thus contra-lateral BP-ETACI signal and BBBO were uncorrelated.
Fig. 8: BP-ETACI-produced cavitation maps show the same structures when built from doses the do not induce BBBO as when built from data acquired over a BBBO procedure. (A-C) Cavitation maps produced with doses that do not open the BBBO. (D-F) Cavitation maps produced over an entire BBBO procedure. (G-I) Pixel to pixel comparison between non-BBBO and BBBO cavitation maps. (J-L) Mean BP-ETACI intensity over the therapeutic focused bursts. The bursts used to build the non-BBBO and the BBBO cavitation maps are highlighted, and the moment \(\upmu\)B were injected is indicated.
[MISSING_PAGE_POST]
izhevsky, A. A. Arizhevsky, and A. A. Krizhevsky, "The k-means clustering algorithm", _IEEE Transactions on Image Processing_, vol. 10, no. 1, pp. 11
## Discussion
In this study, we evaluated the potential of BP-ETACI to map the non-inertial cavitation in the mouse brain in vivo to support FUS-BBBO planning. A sampling framework based on bandpass sampling to separate the cavitating \(\upmu\)B from the rest of the signal was developed and validated in vitro. Precise aliasing allowed BP-ETACI to place the MHz range cavitation of \(\upmu\)B away from unmodulated PD in the frequency domain by using a physically attainable framerate in ultrafast pulse-echo imaging. In vitro, tissue and non-cavitating \(\upmu\)B did not produce BP-ETACI signal, while cavitating \(\upmu\)B produced strong BP-ETACI signal at the expected location in the spectrum. In the mouse brain, BP-ETACI predicted BBBO location while requiring cavitation doses lower than the BBBO threshold.
The bandpass signal framework developed in this study allows for either the cavitation frequency or the frame rate to be adjusted to offset the cavitation signal from the unmodulated PD signal in the frequency domain. Previous studies have used both approaches under a paradigm that required the PRP and the cavitation period to have a common multiple which enabled ultrafast planewave acquisition as the steering angle could be changed between RM cycles. Muleki-Seya et al. [20] pioneered this paradigm and manipulated the cavitation frequency to create a beat with a set PRP. Instead of the expected single beat frequency, multiple frequencies were observed. All generated frequency bands were demodulated and summed to reconstruct contrast images. Jing and Lindsey [21] used a set, very low cavitation frequency of 100 kHz and were able to induce RM at depth through ex vivo bone. The 1 \(\upmu\)s PRF quantum of their programmable ultrasound system was sufficient to sample the RM cycle at 10 different phases by adding a wait period of 1 us to the PRP between frames. In a previous study with ETACI, Blais et al. [15] used a set RM frequency and manipulated the framerate by increasing the transmit beamforming delays frame-to-frame, which allowed the use of the higher frequency cavitation required for BBBO in the mouse brain. The ultrafast paradigm used in these studies required the use of commonly multipliable PRP and cavitation periods. We believe that the reliance on a whole number of samples per RM cycle, and the precision in the timings required to do so, was why studies such as [20], [21] and [15] found frequency peaks not predicted by theory. A critical difference when reframing the sampling process as bandpass sampling is that it allowed the placement of the BP-ETACI aliases anywhere in the spectrum and it eliminated the constraining relationship between PRF and modulation frequency. It yielded a spectrum with a single modulated band on each side of the central PD band and made in vivo use possible as it allowed more frequency-space to separate unmodulated PD and cavitation.
BP-ETACI, like PD, is a pulse-echo technique where the signal is sampled in slow time for each pixel of the image. PD techniques enabled high resolution blood flow imaging by decoupling the imaging resolution from the blood-flow signal sampling. BP-ETACI inherits this advantage from PD imaging but applies it to cavitation imaging instead. As such, BP-ETACI's axial resolution is dictated by the imaging pulse and sequence, instead of being limited by the reception aperture like in PAM. The technique was also applied in real-time, which was used to monitor the targeting of the focal zone. We showed that BP-ETACI resolves the structure of brain tissue and the regional vascular topology that contains the cavitating \(\upmu\)B. Resolving such structures is important because it has been demonstrated that vessels seed molecular extravasation in FUS-BBBO [25]. Indeed, in the current study, Evans blue fluorescence was most intense around vessels and was less intense in the corpus callosum.
Prior work has found that the size and density of vessels especially matter in FUS-BBBO [10]. Cavitation mapping at a vascular scale meant that BP-ETACI and Evans blue fluorescence represent different aspects of FUS-BBBO. BP-ETACI mapped the cavitation while fluorescence showed Evans blue extravasation and cellular uptake in the targeted region. BP-ETACI could be used to further study therapeutic agent kinetics after BBBO. Indeed, high-resolution cavitation imaging techniques like BP-ETACI make it possible to predict the regions reached by the delivered molecule and help planning for treatment to reduce off-target effects and increase therapeutic dose in FUS-BBBO [26].
The images presented in the current study were distorted because the experimental assembly used a layer of water at least 100 wavelengths thick above the target, which especially affected the ULM images. Because the relevant vasculature was still observable, no attempt was made to correct the difference in sound speed between the water and the target. However, ULM images and BP-ETACI cavitation maps could benefit from aberration correction techniques [27], particularly when adapted to ultrafast imaging [28].
The BP-ETACI cavitation maps contained unmodulated PD signal from vessels undergoing fast flow. In principle, these areas could be detected beforehand and subtracted from the BP-ETACI maps. BP-ETACI still provided useful information as fast vessels did not overlap with the focal zone. Furthermore, they were easily identifiable with knowledge about the anatomy of the large, symmetrical, high-flow arteries of the brain. The current study made no attempt to eliminate this flow-generated signal, but it could be eliminated by increasing the framerate and using a scheme where the modulated bands are aliased to \(f/f_{\text{s}}=0.5\) to maximize the frequency separation between the modulated bands and the central PD band.
Long (300ms) tone bursts were used as the FUS sonication to create the cavitation maps in this study, allowing acquisition of ensembles of 1200 frames during a single focused transducer burst, which was optimal for the data transfers of the programmable ultrasound system. However, BP-ETACI does not require long bursts. Short therapeutic bursts of 10-50 ms, which are consistent with existing literature on FUS-BBBO [29] are most attainable as long as enough frames are acquired during a burst to allow the detection of the cavitation. However, short, few-cycles therapeutic pulses present a challenge. Further optimization on the framerate, the ensemble size and the demodulation filter transient time are possible.
## Conclusion
BP-ETACI was used as a transcranial non-inertial cavitation mapping method for FUS-BBBO in the mouse brain. Using a novel RM framework based on bandpass sampling, we induced separation between motion and cavitation frequency bands. Cavitation mapping using this framework revealed cavitation-bearing vasculature at high resolution that matched vascular maps. Cavitation mapping was possible at non-BBBO doses and predicted the BBBO location, with ipsi-lateral cavitation signals correlating with Evans blue extravasation. The developed technique is easy to use and is directly compatible with broad sonication parameters in FUS-BBBO experiments. The high-resolution cavitation maps can be obtained in real time transcranially in mice, paving the way for further studying FUS-BBBO in the treatment of neurological diseases.
## Ethics
Animal handling was done in observation of the Canadian Council on Animal Care guidelines and in accordance with McGill University Animal Care Committee regulation under protocol 4532.
## Funding
We acknowledge the support of FRQNT, TransMedTech, IVADO, CIHR, NSERC (DGECR-2020-00229) and of the CFI (38095 and 246916)
|
2306.14257 | A Self-Encoder for Learning Nearest Neighbors | We present the self-encoder, a neural network trained to guess the identity
of each data sample. Despite its simplicity, it learns a very useful
representation of data, in a self-supervised way. Specifically, the
self-encoder learns to distribute the data samples in the embedding space so
that they are linearly separable from one another. This induces a geometry
where two samples are close in the embedding space when they are not easy to
differentiate. The self-encoder can then be combined with a nearest-neighbor
classifier or regressor for any subsequent supervised task. Unlike regular
nearest neighbors, the predictions resulting from this encoding of data are
invariant to any scaling of features, making any preprocessing like min-max
scaling not necessary. The experiments show the efficiency of the approach,
especially on heterogeneous data mixing numerical features and categorical
features. | Armand Boschin, Thomas Bonald, Marc Jeanmougin | 2023-06-25T14:30:31Z | http://arxiv.org/abs/2306.14257v1 | # A Self-Encoder for Learning Nearest Neighbors
###### Abstract
We present the self-encoder, a neural network trained to guess the identity of each data sample. Despite its simplicity, it learns a very useful representation of data, in a self-supervised way. Specifically, the self-encoder learns to distribute the data samples in the embedding space so that they are linearly separable from one another. This induces a geometry where two samples are close in the embedding space when they are not easy to differentiate. The self-encoder can then be combined with a nearest-neighbor classifier or regressor for any subsequent supervised task. Unlike regular nearest neighbors, the predictions resulting from this encoding of data are invariant to any scaling of features, making any preprocessing like min-max scaling not necessary. The experiments show the efficiency of the approach, especially on heterogeneous data mixing numerical features and categorical features.
Keywords:Neural network, nearest neighbors, linear invariance.
## 1 Introduction
Despite the recent progress of machine learning, the question of the optimal encoding of data remains open, especially for tabular data [2]. In this paper, we present the self-encoder, a neural network trained to guess the identity of each data sample. Given \(n\) data samples \(x_{1},\ldots,x_{n}\in\mathrm{I\!R}^{d}\), the self-encoder maps any sample \(x\in\mathrm{I\!R}^{d}\) to a probability distribution \(p\) over \(\{1,\ldots,n\}\), in such a way that \(p(x_{i})\) is close to a Dirac in \(i\) for each \(i\in\{1,\ldots,n\}\). In other words, the self-encoder is a classifier where each sample of the train set has its own label (its index). As such, it belongs to the category of self-supervised learning methods, like auto-encoders. The key difference is that, while auto-encoders rely on a reconstruction task, with the output in the same space \(\mathrm{I\!R}^{d}\) as the original sample, our self-encoder relies on an identification task, with the output in the set of probability distributions on the set of indices \(\{1,\ldots,n\}\).
Despite its simplicity, the self-encoder learns a very useful representation of data. It learns to distribute data in the embedding space in a way that makes them linearly separable from one another. This induces a geometry where two samples are close in the embedding space when they are not easy to differentiate. In particular, the self-encoder can be used for any classification or regression task, using the \(k\) nearest neighbors in the sense of this geometry, as given by the ranking of the predicted probabilities \(p_{1}(x),\ldots,p_{n}(x)\), for any sample \(x\)
Interestingly, these nearest neighbors do not correspond to those given by the Euclidean distance in the original space \(\mathrm{I\!R}^{d}\) (nor by any other usual metric like a Minkowski metric or cosine similarity for instance). The geometry is _learned_ by the model. In particular, the predictions resulting from this encoding of data are invariant to any scaling of the features, making any preprocessing like min-max scaling not necessary.
A drawback of the self-encoder is its complexity, as the dimension of the output is equal to \(n\), the size of the training set. This induces a time complexity in \(O(n^{2})\) for training. To overcome this problem, we present a variant based on sampling where the model is trained to predict the identity of samples in a random subset of the training set, reducing the training time.
The rest of the paper is organized as follows. We present the related work in Section 2. The self-encoder is presented in Section 3. In Section 4, we prove that the learned geometry is invariant to linear transformations, like any scaling of the features. The behavior of the self-encoder in the presence of categorical features is analyzed in Section 5. The variant of the model based on sampling is described in Section 6. The experiments are presented Section 7. Section 8 concludes the paper.
## 2 Related work
**Nearest Neighbors.** A simple and yet fundamental method to solve tasks in machine learning is the proximity search. It relies on the intuition that close vectors in the feature space should have close properties (similar labels in classification and close values in regression). The most common classifier is the \(k\) nearest neighbor (\(k\)-NN) algorithm [4], which assigns a label to a point by choosing the most present label among its \(k\) nearest neighbors. Usual similarity measures are the Euclidean distance or the cosine similarity but \(k\)-NN can also be applied to any similarity measure, such as that learned by the proposed self-encoder model.
**Kernel methods.** Many machine learning algorithms are able to work with kernels, i.e., similarity functions that differ from the usual Euclidean metric. This is the case of the Support Vector Machine (SVM) [3], a classifier that tries to find linear hyperplanes separating samples of different labels. When the dataset is not linearly separable, the SVM can still be used with the help of a kernel that moves the training vectors into another space, possibly of different dimension, in which they are more likely to be separable. The key point is that the kernel must be chosen by the user. In comparison, the self-encoder _learns_ a geometry that makes data samples linearly separable from another. Moreover, it is a self-supervised learning technique, that does not rely on labeled data.
**Auto-encoders.** In its simplest form, an auto-encoder is a neural network model that learns a compact latent representation of data points with an encoder part and that is able to reconstruct them from this representation with a decoder
part. The rationale is that a performing encoder should be able to extract the core discriminative features that define the samples of the dataset and this is measured by the ability of the decoder to reconstruct the original data from those features.
There is a large variety of application domains for auto-encoders and even more different structures. Let us cite for example variational auto-encoders that are used in a probabilistic framework for variational inference [6] or denoising auto-encoders that can be used for example to improve image resolutions by changing the reconstruction criterion [8] of manually noised data samples.
The output of an auto-encoder being a reconstructed version of the input vector, the measure of reconstruction loss is key in the training process. Unlike auto-encoders, in the case of the Self-Encoder, there is no need to engineer a good similarity measure between the original data sample and its reconstruction because the objective is simply to predict the identity of each data sample as an index in \(\{1,\ldots,n\}\) and not the data point in \(\mathrm{I\!R}^{d}\) itself.
## 3 The self-encoder
Let \(x_{1},\ldots,x_{n}\in\mathrm{I\!R}^{d}\) be the set of training samples with \(d\) the dimension of the feature space and \(n\) the number of samples. The Self-Encoder is a multi-layer perceptron with input dimension \(d\) and output dimension \(n\), trained to predict the identity \(i\) of each data sample \(x_{i}\).
#### 3.0.1 Hidden layers
The encoder consists of \(L\) hidden layers. Each layer \(l=1,\ldots,L\) consists in an affine transformation followed by an activation function:
\[h^{(1)} =\phi^{(1)}\left(W^{(1)}x+b^{(1)}\right)\] \[h^{(2)} =\phi^{(2)}\left(W^{(2)}h^{(1)}+b^{(2)}\right)\] \[\vdots\] \[h^{(L)} =\phi^{(L)}\left(W^{(L)}h^{(L-1)}+b^{(L)}\right)\]
where \(\phi^{(1)},\ldots,\phi^{(L)}\) are the activation functions, typically non-linear. The dimensions of the successive outputs, say \(d_{1},\ldots,d_{L}\), are hyper-parameters. The weight matrices \(W^{(1)},\ldots,W^{(L)}\) and the bias vectors \(b^{(1)},\ldots,b^{(L)}\) must be learned.
#### 3.0.2 Output layer
The output layer is a fully connected layer with input dimension \(d_{L}\) (the output dimension of the last hidden layer) and output dimension \(n\). This affine transformation is followed by an activation function \(\phi\) which is either a coordinate-wise sigmoid function:
\[\forall u\in\mathrm{I\!R}^{n},\quad\phi(u)=\left(\frac{e^{u_{1}}}{1+e^{u_{1}}},\ldots,\frac{e^{u_{n}}}{1+e^{u_{n}}}\right)\]
or a SoftMax function:
\[\forall u\in{\rm I\!R}^{n},\quad\phi(u)=\left(\frac{e^{u_{1}}}{\sum_{i=1}^{n}e^{u_ {i}}},\ldots,\frac{e^{u_{n}}}{\sum_{i=1}^{n}e^{u_{i}}}\right)\]
The output of the network is then a vector \(p=\phi(Wh^{(L)}+b)\in[0,1]^{n}\) that can be interpreted as probabilities: the \(i\)th component \(p_{i}\) is the probability that the input \(x\) corresponds to the training sample \(x_{i}\). Observe that the probabilities sum to \(1\) only with the SoftMax function (with the sigmoid function, the probabilities are learned independently for each training sample \(i\) by the output layer). The weight matrix \(W\) and the bias vector \(b\) must be learned, together with the other parameters.
#### 3.2.2 Loss function
In the following, \(f\) denotes the learned function of the network, mapping sample vectors \(x\in{\rm I\!R}^{d}\) to probability vectors \(p\in[0,1]^{n}\). \(f\) is a parametric function and its parameters are the weight matrices \(W^{(1)},\ldots,W^{(L)},W\) and the bias vectors \(b^{(1)},\ldots,b^{(L)},b\). Those are learned by minimizing the following Binary Cross Entropy (BCE) loss by gradient descent:
\[\mathcal{L}=-\sum_{i=1}^{n}\left(\log f_{i}(x_{i})+\sum_{j\neq i}\log(1-f_{j}( x_{i}))\right) \tag{1}\]
#### 3.2.3 Interpretation
The Self-Encoder learns a latent representation of data, given by the last hidden layer, where the \(n\) training samples are linearly separable. It is the role of the output layer to find the hyperplanes (given by the weight matrix \(W\) and the bias vector \(b\)) separating each training sample.
#### 3.2.4 No hidden layer
Observe that the Self-Encoder can also be trained without any hidden layer. It then reduces to the output layer, i.e. a perceptron with input dimension \(d\) and output dimension \(n\). Equivalently, the Self-Encoder then consists of \(n\) binary logistic regressions (for the sigmoid activation function) or a single multinomial logistic regression (for the SoftMax activation function).
#### 3.2.5 Geometry
In both cases (with or without hidden layers), the Self-Encoder learns a specific similarity measure in the sense that it can predict the training samples that are the most similar to any new data sample \(x\). This measure depends on the distribution of the training samples \(x_{1},\ldots,x_{n}\) in the original space \({\rm I\!R}^{d}\). Any sample \(x\in{\rm I\!R}^{d}\) is said to be _close_ to the training sample \(x_{i}\) if the corresponding predicted probability \(p_{i}=f_{i}(x)\) is close to \(1\).
## 4 Invariance property
The Self-Encoder is invariant to invertible affine transformations of the training data, as stated below.
Proposition 1: _Let \(f\) be the mapping learned by the encoder with training samples \(x_{1},\ldots,x_{n}\). For any invertible matrix \(M\in\mathrm{I\!R}^{d\times d}\) and vector \(v\in\mathrm{I\!R}^{d}\), let \(\tilde{x}_{1},\ldots,\tilde{x}_{n}\) be the new training samples obtained by affine transformation \(x\mapsto Mx+v\). The new mapping \(\tilde{f}\) defined by:_
\[\forall\tilde{x}\in\mathrm{I\!R}^{d},\quad\tilde{f}(\tilde{x})=f(M^{-1}( \tilde{x}-v))\]
_minimizes the cross-entropy loss (1) for the training samples \(\tilde{x}_{1},\ldots,\tilde{x}_{n}\). Both encoders are related through the affine transformation:_
\[\forall x\in\mathrm{I\!R}^{d},\quad f(x)=\tilde{f}(Mx+v).\]
In other words, if the training data are the same up to some invertible affine transformations, so are the mappings learned by the encoder.
Proof: The mapping \(\tilde{f}\) is the same encoder as \(f\) except for the first hidden layer, whose output \(\tilde{h}^{(1)}\) for the input \(\tilde{x}\) is given by:
\[\tilde{h}^{(1)}\left(\tilde{x}\right) =h^{(1)}\left(M^{-1}(\tilde{x}-v)\right)\] \[=\phi^{(1)}\left(W^{(1)}(M^{-1}(\tilde{x}-v))+b^{(1)}\right)\] \[=\phi^{(1)}\left(W^{(1)}M^{-1}\tilde{x}+b^{(1)}-W^{(1)}M^{-1}v\right)\] \[=\phi^{(1)}(\tilde{W}^{(1)}\tilde{x}+\tilde{b}^{(1)}),\]
for the weight matrix and bias vector:
\[\tilde{W}^{(1)} =W^{(1)}M^{-1}\] \[\tilde{b}^{(1)} =b^{(1)}-W^{(1)}M^{-1}v.\]
The corresponding binary cross-entropy loss is minimized, given that:
\[\mathcal{L} =-\sum_{i=1}^{n}\left(\log\tilde{f}_{i}(\tilde{x}_{i})+\sum_{j \neq i}\log(1-\tilde{f}_{j}(\tilde{x}_{i}))\right)\] \[=-\sum_{i=1}^{n}\left(\log f_{i}(x_{i})+\sum_{j\neq i}\log(1-f_{j }(x_{i}))\right)\]
This invariance property is a clear difference with the usual Euclidean distance. To illustrate this, Figure 1 shows the Voronoi diagram (regions formed by the nearest neighbors) associated with \(n=4\) points of \(\mathrm{I\!R}^{2}\) for the Self-Encoder and for the Euclidean distance. On the left, the 4 points form a square and the Voronoi diagrams coincide, by symmetry. On the right, a linear transformation is applied; the Voronoi diagram is obtained by the same linear transformation for the Self-Encoder, while it changes completely for the Euclidean distance.
This invariance property is handy as it simplifies the pre-processing steps, which can become tedious and that usually have an impact on the performances of many classifiers. This will be showed later in the experiments for the Euclidean \(k\)-NN classifier.
Figure 1: Impact of a linear transformation on the Voronoi diagram (regions of nearest neighbors) formed by \(n=4\) points in \(\mbox{\rm I}\!\mbox{\rm R}^{2}\).
## 5 Categorical features
We claim that the Self-Encoder robustly handles categorical features, in the sense that it does not depend on the number of bits they are encoded on, unlike Euclidean Nearest Neighbor (NN).
Given \(x^{(1)},\ldots,x^{(n)}\) training samples with categorical features in \(\{0,1\}^{d}\), data redundancy in the features does not modify the optimum of the function. If there is a pair \((k_{1},k_{2})\) such that \(\forall i,x_{k_{1}}^{(i)}=x_{k_{2}}^{(i)}\) then the contributions to the loss involving these features is duplicated and they should result in the same corresponding weights in the input layer, which should not impact the learned geometry. If there is a pair \((k_{1},k_{2})\) such that \(\forall i,x_{k_{1}}^{(i)}=1-x_{k_{2}}^{(i)}\) then the \(k_{1}\)-th feature is a invertible affine transformation of the \(k_{2}\)-th feature and the invariance property of Proposition 4.1 tells us that the first case of redundancy applies.
Let us illustrate on a simple example how the similarity measure learned by a Self-Encoder can outperform the usual Euclidean metric on a nearest-neighbor search by handling categorical features differently.
\[X_{1}=\begin{pmatrix}x_{1}^{(1)}\\ x_{1}^{(2)}\\ x_{1}^{(3)}\\ x_{1}^{(4)}\\ x_{1}^{(5)}\end{pmatrix}=\begin{pmatrix}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&1\\ 1&0&1&0\end{pmatrix} \tag{2}\]
\[X_{2}=\begin{pmatrix}x_{2}^{(1)}\\ x_{2}^{(2)}\\ x_{2}^{(3)}\\ x_{2}^{(4)}\\ x_{2}^{(5)}\end{pmatrix}=\begin{pmatrix}1&0&0&0&1\\ 1&0&0&1&0\\ 1&0&1&0&0\\ 0&1&0&0&1\\ 0&1&0&1&0\end{pmatrix} \tag{3}\]
Let \(X_{1}\) and \(X_{2}\) be defined in Equations 2 and 3. The only difference between the two set of samples is that the binary feature of the first binary column of \(X_{1}\) is encoded on two bits in \(X_{2}\) in the first two columns. It is reasonable to expect that a classifier trained on \(X_{1}\) and fed with \(\bar{x}_{1}=\left(1\;1\;0\;0\right)\) makes the same decision as a classifier trained on \(X_{2}\) and fed with \(\bar{x}_{2}=\left(0\;1\;1\;0\;0\right)\). This is not the case for a Euclidean nearest-neighbor as the closest vector from \(\bar{x}_{1}\) in \(X_{1}\) is \(\left(0\;1\;0\;0\right)\) and the closest ones from \(\bar{x}_{2}\) in \(X_{2}\) are \(\left(1\;0\;1\;0\;0\right)\), \(\left(0\;1\;0\;0\;1\right)\) and \(\left(0\;1\;0\;1\;0\right)\). On the other hand, the Self-Encoder returns \(\left(0\;1\;0\;0\right)\) as the most similar to \(\bar{x}_{1}\) and \(\left(1\;0\;1\;0\;0\right)\) as the most similar to \(\bar{x}_{2}\).
## 6 Sampling
Under the reasonable assumption that the dimensions of the hidden layers do not scale with \(n\), the time complexity is that of the output layer: \(O(n^{2})\) for training and \(O(n)\) for evaluation. The training time complexity is higher than the \(n\)-linear time complexity of Euclidean \(k\)-NN. However, training is only done once
and then the trained model can be used in \(O(n)\) for similarity search. Moreover, the natural way to implement a Multi-Layer Perceptron (MLP) model is to use machine learning frameworks such as PyTorch [7] or TensorFlow [1], which natively support GPU acceleration and highly reduce the computation time.
To control the memory usage and time complexity of the approach and in order to improve the performance, we propose to use a simple sampling strategy, where a random subset of the training set is selected, thus reducing the space and time complexity of the model. Given a set of training samples \(X=(x^{(i)})_{i\in\{1,\ldots,n\}}\), sampling generates a new training subsets of size \(s\) by randomly sampling vectors from \(X\). The model is then trained on the new training subset. Experiments show that that the performance of the sampled model is comparable to the complete one.
## 7 Experiments
To evaluate the quality of the similarity measure learned by the proposed Self-Encoder, we apply it to a classification task with \(k\)-NN method. The Self-Encoder \(k\)-NN is simply called Self-Encoder for simplicity. It is tested with and without a hidden layer and in normal and sampling modes. Training uses early stopping and learning rate decay.
The performances are compared to those of four classification baselines: the Euclidean \(k\)-NN with \(k=5\), the linear Support Vector Machine (SVM), the one-vs-all logistic regression and the multi-layer perceptron with one hidden layer of twenty neurons.
Ten datasets that are recurrent in the machine learning literature were selected for the experiments. They are all available from the UCI repository1 and descriptive figures can be found in Table 1. Among these, the German Credit dataset comes in two versions: one with numerical features and another one with 13 out of 20 features being categorical. The only pre-processing applied to all the datasets is the conversion of categorical features into a one-hot encoding.
Footnote 1: [https://archive.ics.uci.edu/ml/datasets.php](https://archive.ics.uci.edu/ml/datasets.php)
The Self-Encoder is implemented in PyTorch, the optimization is done using Adam [5]. Some hyper-parameters are fixed: the learning rate decay is set to 0.995, the size of the hidden layer is fixed to 20 and for the sampling mode, the number of visible samples is fixed to 100.
For each classification model and each dataset, the reported metric is the accuracy. It is measured using 5-fold cross validation. The learning rate to train each model on each _fold_ is chosen according to a log-uniform distribution between 0.001 and 2, using the Bayesian optimization library hyperopt2. The library is also used to choose the normalization function between sigmoid and SoftMax. All experiments can be reproduced using the code provided in the supplementary material.
Footnote 2: [https://hyperopt.github.io/hyperopt](https://hyperopt.github.io/hyperopt)
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & \# samples &
\begin{tabular}{l} Feature \\ dimension \\ \end{tabular} & \# classes \\ \hline Breast Cancer & 699 & 9 & 2 \\ \hline Digits & 1,797 & 64 & 10 \\ \hline Ecoli & 336 & 7 & 8 \\ \hline German credit & 1,000 & 24 & 2 \\ \hline German credit (categorical) & 1,000 & 20 & 2 \\ \hline Glass & 214 & 9 & 6 \\ \hline Ionosphere & 351 & 34 & 2 \\ \hline Iris & 150 & 4 & 3 \\ \hline Liver & 345 & 6 & 2 \\ \hline Wine & 178 & 13 & 3 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset details
\begin{table}
\begin{tabular}{|l|l l|l l|l l|} \cline{2-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\(k\)-NN} & \multicolumn{1}{c|}{MLP} & \multicolumn{1}{c|}{Logistic} & \multicolumn{1}{c|}{SVM} \\ \hline Breast cancer & **0.967** & 0.009 & 0.96 & 0.007 & 0.963 & 0.003 & 0.96 & 0.003 \\ \hline Digits & **0.983** & 0.004 & 0.934 & 0.013 & 0.965 & 0.004 & **0.983** & 0.002 \\ \hline Ecoli & **0.869** & 0.021 & 0.762 & 0.016 & 0.759 & 0.029 & 0.798 & 0.019 \\ \hline German credit & 0.684 & 0.012 & 0.741 & 0.026 & 0.738 & 0.012 & **0.754** & 0.012 \\ \hline German credit cat & 0.716 & 0.027 & 0.717 & 0.023 & 0.733 & 0.027 & **0.736** & 0.023 \\ \hline Glass & **0.616** & 0.066 & 0.439 & 0.099 & 0.579 & 0.046 & **0.616** & 0.06 \\ \hline Ionosphere & 0.835 & 0.032 & **0.903** & 0.033 & 0.855 & 0.039 & 0.838 & 0.03 \\ \hline Iris & 0.953 & 0.05 & 0.913 & 0.129 & 0.967 & 0.03 & **0.987** & 0.027 \\ \hline Liver & **0.693** & 0.025 & 0.609 & 0.098 & 0.684 & 0.063 & 0.687 & 0.057 \\ \hline Wine & 0.714 & 0.046 & 0.321 & 0.1 & 0.966 & 0.021 & **0.977** & 0.011 \\ \hline \end{tabular}
\end{table}
Table 2: Classification accuracies for the Self-Encoder along with other baselines on ten usual datasets. Performances are measured using a 5-fold cross validation mechanism. Best results are in bold. German credit cat is the version of the German credit dataset with categorical features.
All results are reported in Table 2. An obvious conclusion is that in both normal and sampling settings, the Self-Encoder performs better than the baselines. A second conclusion is that the Self-Encoder in the sampling framework performs comparably or better than the other baselines in the normal setting. This is reassuring because the Self-Encoder might need to be applied in sampling mode while other lighter models have access to all the samples.
To measure the impact of the invariance feature, the performance of the Self-Encoder has also been compared to the performance of the Euclidean \(k\)-NN algorithm on normalized numerical datasets. The results are shown in Table 3. As expected, normalization improves the score of the Euclidean \(k\)-NN in most cases but not up to the score of the Self-Encoder.
## 8 Conclusion
In conclusion, the Self-Encoder is an unsupervised method that learns a similarity measure specific to the training data that can be used for downstream tasks such as classification, its primary objective being to separate samples from one another.
\begin{table}
\begin{tabular}{|l|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\(k\)-NN} & \multicolumn{1}{c|}{\(k\)-NN} & \multicolumn{1}{c|}{SE} \\ \cline{2-4} \multicolumn{1}{c|}{} & normalized & \multicolumn{1}{c|}{\(k\)-NN} & \multicolumn{1}{c|}{SE} \\ \hline Breast cancer & 0.960 & **0.967** & **0.976** \\ \hline Digits & 0.970 & **0.983** & **0.983** \\ \hline Ecoli & 0.866 & **0.869** & **0.911** \\ \hline German & **0.693** & 0.684 & **0.768** \\ \hline Glass & **0.645** & 0.616 & **0.794** \\ \hline Ionosphere & **0.835** & **0.835** & **0.963** \\ \hline Iris & **0.960** & 0.953 & **1.0** \\ \hline Liver & 0.609 & **0.693** & **0.768** \\ \hline Wine & **0.966** & 0.714 & **0.977** \\ \hline \end{tabular}
\end{table}
Table 3: Classification accuracy of Euclidean \(k\)-NN on normalized datasets and Euclidean \(k\)-NN and Self-Encoder on raw datasets. This comparison is only reported on numerical datasets. Two best scores for each dataset are in bold. |
2310.09286 | Percolation with invariant Poisson processes of lines in the $3$-regular
tree | In this paper, we study invariant Poisson processes of lines (i.e,
bi-infinite geodesics) in the $3$-regular tree. More precisely, there exists a
unique (up to multiplicative constant) locally finite Borel measure on the
space of lines that is invariant under graph automorphisms, and we consider two
Poissonian ways of playing with this invariant measure. First, following
Benjamini, Jonasson, Schramm and Tykesson, we consider an invariant Poisson
process of lines, and show that there is a critical value of the intensity
below which a.s. the vacant set of the process percolates, and above which all
its connected components are finite. Then, we consider an invariant Poisson
process of roads (i.e, lines with speed limits), and show that there is a
critical value of the parameter governing the speed limits of the roads below
which a.s. one can drive to infinity in finite time using the road network
generated by the process, and above which this is impossible. | Guillaume Blanc | 2023-10-13T17:53:19Z | http://arxiv.org/abs/2310.09286v1 | # Percolation with invariant Poisson processes of lines in the 3-regular tree
###### Abstract
In this paper, we study invariant Poisson processes of lines (i.e, bi-infinite geodesics) in the 3-regular tree. More precisely, there exists a unique (up to multiplicative constant) locally finite Borel measure on the space of lines that is invariant under graph automorphisms, and we consider two Poissonian ways of playing with this invariant measure. First, following Benjamini, Jonasson, Schramm and Tykesson, we consider an invariant Poisson process of lines, and show that there is a critical value of the intensity below which a.s. the vacant set of the process percolates, and above which all its connected components are finite. Then, we consider an invariant Poisson process of roads (i.e, lines with speed limits), and show that there is a critical value of the parameter governing the speed limits of the roads below which a.s. one can drive to infinity in finite time using the road network generated by the process, and above which this is impossible.
## Introduction and main results
Let \(\mathbb{T}\) be the 3-regular tree (planar, rooted, and labeled using the Neveu notation, or Ulam-Harris labelling; as represented in Figure 1), and let \(\mathbb{L}\) be the space of lines (i.e, bi-infinite geodesics) in
Figure 1: The 3-regular tree \(\mathbb{T}\) (planar, rooted, and labeled using the Neveu notation, or Ulam–Harris labelling). In red, we have represented a few lines in \(\mathbb{T}\).
\(\mathbb{T}\). There exists a unique (up to multiplicative constant) locally finite Borel measure on \(\mathbb{L}\) that is invariant under graph automorphisms. While this is certainly guaranteed by abstract results on Haar measures (see, e.g, [10, Chapter 13] and references therein), we give a simple description of \(\mu\) in terms of the uniform measure on the boundary of \(\mathbb{T}\) (see Proposition 1 below), which is analogous to the better-known case of the hyperbolic plane [2, Section 6]. Then, we consider two Poissonian ways of playing with this invariant measure. To give some extra motivation, let us take a step back and abstract the setting a little bit (while remaining informal).
Motivation.Picture a nice homogeneous metric space \(X\), on which a group of isometries acts transitively. Add to the picture a class \(\mathcal{F}\) of closed subsets of \(X\), stable under the action of the isometries, and assume that \(\mathcal{F}\) is equipped with an isometry-invariant measure \(\mu\). For instance, if \(X\) itself is equipped with an isometry-invariant Borel measure \(\lambda\), then the class \(\mathcal{F}=\left\{\overline{B}(x,r)\,;\,x\in X\right\}\) of closed balls with some fixed radius is stable under the action of the isometries, and the pushforward \(\mu\) of \(\lambda\) by the map \(x\in X\mapsto\overline{B}(x,r)\) is an isometry-invariant measure on \(\mathcal{F}\). Consider the following concrete examples.
1. \(X\) is the Euclidean lattice \(\mathbb{Z}^{d}\), equipped with the counting measure \(\lambda\), and \[\mathcal{F}=\left\{\left\{x\right\};\,x\in\mathbb{Z}^{d}\right\}.\]
2. \(X\) is the Euclidean space \(\mathbb{R}^{d}\), equipped with the Lebesgue measure \(\lambda\), and \[\mathcal{F}=\left\{\overline{B}(x,1)\,;\,x\in\mathbb{R}^{d}\right\}.\]
In settings \((X,\mathcal{F},\mu)\) such as described above, two separate problems may be considered: a percolation problem, and a driving distance problem.
* **Percolation.** One can take a Poisson process \(\Pi\) with intensity \(\alpha\cdot\mu\) on \(\mathcal{F}\), where \(\alpha>0\) is a parameter, and ask about the percolative properties of the trace \(\bigcup_{F\in\Pi}F\) of the process; or that of its complement, the vacant set \(\mathcal{V}=\,X\backslash\bigcup_{F\in\Pi}F\,.\)
* **Driving distance.** One can take a Poisson process \(\Pi\) with intensity \(\mu\otimes v^{-\beta}\mathrm{d}v\) on \(\mathcal{F}\times\mathbb{R}_{+}^{*}\), where \(\beta>1\) is a parameter; and, viewing each atom \((F,v)\) of \(\Pi\) as a "road" in \(X\), with \(v\) the speed limit on the subset \(F\), consider the random metric \(T:X\times X\to\mathbb{R}_{+}\) induced by the driving distance with respect to the road network generated by \(\Pi\). To be more precise, first consider the (random) speed limit function \(V:X\to\mathbb{R}_{+}\) defined by \[V(x)=\sup\{v\,;\,(F,v)\in\Pi:F\ni x\}\quad\text{for all }x\in X,\] with the convention \(\sup\emptyset=0\). Then, define the driving distance \(T(x,y)\) between \(x,y\in X\) as the infimal time \(T>0\) for which there exists a path \(\gamma:[0,T]\to X\) from \(x\) to \(y\) that respects the speed limits set by \(\Pi\), in the sense that \[d(\gamma(s),\gamma(t))\leq\int_{s}^{t}V(\gamma(u))\mathrm{d}u\quad\text{for all }s,t\in[0,T].\] Equivalently, the driving distance metric \(T:X\times X\to\mathbb{R}_{+}\) is the first passage percolation distance function associated with the random field \(W:x\in X\mapsto 1/V(x)\).
Note that in the case of example (i) given above, the percolation problem amounts to Bernoulli site percolation on the Euclidean lattice, while the driving distance problem broadly amounts to site first passage percolation. In the case of example (ii), the percolation problem amounts to the continuum percolation model known as the Gilbert disk (or Boolean) model. That being said, one can also imagine settings \((X,\mathcal{F},\mu)\) allowing long range models, where the invariant measure \(\mu\) on \(\mathcal{F}\) does not simply come from an invariant measure \(\lambda\) on \(X\). We have in mind the following examples.
* \(X\) is the Euclidean space \(\mathbb{R}^{d}\) (\(d\geq 2\)) and \(\mathcal{F}\) is the space of affines lines, equipped with its unique (up to multiplicative constant) locally finite invariant Borel measure \(\mu\). In this case, the driving distance problem has been introduced by Aldous [1] and Kendall [5] a few years ago. It has been shown [5, 4, 3] that the driving distance metric \(T:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}_{+}\) is well-defined for and only for \(\beta>d\), and that the random metric space \(\left(\mathbb{R}^{d},T\right)\) is homeomorphic to the Euclidean space \(\mathbb{R}^{d}\) and has Hausdorff dimension \((\beta-1)d/(\beta-d)>d\).
* \(X\) is the hyperbolic plane \(\mathbb{H}\) and \(\mathcal{F}\) is the space of lines (i.e, bi-infinite geodesics), equipped with its unique (up to multiplicative constant) locally finite invariant Borel measure \(\mu\). In this case, the percolation problem has been considered by Benjamini, Jonasson, Schramm and Tykesson [2]. They have shown the existence of a critical parameter \(\alpha_{0}=1\) (this explicit value depends on the normalisation of \(\mu\)) such that the vacant set \(\mathcal{V}\) contains lines for \(\alpha<\alpha_{0}\), and does not contain any half-line for \(\alpha\geq\alpha_{0}\).
* \(X\) is the Euclidean lattice \(\mathbb{Z}^{d}\) (\(d\geq 3\)) and \(\mathcal{F}\) is the space of bi-infinite transient paths, equipped with the invariant measure \(\mu\) constructed by Sznitman [12]. In this case, the percolation problem amounts to the random interlacements model introduced by Sznitman [12], for which it has been shown [12, 11] that there exists a critical parameter \(\alpha_{0}\in\mathbb{R}_{+}^{*}\) such that the vacant set \(\mathcal{V}\) percolates for \(\alpha<\alpha_{0}\), and has only finite connected components for \(\alpha>\alpha_{0}\).
Our setting.In this paper, we consider the two problems (percolation and driving distance) in the case where \(X\) is the \(3\)-regular tree \(\mathbb{T}\) and \(\mathcal{F}\) is the space of lines \(\mathbb{L}\), equipped with its unique (up to multiplicative constant) locally finite invariant Borel measure \(\mu\). As it turns out, a normalisation for \(\mu\) can be specified by asking that for every \(x\neq y\in\mathbb{T}\),
\[\mu\{\ell\in\mathbb{L}:\ell\text{ passes through }x\text{ and }y\}=2^{-d(x,y)},\]
where \(d(x,y)\) is the graph distance between \(x\) and \(y\) in \(\mathbb{T}\).
##### Percolation (visibility to infinity, despite obstacles).
Following Benjamini, Jonasson, Schramm and Tykesson [2], we let \(\Pi\) be a Poisson process with intensity \(\alpha\cdot\mu\) on \(\mathbb{L}\), where \(\alpha>0\) is a parameter. We recover their result [2, Proposition 6.1] in this discrete setting.
**Theorem 1**.: _There exists a critical parameter \(\alpha_{0}=4\ln 2\) (this explicit value depends on the normalisation of \(\mu\) specified above) such that the following holds._
* _For_ \(\alpha<\alpha_{0}\)_, almost surely, the vacant set_ \(\mathcal{V}\) _contains a line._
* _For_ \(\alpha\geq\alpha_{0}\)_, almost surely, the vacant set_ \(\mathcal{V}\) _does not contain any half-line._
We present this in Section 2.
Driving distance.In Section 3, which is the core of the paper, we le \(\Pi\) be a Poisson process with intensity proportional to \(\mu\otimes v^{-\beta}\mathrm{d}v\) on \(\mathbb{L}\times\mathbb{R}_{+}^{*}\), where \(\beta>1\) is a parameter. Viewing each atom \((\ell,v)\) of \(\Pi\) as a road in \(\mathbb{T}\), with \(v\) the speed limit on the line \(\ell\), we consider the random metric \(T:\mathbb{T}\times\mathbb{T}\to\mathbb{R}_{+}\) induced by the driving distance with respect to the road network generated by \(\Pi\) (see Section 3 for a more detailed presentation of the model). In this instance of first passage percolation with positively associated passage times, we prove that the so-called explosion phenomenon undergoes a phase transition in terms of the parameter \(\beta\). This is the main novel result of this paper.
**Theorem 2**.: _The explosion phenomenon undergoes a phase transition at \(\beta=2\)._
* _For_ \(\beta<2\)_, almost surely, there exists an infinite geodesic path_ \((\varnothing=x_{0},x_{1},\ldots)\) _in_ \(\mathbb{T}\) _such that_ \(\sum_{n\geq 1}T(x_{n-1},x_{n})<\infty\)_._
* _For_ \(\beta>2\)_, almost surely, for every infinite geodesic path_ \((\varnothing=x_{0},x_{1},\ldots)\) _in_ \(\mathbb{T}\)_, we have_ \(\sum_{n\geq 1}T(x_{n-1},x_{n})=\infty\)_._
Acknowledgements.I warmly thank Nicolas Curien and Arvind Singh for their constant support and guidance, and for their valuable comments on earlier versions of this paper. We thank Itai Benjamini for suggesting to look at the driving distance problem in the 3-regular tree. Finally, I am grateful to the PizzaMa team in Orsay for their encouraging feedback.
###### Contents
* 1 The invariant measure on the space of lines
* 2 Visibility to infinity, despite obstacles
* 3 Driving to infinity with a Poisson process of roads
* 3.1 Explosion and the greedy process
* 3.2 Non-explosion and the bounded driving distance probability
* 3.2.1 Non-explosion
* 3.2.2 The bounded driving distance probability
* 3.3 Open questions
## 1 The invariant measure on the space of lines
In this section, we give a description of the invariant measure \(\mu\) on the space of lines \(\mathbb{L}\) in terms of the uniform measure on the boundary of \(\mathbb{T}\) (see Proposition 1 below). This is analogous to better-known case of the hyperbolic plane (see, e.g, [2, Section 6]), and certainly not new, but we were unable to find it in the literature. Let us first recall some basic definitions and facts. We rely on the authoritative reference [7].
* A _line_ in \(\mathbb{T}\) is the trace \(\{x_{n},\,n\in\mathbb{Z}\}\) of a bi-infinite geodesic path \((x_{n})_{n\in\mathbb{Z}}\), i.e, such that \(d(x_{m},x_{n})=|m-n|\) for all \(m,n\in\mathbb{Z}\), where \(d(\cdot,\cdot)\) denotes the graph distance on \(\mathbb{T}\). We denote by \(\mathbb{L}\) the set of lines.
* A _ray_ in \(\mathbb{T}\) is an infinite non-backtracking path \((\varnothing=x_{0},x_{1},\ldots)\) starting at the root. We denote by \(\partial\mathbb{T}\) the set of rays, also known as the boundary of \(\mathbb{T}\). Given two distinct rays
\(\xi=(x_{n})_{n\in\mathbb{N}}\) and \(\eta=(y_{n})_{n\in\mathbb{N}}\), we denote by \(\xi\wedge\eta\) the farthest node from the root that is common to both paths \(\xi\) and \(\eta\). We equip \(\partial\mathbb{T}\) with the metric \(d:\partial\mathbb{T}\times\partial\mathbb{T}\to\mathbb{R}_{+}\) defined by \[d(\xi,\eta)=\begin{cases}0&\text{if }\xi=\eta\\ 2^{-|\xi\wedge\eta|}&\text{otherwise}\end{cases}\quad\text{ for all }\xi,\eta\in\partial\mathbb{T}.\] There is a natural Borel probability measure on \(\partial\mathbb{T}\); namely, the law of a non-backtracking random walk \((X_{n})_{n\in\mathbb{N}}\) starting at the root. With this measure as mass distribution, it is not difficult to check that \((\partial\mathbb{T},d)\) has Hausdorff dimension \(1\).
* Given two distinct rays \(\xi=(x_{n})_{n\in\mathbb{N}}\) and \(\eta=(y_{n})_{n\in\mathbb{N}}\), we denote by \(\Lambda(\xi,\eta)\) the line with endpoints \(\xi\) and \(\eta\); namely, \[\Lambda(\xi,\eta)=\{\ldots,x_{n+1},\xi\wedge\eta,y_{n+1},\ldots\},\] where \(n=|\xi\wedge\eta|\). Denoting by \(\Delta=\{(\xi,\xi)\,;\,\xi\in\partial\mathbb{T}\}\) the diagonal in \(\partial\mathbb{T}\times\partial\mathbb{T}\), consider the surjective (two-to-one) mapping \[\begin{array}{cccc}\Lambda:&(\partial\mathbb{T}\times\partial\mathbb{T}) \setminus\Delta&\longrightarrow&\mathbb{L}\\ &(\xi,\eta)&\longmapsto&\Lambda(\xi,\eta).\end{array}\] We endow \(\mathbb{L}\) with the finest topology that makes \(\Lambda\) into a continuous map (final topology), and with the corresponding Borel \(\sigma\)-algebra.
* Every graph automorphism \(\phi:\mathbb{T}\to\mathbb{T}\) naturally extends to a continuous map from \(\mathbb{L}\) to \(\mathbb{L}\), mapping the line \(\{x_{n},\,n\in\mathbb{Z}\}\) onto \(\{\phi(x_{n}),\,n\in\mathbb{Z}\}\). We say that a Borel measure \(\mu\) on \(\mathbb{L}\) is _invariant_ if for every graph automorphism \(\phi:\mathbb{T}\to\mathbb{T}\), the pushforward \(\phi_{*}\mu\) of \(\mu\) by \(\phi\) agrees with \(\mu\).
* For a subset \(S\subset\mathbb{T}\), we denote by \(\langle S\rangle=\{\ell\in\mathbb{L}:\ell\cap S\neq\emptyset\}\) the set of lines that hit \(S\). For \(x\in\mathbb{T}\), we write \(\langle x\rangle=\langle\{x\}\rangle\) for the set of lines that pass through \(x\); and for \(x\neq y\in\mathbb{T}\), we write \(\langle x,y\rangle=\langle x\rangle\cap\langle y\rangle\) for the set of lines that pass through both points \(x\) and \(y\). Note that \(\langle x,y\rangle\) is not the same as \(\langle\{x,y\}\rangle\). **Claim 1**.: The collection \(\mathcal{C}=\{\langle x,y\rangle\,;\;x\neq y\in\mathbb{T}\}\cup\{\emptyset\}\) forms a \(\pi\)-system that generates the Borel \(\sigma\)-algebra on \(\mathbb{L}\). _Sketch of proof._ The fact that \(\mathcal{C}\) is a \(\pi\)-system is clear. That \(\mathcal{C}\) is included in the Borel \(\sigma\)-algebra follows from the fact that the \((\langle x,y\rangle\,;\;x\neq y\in\mathbb{T})\) are closed subsets of \(\mathbb{L}\). Finally, the collection \(\mathcal{C}\) generates the Borel \(\sigma\)-algebra on \(\mathbb{L}\), as every open subset of \(\mathbb{L}\) can be written as a countable union of elements of \(\mathcal{C}\).
* Finally, we say that a Borel measure \(\mu\) on \(\mathbb{L}\) is _locally finite_ if \(\mu\langle x,y\rangle<\infty\) for all \(x\neq y\in\mathbb{T}\).
We are now ready to state the main result of this section.
**Proposition 1**.: The Borel measure \(\mu\) defined by
\[\int_{\mathbb{L}}\varphi(\ell)\mathrm{d}\mu(\ell)=\mathbb{E}\left[\frac{ \varphi(\Lambda(X,Y))}{d(X,Y)^{2}}\right]\quad\text{for all Borel functions }\varphi:\mathbb{L}\to[0,\infty],\]
where \(X=(X_{n})_{n\in\mathbb{N}}\) and \(Y=(Y_{n})_{n\in\mathbb{N}}\) are two independent non-backtracking random walks starting at the root in \(\mathbb{T}\), is locally finite and invariant. Moreover, for any locally finite invariant Borel measure \(\mu\) on \(\mathbb{L}\), there exists a constant \(c>0\) such that
\[\mu\langle x,y\rangle=c\cdot 2^{-d(x,y)}\quad\text{for all }x\neq y\in\mathbb{T}.\]
Proof.: We start with the second assertion. Let \(\mu\) be a locally finite invariant Borel measure on \(\mathbb{L}\), and let \(x\neq y\in\mathbb{T}\). Set \(n=d(x,y)\), and let \(1_{n}\) be the vertex \(1\ldots 1\in\mathbb{T}\) whose distance to the root is \(n\). We have \(\mu\langle x,y\rangle=\mu\langle\varnothing,1_{n}\rangle\). Now, we claim that \(\mu\langle\varnothing,1_{n}\rangle=\mu\langle\varnothing,1\rangle\cdot 2^{-(n-1)}\) for all \(n\in\mathbb{N}^{*}\). This is easily checked by induction: for each \(n\in\mathbb{N}^{*}\), the set \(\langle\varnothing,1_{n}\rangle\) can be written as the disjoint union \(\langle\varnothing,1_{n}1\rangle\sqcup\langle\varnothing,1_{n}2\rangle\); thus, by invariance,
\[\mu\langle\varnothing,1_{n}\rangle=\mu\langle\varnothing,1_{n}1\rangle+\mu \langle\varnothing,1_{n}2\rangle=2\cdot\mu\langle\varnothing,1_{n+1}\rangle.\]
For the first assertion, let \(\mu\) be the Borel measure defined by
\[\int_{\mathbb{L}}\varphi(\ell)\mathrm{d}\mu(\ell)=\mathbb{E}\left[\frac{ \varphi(\Lambda(X,Y))}{d(X,Y)^{2}}\right]\quad\text{for all Borel functions }\varphi:\mathbb{L}\to[0,\infty],\]
where \(X=(X_{n})_{n\in\mathbb{N}}\) and \(Y=(Y_{n})_{n\in\mathbb{N}}\) are two independent non-backtracking random walks starting at the root. We claim that
\[\mu\langle x,y\rangle=\frac{8}{9}\cdot 2^{-d(x,y)}\quad\text{for all }x\neq y\in \mathbb{T}. \tag{1}\]
Indeed, let \(x\neq y\in\mathbb{T}\). We have
\[\mu\langle x,y\rangle=\mathbb{E}\left[d(X,Y)^{-2}\,;\,\Lambda(X,Y)\text{ passes through }x\text{ and }y\right].\]
Now, we distinguish two cases.
* First, suppose that \(x\) and \(y\) are not descendants of one another. Then \(\Lambda(X,Y)\) passes through \(x\) and \(y\) if and only if \(X\) passes through \(x\) and \(Y\) through \(y\), or (exclusive) \(X\) passes through \(y\) and \(Y\) through \(x\). Moreover, on that event, we have \(d(X,Y)=2^{-|x\wedge y|}\). It follows that \[\mu\langle x,y\rangle =2^{2|x\wedge y|}\cdot\left(3\cdot 2^{|x|-1}\right)^{-1}\cdot \left(3\cdot 2^{|y|-1}\right)^{-1}+2^{2|x\wedge y|}\cdot\left(3\cdot 2^{|y|-1} \right)^{-1}\cdot\left(3\cdot 2^{|x|-1}\right)^{-1}\] \[=2\cdot\frac{4}{9}\cdot 2^{2|x\wedge y|-|x|-|y|}=\frac{8}{9} \cdot 2^{-d(x,y)}.\]
* Next, to treat the case where \(x\) and \(y\) are descendants of one another, we may assume without loss of generality that \(x\prec y\). Let us denote by \(\varnothing=z_{0},\ldots,z_{n}=y\) the geodesic path from the root to \(y\). By assumption, we have \(z_{m}=x\) for some \(m\in\llbracket 0,n\llbracket\). Now, we define \(x_{-1}\) and \(x_{0}\) as the two neighbours of \(\varnothing\) that are not \(z_{1}\); and for each \(k\in\llbracket 1,m\rrbracket\), we define \(x_{k}\) as the neighbour of \(z_{k}\) that is neither \(z_{k-1}\) nor \(z_{k+1}\). See Figure 2 for an illustration. The set \(\langle x,y\rangle\) can be written as the disjoint union \(\bigsqcup_{k=-1}^{m}\langle x_{k},y\rangle\), where for each \(k\in\llbracket-1,m\rrbracket\), the vertices \(x_{k}\) and \(y\) are not descendants of one another. By the previous case, it follows
that
\[\mu\langle x,y\rangle =\sum_{k=-1}^{m}\frac{8}{9}\cdot 2^{-d(x_{k},y)}\] \[=\frac{8}{9}\cdot\left(2^{-(n+1)}+\sum_{k=0}^{m}2^{-(n-k+1)}\right)\] \[=\frac{8}{9}\cdot\left(2^{-(n+1)}+2^{-(n+1)}\cdot\left(2^{m+1}-1 \right)\right)\] \[=\frac{8}{9}\cdot 2^{-(n-m)}=\frac{8}{9}\cdot 2^{-d(x,y)}.\]
This completes the proof of (1). Now, let \(\phi:\mathbb{T}\to\mathbb{T}\) be a graph automorphism. By (1), we have
\[\mu\langle\phi(x),\phi(y)\rangle=\frac{8}{9}\cdot 2^{-d(\phi(x),\phi(y))}=\frac{8 }{9}\cdot 2^{-d(x,y)}=\mu\langle x,y\rangle\quad\text{for all $x\neq y\in \mathbb{T}$.}\]
In particular, the Borel measures \(\phi_{*}\mu\) and \(\mu\) agree on the \(\pi\)-system \(\mathcal{C}\). By Dynkin's \(\pi\)-\(\lambda\) theorem, we deduce that \(\mu\) is invariant.
In the rest of the paper, we denote by \(\mu\) the unique locally finite invariant Borel measure on \(\mathbb{L}\) such that
\[\mu\langle x,y\rangle=2^{-d(x,y)}\quad\text{for all $x\neq y\in\mathbb{T}$.} \tag{2}\]
## 2 Visibility to infinity, despite obstacles
In this section, we prove Theorem 1. Following Benjamini, Jonasson, Schramm and Tykesson [2], we let \(\Pi\) be a Poisson process with intensity \(\alpha\cdot\mu\) on \(\mathbb{L}\), where \(\alpha>0\) is a parameter, and \(\mu\) is the invariant measure on \(\mathbb{L}\) normalised by (2), and we consider the percolative properties of the vacant set \(\mathcal{V}=\mathbb{T}\backslash\bigcup_{\ell\in\Pi}\ell\). More precisley, let us recall the statement of Theorem 1.
* For \(\alpha<\alpha_{0}\), almost surely, the vacant set \(\mathcal{V}\) contains a line.
* For \(\alpha\geq\alpha_{0}\), almost surely, the vacant set \(\mathcal{V}\) does not contain any half-line.
Before giving the proof of Theorem 1, let us recall some basic properties of the Poisson process of lines \(\Pi\). As usual with well-behaved Poisson processes, we view \(\Pi\) sometimes as a random subset of \(\mathbb{L}\), and sometimes as a random atomic measure on \(\mathbb{L}\), without making the distinction. An atomic measure on \(\mathbb{L}\) is a measure of the form \(\pi=\sum_{k=1}^{n}\delta_{\ell_{k}}\), with \(n\in\llbracket 0,\infty\rrbracket\), and \(\ell_{k}\in\mathbb{L}\) for every \(k\). We denote by \(\mathbb{M}\) the space of atomic measures on \(\mathbb{L}\), equipped with the \(\sigma\)-algebra generated by the maps
\[\begin{array}{rcl}\mathbb{M}&\longrightarrow&\llbracket 0,\infty\rrbracket\\ \pi&\longmapsto&\pi(B),\end{array}\]
Figure 2: Illustration of the definition of \(x_{-1},x_{0},\ldots,x_{m}\).
for \(B\) Borel subset of \(\mathbb{L}\). By construction, the Poisson process \(\Pi\) has the following invariance property: for every graph automorphism \(\phi:\mathbb{T}\to\mathbb{T}\), we have \(\phi_{*}\Pi\stackrel{{\mathrm{law}}}{{=}}\Pi\), where \(\phi_{*}\Pi\) denotes the pushforward of \(\Pi\) by \(\phi\). Moreover, the Poisson process \(\Pi\) is mixing: if \((\psi_{n})_{n\in\mathbb{N}}\) is a sequence of graph automorphisms of \(\mathbb{T}\) such that \(d(\varnothing,\psi_{n}(\varnothing))\to\infty\) as \(n\to\infty\), then for every bounded measurable functions \(f\) and \(g\) from \(\mathbb{M}\) to \(\mathbb{R}\), we have
\[\mathbb{E}[f(\Pi)\cdot g(\psi_{n}^{*}\Pi)]\longrightarrow\mathbb{E}[f(\Pi)] \cdot\mathbb{E}[g(\Pi)]\quad\text{as $n\to\infty$},\]
where \(\psi_{n}^{*}\Pi\) denotes the pushforward of \(\Pi\) by \(\psi_{n}\). In particular, any invariant event has probability \(0\) or \(1\). Now, we come to the proof of Theorem 1.
Proof of Theorem 1.: First, let us set some notation. For every \(x\neq y\in\mathbb{T}\), we denote by \(\llbracket x,y\rrbracket\) the geodesic path between \(x\) and \(y\) in \(\mathbb{T}\), and we let \((x\leftrightarrow y)=(\Pi\langle\llbracket x,y\rrbracket\rangle=0)\) be the event "the vacant set \(\mathcal{V}\) contains \(\llbracket x,y\rrbracket\)". For every \(n\in\mathbb{N}^{*}\), we denote by \([\mathbb{T},\varnothing]_{n}=\{x\in\mathbb{T}\colon d(\varnothing,x)\leq n\}\) the set of vertices within graph distance \(n\) from the root, and we let \(\partial[\mathbb{T},\varnothing]_{n}=\{x\in\mathbb{T}\colon d(\varnothing,x)=n\}\). Now, for every \(n\in\mathbb{N}^{*}\), let \(\mathcal{Z}_{n}=\{x\in\partial[\mathbb{T},\varnothing]_{n}:\varnothing\leftrightarrow x\}\), and let \(Z_{n}=\#\mathcal{Z}_{n}\). We claim that \((Z_{n})_{n\in\mathbb{N}^{*}}\) is a branching process. Indeed, for each \(n\in\mathbb{N}^{*}\), let \(\mathcal{F}_{n}\) be the \(\sigma\)-algebra generated by the restriction of \(\Pi\) to the set of lines that hit \([\mathbb{T},\varnothing]_{n}\), and consider the identity
\[Z_{n+1}=\sum_{x\in\partial[\mathbb{T},\varnothing]_{n}}\mathbf{1}(\varnothing \leftrightarrow x)\cdot\sum_{y\text{ child of $x$}}\mathbf{1}(\Pi\langle y1,y2 \rangle=0).\]
On the one hand, the events \(((\varnothing\leftrightarrow x)\,;\,x\in[\mathbb{T},\varnothing]_{n})\) are \(\mathcal{F}_{n}\)-measurable; on the other hand, the events \(((\Pi\langle y1,y2\rangle)\,;\,y\in\partial[\mathbb{T},\varnothing]_{n+1})\) are independent, and independent of \(\mathcal{F}_{n}\). This shows that conditionally on \(\mathcal{F}_{n}\), the random variable \(Z_{n+1}\) is distributed as a sum of \(Z_{n}\) independent binomial random variables with \(2\) trials and success probability \(\mathbb{P}\left(\Pi\langle y1,y2\rangle=0\right)=e^{-\alpha\cdot\mu\langle y1,y2\rangle}=e^{-\alpha/4}\). Now, let us complete the proof of the proposition.
* For \(\alpha\geq 4\ln 2\), we have \(\mathbb{E}\left[\text{Binomial}\left(2,e^{-\alpha/4}\right)\right]\leq 1\). By standard branching processes results, we get that almost surely, we have \(Z_{n}=0\) for all sufficiently large \(n\), which readily implies that almost surely, the vacant set \(\mathcal{V}\) does not contain any ray. By invariance, we deduce that for every \(x\in\mathbb{T}\), the event "the vacant set \(\mathcal{V}\) contains an infinite geodesic path \((x=x_{0},x_{1},\ldots)\)" has probability \(0\), and it follows that almost surely, the vacant set \(\mathcal{V}\) does not contain any half-line.
* For \(\alpha<4\ln 2\), we have \(\mathbb{E}\left[\text{Binomial}\left(2,e^{-\alpha/4}\right)\right]>1\). Moreover, note that \(\mathbb{P}(Z_{1}>0)>0\). By standard branching processes results, we get that with positive probability, we have \(Z_{n}>0\) for all \(n\in\mathbb{N}^{*}\). Thus, by Konig's lemma (see, e.g, [7, Exercise 1.1]), with positive probability, say probability \(\delta>0\), the vacant set \(\mathcal{V}\) contains a ray. It follows that there exists \(i\in\{1,2,3\}\) such that the event \(A_{i}\): "the vacant set \(\mathcal{V}\) contains a ray that passes through the vertex \(i\)" has probability at least \(\delta/3\); but note that by invariance, the probability of \(A_{i}\) does not depend on \(i\). Therefore, by the Harris-FKG inequality for Poisson processes (see, e.g, [6, Theorem 20.4]), since \(A_{1}\) and \(A_{2}\) are both decreasing events (adding lines to \(\Pi\) inhibits them), we get \[\mathbb{P}(A_{1}\cap A_{2})\geq\mathbb{P}(A_{1})\cdot\mathbb{P}(A_{2})\geq \left(\frac{\delta}{3}\right)^{2}>0.\] Since on the event \(A_{1}\cap A_{2}\), the vacant set \(\mathcal{V}\) contains a line that passes through the root, we deduce that the percolation event \(A\): "the vacant set \(\mathcal{V}\) contains a line" has positive
probability. Finally, observe that \(A\) is invariant. Since \(\Pi\) is mixing, we must have \(\mathbb{P}(A)=1\).
## 3 Driving to infinity with a Poisson process of roads
In this section, we prove Theorem 2. Following Aldous [1] and Kendall [5], we let \(\Pi\) be a Poisson process with intensity measure \(\nu\) proportional to \(\mu\otimes v^{-\beta}\mathrm{d}v\) on \(\mathbb{L}\times\mathbb{R}_{+}^{*}\), where \(\beta>1\) is a parameter, and \(\mu\) is the invariant measure on \(\mathbb{L}\) normalised by (2). Viewing each atom \((\ell,v)\) of \(\Pi\) as a road in \(\mathbb{T}\), with \(v\) the speed limit on the line \(\ell\), we consider the random metric \(T:\mathbb{T}\times\mathbb{T}\to\mathbb{R}_{+}\) induced by the driving distance with respect to the road network generated by \(\Pi\). Unlike in the Euclidean case [5, 4, 3], there is no issue in defining this driving distance metric for all values of \(\beta>1\) (see just below), and we consider its "explosive" properties. More precisely, let us recall the statement of Theorem 2.
* For \(\beta<2\), we have \(T(\varnothing,\partial\mathbb{T})<\infty\) almost surely; i.e, almost surely, there exists a ray \((\varnothing=x_{0},x_{1},\ldots)\) such that \(\sum_{n\geq 1}T(x_{n-1},x_{n})<\infty\).
* For \(\beta>2\), we have \(T(\varnothing,\partial\mathbb{T})=\infty\) almost surely.
Before proving Theorem 2, we present the model in more detail. Then, we consider the phase \(\beta<2\) in Subsection 3.1 (see Proposition 2), and the phase \(\beta>2\) in Subsection 3.2 (see Proposition 3).
The Poisson process of roads II.We let \(\Pi\) be a Poisson process with intensity measure \(\nu=(\beta-1)\cdot\mu\otimes v^{-\beta}\mathrm{d}v\) on \(\mathbb{L}\times\mathbb{R}_{+}^{*}\), where \(\beta>1\) is a parameter. The normalising constant \((\beta-1)\) is here for convenience, so that
\[\nu(\langle x,y\rangle\times[v,\infty[)=2^{-d(x,y)}\cdot v_{0}^{-(\beta-1)} \quad\text{for all $x\neq y\in\mathbb{T}$ and $v_{0}\in\mathbb{R}_{+}^{*}$.}\]
In fact, this multiplicative constant does not affect the result of Theorem 2, as multiplying \(\nu\) by a constant factor does not change the probability of the explosion event (\(T(\varnothing,\partial\mathbb{T})<\infty\)). Indeed, if \(\Pi_{\alpha}\) is a Poisson process with intensity \(\alpha\cdot\nu\) on \(\mathbb{L}\times\mathbb{R}_{+}^{*}\), where \(\alpha>0\), then a change of variables shows that
\[\Pi_{\alpha}\stackrel{{\text{\tiny law}}}{{=}}\left\{\left( \ell,\alpha^{1/(\beta-1)}\cdot v\right);\;(\ell,v)\in\Pi\right\}.\]
It follows that the metric \(T_{\alpha}\) induced by \(\Pi_{\alpha}\) has the same distribution as \(\alpha^{-1/(\beta-1)}\cdot T\). In particular, explosion occurs for \(T_{\alpha}\) with the same probability as for \(T\). As usual with well-behaved Poisson processes, we view \(\Pi\) sometimes as a random subset of \(\mathbb{L}\times\mathbb{R}_{+}^{*}\), and sometimes as a random atomic measure on \(\mathbb{L}\times\mathbb{R}_{+}^{*}\), without making the distinction. Note that there is no multiplicity ambiguity here, since almost surely, we have \(\Pi(\mathbb{L}\times\{v\})\leq 1\) for all \(v\in\mathbb{R}_{+}^{*}\). We recall that an atomic measure on \(\mathbb{L}\times\mathbb{R}_{+}^{*}\) is a measure of the form \(\pi=\sum_{k=1}^{n}\delta_{(\ell_{k},v_{k})}\), with \(n\in\llbracket 0,\infty\rrbracket\) and \((\ell_{k},v_{k})\in\mathbb{L}\times\mathbb{R}_{+}^{*}\) for every \(k\). We denote by \(\mathbb{M}\) the space of atomic measures on \(\mathbb{L}\times\mathbb{R}_{+}^{*}\), equipped with the \(\sigma\)-algebra generated by the maps
\[\mathbb{M} \longrightarrow \llbracket 0,\infty\rrbracket\] \[\pi \longmapsto \pi(B),\]
for \(B\) Borel subset of \(\mathbb{L}\times\mathbb{R}_{+}^{*}\). By construction, the Poisson process \(\Pi\) has the following invariance property: for any graph automorphism \(\phi:\mathbb{T}\to\mathbb{T}\), we have
\[\phi_{*}\Pi:=\{(\phi(\ell),v)\,;\,(\ell,v)\in\Pi\}\overset{\text{ law}}{=}\Pi.\]
Moreover, the Poisson process \(\Pi\) is mixing: if \((\psi_{n})_{n\in\mathbb{N}}\) is a sequence of graph automorphisms of \(\mathbb{T}\) such that \(d(\varnothing,\psi_{n}(\varnothing))\to\infty\) as \(n\to\infty\), then for every bounded measurable functions \(f\) and \(g\) from \(\mathbb{M}\) to \(\mathbb{R}\), we have
\[\mathbb{E}[f(\Pi)\cdot g(\psi_{n}^{*}\Pi)]\longrightarrow\mathbb{E}[f(\Pi)] \cdot\mathbb{E}[g(\Pi)]\quad\text{as }n\to\infty,\]
where \(\psi_{n}^{*}\Pi=\{(\psi_{n}(\ell),v)\,;\,(\ell,v)\in\Pi\}\). In particular, any invariant event has probability \(0\) or \(1\).
Construction of the metric \(T\).Now, let us construct the driving distance metric \(T\) induced by \(\Pi\) more precisely. For every \(x,y\in\mathbb{T}\), we let
\[T(x,y)=V_{e_{1}}^{-1}+\ldots+V_{e_{n}}^{-1},\]
where \(e_{1},\ldots,e_{n}\) denote the edges on the geodesic path between \(x\) and \(y\) in \(\mathbb{T}\), and where \(V_{e}\) denotes the speed of the fastest road of \(\Pi\) that passes through \(e\), for each edge \(e\) of \(\mathbb{T}\). More generally, for every \(x\neq y\in\mathbb{T}\), we denote by \(V_{x,y}\) the speed of the fastest road of \(\Pi\) that passes through both points \(x\) and \(y\). We have
\[\mathbb{P}(V_{x,y}<v)=\mathbb{P}(\Pi(\langle x,y\rangle\times[v,\infty[)=0)= \exp\left[-2^{-d(x,y)}\cdot v^{-(\beta-1)}\right]\quad\text{for all }v\in\mathbb{R}_{+}^{*}.\]
In particular, the random variables \((1/V_{e},\,e\text{ edge of }\mathbb{T})\) are well-defined, with values in \(\mathbb{R}_{+}^{*}\). It follows that almost surely, the function \(T:\mathbb{T}\times\mathbb{T}\to\mathbb{R}_{+}\) is a metric on \(\mathbb{T}\). Equivalently, this driving distance metric is the first passage percolation distance function associated with the passage times \((1/V_{e},\,e\text{ edge of }\mathbb{T})\). By the Harris-FKG inequality for Poisson processes (see, e.g, [6, Theorem 20.4]), these passage times are positively associated, as nondecreasing functions of \(\Pi\). For future reference, note that we have
\[T(x,y)=\tau(\Pi;x,y)\quad\text{for all }x,y\in\mathbb{T} \tag{3}\]
for some measurable function \(\tau:\mathbb{M}\times(\mathbb{T}\times\mathbb{T})\to\mathbb{R}_{+}\).
### Explosion and the greedy process
In this subsection, we prove that for \(\beta<2\), we have \(T(\varnothing,\partial\mathbb{T})<\infty\) almost surely. Following Pemantle and Peres (see [9, proof of Theorem 3]), we consider the greedy process \((X_{n})_{n\in\mathbb{N}}\) on \(\mathbb{T}\) which starts at the root and follows the fastest road at each step. More precisely, let \(X_{0}=\varnothing\), and for each \(n\in\mathbb{N}\), let \(X_{n+1}\) be the child \(X_{n}i\) of \(X_{n}\) with minimal label \(i\) (to break ties) that minimises the passage time \(T(X_{n},X_{n}i)=V_{X_{n},X_{n}i}^{-1}\). The following proposition tells us that for \(\beta<2\), this process reaches the boundary of \(\mathbb{T}\) in finite time a.s.
**Proposition 2**.: The greedy process undergoes a phase transition at \(\beta=2\).
* For \(\beta<2\), we have \(\sum_{n\geq 1}T(X_{n-1},X_{n})<\infty\) almost surely.
* For \(\beta\geq 2\), we have \(\sum_{n\geq 1}T(X_{n-1},X_{n})=\infty\) almost surely.
Proof.: By the definition of \((X_{n})_{n\in\mathbb{N}}\), we have \(T(X_{n},X_{n+1})=T(X_{n-1},X_{n})\wedge V_{X_{n}1,X_{n}2}^{-1}\) for each \(n\in\mathbb{N}^{*}\). Therefore, we have
\[T(X_{n-1},X_{n})=T(X_{0},X_{1})\wedge W_{2}\wedge\ldots\wedge W_{n}\quad\text{ for all }n\in\mathbb{N}^{*},\]
where \(W_{n}=V_{X_{n-1}1,X_{n-1}2}^{-1}\) for all \(n\in\mathbb{N}^{*}\). In particular, the sum \(\sum_{n\geq 1}T(X_{n-1},X_{n})\) has the same nature as \(\sum_{n\geq 1}W_{1}\wedge\ldots\wedge W_{n}\). To conclude the proof, let us show that almost surely, the last sum has the same nature as \(\sum_{n\geq 1}n^{-1/(\beta-1)}\). First, notice that the random variables \((W_{n})_{n\in\mathbb{N}^{*}}\) are independent and identically distributed. Indeed, for each \(n\in\mathbb{N}^{*}\), let \(\mathcal{F}_{n}\) be the \(\sigma\)-algebra generated by the restriction of \(\Pi\) to the set of roads that hit \([\mathbb{T},\varnothing]_{n-1}=\{x\in\mathbb{T}:d(\varnothing,x)\leq n-1\}\). On the one hand, the random variables \(X_{n}\) and \(W_{n}\) are \(\mathcal{F}_{n}\)-measurable. On the other hand, the random variable \(W_{n+1}\) is independent of \(\mathcal{F}_{n}\), and distributed as \(V_{1,2}^{-1}\). Indeed, for every \(t\in\mathbb{R}_{+}^{*}\), we have
\[\mathbb{P}(W_{n+1}>t\,|\,\mathcal{F}_{n}) =\sum_{x\in\partial[\mathbb{T},\varnothing]_{n}}\mathbb{P}\left( X_{n}=x\,;\,V_{x1,x2}^{-1}>t\,\Big{|}\,\mathcal{F}_{n}\right)\] \[=\sum_{x\in\partial[\mathbb{T},\varnothing]_{n}}\mathbf{1}(X_{n} =x)\cdot\mathbb{P}\left(V_{x1,x2}^{-1}>t\right)=\exp\left[-1/4\cdot t^{\beta-1 }\right].\]
Next, for every \(n\in\mathbb{N}^{*}\), let \(Y_{n}=n^{1/(\beta-1)}\cdot W_{1}\wedge\ldots\wedge W_{n}\). We have
\[\mathbb{P}(Y_{n}>t)=\mathbb{P}\left(W_{1}>n^{-1/(\beta-1)}\cdot t\right)^{n}= \exp\left[-1/4\cdot t^{\beta-1}\right]\quad\text{for all }t\in\mathbb{R}_{+}^{*};\]
hence, the \((Y_{n})_{n\in\mathbb{N}^{*}}\) are identically distributed random variables, with values in \(\mathbb{R}_{+}^{*}\). Moreover, we have \(\mathbb{E}[Y_{1}]<\infty\). The result claimed above now follows from the Jeulin lemma (see [8, Theorem 3.1 and Proposition 3.2]): almost surely, the sum
\[\sum_{n\geq 1}W_{1}\wedge\ldots\wedge W_{n}=\sum_{n\geq 1}\frac{Y_{n}}{n^{1/( \beta-1)}}\]
has the same nature as \(\sum_{n\geq 1}n^{-1/(\beta-1)}\).
### Non-explosion and the bounded driving distance probability
In this subsection, we consider the phase \(\beta>2\). We first prove that there is no explosion in this phase. Then, we study the so-called bounded driving distance probability; namely, the probability \(\mathbb{P}(T(\varnothing,1_{n})\leq t)\) that the driving distance between two vertices at distance \(n\) in \(\mathbb{T}\) is at most \(t\), for fixed \(t>0\) and as \(n\to\infty\).
#### 3.2.1 Non-explosion
Now, we prove that for \(\beta>2\), we have \(T(\varnothing,\partial\mathbb{T})=\infty\) almost surely. First, consider the following easy lemma.
**Lemma 1**.: _If there exists \(t>0\) such that the driving distance ball \(\{x\in\mathbb{T}:T(\varnothing,x)\leq t\}\) is finite a.s, then we have \(T(\varnothing,\partial\mathbb{T})=\infty\) almost surely._
Proof.: Let \(t>0\) be such that \(\#\{x\in\mathbb{T}:T(\varnothing,x)\leq t\}<\infty\) almost surely. For every \(x\in\mathbb{T}\), let \(A_{x}\) be the event: "there exists an infinite geodesic path \((x=x_{0},x_{1},\ldots)\) in \(\mathbb{T}\) such that \(\sum_{n\geq 1}T(x_{n-1},x_{n})\leq t\)". Notice that the explosion event \((T(\varnothing,\partial\mathbb{T})<\infty)\) is contained in
\(\bigcup_{x\in\mathbb{T}}A_{x}\). On the other hand, we have \(\mathbb{P}(A_{x})=\mathbb{P}(A_{\varnothing})=0\) for all \(x\in\mathbb{T}\), where the first equality holds by invariance, and the second by assumption. Finally, we obtain \(\mathbb{P}(T(\varnothing,\partial\mathbb{T})<\infty)=0\).
By the previous lemma, it suffices to prove that for \(t\) small enough, the driving distance ball \(\{x\in\mathbb{T}:T(\varnothing,x)\leq t\}\) is finite almost surely. This is the crux of the proof.
**Proposition 3**.: For \(\beta>2\), there exists \(t>0\) such that
\[\mathbb{E}[\#\{x\in\mathbb{T}:T(\varnothing,x)\leq t\}]<\infty.\]
Proof.: We keep denoting by \([\mathbb{T},\varnothing]_{n}\) the set of vertices within graph distance \(n\) from the root. More generally, for \(S\subset\mathbb{T}\) and for \(x\in\mathbb{T}\), we let \([S,x]_{n}=\{y\in S:d(x,y)\leq n\}\). Now, let \(t\in\left]0,1/9\right]\) be a parameter to be adjusted later, and let
\[\varphi_{n}^{*}=\mathbb{E}[\#\{x\in[\mathbb{T},\varnothing]_{n}:T(\varnothing,x)\leq t\}]\quad\text{for all }n\in\mathbb{N}.\]
To start working on the \(\varphi_{n}^{*}\) terms, we would like to integrate on the speed of the fastest road of \(\Pi\) that passes through the root. A rigorous way of doing that is to use the Slivnyak-Mecke theorem, that we recall now. For \((\ell,v)\in\mathbb{L}\times\mathbb{R}_{+}^{*}\) and \(\pi\in\mathbb{M}\), we denote by \((\ell,v)\oplus\pi\) (resp. \(\pi\ominus(\ell,v)\)) the atomic measure obtained from \(\pi\) by adding (resp. removing) the atom \((\ell,v)\). The Slivnyak-Mecke theorem (see, e.g, [6, Theorem 4.1]) states that for every measurable function \(f:\left(\mathbb{L}\times\mathbb{R}_{+}^{*}\right)\times\mathbb{M}\to\mathbb{R} _{+}\), we have
\[\mathbb{E}\left[\sum_{(\ell,v)\in\Pi}f(\ell,v;\Pi)\right]=\int_{\mathbb{L} \times\mathbb{R}_{+}^{*}}\mathbb{E}[f(\ell,v;(\ell,v)\oplus\Pi)]\mathrm{d} \nu(\ell,v).\]
Equivalently, for every measurable function \(g:\left(\mathbb{L}\times\mathbb{R}_{+}^{*}\right)\times\mathbb{M}\to\mathbb{R }_{+}\), we have
\[\mathbb{E}\left[\sum_{(\ell,v)\in\Pi}g(\ell,v;\Pi\ominus(\ell,v))\right]=\int _{\mathbb{L}\times\mathbb{R}_{+}^{*}}\mathbb{E}[g(\ell,v;\Pi)]\mathrm{d}\nu( \ell,v).\]
Now, consider the following lemma. For every \(x\in\mathbb{T}\), we denote by \((L_{x},V_{x})\) the fastest road of \(\Pi\) that passes through \(x\).
**Lemma 2**.: _Let \(x\in\mathbb{T}\). For every measurable function \(F:\left(\mathbb{L}\times\mathbb{R}_{+}^{*}\right)\times\mathbb{M}\to\mathbb{R }_{+}\), we have_
\[\mathbb{E}[F(L_{x},V_{x};\Pi)]=\int_{(x)\times\mathbb{R}_{+}^{*}}\mathbb{E}[F( \ell,v;(\ell,v)\oplus\Pi)\,;\,V_{x}<v]\mathrm{d}\nu(\ell,v).\]
_Equivalently, for every measurable function \(G:\left(\mathbb{L}\times\mathbb{R}_{+}^{*}\right)\times\mathbb{M}\to\mathbb{R }_{+}\), we have_
\[\mathbb{E}[G(L_{x},V_{x};\Pi\ominus(L_{x},V_{x}))]=\int_{(x)\times\mathbb{R}_ {+}^{*}}\mathbb{E}[G(\ell,v;\Pi)\,;\,V_{x}<v]\mathrm{d}\nu(\ell,v).\]
Proof of the lemma.: Let us prove the first identity only, the second one is an immediate consequence. Let \(F:\left(\mathbb{L}\times\mathbb{R}_{+}^{*}\right)\times\mathbb{M}\to\mathbb{R }_{+}\) be a measurable function. We have
\[F(L_{x},V_{x};\Pi)=\sum_{(\ell,v)\in\Pi}g(\ell,v;\Pi\ominus(\ell,v)),\]
where \(g:\left(\mathbb{L}\times\mathbb{R}_{+}^{*}\right)\times\mathbb{M}\to\mathbb{R}_{+}\) is the measurable function defined by
\[g(\ell,v;\pi)=\begin{cases}F(\ell,v;(\ell,v)\oplus\pi)&\text{if $\ell\in \langle x\rangle$ and $\pi(\langle x\rangle\times[v,\infty[)=0$}\\ 0&\text{otherwise}\end{cases}\]
for all \((\ell,v;\pi)\in\left(\mathbb{L}\times\mathbb{R}_{+}^{*}\right)\times\mathbb{M}\). By the Slivnyak-Mecke theorem, it follows that
\[\mathbb{E}[F(L_{x},V_{x};\Pi)]=\int_{\mathbb{L}\times\mathbb{R}_{+}^{*}} \mathbb{E}[g(\ell,v;\Pi)]\mathrm{d}\nu(\ell,v)=\int_{\langle x\rangle\times \mathbb{R}_{+}^{*}}\mathbb{E}[F(\ell,v;(\ell,v)\oplus\Pi)\,;\,V_{x}<v]\mathrm{ d}\nu(\ell,v).\]
Back to the proof of the proposition, we apply Lemma 2 with \(x=\varnothing\) and
\[F(\ell,v;\pi)=\#\{y\in[\mathbb{T},\varnothing]_{n}:\tau(\pi;\varnothing,y)\leq t \}\quad\text{for all $(\ell,v;\pi)\in\left(\mathbb{L}\times\mathbb{R}_{+}^{*} \right)$},\]
where \(\tau\) is the measurable function of (3). We obtain
\[\varphi_{n}^{*}=\int_{\langle\varnothing\rangle\times\mathbb{R}_{+}^{*}} \mathbb{E}[\#\{y\in[\mathbb{T},\varnothing]_{n}:\tau((\ell,v)\oplus\Pi; \varnothing,y)\leq t\}\,;\,V_{\varnothing}<v]\mathrm{d}\nu(\ell,v).\]
Now, let us slightly abuse notation, and denote the integrand by \(\varphi_{n}(v)\), as it does not depend on \(\ell\). Indeed, by invariance, we have
\[\varphi_{n}(v) = \mathbb{E}[\#\{y\in[\mathbb{T},\varnothing]_{n}:\tau((\ell,v) \oplus\Pi;\varnothing,y)\leq t\}\,;\,V_{\varnothing}<v]\] \[= \mathbb{E}\left[\#\left\{y\in\left[\mathbb{T},x^{\prime}\right]_{ n}:\tau\left(\left(\ell^{\prime},v\right)\oplus\Pi;x^{\prime},y\right) \leq t\right\}\,;\,V_{x^{\prime}}<v\right]\]
for every \(\ell^{\prime}\in\mathbb{L}\), and for every \(x^{\prime}\in\ell^{\prime}\). Note that
\[\varphi_{n}(v)=\mathbb{P}(V_{\varnothing}<v)=\exp\left[-3/4\cdot v^{-(\beta-1 )}\right]\quad\text{for all $v\in\left]0,\frac{1}{t}\right[$}.\]
Furthermore, we claim that for every \(n\in\mathbb{N}\), we have
\[\varphi_{n+1}(v)\leq 1+\varphi_{n}^{*}+2\cdot t\cdot v\cdot\left(\varphi_{n}^{* }+\int_{\langle\varnothing\rangle\times]v,\infty[}\varphi_{n}\left(v^{\prime} \right)\mathrm{d}\nu\left(\ell^{\prime},v^{\prime}\right)\right)\quad\text{ for all $v\in\left[\frac{1}{t},\infty\right[$}. \tag{4}\]
First, assuming that this holds, let us complete the proof of the proposition. Since \(\beta>2\), we can adjust the parameter \(t\) so that \(\int_{\langle\varnothing\rangle\times]1/t,\infty[}v\mathrm{d}\nu(\ell,v)\leq 1\). Now, let us prove by induction that for every \(n\in\mathbb{N}\), we have \(\varphi_{n}(v)\leq v\) for all \(v\in[1/t,\infty[\). This is obviously true for \(n=0\), since \(\varphi_{0}(v)=\mathbb{P}(V_{\varnothing}<v)\) for all \(v\in\mathbb{R}_{+}^{*}\). Next, let \(n\in\mathbb{N}\), assume that \(\varphi_{n}(v)\leq v\) for all \(v\in[1/t,\infty[\), and let us prove that \(\varphi_{n+1}(v)\leq v\) for all \(v\in[1/t,\infty[\). First, note that
\[\varphi_{n}^{*}=\int_{\langle\varnothing\rangle\times\mathbb{R}_{+}^{*}} \varphi_{n}(v)\mathrm{d}\nu(\ell,v)\leq\int_{\langle\varnothing\rangle\times \mathbb{R}_{+}^{*}}\mathbb{P}(V_{\varnothing}<v)\mathrm{d}\nu(\ell,v)+\int_{ \langle\varnothing\rangle\times]1/t,\infty[}v\mathrm{d}\nu(\ell,v)\leq 2,\]
where we use Lemma 2 with \(x=\varnothing\) and \(F\equiv 1\) to check that \(\int_{\langle\varnothing\rangle\times\mathbb{R}_{+}^{*}}\mathbb{P}(V_{ \varnothing}<v)\mathrm{d}\nu(\ell,v)=1\). Now, it follows from (4) that for every \(v\in[1/t,\infty[\), we have
\[\varphi_{n+1}(v)\leq 1+2+2\cdot t\cdot v\cdot\left(2+\int_{\langle 2\rangle \times]1/t,\infty[}v^{\prime}\mathrm{d}\nu\left(\ell^{\prime},v^{\prime}\right) \right)\leq 3+6\cdot t\cdot v\leq v,\]
where the last inequality holds since \(t\leq 1/9\). By induction, this proves that for every \(n\in\mathbb{N}\), we have \(\varphi_{n}(v)\leq v\) for all \(v\in[1/t,\infty[\). In turn, this implies that for every \(n\in\mathbb{N}\), we have
\[\varphi_{n}^{*}=\int_{\langle\varnothing\rangle\times\mathbb{R}_{+}^{*}} \varphi_{n}(v)\mathrm{d}\nu(\ell,v)\leq\int_{\langle\varnothing\rangle\times \mathbb{R}_{+}^{*}}\mathbb{P}(V_{\varnothing}<v)\mathrm{d}\nu(\ell,v)+\int_{ \langle\varnothing\rangle\times 1]/t,\infty[}v\mathrm{d}\nu(\ell,v)\leq 2.\]
Letting \(n\to\infty\) in the definition of \(\varphi_{n}^{*}\), we obtain
\[\mathbb{E}[\#\{x\in\mathbb{T}:T(\varnothing,x)\leq t\}]\leq 2\]
by the monotone convergence theorem, which gives the result of the proposition.
To complete the proof, it remains to establish (4). Let \(n\in\mathbb{N}\), and fix \((\ell,v)\in\langle\varnothing\rangle\times[1/t,\infty[\). We denote by \(\rho_{1}\) and \(\rho_{2}\) the two neighbours of \(\varnothing\) that are on \(\ell\), and by \(\rho_{3}\) the neighbour of \(\varnothing\) that is not on \(\ell\). For each \(i\in\{1,2,3\}\), we let \(S_{i}=\{x\in\mathbb{T}:x\succeq\rho_{i}\}\). See Figure 3 for an illustration.
We have
\[\#\{x\in[\mathbb{T},\varnothing]_{n+1}:\tau((\ell,v)\oplus\Pi; \varnothing,x)\leq t\}\] \[\leq 1+\sum_{i=1}^{2}\#\{x\in[S_{i},\rho_{i}]_{n}:\tau((\ell,v)\oplus \Pi;\varnothing,x)\leq t\}+\#\left\{x\in[S_{3},\rho_{3}]_{n}:\tau(\Pi;\rho_{3},x )\leq t\right\}\] \[\leq 1+\sum_{i=1}^{2}\#\{x\in[S_{i},\rho_{i}]_{n}:\tau((\ell,v)\oplus \Pi;\varnothing,x)\leq t\}+\#\left\{x\in[\mathbb{T},\rho_{3}]_{n}:\tau(\Pi; \rho_{3},x)\leq t\right\}.\]
It follows that
\[\mathbb{E}[\#\{x\in[\mathbb{T},\varnothing]_{n+1}:\tau((\ell,v) \oplus\Pi;\varnothing,x)\leq t\}\,;V_{\varnothing}<v]\] \[\leq 1+\sum_{i=1}^{2}\mathbb{E}[\#\{x\in[S_{i},\rho_{i}]_{n}:\tau(( \ell,v)\oplus\Pi;\varnothing,x)\leq t\}\,;V_{\varnothing}<v]\] \[+\mathbb{E}[\#\{x\in[\mathbb{T},\rho_{3}]_{n}:\tau(\Pi;\rho_{3},x )\leq t\}],\]
where we recognise \(\mathbb{E}[\#\{x\in[\mathbb{T},\rho_{3}]_{n}:\tau(\Pi;\rho_{3},x)\leq t\}]= \varphi_{n}^{*}\), and it remains to handle the sum of two terms. Since the two terms are equal by invariance, let us focus on the first of them. To prove (4), it suffices to show that
\[\mathbb{E}[\#\{x\in[S_{1},\rho_{1}]_{n}:\tau((\ell,v)\oplus\Pi;\varnothing,x) \leq t\}\,;\,V_{\varnothing}<v]\leq t\cdot v\cdot\left(\varphi_{n}^{*}+\int_{ \langle\varnothing\rangle\times]v,\infty[}\varphi_{n}\left(v^{\prime}\right) \mathrm{d}\nu\left(\ell^{\prime},v^{\prime}\right)\right).\]
Figure 3: The vertices of \(S_{1}\) are in blue, the vertices of \(S_{2}\) are in green, and the vertices of \(S_{3}\) are in purple.
Let us denote by \(x_{1},x_{2},\ldots\) the vertices of \(S_{1}\) that are on \(\ell\), with \(d(\varnothing,x_{i})=i\) for all \(i\in\mathbb{N}^{*}\). In particular, we have \(x_{1}=\rho_{1}\). For each \(i\in\mathbb{N}^{*}\), we denote by \(x_{i}^{\prime}\) the neighbour of \(x_{i}\) that is not on \(\ell\), and we let \(S_{i}^{\prime}=\{x_{i}\}\cup\{x\in\mathbb{T}:x\succeq x_{i}^{\prime}\}\). Moreover, we let \(S_{i}^{\prime\prime}=\{x\in\mathbb{T}:x\succeq x_{i}\}\). See Figure 4 for an illustration.
Let \(k=\left\lfloor t\cdot v\right\rfloor\) be the largest integer \(i\in\mathbb{N}^{*}\) such that \(d(\varnothing,x_{i})\leq t\cdot v\). We claim that
\[\mathbb{E}[\#\{x\in[S_{1},\rho_{1}]_{n}:\tau((\ell,v)\oplus\Pi; \varnothing,x)\leq t\}\,;\,V_{\varnothing}<v]\] \[\leq \sum_{i=1}^{k}\mathbb{E}\left[\#\{x\in[\mathbb{T},x_{i}]_{n}: \tau(\Pi;x_{i},x)\leq t\}\right]\] \[+\sum_{j=1}^{k}\mathbb{E}\left[\#\left\{x\in[\mathbb{T},x_{j}]_{n }:\tau\left(\left(\ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{ j}}\right);x_{j},x\right)\leq t\right\}\,;\,V_{x_{j}}>v\right].\]
Indeed, on the event \((V_{\varnothing}<v)\), we have the following alternative.
* If no road of \(\Pi\) with speed more than \(v\) hits \(\llbracket x_{1},x_{k}\rrbracket\), then \((\ell,v)\) is the fastest road of \((\ell,v)\oplus\Pi\) that passes through each edge on the geodesic path between \(\varnothing\) and \(x_{k+1}\) in \(\mathbb{T}\). Therefore, we have \(\tau((\ell,v)\oplus\Pi;\varnothing,x_{k+1})=(k+1)/v>t\), and it follows that \[\#\{x\in[S_{1},\rho_{1}]_{n}:\tau((\ell,v)\oplus\Pi;\varnothing,x )\leq t\} \leq\sum_{i=1}^{k}\#\left\{x\in\left[S_{i}^{\prime},x_{i}\right]_{n }:\tau(\Pi;x_{i},x)\leq t\right\}\] \[\leq\sum_{i=1}^{k}\#\{x\in[\mathbb{T},x_{i}]_{n}:\tau(\Pi;x_{i},x )\leq t\}.\]
* Otherwise, let \(j\) be the smallest integer \(i\in\llbracket 1,k\rrbracket\) such that \(V_{x_{i}}>v\). We have \[\#\{x\in[S_{1},\rho_{1}]_{n}:\tau((\ell,v)\oplus\Pi;\varnothing,x )\leq t\}\] \[\leq \sum_{i=1}^{j-1}\#\left\{x\in\left[S_{i}^{\prime},x_{i}\right]_{n }:\tau(\Pi;x_{i},x)\leq t\right\}+\#\left\{x\in\left[S_{j}^{\prime\prime},x_{ j}\right]_{n}:\tau((\ell,v)\oplus\Pi;x_{j},x)\leq t\right\}\] \[\leq \sum_{i=1}^{j-1}\#\{x\in[\mathbb{T},x_{i}]_{n}:\tau(\Pi;x_{i},x) \leq t\}+\#\left\{x\in\left[S_{j}^{\prime\prime},x_{j}\right]_{n}:\tau((\ell,v )\oplus\Pi;x_{j},x)\leq t\right\}.\] To bound the last term, consider the fastest road \(\left(L_{x_{j}},V_{x_{j}}\right)\) of \(\Pi\) that passes through \(x_{j}\). Let \(l\) be the largest integer \(i\in\llbracket j+1,\infty\llbracket\) such that \(L_{x_{j}}\) passes through \(x_{i}\). See Figure 5 for an illustration.
Figure 4: The vertices of \(S_{1}^{\prime}\) are in blue, and the vertices of \(S_{2}^{\prime\prime}\) are in green.
We decompose
\[\#\left\{x\in\left[S_{j}^{\prime\prime},x_{j}\right]_{n}:\tau((\ell,v) \oplus\Pi;x_{j},x)\leq t\right\}\] \[\leq \#\left\{x\in\left[S_{j}^{\prime}\cup\ldots\cup S_{l}^{\prime},x_{ j}\right]_{n}:\tau(\Pi;x_{j},x)\leq t\right\}\] \[+\#\left\{x\in\left[S_{l+1}^{\prime\prime},x_{j}\right]_{n}:\tau( (\ell,v)\oplus\Pi;x_{j},x)\leq t\right\}\] \[\leq \#\left\{x\in\left[\mathbb{T},x_{j}\right]_{n}:\tau(\Pi;x_{j},x) \leq t\right\}\] \[+\#\left\{x\in\left[S_{l+1}^{\prime\prime},x_{j}\right]_{n}:\tau( (\ell,v)\oplus\Pi;x_{j},x)\leq t\right\},\]
where the first inequality holds since \(\left(L_{x_{j}},V_{x_{j}}\right)\) is the fastest road of \((\ell,v)\oplus\Pi\) that passes through each edge on the geodesic path between \(x_{j}\) and \(x_{l}\). For the last term, since \(V_{x_{j}}>v\), we have
\[\#\left\{x\in\left[S_{l+1}^{\prime\prime},x_{j}\right]_{n}:\tau( (\ell,v)\oplus\Pi;x_{j},x)\leq t\right\}\] \[\leq \#\left\{x\in\left[S_{l+1}^{\prime\prime},x_{j}\right]_{n}:\tau \left(\left(\ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{j}} \right);x_{j},x\right)\leq t\right\}\] \[= \#\left\{x\in\left[S_{l+1}^{\prime\prime},x_{j}\right]_{n}:\tau \left(\left(\ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{j}} \right);x_{j},x\right)\leq t\right\}\] \[\leq \#\left\{x\in\left[\mathbb{T},x_{j}\right]_{n}:\tau\left(\left( \ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{j}}\right);x_{j}, x\right)\leq t\right\}.\]
Note that we can even add the indicator \(\mathbf{1}\left(V_{x_{j}}>v\right)\) in the right hand side. Altogether, we obtain
\[\#\{x\in\left[S_{1},\rho_{1}\right]_{n}:\tau((\ell,v)\oplus\Pi; \varnothing,x)\leq t\}\] \[\leq \sum_{i=1}^{j-1}\#\{x\in\left[\mathbb{T},x_{i}\right]_{n}:\tau( \Pi;x_{i},x)\leq t\}+\#\{x\in\left[\mathbb{T},x_{j}\right]_{n}:\tau(\Pi;x_{j},x)\leq t\}\] \[+\#\left\{x\in\left[\mathbb{T},x_{j}\right]_{n}:\tau\left(\left( \ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{j}}\right);x_{j}, x\right)\leq t\right\}\cdot\mathbf{1}\left(V_{x_{j}}>v\right).\]
In any case, we get
\[\#\{x\in\left[S_{1},\rho_{1}\right]_{n}:\tau((\ell,v)\oplus\Pi; \varnothing,x)\leq t\}\] \[\leq \sum_{i=1}^{k}\#\{x\in\left[\mathbb{T},x_{i}\right]_{n}:\tau(\Pi; x_{i},x)\leq t\}\] \[+\sum_{j=1}^{k}\#\left\{x\in\left[\mathbb{T},x_{j}\right]_{n}: \tau\left(\left(\ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{j}} \right);x_{j},x\right)\leq t\right\}\cdot\mathbf{1}\left(V_{x_{j}}>v\right).\]
Recall that this holds on the event \((V_{\varnothing}<v)\). The inequality claimed above follows by taking
Figure 5: The intersection of \(\ell\) and \(L_{x_{j}}\), which corresponds to the segment \(\left[\![x_{j},x_{l}]\!]\), is in purple.
expectations:
\[\mathbb{E}[\#\{x\in[S_{1},\rho_{1}]_{n}:\tau((\ell,v)\oplus\Pi; \varnothing,x)\leq t\}\,;\,V_{\varnothing}<v]\] \[\leq \sum_{i=1}^{k}\mathbb{E}\left[\#\{x\in[\mathbb{T},x_{i}]_{n}:\tau( \Pi;x_{i},x)\leq t\}\right]\] \[+\sum_{j=1}^{k}\mathbb{E}\left[\#\left\{x\in[\mathbb{T},x_{j}]_{n }:\tau\left(\left(\ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{j }}\right);x_{j},x\right)\leq t\right\}\,;\,V_{x_{j}}>v\right].\]
Now, for each summand in the first term, we recognise \(\mathbb{E}\left[\#\{x\in[\mathbb{T},x_{i}]_{n}:\tau(\Pi;x_{i},x)\leq t\}\right]= \varphi_{n}^{\star}\). Next, for each summand in the second term, we use Lemma 2 with \(x=x_{j}\) and
\[G\left(\ell^{\prime},v^{\prime};\pi\right)=\begin{cases}\#\left\{y\in[ \mathbb{T},x_{j}]_{n}:\tau\left(\left(\ell,v^{\prime}\right)\oplus\pi;x_{j}, y\right)\leq t\right\}&\text{if }v^{\prime}>v\\ 0&\text{otherwise}\end{cases}\]
for all \((\ell^{\prime},v^{\prime};\pi)\in\left(\mathbb{L}\times\mathbb{R}_{+}^{\star }\right)\times\mathbb{M}\). We get
\[\mathbb{E}\left[\#\left\{x\in[\mathbb{T},x_{j}]_{n}:\tau\left( \left(\ell,V_{x_{j}}\right)\oplus\Pi\ominus\left(L_{x_{j}},V_{x_{j}}\right);x _{j},x\right)\leq t\right\}\,;\,V_{x_{j}}>v\right]\] \[= \int_{\left\langle x_{j}\right\rangle\times|v,\infty[}\mathbb{E} \left[\#\left\{x\in[\mathbb{T},x_{j}]_{n}:\tau\left(\left(\ell,v^{\prime} \right)\oplus\Pi;x_{j},x\right)\leq t\right\}\,;\,V_{x_{j}}<v^{\prime}\right] \mathrm{d}\nu\left(\ell^{\prime},v^{\prime}\right),\]
and we recognise \(\varphi_{n}(v^{\prime})\) as the integrand. Altogether, we obtain
\[\mathbb{E}[\#\{x\in[S_{1},\rho_{1}]_{n}:\tau((\ell,v)\oplus\Pi; \varnothing,x)\leq t\}\,;\,V_{\varnothing}<v]\] \[\leq \sum_{i=1}^{k}\varphi_{n}^{\star}+\sum_{j=1}^{k}\int_{\left\langle x _{j}\right\rangle\times|v,\infty[}\varphi_{n}\left(v^{\prime}\right)\mathrm{d} \nu\left(\ell^{\prime},v^{\prime}\right)\] \[= k\cdot\left(\varphi_{n}^{\star}+\int_{\left\langle\varnothing \right\rangle\times|v,\infty[}\varphi_{n}\left(v^{\prime}\right)\mathrm{d} \nu\left(\ell^{\prime},v^{\prime}\right)\right)\] \[\leq t\cdot v\cdot\left(\varphi_{n}^{\star}+\int_{\left\langle \varnothing\right\rangle\times|v,\infty[}\varphi_{n}\left(v^{\prime}\right) \mathrm{d}\nu\left(\ell^{\prime},v^{\prime}\right)\right).\]
This completes the proof of (4), and concludes the proof of the proposition.
#### 3.2.2 The bounded driving distance probability
In this paragraph, we study the so-called bounded driving distance probability; namely, the probability \(\mathbb{P}(T(\varnothing,1_{n})\leq t)\) that the driving distance between two points at distance \(n\) in \(\mathbb{T}\) is at most \(t\), for fixed \(t>0\) and as \(n\to\infty\). Note that we have the obvious lower bound
\[\mathbb{P}(T(\varnothing,1_{n})\leq t)=\mathbb{P}\left(V_{\varnothing,1_{n}} \geq\frac{n}{t}\right)=1-\exp\left[-2^{-n}\cdot\left(\frac{t}{n}\right)^{ \beta-1}\right],\]
which yields
\[\mathbb{P}(T(\varnothing,1_{n})\leq t)\geq(1+o(1))\cdot 2^{-n}\cdot\left( \frac{t}{n}\right)^{\beta-1}\quad\text{as }n\to\infty. \tag{5}\]
In the other direction, we prove the following inequality.
**Proposition 4**.: For every \(n\in\mathbb{N}^{*}\), and for every \(t>0\), we have
\[\mathbb{P}(T(\varnothing,1_{n})\leq t)\leq 2^{-n}\cdot\left(\frac{t}{n}\right)^{ \beta-1}\cdot\exp\left[\sum_{k=1}^{n-1}\frac{k+1}{k^{\beta-1}}\cdot t^{\beta-1} \right]. \tag{6}\]
This estimate is similar in spirit to [3, Proposition 2.3], which in turn was inspired by Kahn's proof of [4, Theorem 5.1].
_Remark 1_.: For \(\beta>3\), since \(\sum_{k\geq 1}(k+1)\cdot k^{-(\beta-1)}<\infty\), we obtain
\[\mathbb{P}(T(\varnothing,1_{n})\leq t)=O\left(2^{-n}\cdot\left(\frac{t}{n} \right)^{\beta-1}\right)\quad\text{as }n\to\infty.\]
This matches the order of magnitude of the obvious lower bound (5). Moreover, the estimate (6) provides an alternative proof of the fact that the driving distance ball \(\{x\in\mathbb{T}:T(\varnothing,x)\leq t\}\) is finite a.s. for \(\gamma\geq 3\), by a first moment argument.
Proof of Proposition 4.: Fix \(n\in\mathbb{N}^{*}\) and \(t>0\). For \(k\in\llbracket 1,n\rrbracket\), we denote by \(e_{k}\) the edge between \(1_{k-1}\) and \(1_{k}\). For every subset \(E\subset\{e_{1},\ldots,e_{n}\}\), let \(a(E)=\mathbb{P}\left(\sum_{e\in E}1/V_{e}\leq t\right)\); and note that \(\mathbb{P}(T(\varnothing,1_{n})\leq t)=a\{e_{1},\ldots,e_{n}\}\). We claim that \(a(\cdot)\) satisfies the following recursion: for every non-empty subset \(E\subset\{e_{1},\ldots,e_{n}\}\), we have
\[a(E)\leq\sum_{\begin{subarray}{c}F\subset E\text{ non-empty}\\ F\text{ connected in }E\end{subarray}}\mathbb{P}\left(V_{F}\geq\frac{\#E}{t} \right)\cdot a(E\setminus F), \tag{7}\]
where we denote by \(V_{F}\) the speed of the fastest road that passes through every edge of \(F\); and the sum is taken over all non-empty subsets \(F\subset E\) which are _connected in \(E\)_, in the sense that for every \(i\leq j\in\llbracket 1,n\rrbracket\) such that \(e_{i},e_{j}\in F\), we have \(e_{k}\in F\) whenever \(k\in\llbracket i,j\rrbracket\) is such that \(e_{k}\in E\). To prove (7), let \(E\) be a non-empty subset of \(\{e_{1},\ldots,e_{n}\}\). On the event \(\left(\sum_{e\in E}1/V_{e}\leq t\right)\), the fastest road of \(\Pi\) that passes through at least one edge of \(E\) must have speed at least \(\#E/t\). Denoting by \(F\) the set of edges \(e\in E\) that are traversed by this road, we obtain a non-empty subset \(F\subset E\) which is connected in \(E\), and such that \(V_{F}^{E\setminus F}\geq\#E/t\), where \(V_{F}^{E\setminus F}\) denotes the speed of the fastest road of \(\Pi\) that passes through every edge of \(F\) and no edge of \(E\setminus F\). This proves the inclusion
\[\left(\sum_{e\in E}\frac{1}{V_{e}}\leq t\right)\subset\bigcup_{ \begin{subarray}{c}F\subset E\text{ non-empty}\\ F\text{ connected in }E\end{subarray}}\left(V_{F}^{E\setminus F}\geq\frac{\#E}{t} \,;\,\sum_{e\in E\setminus F}\frac{1}{V_{e}}\leq t\right).\]
By a union bound, this yields
\[a(E)\leq\sum_{\begin{subarray}{c}F\subset E\text{ non-empty}\\ F\text{ connected in }E\end{subarray}}\mathbb{P}\left(V_{F}^{E\setminus F}\geq\frac{\#E}{t} \,;\,\sum_{e\in E\setminus F}\frac{1}{V_{e}}\leq t\right).\]
Now, for each term in the sum, observe that the random variable \(V_{F}^{E\setminus F}\) is independent of the random variables \((V_{e})_{e\in E\setminus F}\). Indeed, the former is measurable with respect to the restriction of \(\Pi\) to the set of roads that pass through every edge of \(F\) and no edge of \(E\setminus F\), while the latter are measurable with respect to the restriction of \(\Pi\) to the set of roads that pass through at least
one edge of \(E\setminus F\). Thus, we obtain (7):
\[a(E) \leq\sum_{\begin{subarray}{c}F\subset E\text{ non-empty}\\ F\text{ connected in }E\end{subarray}}\mathbb{P}\left(V_{F}^{E\setminus F}\geq \frac{\#E}{t}\right)\cdot\mathbb{P}\left(\sum_{e\in E\setminus F}\frac{1}{V_{e }}\leq t\right)\] \[\leq\sum_{\begin{subarray}{c}F\subset E\text{ non-empty}\\ F\text{ connected in }E\end{subarray}}\mathbb{P}\left(V_{F}\geq\frac{\#E}{t} \right)\cdot a(E\setminus F).\]
Upon reindexing the sum, we get
\[a(E)\leq\sum_{\begin{subarray}{c}F\subset E\\ E\setminus F\text{ connected in }E\end{subarray}}\mathbb{P}\left(V_{E \setminus F}\geq\frac{\#E}{t}\right)\cdot a(F).\]
Since \(a(\emptyset)=1\), iterating this inequality yields
\[a\{e_{1},\ldots,e_{n}\}\leq\sum_{j=1}^{n}\sum_{\begin{subarray}{c}\{e_{1}, \ldots,e_{n}\}=E_{0}\supseteq\ldots\supseteq E_{j}=\emptyset\\ E_{i}\setminus E_{i+1}\text{ connected in }E_{i}\end{subarray}}\prod_{i=0}^{j-1} \mathbb{P}\left(V_{E_{i}\setminus E_{i+1}}\geq\frac{\#E_{i}}{t}\right).\]
Now, let us work on the summands above. Using the inequality \(\mathbb{P}(\text{Poisson}(\lambda)>0)\leq\lambda\), we get
\[\mathbb{P}\left(V_{E_{i}\setminus E_{i+1}}\geq\frac{\#E_{i}}{t}\right) \leq\mu\{\ell\in\mathbb{L}:\ell\text{ passes through each edge of }E_{i}\setminus E_{i+1}\}\cdot\left(\frac{t}{\#E_{i}}\right)^{\beta-1}\] \[\leq 2^{-\#(E_{i}\setminus E_{i+1})}\cdot\left(\frac{t}{\#E_{i}} \right)^{\beta-1}.\]
We deduce that
\[\prod_{i=0}^{j-1}\mathbb{P}\left(V_{E_{i}\setminus E_{i+1}}\geq\frac{\#E_{i} }{t}\right)\leq\prod_{i=0}^{j-1}\left(2^{-\#(E_{i}\setminus E_{i+1})}\cdot \left(\frac{t}{\#E_{i}}\right)^{\beta-1}\right)=2^{-n}\cdot\left(\frac{t}{n} \right)^{\beta-1}\cdot\prod_{i=1}^{j-1}\left(\frac{t}{\#E_{i}}\right)^{\beta -1}.\]
At this point, we have obtained
\[\mathbb{P}(T(\varnothing,1_{n})\leq t)\leq 2^{-n}\cdot\left(\frac{t}{n} \right)^{\beta-1}\cdot\sum_{\begin{subarray}{c}j=1\\ E_{i}\setminus E_{i+1}\text{ connected in }E_{i}\end{subarray}}\sum_{i=1}^{n} \sum_{\begin{subarray}{c}\{e_{1},\ldots,e_{n}\}=E_{0}\supseteq\ldots \supseteq E_{j}=\emptyset\\ E_{i}\setminus E_{i+1}\text{ connected in }E_{i}\end{subarray}}\prod_{i=1}^{j-1} \left(\frac{t}{\#E_{i}}\right)^{\beta-1}, \tag{8}\]
and the remaining work is purely combinatorial. For each \(j\in\llbracket 1,n\rrbracket\), grouping the terms according to \(k_{i}=\#E_{i}\), we can compute exactly:
\[\sum_{\begin{subarray}{c}\{e_{1},\ldots,e_{n}\}=E_{0}\supseteq \ldots\supseteq E_{j}=\emptyset\\ E_{i}\setminus E_{i+1}\text{ connected in }E_{i}\end{subarray}}\prod_{i=1}^{j-1} \left(\frac{t}{\#E_{i}}\right)^{\beta-1} =\sum_{n=k_{0}>\ldots>k_{j}=0}(k_{1}+1)\cdot\ldots\cdot(k_{j-1}+ 1)\cdot\prod_{i=1}^{j-1}\left(\frac{t}{k_{i}}\right)^{\beta-1}\] \[=\sum_{n=k_{0}>\ldots>k_{j}=0}\prod_{i=1}^{j-1}\left(\frac{k_{i} +1}{k_{i}^{\beta-1}}\cdot t^{\beta-1}\right).\]
Indeed, given any subset \(E_{i}\subset\{e_{1},\ldots,e_{n}\}\) with cardinality \(k_{i}\) and any integer \(k_{i+1}\in\llbracket 1,k_{i}\llbracket\), there are \((k_{i+1}+1)\) ways of choosing a subset \(E_{i+1}\subset E_{i}\) with cardinality \(k_{i+1}\) such that \(E_{i}\setminus E_{i+1}\)
is connected in \(E_{i}\). The above equality leads to the upper bound:
\[\sum_{\begin{subarray}{c}\{e_{1},\ldots,e_{n}\}=E_{0}\geq\ldots\supseteq E _{j}=\emptyset\\ E_{i}\setminus E_{i+1\text{ connected in }E_{i}}\end{subarray}}\prod_{i=1}^{j-1} \left(\frac{t}{\#E_{i}}\right)^{\beta-1} \leq\frac{1}{(j-1)!}\cdot\sum_{1\leq k_{1},\ldots,k_{j-1}\leq n-1} \ \prod_{i=1}^{j-1}\left(\frac{k_{i}+1}{k_{i}^{\beta-1}}\cdot t^{\beta-1}\right)\] \[=\frac{1}{(j-1)!}\cdot\left(\sum_{k=1}^{n-1}\frac{k+1}{k^{\beta-1}} \cdot t^{\beta-1}\right)^{j-1}.\]
Plugging this into (8) and summing over \(j\), we obtain (6).
### Open questions
To conclude this paper, let us state some natural open questions raised by our results.
* The proof of Theorem 2 falls short of describing what happens at \(\beta=2\). It might be the case that there is no explosion; but note that in contrast with the result of Proposition 3, when \(\beta=2\), for every \(t>0\), we have \[\mathbb{E}\left[\#\{x\in\mathbb{T}:T(\varnothing,x)\leq t\}\right] \geq\mathbb{E}\left[1+2\cdot\left[t\cdot V_{\varnothing}\right]\right]\] \[\geq\mathbb{E}[t\cdot V_{\varnothing}]\] \[=t\cdot\int_{0}^{\infty}\left(1-\exp\left[-3/4\cdot v^{-1}\right] \right)\mathrm{d}v=\infty.\]
* The set \[\left\{(x_{n})_{n\in\mathbb{N}}\in\partial\mathbb{T}:\sum_{n\geq 1}T(x_{n-1},x_{ n})<\infty\right\}\] has measure \(0\) a.s, with respect to the natural Borel measure on \(\partial\mathbb{T}\) introduced in Section 1; on the other hand, this set must be dense in \(\partial\mathbb{T}\) as soon as the explosion event (\(T(\varnothing,\partial\mathbb{T})<\infty\)) is realised. In that case, it would be interesting to compute its Hausdorff dimension, with respect to the distance \(d\) on \(\partial\mathbb{T}\) introduced in Section 1.
* Although we fail to obtain a matching upper bound for \(\beta\in\left]2,3\right]\), it seems plausible that the obvious lower bound (5) gives the right order of magnitude for the bounded driving distance probability in the whole phase \(\beta>2\).
* The results presented in this paper should hold more generally in the \(d\)-regular tree for all \(d\geq 2\).
* We expect that a result similar to Theorem 2 holds for the driving distance problem in the hyperbolic plane. We intend to investigate this in a forthcoming paper.
|
2305.02603 | Mean field singular stochastic PDEs | We study some systems of interacting fields whose evolution is given by
singular stochastic partial differential equations of mean field type. We
provide a robust setting for their study leading to a well-posedness result and
a propagation of chaos result. | I. Bailleul, N. Moench | 2023-05-04T07:23:02Z | http://arxiv.org/abs/2305.02603v1 | # Mean field singular stochastic PDEs
###### Abstract
We study some systems of interacting fields whose evolution is given by singular stochastic partial differential equations of mean field type. We provide a robust setting for their study leading to a well-posedness result and a propagation of chaos result.
1. Introduction
2. Additive noise
3. Basics on paracontrolled calculus and long range mean field equations
4. Mean field type singular SPDEs
5. Propagation of chaos
28
A. Enhancing random noises
30
## 1 Introduction
Let \((\xi^{i})_{i\geq 1}\) stand for a sequence of independent, identically distributed, random spacetime distributions on the 2-dimensional torus \(\mathsf{T}^{2}\). We will denote by \((\Omega,\mathcal{F},\mathbb{P})\) the probability space on which these random variables are defined. We assume that the \(\xi^{i}\) are almost surely continuous functions of time with values in the space of \((\alpha-2)\)-Holder regular distributions over \(\mathsf{T}^{2}\), with \(2/3<\alpha<1\), with null spatial mean. The archetype of such a noise is given by (the time independent) space white noise. We study a system of interacting fields whose evolution is given by the following system of'singular' stochastic partial differential equations (SPDEs)
\[(\partial_{t}-\Delta)u^{i}=f(u^{i},\mu_{t}^{n})\,\xi^{i}_{t}+g(u^{i},\mu_{t}^{ n}),\qquad(1\leq i\leq n), \tag{1.1}\]
where
\[\mu_{t}^{n}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{u^{i}_{t}}\]
is the running time empirical measure of the system - a probability measure on a function space. Some (possibly random) initial conditions in that function space are given.
Recall the rule of thumb: One can make sense of the product of two distributions with given Holder regularities if and only if the sum of their regularity exponents is positive. The term'singular' in the expression'singular SPDE' refers to the fact that the regularity of the noise is too low for the regularizing effect of the heat resolvent to give sufficient regularity to the \(u^{i}\) to make sense of the products \(f(u^{i},\mu_{t}^{n})\,\xi^{i}\). The diffusivity term \(f(u^{i},\mu_{t}^{n})\) is expected to have at best parabolic regularity \(\alpha\), while the product \(f(u^{i},\mu_{t}^{n})\,\xi^{i}\) is well-defined if and only if \(\alpha+(\alpha-2)>0\). This condition does not hold in our case where \(\alpha<1\). The settings of regularity structures and paracontrolled calculus have been developed in the last ten years to deal precisely with this kind of problem and one can indeed use either of them to make sense of equation (1.1) as an equation of the form
\[(\partial_{t}-\Delta)\mathsf{u}=\mathsf{f}(\mathsf{u})\,\xi^{[1,n]}+\mathsf{ g}(\mathsf{u}), \tag{1.2}\]
for some \(n\)-dimensional unknown \(\mathsf{u}\) and noise \(\xi^{[1,n]}\), and identify conditions on \(\mathsf{f}\) and \(\mathsf{g}\) under which (1.2) has a unique solution over a given time interval. This way of proceding does not take profit from the specific structure of the mean field type equation (1.1). It is in particular unclear how to prove a propagation of chaos result for the interacting field system from this point of view. The necessity of a point of view tailor-made to mean field-type dynamics gets even clearer if one looks at what should most naturally be the limit dynamics of a given field of system
(1.1) when \(n\) tends to \(\infty\), say the field with label \(i=1\). Based on symmetry/exchangeability considerations this field is expected to be a solution of the equation
\[(\partial_{t}-\Delta)u=f(u,\mathcal{L}(u_{t}))\,\xi+g(u,\mathcal{L}(u_{t})), \tag{1.3}\]
where \(\mathcal{L}(u_{t})\) stands for the law of the random variable \(u_{t}\) and \(\xi\) stands for a random distribution with the same law as the \(\xi^{i}\). Our first aim in this work is to develop a setting within which one can make sense of system (1.1) and equation (1.3) in a unified way, for a large class of spacetime noises \(\xi\).
Denote by \(z\) and \(z^{\prime}\) generic spacetime points. The choice of functions \(f\) and \(g\) in equations of the form (1.1) and (1.3) is guided by the physics of the phenomenon modeled by system (1.1). To make things concrete we consider in this introduction the case where \(f(u,\mu)\) and \(g(u,\mu)\) depend linearly on their measure argument and are of the form
\[z\mapsto\iint F\big{(}u(z),v(z^{\prime})\big{)}k(z,z^{\prime})dz^{\prime}\mu( dv)=\mathbb{E}\bigg{[}\int F\big{(}u(z),V(z^{\prime})\big{)}k(z,z^{\prime})dz^{ \prime}\bigg{]} \tag{1.4}\]
for \(u\) a function on \(\mathsf{T}^{2}\), for a random function \(V\) with law \(\mu\) and a real-valued function \(F\) on \(\mathsf{R}^{2}\). Think of the kernel \(k\) as a parameter that captures the range of the interaction between the different fields in the system, with extreme cases \(k(z,z^{\prime})=1\) and \(k(z,z^{\prime})=\delta_{z}(z^{\prime})\), and intermediate cases represented by \(C^{2}\) kernels for instance. The physics behind the two extreme cases is very different and we will technically deal with them in a different way. We will be able to work with functions that depend polynomially on their measure argument. Our main result reads informally as follows. We fix some initial conditions.
1. [label=0]
2. _Theorem. One can design a setting where equation (_1.3_) makes sense._ 1. _Under proper regularity and growth assumptions on_ \(f\) _and_ \(g\) _there exists a positive time_ \(T\) _such that system (_1.1_) and equation (_1.3_) have unique solutions on the time interval_ \([0,T]\)_._ 2. _The law of any fixed tuple of fields in the field system (_1.1_) converges to a tuple of independent, identically distributed, solutions of (_1.3_) as_ \(n\) _tends to_ \(\infty\)_, on the time interval_ \([0,T]\)_._
So there is propagation of chaos for system (1.1), with mean field dynamics given by the mean field type equation (1.3).
While equation (1.3) and system (1.1) share the common feature of being singular, in the sense that they involve some ill-defined products, the mean field interaction in (1.3) causes a different kind of problem. A close situation was studied by Bailleul, Catellier & Delarue in their analysis of mean field type random rough differential equations [4]. We design in the present work an approach similar to [4] for the study of equation (1.3), using the language of paracontrolled calculus to build our setting. The original form of paracontrolled calculus was introduced by Gubinelli, Imkeller & Perkowski in [11]; one can find a nice short account of the basics of paracontrolled calculus in Gubinelli & Perkowski's lecture notes [12]. Recall that we work with a noise with null spatial mean. Denote by \(\omega\in\Omega\) a generic chance element and write \(X(\omega)\) for \(-(\partial_{t}-\Delta)^{-1}(\xi(\omega))\), and \(\overline{X}\) for an independent copy of the random variable \(X\). As in [4] we use a notion of paracontrolled field that is tailor made to capture not only the paracontrolled structure of \(u\) needed to make sense of its product with \(\xi\) but also of the structure needed to describe the mean field specific spacetime function
\[(t,x)\mapsto f\big{(}u_{t},\mathcal{L}(u_{t})\big{)}(x).\]
This comes under the form of a definition saying that a random field \(u(\omega)\) is \(\omega\)-paracontrolled by a reference field \(X(\omega)\) of parabolic Holder regularity \(\alpha\) if one has almost surely
\[u(\omega)\simeq\mathsf{P}_{(\delta_{x}u)(\omega)}X(\omega)+\overline{\mathbb{E }}\big{[}\mathsf{P}_{(\delta_{\mu}u)(\omega,\cdot)}\overline{X}(\cdot)\big{]} \tag{1.5}\]
up to a remainder of parabolic regularity \(2\alpha\), for some random functions \((\delta_{z}u)(\omega)\) and \((\delta_{\mu}u)(\omega,\cdot)\) that depend on \(\omega\) and an additional independent chance element that is averaged out in the
expectation, where \(\overline{X}(\cdot)=(\partial_{t}-\Delta)^{-1}(\overline{\xi}(\cdot))\) and \(\overline{\xi}\) has the same law as \(\xi\) and is independent of \(\xi\), and \(\cdot\) stands for the chance element argument. A precise definition, conveying in particular the meaning of the notations \(\delta_{z}u,\delta_{\mu}u\), is given in Section 4.2. This definition will play a key role in our construction of a robust setting where to make sense of equation (1.3) and prove a well-posedness result for it.
Setting up a framework for the study of a given singular stochastic PDE driven by a random noise \(\xi(\omega)\) usually requires that we enhance the noise with the additional datum of quantities that do not make sense analytically \(\omega\)-wise. In the archetypal example of the 2-dimensional parabolic Anderson model equation
\[(\partial_{t}-\Delta)v=v\xi,\]
where \(\xi\) is a space white noise that is almost surely of space Holder regularity \(-1-\eta\) for all \(\eta>0\), enhancing the noise consists in building a random variable that plays the role of the \(\omega\)-wise ill-defined product of \(\xi(\omega)\) and \(\Delta^{-1}(\xi(\omega))\). This random variable, suggestively denoted by \(\big{(}\xi\Delta^{-1}(\xi)\big{)}(\omega)\), is given by the \(L^{2}(\Omega,\mathbb{P})\) limit of the renormalized regularized quantity
\[\xi^{\varepsilon}\Delta^{-1}(\xi^{\varepsilon})-C^{\varepsilon},\]
where \(\xi^{\varepsilon}\) stands for a smooth regularization of \(\xi\) that converges to \(\xi\) in the space of distributions with Holder regularity \(-1-\eta\), and \(C^{\varepsilon}\) is an explicit constant that diverges to \(+\infty\) as a multiple of \(|\log\varepsilon|\). The fact that the naive approximation \(\xi^{\varepsilon}\Delta^{-1}(\xi^{\varepsilon})\) is not converging leads to the interpretation of the solution \(v\) to the parabolic Anderson model equation as a limit in probability of solutions \(v^{\varepsilon}\) to the renormalized equation
\[(\partial_{t}-\Delta)v^{\varepsilon}=v^{\varepsilon}\xi^{\varepsilon}-C^{ \varepsilon}v^{\varepsilon},\]
rather than as a limit of solutions to the parabolic Anderson model equation driven by the regularized noise \(\xi^{\varepsilon}\). We talk in this setting of the pair of random variables \((\xi,\xi\Delta^{-1}(\xi))\) as an 'enhanced noise'. A richer enhancement of the noise \(\xi\) is needed in the analysis of the mean field equation (1.3). Not only do we need to add the random variable \(\big{(}\xi\Delta^{-1}(\xi)\big{)}(\omega)\) to our notion of enriched noise, but the description (1.5) of an \(\omega\)-controlled field should make it plain that we also need to add a doubly random variable that plays the role of the analytically ill-defined product of \(\xi(\omega)\) and \((\partial_{t}-\Delta)^{-1}(\overline{\xi}(\varpi))\), where \((\omega,\varpi)\in\Omega^{2}\) and we work with the product probability \(\mathbb{P}^{\otimes 2}\) on \((\Omega^{2},\mathcal{F}^{\otimes 2})\). Luckily, the independence of \(\xi\) and \(\overline{\xi}\) allows to define a doubly random variable \(\big{(}\xi(\partial_{t}-\Delta)^{-1}(\overline{\xi})\big{)}(\omega,\varpi)\) as the \(L^{2}(\Omega^{2},\mathbb{P}^{\otimes 2})\) limit of the regularized quantity
\[\xi^{\varepsilon}(\partial_{t}-\Delta)^{-1}(\overline{\xi}^{\varepsilon})\]
_without_ the need of any _renormalization_. This will lead us to the interpretation of a solution to equation (1.3) as the limit in probability as \(\varepsilon>0\) goes to \(0\) of the solution \(u^{\varepsilon}\) to the renormalized equation
\[(\partial_{t}-\Delta)u^{\varepsilon}=f(u^{\varepsilon},\mathcal{L}(u^{ \varepsilon}_{t}))\,\xi^{\varepsilon}-C^{\varepsilon}(ff^{\prime})\big{(}u^{ \varepsilon},\mathcal{L}(u^{\varepsilon}_{t})\big{)}+g(u^{\varepsilon}, \mathcal{L}(u^{\varepsilon}_{t})),\]
where \(f^{\prime}\) stand for the derivative of \(f\) with respect to its first argument.
_Organization of this work._ We treat the elementary case of systems (1.1) and equation (1.3) with additive noise (\(f=1\)) in Section 2. Very robust results can be obtained in this simple setting, leading in particular to a simple proof of propagation of chaos for the corresponding system of interacting fields for an essentially arbitrary random noise with values in \(C_{T}C^{\alpha-2}\). No tools from paracontrolled calculus are needed to deal with this case. We use the language of paracontrolled calculus to study more general equations or systems. We recall what we need from this domain in Section 3.1 and study equation (1.3) in the simple setting of a diffusivity with form (1.4) and \(C^{2}\) kernel \(k\) in Section 3.3. The notion of mean field enhancement of the noise is introduced in Section 4.1, with an associated notion of paracontrolled structure described in Section 4.2. The well-posed character of equation (1.3) is the object of Section 4.3. The quantitative regularity result that we obtain for the solution \(u\) of equation (1.3) as a function fo the enhanced noise entails in Section 5 a propagation of chaos result for system (1.1).
Notations.: We gather here a number of notations that we will use frequently.
* _We fix some regularity exponents_ \[\frac{2}{3}<\beta<\alpha<1.\]
* _For_ \(\gamma\in\mbox{\sf R}\)_, we denote by_ \(C^{\gamma}=C^{\gamma}(\mathbb{T}^{2})\) _the Besov space_ \(B^{\gamma}_{\infty\infty}(\mathbb{T}^{2})\)_, with norm_ \(\|\cdot\|_{\gamma}\)_. For any Banach space_ \(E\) _and_ \(\gamma\geq 0\) _we set_ \[C^{\gamma}_{T}E:=C^{\gamma}([0,T],E)\] _and write_ \(L^{\infty}_{T}E\) _for_ \(L^{\infty}([0,T];E)\)_. We will also need the parabolic Holder space_ \(\mathscr{C}^{\alpha}_{T}\) _on_ \([0,T]\times\mathbb{T}^{2}\)_, which is isometric to_ \(C^{\alpha/2}_{T}L^{\infty}(\mathbb{T}^{2})\cap C_{T}C^{\alpha}(\mathbb{T}^{2})\) _equipped with its natural norm. We will denote_ \((P_{t})_{t\geq 0}\) _the semigroup generated by the Laplace-Beltrami operator_ \(\Delta\) _on an ad hoc function space. Recall the elementary estimate_ \[\|P_{t}u\|_{C^{\gamma+\delta}}\lesssim_{T}t^{-\delta/2}\|u\|_{C^{\gamma}},\] _for_ \(\delta>0\) _and_ \(0<t\leq T\)_._
* _We denote by_ \(L^{p}(\Omega,E)\) _the space of_ \(E\)_-valued random variables in_ \(L^{p}(\Omega,\mathcal{F},\mathbb{P})\)_._
* _For an integrability exponent_ \(1\leq p<\infty\) _we denote by_ \(\mathcal{P}_{p}(E)\) _the set of probability measures on_ \(E\) _that has a moment of order_ \(p\) _and by_ \(\mathcal{W}_{p,E}\) _the_ \(p-\)_Wasserstein metric on_ \(\mathcal{P}_{p}(E)\)_. We define a distance on_ \(L^{\infty}_{T}\mathcal{P}_{p}(C^{\alpha})\) _setting_ \[d_{L^{\infty}_{T}\mathcal{W}_{p,C^{\alpha}}}(\mu,\mu^{\prime}):=\sup_{t\in[0,T] }\mathcal{W}_{p,C^{\alpha}}(\mu_{t},\mu^{\prime}_{t}).\]
* _We denote by_ \(\mathcal{L}(Z)\) _the law of a random variable_ \(Z\)_._
* _For a measure_ \(\mu\) _on a metric space_ \(E\) _and_ \(\phi\in C_{b}(E)\) _write_ \(\mu(\phi)\) _for_ \(\int\!\phi\,d\mu\)_._
## 2 Additive noise
Fix \(0<T_{0}<\infty\) and \(1\leq p<\infty\). Let \(\zeta\in C_{T_{0}}C^{\alpha-2}\) be an _arbitrary random element_. Following Coghi, Deuschel, Friz & Maurelli [9] we begin our work by studying the case of a mean field type equation with additive noise
\[(\partial_{t}-\Delta)u=\zeta+g(u,\mathcal{L}(u_{t})) \tag{2.1}\]
and random initial condition \(u_{0}\), assuming that the random variable \((\zeta,u_{0})\) is an element of \(L^{p}\big{(}\Omega,C_{T_{0}}C^{\alpha-2}\times C^{\alpha}\big{)}\). No singular product is involved in the study of this equation and we will be able to solve it with classical tools. We prove in Section 2.1 that equation (2.1) is well-posed under proper Lipschitz assumptions on \(g\) and that the law of its solution is a Lipschitz continuous function of the law of \((\zeta,u_{0})\) in the Wasserstein \(p\)-space. This strong result leads in Section 2.2 to a propagation of chaos result for an associated field system.
### Additive mean field equation
For \(\mu\in\mathcal{P}_{p}(C_{T_{0}}C^{\alpha})\) and \(t\in[0,T_{0}]\), we write \(\mu_{t}\) for the image measure of \(\mu\) in \(C^{\alpha}\) by the \(t\)-time coordinate map \(u\in C_{T_{0}}C^{\alpha}\mapsto u_{t}\in C^{\alpha}\).
**Assumption (H\({}_{g}\))**: There exists a constant \(L\) such that for every \(v_{1},v_{2}\in C^{\alpha}\) and \(\nu_{1},\nu_{2}\in\mathcal{P}_{p}(C^{\alpha})\) we have
\[\big{\|}g(v_{1},\nu_{1})-g(v_{2},\nu_{2})\big{\|}_{C^{\alpha-2}}^{p}\leq L^{p} \big{(}\|v_{1}-v_{2}\|_{C^{\alpha}}^{p}+\mathcal{W}_{p,C^{\alpha}}(\nu_{1},\nu _{2})^{p}\big{)}.\]
\(2\) - Proposition.: Suppose Assumption **(H\({}_{g}\))** holds. For any \(\mu\in\mathcal{P}_{p}(C_{T_{0}}C^{\alpha}),u_{0}\in C^{\alpha}\) and \(\zeta\in C_{T_{0}}C^{\alpha-2}\) the equation
\[(\partial_{t}-\Delta)u=\zeta+g(u,\mu) \tag{2.2}\]
with initial condition \(u_{0}\) has a unique solution \(u\in C_{T_{0}}C^{\alpha}\).
_Proof -_ Set
\[Z_{t}:=\int_{0}^{t}P_{t-s}(\zeta_{s})\,ds\]
and recall the well-known Schauder type bound
\[\|Z\|_{C_{T_{0}}C^{\alpha}}\lesssim_{T_{0}}\|\zeta\|_{C_{T_{0}}C^{\alpha-2}}. \tag{2.3}\]
One can rewrite equation (2.2) in integral form
\[u_{t}=P_{t}(u_{0})+Z_{t}+\int_{0}^{t}P_{t-s}g(u_{s},\mu_{s})ds. \tag{2.4}\]
The estimate (2.3) ensures that the map
\[\Phi:u\in C_{T_{0}}C^{\alpha}\mapsto P_{t}(u_{0})+Z_{t}+\int_{0}^{t}P_{t-s}g(u _{s},\mu_{s})ds\in C_{T_{0}}C^{\alpha}\]
is well-defined. For \(u,u^{\prime}\in C_{T_{0}}C^{\alpha}\), using Assumption (\(H_{g}\)) and (2.3), we have
\[\|\Phi(u)_{t}-\Phi(u^{\prime})_{t}\|_{C^{\alpha}}\leq\int_{0}^{t}\|P_{t-s}g(u _{s},\mu_{s})-P_{t-s}g(u^{\prime}_{s},\mu_{s})\|_{C^{\alpha}}ds\leq\int_{0}^{t }L\|u_{s}-u^{\prime}_{s}\|_{C^{\alpha}}ds.\]
Denote by \(\Delta_{k}(0,t)\) the simplex \(\{0\leq s_{1}\leq\cdots\leq s_{k}\leq t\}\) and write \(ds\) for \(ds_{1}\ldots ds_{k}\). An iteration of the previous bound gives
\[\|\Phi^{\circ k}(u)_{t}-\Phi^{\circ k}(u^{\prime})_{t}\|_{C^{\alpha}}\leq L^{k }\int_{\Delta_{k}(0,t)}\|u_{s_{k}}-u^{\prime}_{s_{k}}\|_{C^{\alpha}}ds\leq \frac{(LT)^{k}}{k!}\|u-u^{\prime}\|_{C_{T}C^{\alpha}}.\]
The map \(\Phi^{\circ k}\) is thus contracting for \(k\) large enough, so it has a unique fixed point. \(\rhd\)
We denote by \(u^{\mu}(\zeta,u_{0})\) the solution to equation (2.2). We now work with \((\zeta,u_{0})\) random, an element of \(L^{p}\big{(}\Omega,C_{T_{0}}C^{\alpha-2}\times C^{\alpha}\big{)}\).
_- Proposition. For every \(\mu\in\mathcal{P}_{p}(C_{T_{0}}C^{\alpha})\) the law of \(u^{\mu}(\zeta,u_{0})\) belongs to \(\mathcal{P}_{p}(C_{T_{0}}C^{\alpha})\)._
_Proof -_ Write \(\delta_{\mathbf{0}}\) for Dirac distribution on the null function \(\mathbf{0}\). We have from the integral formulation (2.4) the estimate
\[\|u^{\mu}_{t}\|_{C^{\alpha}} \leq C\Big{(}\|u_{0}\|_{C^{\alpha}}+\|Z_{t}\|_{C^{\alpha}}+\int_ {0}^{t}\|g(u^{\mu}_{s},\mu_{s})\|_{C^{\alpha}}ds\Big{)}\] \[\leq C\left(\|u_{0}\|_{C^{\alpha}}+\|Z_{t}\|_{C^{\alpha}}+\int_{0 }^{t}\|g(0,\delta_{\mathbf{0}})\|+L\Big{(}\|u_{s}\|_{C^{\alpha}}+\mathcal{W}_{ p,C^{\alpha}}(\mu_{s},\delta_{\mathbf{0}})\Big{)}\mathrm{d}s\right)\] \[\leq C\Big{(}\|u_{0}\|_{C^{\alpha}}+\|Z_{t}\|_{C^{\alpha}}+T_{0} \|g(0,\delta_{\mathbf{0}})\|_{C^{\alpha}}+T_{0}\mathcal{W}_{p,C_{T_{0}}C^{ \alpha}}(\mu,\delta_{\mathbf{0}})\Big{)}+CL\int_{0}^{t}\|u_{s}\|_{C^{\alpha}} \mathrm{d}s,\]
for some positive constant \(C\). We get the inequality
\[\|u_{t}\|_{C^{\alpha}}\leq C\Big{(}\|u_{0}\|_{C^{\alpha}}+\|Z_{t}\|_{C^{\alpha} }+T_{0}\|g(0,\delta_{\mathbf{0}})\|_{C^{\alpha}}+T_{0}\mathcal{W}_{p,C_{T}C^{ \alpha}}(\mu,\delta_{\mathbf{0}})\Big{)}e^{CLt}\]
from Gronwall lemma, from which the conclusion follows. \(\rhd\)
Set
\[\Psi:\left\{\begin{array}{ccc}\mathcal{P}_{p}(C_{T_{0}}C^{\alpha})\times L^ {p}(\Omega,C_{T_{0}}C^{\alpha-2}\times C^{\alpha})&\rightarrow&\mathcal{P}_{p} (C_{T_{0}}C^{\alpha})\\ \big{(}\mu,(\zeta,u_{0})\big{)}&\mapsto&\mathcal{L}\big{(}u^{\mu}(\zeta,u_{0} )\big{)}\end{array}\right.\]
We define a _solution to equation (2.1) with initial condition_\(u_{0}\) as a fixed point of the map
\[\Psi\big{(}\cdot,(\zeta,u_{0})\big{)}:\mathcal{P}_{p}(C_{T_{0}}C^{\alpha}) \rightarrow\mathcal{P}_{p}(C_{T_{0}}C^{\alpha}).\]
_- Theorem. Suppose **Assumption (\(H_{g}\))** holds. Then equation (2.1) has a unique solution denoted by \(u(\zeta,u_{0})\). We have the Lipschitz estimate_
\[\mathcal{W}_{p,C_{T_{0}}C^{\alpha}}\big{(}\mathcal{L}(u(\zeta,u_{0})), \mathcal{L}(u(\zeta^{\prime},u^{\prime}_{0}))\big{)}\lesssim_{g,p,T_{0}} \mathcal{W}_{p,C_{T_{0}}C^{\alpha-2}\times C^{\alpha}}\big{(}\mathcal{L}(\zeta,u _{0}),\mathcal{L}(\zeta^{\prime},u^{\prime}_{0})\big{)}. \tag{2.5}\]
Proof -: Fix \((\zeta,u_{0})\) and use the shorthand notation \(\Psi_{\zeta,u_{0}}(\cdot)\) for \(\Psi\big{(}\cdot,(\zeta,u_{0})\big{)}\). For \(\mu,\mu^{\prime}\in\mathcal{P}_{p}(C_{T}C^{\alpha})\) write \(u^{\mu}\) and \(u^{\mu^{\prime}}\) for \(u^{\mu}(\zeta,u_{0})\) and \(u^{\mu^{\prime}}(\zeta,u_{0})\), respectively. One has
\[u^{\mu}_{t}-u^{\mu^{\prime}}_{t}=\int_{0}^{t}\Big{(}P_{t-s}g(u^{\mu}_{s},\mu_{s })-P_{t-s}g(u^{\mu^{\prime}}_{s},\mu^{\prime}_{s})\Big{)}ds,\]
and
\[\big{\|}u^{\mu}_{t}-u^{\mu^{\prime}}_{t}\big{\|}_{C^{\alpha}}^{p}\leq C\int_{0 }^{t}\Big{(}\big{\|}u^{\mu}_{s}-u^{\mu^{\prime}}_{s}\big{\|}_{C^{\alpha}}^{p}+ \mathcal{W}_{p}\big{(}\mu_{[0,s]},\mu^{\prime}_{[0,s]}\big{)}^{p}\Big{)}ds,\]
for some constant \(C\), so we get from Gronwall lemma the estimate
\[\mathcal{W}_{p,C_{t}C^{\alpha}}\Big{(}\mathcal{L}(u^{\mu}_{[0,t]}),\mathcal{L }(u^{\mu^{\prime}}_{[0,t]})\Big{)}^{p}\leq Ce^{CT_{0}}\int_{0}^{t}\mathcal{W} _{p,C_{s}C^{\alpha}}\big{(}\mu_{[0,s]},\mu^{\prime}_{[0,s]}\big{)}^{p}ds.\]
A direct iteration gives
\[\mathcal{W}_{p,C_{T_{0}}C^{\alpha}}\big{(}\Psi^{\circ k}_{\zeta,u _{0}}(\mu^{1}),\Psi^{\circ k}_{\zeta,u_{0}}(\mu^{2})\big{)}^{p} \leq(Ce^{CT_{0}})^{k}\int_{\Delta^{k}_{t}}\mathcal{W}_{p,C_{s}C^{ \alpha}}\big{(}\mu_{[0,s_{k}]},\mu^{\prime}_{[0,s_{k}]}\big{)}^{p}ds\] \[\leq(Ce^{CT_{0}})^{k}\frac{1}{k!}\,\mathcal{W}_{p,C_{T_{0}}C^{ \alpha}}\big{(}\mu,\mu^{\prime}\big{)}^{p},\]
so the map \(\Psi^{\circ k}_{\zeta,u_{0}}\) is contracting for \(k\) sufficiently large and equation (2.1) has a unique solution.
Let now \(\zeta,\zeta^{\prime}\in C_{T_{0}}C^{\alpha-2}\) be two noises and \(u_{0},u^{\prime}_{0}\in C^{\alpha}\) be two initial conditions. Pick \(\mu\in\mathcal{P}_{p}(C_{T_{0}}C^{\alpha})\) and write \(u\) and \(u^{\prime}\) for \(u(\zeta,u_{0})\) and \(u^{\prime}(\zeta,u_{0})\), respectively. We can assume without loss of generality that \(\zeta,\zeta^{\prime},u_{0},u^{\prime}_{0}\) are such that the \(p\)-th moment of \(\|u-u^{\prime}\|_{C_{T_{0}}C^{\alpha}}\) is equal to the \(p\)-Wasserstein distance between \(\mathcal{L}(u(\zeta,u_{0}))\) and \(\mathcal{L}(u(\zeta^{\prime},u^{\prime}_{0}))\). Since
\[u_{s}-u^{\prime}_{s}=P_{s}(u_{0}-u^{\prime}_{0})+Z_{s}-Z^{\prime}_{s}+\int_{0 }^{s}\Big{(}P_{s-r}(g(u_{r},\mu_{r}))-P_{s-r}(g(u^{\prime}_{r},\mu_{r}))\Big{)} dr,\]
we have
\[\sup_{s\in[0,t]}\|u_{s}-u^{\prime}_{s}\|_{C^{\alpha}} \leq\|u_{0}-u^{\prime}_{0}\|_{C^{\alpha}}+\|Z-Z^{\prime}\|_{C_{T} C^{\alpha}}+C\int_{0}^{t}\|u_{s}-u^{\prime}_{s}\|_{C^{\alpha}}ds\] \[\lesssim\|u_{0}-u^{\prime}_{0}\|_{C^{\alpha}}+\|\zeta-\zeta^{ \prime}\|_{C_{T}C^{\alpha-2}}+C\int_{0}^{t}\|u_{s}-u^{\prime}_{s}\|_{C^{ \alpha}}ds\]
and
\[\mathbb{E}\Big{[}\sup_{s\in[0,t]}\|u_{s}-u^{\prime}_{s}\|_{C^{\alpha}}^{p} \Big{]}\lesssim_{p}\|u_{0}-u^{\prime}_{0}\|_{C^{\alpha}}^{p}+\mathbb{E}\big{[} \|\zeta-\zeta^{\prime}\|_{C_{T}C^{\alpha-2}}^{p}\big{]}+\int_{0}^{t}\mathbb{E} \Big{[}\sup_{r\in[0,s]}\|u_{r}-u^{\prime}_{r}\|_{C^{\alpha}}^{p}\Big{]}ds.\]
We get the Lipschitz estimate (2.5) from Gronwall lemma. \(\rhd\)
Note that we do not assume that the noise \(\zeta\) and the initial condition \(u_{0}\) are independent.
### Propagation of chaos
Let now \((\zeta^{i},u^{i}_{0})_{i\geq 1}\) be a sequence of independent, identically distributed, random variables with common distribution the law of \((\zeta,u_{0})\). Denote by \((\Omega,\mathcal{F},\mathbb{P})\) the probability space on which this sequence of random variables is defined, with \(\omega\in\Omega\) a generic element of \(\Omega\). Fix \(\omega\in\Omega\). For an integer \(n\geq 1\) consider the interacting system of fields \(\big{(}u^{1,n}(\omega),\ldots,u^{n,n}(\omega)\big{)}\) with initial conditions \(\big{(}u^{1}_{0}(\omega),\ldots,u^{n}_{0}(\omega)\big{)}\) and dynamics
\[\begin{array}{c}(\partial_{t}-\Delta)u^{i,n}(\omega)\ =\zeta^{i}(\omega)+g\big{(}u^{i,n}(\omega),\mu^{n}_{t}( \omega)\big{)},\\ \mu^{n}_{t}(\omega):=\frac{1}{n}\sum_{k=1}^{n}\delta_{u^{k,n}_{ t}(\omega)},\end{array} \tag{2.6}\]
for \(1\leq i\leq n\). H. Tanaka [16] was the first to notice that system (2.6) is actually, _for each_\(\omega\in\Omega\), an equation of the form (2.1) set on the finite probability space \(\{1,\ldots,n\}\) equipped with the uniform probability measure \(\lambda_{n}\). Following [5], we call this observation 'Tanaka's trick'. Random variables on the space \(\{1,\ldots,n\}\) are \(n\)-tuples indexed by \(1\leq i\leq n\). Denote
by \(\mathcal{L}_{\lambda_{n}}(X)\) the law under \(\lambda_{n}\) of an arbitrary random variable \(X\) defined on \(\{1,\ldots,n\}\). Denote also by \[U_{n}:j\mapsto j\] the canonical random variable on \(\{1,\ldots,n\}\). Tanaka's trick says that a solution to the system \[(\partial_{t}-\Delta)u^{i}(\omega)=\zeta^{i}(\omega)+g\big{(}u^{i}(\omega), \mathcal{L}_{\lambda_{n}}(u^{U_{n}(\cdot)}(\omega))\big{)},\qquad(1\leq i\leq n)\] with parameter \(\omega\) and chance element \(i\in\{1,\ldots,n\}\), is precisely given by the \(n\)-tuple \[\big{(}u^{1,n}(\omega),\ldots,u^{n,n}(\omega)\big{)}\] of solutions to the field system (2.6). Recall that a _sequence \((\mu_{n})_{n\geq 1}\) of probability measures on \(E^{n}\)_, invariant by the action on \(E^{n}\) of the permutation group of \(n\) elements, is said to be \(\mu\)_-chaotic_ if for every \(1\leq k\leq n\) and \(\phi_{1},\ldots\phi_{k}\in C_{b}(E)\), we have \[\mu_{n}\big{(}\phi_{1}\otimes\cdots\otimes\phi_{k}\otimes\mathds{1}^{\otimes( n-k)}\big{)}\underset{n\to\infty}{\longrightarrow}\prod_{i=1}^{k}\mu(\phi_{i}).\] A well-known criterion of \(\mu\)-chaoticity is given by the convergence in law of the empirical mean of an iid \(n\)-sample of \(\mu_{n}\) to the measure \(\mu\) itself - see for instance Proposition 2.2 in Sznitman's lecture notes [15]. Now the law of large numbers tells us that the empirical mean \[\frac{1}{n}\sum_{i=1}^{n}\delta_{(\zeta^{i},u^{i}_{0})(\omega)}\] converges \(\mathbb{P}\)-almost surely in \(\mathcal{W}_{p,C_{T_{0}}C^{\alpha-2}\times C^{\alpha}}\) to \(\mathcal{L}(\zeta,u_{0})\). The following fact is thus a consequence of the Lipschitz estimate (2.5) and Sznitman's criterion. In the next statement we write \(u\in L^{p}(\Omega,C_{T_{0}}C^{\alpha})\) for the solution to equation (2.1).
5. _Corollary._ _For any integer \(k\geq 1\), the law of the \(k\)-tuple \((u^{1,n},\ldots,u^{k,n})\) converges weakly to \(\mathcal{L}(u)^{\otimes k}\) when \(n\) tends to \(\infty\).
## 3 Basics on paracontrolled calculus and long range mean field equations
The study of equation (1.3) with a non-constant diffusivity \(f(\cdot)\) requires that we use one of the languages that have been developed in the last ten years for the study of a large class of singular stochastic PDEs. The problem involved in this class of equations is best illustrated on the toy example of the parabolic Anderson model equation
\[(\partial_{t}-\Delta)u=u\xi\]
set on \(\mathsf{T}^{2}\), with \(\xi\) a space white noise. Recall \(\xi\) has almost surely Holder space regularity \(-1-\varepsilon\) for all \(\varepsilon>0\). One expects from the Schauder estimates satisfied by the resolvent of the heat operator that \(u\) has parabolic regularity \((\alpha-2)+2=\alpha\). This regularity is not sufficient for making sense of the product \(u\xi\) since \(\alpha+(\alpha-2)<0\). There are at least two languages one can use to circumvent this problem and set a robust solution theory for this equation and a whole class of equations involving the same pathology. We choose to work here with the language of paracontrolled calculus first introduced by Gubinelli, Imkeller & Perkowski in [11]. We recall in Section 3.1 the notions and results from paracontrolled calculus that we will use; we refer the reader to [12, 10, 14] for accounts of the basics on the subject. These results are sufficient to deal with the soft case of a mean field equation (1.3) with diffusivity given by the model function (1.4) with a \(C^{2}\) kernel \(k\). We deal with that case in Section 3.3 as a warm-up for Section 4.
### Basics on paracontrolled calculus
We will use the notations \(h_{1}<h_{2}\) and \(h_{1}\odot h_{2}\) for the paraproduct and the resonant operators on _space distributions_\(h_{1},h_{2}\), defined from the Littlewood-Paley projectors. From its definition \(h_{1}<h_{2}\) is well-defined for all distributions
\(h_{1},h_{2}\) on \(\mathsf{T}^{2}\) and has high Fourier modes that are modulations of the high Fourier modes of \(h_{2}\) by low Fourier modes of \(h_{1}\). On that ground, it makes sense to think of \(h_{1}<h_{2}\) as a distribution that 'looks like' \(h_{2}\). Recall from Lemma 2.4 of [11] that the corrector
\[\mathsf{C}(a,b,c):=(a<b)\odot c-a\,(b\odot c)\]
has a continuous extension from \(C^{2}\times C^{2}\times C^{2}\) to \(C^{\alpha_{1}}\times C^{\alpha_{2}}\times C^{\alpha_{3}}\) with values in \(C^{\alpha_{1}+\alpha_{2}+\alpha_{3}}\) if \(\alpha_{2}+\alpha_{3}<0\) and \(0<\alpha_{1}+\alpha_{2}+\alpha_{3}<1\). The following continuity estimate from [2], Proposition 14 therein, will also be useful. One has
\[\big{\|}a<(b<c)-(ab)<c\big{\|}_{C^{\alpha_{2}+\alpha_{3}}}\lesssim\|a\|_{L^{ \infty}}\|b\|_{C^{\alpha_{2}}}\|c\|_{C^{\alpha_{3}}}, \tag{3.1}\]
for all \(a\in L^{\infty},b\in C^{\alpha_{2}}\) with \(\alpha_{2}\) in \((0,1)\) and \(c\in C^{\alpha_{3}}\) with \(-3<\alpha_{3}<3\). (The regularity exponent \(3\) has no particular meaning; it is purely technical.)
_Definition - Pick a reference distribution \(\Lambda\in C^{\rho}\), with \(\rho\in\mathsf{R}\). A **distribution \(v\) on \(\mathsf{T}^{2}\)** is said to be **paracontrolled by \(\Lambda\)** if there exists a positive regularity exponent \(\gamma\) and functions \(v^{\prime}\in C^{\gamma}\) and \(v^{\#}\in C^{\gamma+\rho}\) such that_
\[v=(v^{\prime}<\Lambda)+v^{\#}.\]
_We denote by \(\mathcal{D}^{\gamma}(\Lambda)\) the space of all such couples \((v^{\prime},v^{\#})\); it is equipped with the norm_
\[\|(v^{\prime},v^{\#})\|_{\mathcal{D}^{\gamma}}:=\big{\|}v^{\prime}\big{\|}_{C^ {\gamma}}+\|v^{\#}\|_{C^{\gamma+\rho}}. \tag{3.2}\]
_For reference distributions \(\Lambda_{1},\Lambda_{2}\in C^{\rho}\) and \(\mathbf{v}_{1}=(v^{\prime}_{1},v^{\#}_{1})\in\mathcal{D}^{\gamma}(\Lambda_{1})\) and \(\mathbf{v}_{2}=(v^{\prime}_{2},v^{\#}_{2})\in\mathcal{D}^{\gamma}(\Lambda_{2})\) we set_
\[d_{\mathcal{D}^{\gamma}}(\mathbf{v}_{1},\mathbf{v}_{2}):=\big{\|}v^{\prime}_{ 1}-v^{\prime}_{2}\big{\|}_{C^{\gamma}}+\big{\|}v^{\#}_{1}-v^{\#}_{2}\big{\|}_{ C^{\gamma+\rho}}.\]
The expression 'Gubinelli derivative of \(v\)' is sometimes used to talk about \(v^{\prime}\). Note that the exponent \(\gamma\) in \(\mathcal{D}^{\gamma}(\Lambda)\) does _not_ refer to the regularity of \(v\) but rather to the regularity exponents of \(v^{\prime}\) and \(v^{\sharp}\). Indeed the distribution \(v\) is \(C^{\rho}\). Let \(a\) and \(b\) be two functions on \(\mathsf{T}^{2}\) with \(a\in\mathcal{D}^{\beta}(b)\) for \(\beta>0\), with Gubinelli derivative \(a^{\prime}\). Sony's paralinearization result implies that if \(h\) stands for a \(C^{3}_{b}\) function from \(\mathsf{R}\) into itself then \(h(a)\in\mathcal{D}^{\beta}(b)\); we denote by \(h(a)^{\prime}=h^{\prime}(a)a^{\prime}\) its Gubinelli derivative and by \(h(a)^{\sharp}\) its remainder term. (See e.g. Section 2.3 of [11].)
We will denote by \(k_{1}\prec k_{2}\) the modified paraproduct on _spacetime distributions_ introduced in Section 5 of [11]. It is a parabolic version of the paraproduct operator \(<\) that has the same analytic properties in the scale of Besov parabolic function spaces as the operator \(<\) in the scale of spatial Besov function spaces. When applied to parabolic distributions \(k_{1}\in C^{\alpha/2}_{T}L^{\infty},k_{2}\in C_{T}C^{\beta}\) the two paraproducts are related by the continuity relation
\[\big{\|}k_{1}<k_{2}-k_{1}\prec k_{2}\big{\|}_{C_{T}C^{\alpha+\beta}}\lesssim\| k_{1}\|_{C^{\alpha/2}_{T}L^{\infty}}\|k_{2}\|_{C_{T}C^{\beta}}. \tag{3.3}\]
We further note the useful estimate
\[\big{\|}(\partial_{t}-\Delta)(k_{1}\prec k_{2})-k_{1}\prec\big{(}(\partial_{t }-\Delta)k_{2}\big{)}\big{\|}_{C_{T}C^{\alpha+\beta-2}}\lesssim\|k_{1}\|_{ \mathscr{C}^{\alpha}_{T}}\|k_{2}\|_{C_{T}C^{\beta}}.\]
(These two results are the content of Lemma 5.1 of [11].) We use the \(\prec\) paraproduct and a slightly different notion of size to deal with parabolic functions paracontrolled by a reference parabolic function \(\Xi\).
_Definition - Pick a reference function \(\Xi\in\mathscr{C}^{\rho}_{T}\), with \(\rho>0\). A **parabolic function \(u\) on \([0,T]\times\mathsf{T}^{2}\)** is said to be **paracontrolled by \(\Xi\)** if there exists a function \(u^{\prime}\in\mathscr{C}^{\beta}_{T}\), with \(\beta>0\), such that_
\[u^{\#}:=u-u^{\prime}\prec\Xi\in\mathscr{C}^{\rho}_{T}\]
_and_
\[\sup_{t\in(0,\mathsf{T}]}t^{\beta/2}\big{\|}u^{\#}_{t}\big{\|}_{C^{\beta+\rho}} <+\infty.\]
_We denote by \(\mathcal{D}_{T}^{\rho,\beta}(\Xi)\) the space of all such couples \((u^{\prime},u^{\sharp})\); it is equipped with the norm_
\[\big{\|}(u^{\prime},u^{\sharp})\big{\|}_{\mathcal{D}_{T}^{\rho,\beta}}:=\big{\|} u^{\prime}\big{\|}_{\mathcal{C}_{T}^{\rho}}+\big{\|}u^{\#}\big{\|}_{\mathcal{C}_{T}^ {\rho}}+\sup_{t\in(0,T]}t^{\beta/2}\big{\|}u_{t}^{\#}\big{\|}_{C^{\beta+\rho}}.\]
_For two reference functions \(\Xi_{1},\Xi_{2}\in\mathscr{C}_{T}^{\rho}\) and \(\mathbf{u}_{1}=(u_{1}^{\prime},u_{1}^{\#})\in\mathcal{D}_{T}^{\rho,\beta}(\Xi_ {1})\) and \(\mathbf{u}_{2}=(u_{2}^{\prime},u_{2}^{\#})\in\mathcal{D}_{T}^{\rho,\beta}(\Xi_ {2})\) we set_
\[d_{\mathcal{D}_{T}^{\rho,\beta}}(\mathbf{u}_{1},\mathbf{u}_{2}):=\big{\|}u_{1 }^{\prime}-u_{2}^{\prime}\big{\|}_{\mathscr{C}_{T}^{\beta}}+\big{\|}u_{1}^{ \#}-u_{2}^{\#}\big{\|}_{\mathscr{C}_{T}^{\rho}}+\sup_{t\in(0,T]}t^{\beta/2} \big{\|}u_{1}^{\#}-u_{2}^{\#}\big{\|}_{\beta+\rho}.\]
### Noise enhancement and product definition
Fix a positive time horizon \(T_{0}\), set
\[\mathscr{L}:=(\partial_{t}-\Delta)\]
and write \(\mathscr{L}^{-1}\) for the resolvent operator with null initial condition at time \(0\). Define
\[\mathsf{L}:C_{T_{0}}C^{\infty}\times C([0,T_{0}],\mathsf{R}) \longrightarrow C_{T_{0}}C^{\infty}\times C_{T_{0}}C^{\infty}\] \[(\ell,c) \longrightarrow\big{(}\ell,\mathscr{L}^{-1}(\ell)\odot\ell-c \big{)}.\]
The letter \(\mathsf{L}\) is chosen for 'lift'. The _space \(\mathfrak{N}\) of enhanced noises_ is the closure in \(C_{T_{0}}C^{\alpha-2}\times C_{T_{0}}C^{2\alpha-2}\) of the range of \(\mathsf{L}\). As a shorthand notation, for \(c\in C([0,T_{0}],\mathsf{R})\), we set
\[\mathsf{L}_{c}(\cdot):=\mathsf{L}(\cdot,c). \tag{3.4}\]
We denote by
\[\widehat{\zeta}=(\zeta,\zeta^{(2)})\]
a generic element of \(\mathfrak{N}\) and set here
\[Z:=\mathscr{L}^{-1}(\zeta)\in\mathscr{C}_{T_{0}}^{\alpha}.\]
The natural norm of \(\widehat{\zeta}\) as an element of the product space is denoted by \(\|\widehat{\zeta}\,\|\). The following statement provides a large class of random noises with a natural enhancement as random element of \(\mathfrak{N}\). It is proved in Appendix A. We write \(P_{t}\) for \(e^{t\Delta}\).
_Theorem 6_: _Let \((\xi_{t})_{0\leq t\leq T_{0}}\) stand for a time-dependent Gaussian random distribution on \(\mathsf{T}^{2}\) with covariance of the form_
\[\mathbb{E}\big{[}(\xi_{t},\phi)(\xi_{s},\psi)\big{]}=c(t,s)\langle\psi\star C,\phi\rangle_{L^{2}}\]
_for some distribution \(C\) on \(\mathsf{T}^{2}\). We assume that the Fourier transform of \(C\) satisfies for some \(\eta<1-\alpha\) the condition_
\[|\widehat{C}(k)|\lesssim|k|^{\eta},\]
_and that the function \(c\) satisfies the inequality_
\[0\leq c(t,t)+c(s,s)-2c(s,t)\leq|t-s|^{\delta}\]
_for some positive exponent \(\delta\). Then one defines a random variable \(X\odot\xi\in L^{1}(\Omega,C_{T}C^{2\alpha-2})\) setting_
\[\big{(}X\odot\xi\big{)}(t):=\int_{0}^{t}\Big{(}P_{t-s}(\xi_{s})\odot\xi_{t}- \mathbb{E}[P_{t-s}(\xi_{s})\odot\xi_{t}]\Big{)}ds \tag{3.5}\]
_One further has \(X\odot\xi\in L^{p}(\Omega,C_{T}C^{2\alpha-2})\) for all \(1\leq p<\infty\) and if \(\xi^{\varepsilon}\) stands for a space regularization of \(\xi\) then_
\[\mathsf{L}\big{(}X^{\varepsilon}\odot\xi^{\varepsilon},\mathbb{E}[X^{ \varepsilon}\odot\xi^{\varepsilon}]\big{)}\]
_converges in \(L^{p}(\Omega,C_{T}C^{2\alpha-2})\) to \(X\odot\xi\) as \(\varepsilon>0\) goes to \(0\)._
The end of this section deals with deterministic enhanced noises. The datum of an element of \(\mathfrak{N}\) allows to give a definition of some a priori ill-defined product.
_Definition 7_: _Pick \(\widehat{\zeta}\in\mathfrak{N}\) and \(\beta>2-2\alpha\) and \(0<t\leq T_{0}\). Let \(u\in C([0,T]\times\mathsf{T}^{2})\) be such that for each \(t\in[0,T]\) one has \(\mathbf{u}_{t}\in\mathcal{D}^{\beta}(Z_{t})\). We define the product \(\mathbf{u}_{t}\zeta_{t}\) as the element of \(\mathcal{D}^{\beta}(\zeta_{t})\)
specified by the decomposition_
\[\mathbf{u}_{t}\zeta_{t}:=u_{t}<\zeta_{t}+(\mathbf{u}_{t}\zeta_{t})^{\#},\]
_where_
\[(\mathbf{u}_{t}\zeta_{t})^{\#}:=\zeta_{t}<u_{t}+u_{t}^{\#}\odot\zeta_{t}+ \mathsf{C}\big{(}u_{t}^{\prime},Z_{t},\zeta_{t}\big{)}+u_{t}^{\prime}\zeta_{t} ^{(2)}\]
_and_
\[\|(\mathbf{u}_{t}\zeta_{t})^{\#}\|_{C^{\alpha+\beta-2}}\lesssim\|\mathbf{u}\|_ {\mathcal{D}^{\beta}(Z_{t})}\Big{(}\|\zeta_{t}\|_{C^{\alpha-2}}+\|Z_{t}\|_{C^{ \alpha}}\|\zeta_{t}\|_{C^{\alpha-2}}+\big{\|}\zeta_{t}^{(2)}\big{\|}_{C^{2 \alpha-2}}\Big{)}. \tag{3.6}\]
For \(\widehat{\zeta^{1}}=\big{(}\zeta^{i},\zeta^{i(2)}\big{)}\in\mathfrak{N}\), \(Z^{i}=\mathscr{L}^{-1}(\zeta^{i})\) and \(\mathbf{u}_{t}^{i}\in\mathcal{D}^{\beta}(Z_{t}^{i})\), with \(i\in\{1,2\}\), set
\[m:=\max_{i\in\{2,3\}}\Big{\{}\big{\|}\zeta^{i}\big{\|}_{C^{\alpha-2}},\big{\|} \zeta^{i(2)}\big{\|}_{C^{2\alpha-2}},\big{\|}\mathbf{u}_{t}^{i}\big{\|}_{ \mathcal{D}^{\beta}(Z_{t}^{i})}\Big{\}}.\]
The proof of the following proposition can be found in [11], Theorem 3.7 therein.
_8 - Proposition. We have the local Lipschitz estimate_
\[\Big{\|}(\mathbf{u}_{t}^{1}\zeta_{t}^{1})^{\#}-\big{(}\mathbf{u}_{t}^{2}\zeta _{t}^{2}\big{)}^{\#}\Big{\|}_{C^{\alpha+\beta-2}}\lesssim_{m}d_{\mathcal{D}^ {\beta}}\big{(}\mathbf{u}_{t}^{1},\mathbf{u}_{t}^{2}\big{)}+\big{\|}\widehat{ \zeta^{1}}-\widehat{\zeta}^{2}\,\big{\|}_{\mathfrak{N}},\]
_and the function \(t\mapsto\mathbf{u}_{t}\zeta_{t}\) is in \(C_{T}C^{\alpha-2}\) for \(\mathbf{u}\in\mathcal{D}_{T}^{\alpha,\beta}(X)\)._
The starting point of the next statement is the description for each time of the right hand side of a parabolic equation as a \(<\) paracontrolled distribution whenever this makes sense. The statement provides as an outcome a description of the solution of the equation as a \(\prec\) paracontrolled function. This can be read as a kind of Schauder-type estimate in the setting of paracontrolled calculus. See Section 5 of [11] for a proof.
_9 - Proposition. Pick a positive regularity exponent \(b\). For \(\pi\in C_{T}C^{\alpha-2}\) let \(\Pi\in\mathscr{C}_{T}^{\alpha}\) be the solution of the equation_
\[(\partial_{t}-\Delta)\Pi=\pi\]
_with null initial condition at time \(0\). Then for every \(w^{\prime},w^{\#}\in\mathscr{C}_{T}^{\alpha}\) such that_
\[\sup_{t\in(0,T]}t^{\beta/2}\big{\|}w_{t}^{\#}\big{\|}_{(\alpha-2)+\beta}<\infty \tag{3.7}\]
_and \(u_{0}\in C^{\alpha}\), the solution \(u\) to the equation_
\[(\partial_{t}-\Delta)u=w^{\prime}<\pi+w^{\#},\quad u(0)=u_{0}, \tag{3.8}\]
_belongs to \(\mathcal{D}_{T}^{\alpha,\beta}(\Pi)\) and \(u^{\prime}=w^{\prime}\). We further have the estimate_
\[\|(u^{\prime},u^{\sharp})\|_{\mathcal{D}_{T}^{\alpha,\beta}(\Pi)}\lesssim\|u_{0 }\|_{C^{\alpha}}+T^{(\alpha-\beta)/2}\Big{(}\|w^{\prime}\|_{\mathscr{C}_{T}^{ \alpha}}\big{(}1+\|\pi\|_{C_{T}C^{\alpha-2}}\big{)}+\sup_{t\in(0,T]}t^{\beta/2 }\big{\|}w_{t}^{\#}\big{\|}_{C^{(\alpha-2)+\beta}}\Big{)}.\]
_For different \(w_{i}^{\prime},w_{i}^{\#}\) satisfying condition (3.7), initial conditions \(u_{i,0}\) and noises \(\pi_{i}\in C_{T}C^{\alpha-2}\), for \(i\in\{1,2\}\), setting_
\[m^{\prime}:=\max_{i\in\{1,2\}}\Big{\{}1,\|w_{i}^{\prime}\|_{\mathscr{C}_{T}^{ \alpha}},\|\pi_{i}\|_{C_{T}C^{\alpha-2}}\Big{\}}\]
_and denoting by \(u_{1},u_{2}\) the corresponding solutions to equation (3.8) with corresponding paracontrolled decomposition \(\mathbf{u}_{1},\mathbf{u}_{2}\), we have_
\[d_{\mathcal{D}_{T}^{\alpha,\beta}}(\mathbf{u}_{1},\mathbf{u}_{2}) \lesssim\|u_{1,0}-u_{2,0}\|_{C^{\alpha}}+P(m^{\prime})\,T^{(\alpha-\beta)/2} \Big{(}\|w_{1}^{\prime}-w_{2}^{\prime}\|_{\mathscr{C}_{T}^{\alpha}}+\|\pi_{1} -\pi_{2}\|_{C_{T}C^{\alpha-2}}\] \[+\sup_{t\in(0,T]}t^{\beta/2}\big{\|}w_{1}^{\#}(t)-w_{2}^{\#}(t) \big{\|}_{C^{(\alpha-2)+\beta}}\Big{)},\]
_for some quadratic polynomial \(P\)._
### Long range mean field equations
As a direct application of the results of Section 3.1 we treat in this section a particular case of mean field singular stochastic PDE where the function \(f\) in (1.3) has a simple structure. Let a function \(F\in C_{b}^{3}(\mathsf{R}^{2},\mathsf{R})\) and a \(C_{b}^{2}\) kernel \(k(z,z^{\prime})\)
on the torus \(\mathsf{T}^{2}\) be given, together with a constant \(\beta\in(2/3,\alpha)\). For \(a\in C^{\alpha}\) and \(\mu\in\mathcal{P}_{p}(C^{\alpha})\) we set in this section
\[f(a,\mu)(z)=\int_{C^{\alpha}}\int_{\mathsf{T}^{2}}F\big{(}a(z),b(z^{\prime}) \big{)}k(z,z^{\prime})\,dz^{\prime}\mu(db). \tag{3.9}\]
This is a linear function of its measure argument. The setting and results of Section 3.1 are sufficient to deal with the mean field equation
\[(\partial_{t}-\Delta)u=f(u,\mathcal{L}(u_{t}))\zeta+g(u,\mathcal{L}(u_{t})), \tag{3.10}\]
when \(f\) has the form (3.9) and \(g\) satisfies the following Lipschitz condition.
_Assumption (\(\boldsymbol{A}_{g}\)) - One has \(\|g\big{(}a_{1},\mu_{1}\big{)}-g\big{(}a_{2},\mu_{2}\big{)}\|_{C^{(\alpha-2)+ \beta}}\lesssim\|a_{1}-a_{2}\|_{C^{\alpha}}+\mathcal{W}_{p,C^{\alpha}}\big{(} \mu_{1},\mu_{2}\big{)}\)._
We first deal with the paracontrolled structure of \(f(a,\mu)\). Fix \(t>0\) and some reference function \(X_{t}\in C^{\alpha}\).
_10 - Proposition.: For \(a\in\mathcal{D}^{\beta}(X_{t})\) and \(\mu\in\mathcal{P}_{p}(C^{\alpha})\) one has_
\[f(a,\mu)=f(a,\mu)^{\prime}<X_{t}+f(a,\mu)^{\#}\]
_with_
\[f(a,\mu)^{\prime}(z)=\int_{C^{\alpha}}\int_{\mathsf{T}^{2}}\partial_{1}F\big{(} a(z),b(z^{\prime})\big{)}k(z,z^{\prime})\,dz^{\prime}\mu(db),\]
_and_
\[\|f(a,\mu)^{\#}\|_{C^{\alpha+\beta}}\lesssim\big{(}1+\|X_{t}\|_{C^{\alpha}}^{ 2}\big{)}\Big{(}1+\|a^{\prime}\|_{C^{\beta}}+\|a^{\#}\|_{C^{\alpha}}\Big{)} \Big{(}1+\|a^{\prime}\|_{C^{\beta}}+\|a^{\#}\|_{C^{\alpha+\beta}}\Big{)}.\]
_Furthermore, for \(X_{t}^{i}\in C^{\alpha}\) and \(a_{i}\in\mathcal{D}^{\beta}(X_{t}^{i}),\mu_{i}\in\mathcal{P}_{p}(C^{\alpha})\), for \(1\leq i\leq 2\), one has_
\[\|f(a_{1},\mu_{1})^{\#}-f(a_{2},\mu_{2})^{\#}\|_{C^{\alpha+\beta}}\lesssim d_ {\mathcal{D}^{\beta}}\big{(}a_{1},a_{2}\big{)}+\mathcal{W}_{p,C^{\alpha}} \big{(}\mu_{1},\mu_{2}\big{)}+\|X_{t}^{1}-X_{t}^{2}\|_{C^{\alpha}}, \tag{3.11}\]
_for an implicit constant that is a polynomial of degree 3 on_
\[\max_{i=1,2}\Big{\{}1,\|a_{i}\|_{\mathcal{D}^{\beta}(X^{i})},\mathcal{W}_{p,C^ {\alpha}}(\mu_{i},\delta_{0}),\|X_{t}^{i}\|_{C^{\alpha}}\Big{\}}.\]
Proof.: We paralinearize with respect to the \(z\) variable, with \(z^{\prime}\) in the role of a parameter in the paraproducts below. We use the shorthand notations
\[k_{z^{\prime}}(z):=k(z,z^{\prime}),\quad F_{b(z^{\prime})}(w):=F(w,b(z^{\prime })).\]
With these notations one has
\[F(a,b(z^{\prime})) =\partial_{1}F(a,b(z^{\prime}))<a+F_{b(z^{\prime})}(a)^{\sharp}\] \[=\big{\{}\partial_{1}F(a,b(z^{\prime}))a^{\prime}\big{\}}<X_{t}+ \partial_{1}F(a,b(z^{\prime}))<a^{\#}\] \[\quad+\partial_{1}F\big{(}a,b(z^{\prime}))<(a^{\prime}<X_{t})- \big{(}\partial_{1}F\big{(}a,b(z^{\prime})\big{)}a^{\prime}<X_{t}\big{)}+F_{b( z^{\prime})}(a)^{\sharp}\]
and
\[f(a,\mu) =\bigg{\{}a^{\prime}\int_{\mathsf{T}^{2}\times C^{\alpha}}\partial _{1}F\big{(}a,b(z^{\prime})\big{)}k_{z^{\prime}}\,dz^{\prime}\mu(db)\bigg{\}}< X_{t}\] \[\quad+\int_{\mathsf{T}^{2}\times C^{\alpha}}\bigg{(}\big{(} \partial_{1}F\big{(}a,b(z^{\prime})\big{)}a^{\prime}<X_{t}\big{)}k_{z^{\prime}}- \big{\{}k_{z^{\prime}}\partial_{1}F\big{(}a,b(z^{\prime})\big{)}a^{\prime} \big{\}}<X_{t}\bigg{)}dz^{\prime}\mu(db)\] \[\quad+\int_{\mathsf{T}^{2}\times C^{\alpha}}k_{z^{\prime}}F_{b(z^{ \prime})}(a)^{\sharp}\,dz^{\prime}\mu(db)+\int_{C^{\alpha}}\int_{\mathsf{T}^{2}} \big{(}\partial_{1}F(a,b)<a^{\#}\big{)}k_{z^{\prime}}\,dz^{\prime}\mu(db)\] \[=:\bigg{\{}a^{\prime}\int_{\mathsf{T}^{2}\times C^{\alpha}} \partial_{1}F\big{(}a,b(z^{\prime})\big{)}k_{z^{\prime}}\,dz^{\prime}\mu(db) \bigg{\}}<X_{t}+f(a,b)^{\#}.\]
We estimate each term separately to show that the remainder is regular, using commutator type estimates when needed. First, since \(k_{z^{\prime}}\) is \(C_{b}^{2}\) and \(\alpha+\beta<2\) we have from (3.1) the continuity estimate
\[\Big{\|}\big{(}\{\partial_{1}F\big{(}a,b(z^{\prime})\big{)}a^{ \prime}\}<X_{t}\big{)}k_{z^{\prime}} -\big{\{}k_{z^{\prime}}\partial_{1}F\big{(}a,b(z^{\prime})\big{)}a^{ \prime}\big{\}}<X_{t}\Big{\|}_{C^{\alpha+\beta}}\] \[\lesssim\|k_{z^{\prime}}\|_{C^{2\alpha}}\|\partial_{1}F\big{(}a,b( z^{\prime})\big{)}a^{\prime}\|_{C^{\beta}}\|X_{t}\|_{C^{\alpha}}\] \[\lesssim\|k\|_{C^{2}_{b}}\big{(}1+\|a\|_{C^{\alpha}}\big{)}\|a^{ \prime}\|_{C^{\beta}}\|X_{t}\|_{C^{\alpha}}\] \[\lesssim\big{(}1+\|X_{t}\|_{C^{\alpha}}^{2}\big{)}\Big{(}1+\|a^{ \prime}\|_{C^{\beta}}^{2}+\|a^{\#}\|_{C^{\alpha}}^{2}\Big{)}\]
and
\[\Big{\|}\partial_{1}F(a,b(z^{\prime}))<(a^{\prime}<X_{t}) -\big{\{}\partial_{1}F(a,b(z^{\prime}))a^{\prime}\big{\}}<X_{t} \Big{\|}_{C^{\alpha+\beta}}\] \[\lesssim\|\partial_{1}F(a,b(z^{\prime}))\|_{C^{\beta}}\|a^{ \prime}\|_{C^{\beta}}\|X_{t}\|_{C^{\alpha}}\] \[\lesssim\big{(}1+\|a\|_{C^{\alpha}}\big{)}\|a^{\prime}\|_{C^{ \beta}}\|X_{t}\|_{C^{\alpha}}\] \[\lesssim\big{(}1+\|X_{t}\|_{C^{\alpha}}^{2}\big{)}\Big{(}1+\|a^{ \prime}\|_{C^{\beta}}^{2}+\|a^{\#}\|_{C^{\alpha}}^{2}\Big{)}\]
and
\[\|k_{z^{\prime}}F_{b(z^{\prime})}(a)^{\sharp}\|_{C^{\alpha+\beta}}\lesssim\|F _{b(z^{\prime})}\|_{C^{\beta}_{b}}\big{(}1+\|a\|_{C^{\alpha}}^{2}\big{)} \lesssim\big{(}1+\|X_{t}\|_{C^{\alpha}}^{2}\big{)}\Big{(}1+\|a^{\prime}\|_{C^ {\beta}}^{2}+\|a^{\#}\|_{C^{\alpha}}^{2}\Big{)}\]
and
\[\|\big{(}\partial_{1}F(a,b(z^{\prime}))<a^{\#}\big{)}k_{z^{\prime }}\|_{C^{\alpha+\beta}} \lesssim\big{(}1+\|a\|_{C^{\alpha}}\big{)}\|a^{\#}\|_{C^{\alpha +\beta}}\] \[\lesssim\big{(}1+\|X_{t}\|_{C^{\alpha}}\big{)}\Big{(}1+\|a^{ \prime}\|_{C^{\beta}}+\|a^{\#}\|_{C^{\alpha}}\Big{)}\|a^{\#}\|_{C^{\alpha+ \beta}}.\]
Integrating over \(z^{\prime}\) and summing we get
\[\|f(a,\mu)^{\#}\|_{C^{\alpha+\beta}}\lesssim\big{(}1+\|X_{t}\|_{C^{\alpha}}^{ 2}\big{)}\Big{(}1+\|a^{\prime}\|_{C^{\beta}}+\|a^{\#}\|_{C^{\alpha}}\Big{)} \Big{(}1+\|a^{\prime}\|_{C^{\beta}}+\|a^{\#}\|_{C^{\alpha+\beta}}\Big{)}.\]
We leave the proof of the estimate (3.11) to the reader as it is similar to what is above. \(\rhd\)
For \(\widehat{\zeta}\in\mathfrak{N}\) we write \(Z:=\mathscr{L}^{-1}(\zeta)\), so \(Z\in\mathscr{C}^{\alpha}_{T}\). We emphasize below the fact that \(u\) is paracontrolled in the product of \(f(u_{t},\mu_{t})\) with \(\zeta_{t}\) by writing \(f(\mathbf{u}_{t},\mu_{t})\zeta_{t}\).
11. _- Proposition. Assume Assumption_ **(A\({}_{g}\))** _holds and fix \(0<T_{0}<\infty\). For every initial condition \(u_{0}\in C^{\alpha}\), for every enhanced noise \(\widehat{\zeta}\in\mathfrak{N}\) and any \(\mu\in\mathcal{P}_{p}(\mathscr{C}^{\alpha}_{T_{0}})\) there exists a positive time horizon \(T\leq T_{0}\) and a unique solution to the equation_ \[(\partial_{t}-\Delta)u=f(\mathbf{u}_{t},\mu_{t})\zeta_{t}+g(u_{t},\mu_{t})\] (3.12) _in \(\mathcal{D}^{\alpha,\beta}_{T}(Z)\). This solution is a locally Lipschitz function of \(u_{0}\in C^{\alpha},\mu\in\mathcal{P}_{p}(\mathscr{C}^{\alpha}_{T})\) and \(\widehat{\zeta}\in\mathfrak{N}\)._
_Proof -_ Rewrite equation (3.12) as the fixed point equation
\[u_{t}=P_{t}u_{0}+\int_{0}^{t}P_{t-s}\big{(}f(\mathbf{u}_{s},\mu_{s})\zeta_{s}+g( u_{s},\mu_{s})\big{)}\,ds.\]
We get from Proposition 10 and Proposition 8 that \(f(\mathbf{u}_{s},\mu_{s})\zeta_{s}+g(u_{s},\mu_{s})\) is for each \(s\) an element of \(\mathcal{D}^{\alpha}(\zeta_{s})\) with Gubinelli derivative \(f(u_{s},\mu_{s})\) and remainder \((f(\mathbf{u}_{s},\mu_{s})\zeta_{s})^{\#}+g(u_{s},\mu_{s})\). With Proposition 9 in mind we check that \(f(\mathbf{u},\mu)\in\mathscr{C}^{\alpha}_{T_{0}}\) and \((f(u_{s},\mu_{s})\zeta_{s})^{\#}+g(u_{s},\mu_{s})\) satisfies (3.7). Take \(\mathbf{u}\in\mathcal{D}^{\alpha,\beta}_{T}(Z)\). First one has for \((s,x),(t,y)\in[0,T_{0}]\times\mathrm{T}^{2}\)
\[\big{|}f(u_{t},\mu_{t})(y)-f(u_{s},\mu_{s})(x)\big{|} =\Big{|}\int_{\mathrm{T}^{2}\times\mathscr{C}^{\alpha}_{T}}F\big{(}u_{t}(y),v _{t}(z)\big{)}k(y,z)-F\big{(}u_{s}(x),v_{s}(z)\big{)}k(x,z)\,dz\mu(dv)\Big{|}\] \[\leq\int_{\mathrm{T}^{2}\times\mathscr{C}^{\alpha}_{T}}\bigg{(} \big{|}F\big{(}u_{t}(y),v_{t}(z)\big{)}\big{(}k(y,z)-k(x,z)\big{)}\big{|}\] \[+\big{|}F\big{(}u_{t}(y),v_{t}(y)\big{)}-F\big{(}u_{s}(x),v_{s}(x) \big{)}\big{|}\,|k(x,z)|\bigg{)}\,dz\mu(dv)\]
\[\lesssim\int_{\mathsf{T}^{2}\times\mathscr{C}^{\alpha}_{T}}\Big{(}|x-y|+ \big{(}\|u\|_{\mathscr{C}^{\alpha}_{T}}+\|v\|_{\mathscr{C}^{\alpha}_{T}}\big{)} \big{(}|x-y|^{\alpha}+|t-s|^{\alpha/2}\big{)}\Big{)}\,dz\mu(dv)\] \[\lesssim\big{(}1+\|u\|_{\mathscr{C}^{\alpha}_{T}}+\mathcal{W}_{p, \mathscr{C}^{\alpha}_{T}}\big{(}v,\delta_{\mathbf{0}}\big{)}\big{)}\big{(}|x-y |^{\alpha}+|t-s|^{\alpha/2}\big{)},\]
so we have the norm estimate
\[\|f(u,\mu)\|_{\mathscr{C}^{\alpha}_{T_{0}}}\lesssim\big{(}1+\|Z\|_{\mathscr{C} ^{\alpha}_{T_{0}}}\big{)}\Big{(}1+\|\mathbf{u}\|_{\mathcal{D}^{\alpha,\beta}_{ T_{0}}}+\mathcal{W}_{p,\mathscr{C}^{\alpha}_{T_{0}}}\big{(}\mu,\delta_{ \mathbf{0}}\big{)}\Big{)}.\]
Second, one gets for \(0<T\leq T_{0}\)
\[\sup_{t\in(0,T]}t^{\beta/2}\big{\|}\big{(}f(\mathbf{u}_{t},\mu_{t})\zeta_{t} \big{)}^{\#}+g(u_{t},\mu_{t})\big{\|}_{\alpha+\beta-2}\lesssim\big{(}1+\| \widehat{\zeta}\,\|_{\mathfrak{N}}^{3}\big{)}\Big{(}1+\|\mathbf{u}\|^{2}_{ \mathcal{D}^{\alpha,\beta}_{T}}+\mathcal{W}_{p,\mathscr{C}^{\alpha}_{T}} \big{(}\mu,\delta_{\mathbf{0}}\big{)}\Big{)}. \tag{3.13}\]
from Proposition 10 and Proposition 8. It follows from Proposition 9 that the map
\[\Phi_{\widehat{\zeta},u_{0},\mu}:\mathcal{D}^{\alpha,\beta}_{T}(Z)\to\mathcal{ D}^{\alpha,\beta}_{T}(Z)\]
which associates to \(\mathbf{u}\in\mathcal{D}^{\alpha,\beta}_{T}(Z)\) the solution \(w\) of the equation
\[\mathscr{L}w=f(\mathbf{u},\mu)\zeta+g(u,\mu),\]
with initial condition \(w_{0}=u_{0}\), is well-defined and satisfies the estimate
\[\big{\|}\Phi_{\widehat{\zeta},u_{0},\mu}(\mathbf{u})\big{\|}_{\mathcal{D}^{ \alpha,\beta}_{T}}\lesssim\|u_{0}\|_{C^{\alpha}}+T^{\frac{\alpha-\beta}{2}} \Big{(}1+\big{\|}\widehat{\zeta}\,\big{\|}_{\mathfrak{N}}^{3}\Big{)}\Big{(}1 +\|\mathbf{u}\|^{2}_{\mathcal{D}^{\alpha,\beta}_{T}(X)}+\mathcal{W}_{p, \mathscr{C}^{\alpha}_{T}}\big{(}\mu,\delta_{\mathbf{0}}\big{)}\Big{)}. \tag{3.14}\]
One can then find
\[M=M\Big{(}\|u_{0}\|_{\alpha}\vee\|\widehat{\zeta}\,\|_{\mathfrak{N}}\vee \mathcal{W}_{p,\mathscr{C}^{\alpha}_{T}}\big{(}\mu,\delta_{\mathbf{0}}\big{)} \Big{)}\]
and
\[T=T\Big{(}\|u_{0}\|_{\alpha}\vee\|\widehat{\zeta}\|_{\mathfrak{N}}\vee \mathcal{W}_{p,\mathscr{C}^{\alpha}_{T}}\big{(}\mu,\delta_{\mathbf{0}}\big{)} \Big{)}\]
such that the map \(\Phi_{\widehat{\zeta},u_{0},\mu}\) sends the ball \(\Big{\{}\mathbf{u}\in\mathcal{D}^{\alpha,\beta}_{T}(Z)\,;\,\|\mathbf{u}\|_{ \mathcal{D}^{\alpha,\beta}_{T}}\leq M\Big{\}}\) into itself. One can choose \(M\) as an increasing function of its arguments and \(T\) as a decreasing function of its arguments.
Given \(\widehat{\zeta}_{1},\widehat{\zeta}_{2}\) in \(\mathfrak{N}\), two initial conditions \(u_{01},u_{02}\) in \(C^{\alpha}\) and \(\mu_{1},\mu_{2}\) in \(\mathcal{P}_{p}(\mathscr{C}^{\alpha}_{T})\), set
\[M^{\prime}=M\Big{(}\max_{i=1,2}\Big{\{}\|u_{0i}\|_{C^{\alpha}}\vee\|\widehat{ \zeta}_{i}\|_{\mathfrak{N}}\vee\mathcal{W}_{p,\mathscr{C}^{\alpha}_{T}}\big{(} \mu_{i},\delta_{\mathbf{0}}\big{)}\Big{\}}\Big{)}.\]
For \(\|\mathbf{u}\|_{\mathcal{D}^{\alpha,\beta}_{T}}\leq M^{\prime}\), Proposition 9 tells us that
\[\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(}\Phi_{\widehat{ \zeta}_{1},u_{01},\mu_{1}}(\mathbf{u}_{1}),\Phi_{\widehat{\zeta}_{2},u_{02}, \mu_{2}}(\mathbf{u}_{2})\big{)}\] \[\lesssim\|u_{01}-u_{02}\|_{C^{\alpha}}+T^{(\alpha-\beta)/2}\Big{\{} \mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(}\mathbf{u}_{1},\mathbf{u}_{ 2}\big{)}+\|\widehat{\zeta}_{1}-\widehat{\zeta}_{2}\|_{\mathfrak{N}}+ \mathcal{W}_{p,\mathscr{C}^{\alpha}_{T}}\big{(}\mu^{1},\mu^{2}\big{)}\Big{\}}.\]
So choosing \(T\) small ensures that the map \(\Phi_{\widehat{\zeta},u_{0},\mu}\) has a unique fixed point \(\mathbf{u}=(u^{\prime},u^{\sharp})\) which depends in a locally Lipschitz way on \(u_{0}\in C^{\alpha},\mu\in\mathcal{P}_{p}(\mathscr{C}^{\alpha}_{T})\) and \(\widehat{\zeta}\in\mathfrak{N}\). \(\rhd\)
Before we can consider the case where \(\zeta\) is random and formulate a fixed point equation to get \(\mu_{t}=\mathcal{L}(u_{t})\) we need a setting where the local solution to equation (3.12) can be turned into a fixed horizon solution. The following statement is a first step to do that. It gives an explosion criterion. It is a small variation on a similar result in Theorem 5.4 of [11].
12 - Lemma._For every \(R>0\), the solution \(\mathbf{u}\) to equation (3.12) is defined up to the time_
\[T^{*}=\inf\big{\{}t\geq 0,\quad\|u(t)\|_{L^{\infty}}\geq R\big{\}}.\]
_Proof -_ The existence time \(T\) from Proposition 11 is a decreasing function
\[T=T\big{(}\|u_{0}\|_{C^{\alpha}},\|\widehat{\zeta}\|_{\mathfrak{N}},\mathcal{W}_{ p,\mathscr{C}^{\alpha}_{T}}(\mu,\delta_{\mathbf{0}})\big{)}\]
of its arguments. One fixes here \(\widehat{\zeta}\) and \(\mu\) and consider \(T\) as a function of \(\|u_{0}\|_{C^{\alpha}}\). We obtain below a constant bound for \(\|u\|_{C^{\alpha}}\) that is valid as long as \(\|u\|_{L^{\infty}}\leq R\). As \(\|u\|_{C_{T}C^{\alpha}}\lesssim_{\widehat{\zeta}}\|\mathbf{u}\|_{\mathcal{D}^{ \alpha,\beta}_{T}}\) we actually prove that
\[\|u\|_{\mathcal{D}^{\alpha,\beta}_{T}}\lesssim_{\mu,\widehat{\zeta}}1+\|u\|^{2 }_{C_{T}L^{\infty}}.\]
This is done as follows. Since \(u^{\prime}_{t}=f(u_{t},v_{t})\), we have
\[\|u^{\prime}\|_{\mathscr{C}^{\alpha}_{T}}\lesssim_{\mu}1+\|u\|_{\mathscr{C}^{ \beta}_{T}}.\]
Yet since \(u=u^{\prime}\prec X+u^{\#}\) where \(u^{\prime}\) appears as an \(L^{\infty}\) contribution we have
\[\|u^{\prime}\|_{\mathscr{C}^{\beta}_{T}}\lesssim_{\mu,\widehat{\zeta},R}1+ \|u^{\#}\|_{\mathscr{C}^{\beta}_{T}}.\]
We now use the fact that
\[(\partial_{t}-\Delta)u^{\#}=\Phi^{\#} \tag{3.15}\]
where
\[\Phi^{\#}=\big{(}f(\mathbf{u},\mu)\zeta-f(u,\mu)\prec\zeta\big{)}+g(u,\mu).\]
The refined paralinearization lemma C.1 from [11] ensures that
\[\big{\|}f\big{(}u^{\prime}\prec X+u^{\#},\mu\big{)} -f^{\prime}\big{(}u^{\prime}\prec X+u^{\#},\mu\big{)}\prec\big{(}u ^{\prime}\prec X+u^{\#}\big{)}\big{\|}_{C^{\alpha+\beta}}\] \[\lesssim_{\mu}\big{(}1+\|u^{\prime}\prec X\|^{2}_{C^{\alpha}}+\|u ^{\#}\|^{2}_{L^{\infty}}\big{)}\big{(}1+\|u^{\#}\|_{C^{\alpha+\beta}}\big{)}\] \[\lesssim_{\widehat{\zeta},\mu}\big{(}1+\|u\|^{2}_{L^{\infty}} \big{)}\big{(}1+\|u^{\#}\|_{C^{\alpha+\beta}}\big{)},\]
so using the continuity relation (3.3) and the estimate (3.6) from Definition 7 we obtain
\[\|\Phi^{\#}\|_{C^{\alpha+\beta-2}} \lesssim_{\widehat{\zeta},\mu}\Big{(}1+\|u\|^{2}_{C_{T}L^{\infty }}\Big{)}\Big{(}1+\|u\|_{\mathscr{C}^{\alpha}_{T}}+\|u^{\#}\|_{C^{\alpha+\beta }}\Big{)}\] \[\lesssim_{\widehat{\zeta},\mu}\Big{(}1+\|u\|^{2}_{C_{T}L^{\infty }}\Big{)}\Big{(}1+\|u^{\#}\|_{\mathscr{C}^{\alpha}_{T}}+\|u^{\#}\|_{C^{\alpha+ \beta}}\Big{)},\]
where the constant is a polynomial in \(\|\widehat{\zeta}\|_{\eta}\) of degree \(3\). Schauder estimates - Lemma 5.3 of [11], ensure that
\[\sup_{0<t<T}t^{\beta/2}\|u^{\#}\|_{C^{\alpha+\beta}}\lesssim_{u_{0}}1+\sup_{0 <t<T}t^{\beta/2}\|\Phi^{\#}\|_{C^{\alpha+\beta-2}}, \tag{3.16}\]
and
\[\|u^{\#}\|_{\mathscr{C}^{\alpha}_{T}}\lesssim_{u_{0}}1+\sup_{0<t<T}t^{\beta/ 2}\|\Phi^{\#}\|_{C^{\alpha+\beta-2}}, \tag{3.17}\]
so we have
\[\sup_{0<t\leq T}t^{\beta/2}\|\Phi^{\#}\|_{\alpha+\beta-2}\lesssim_{u_{0},\mu, \widehat{\zeta}}\Big{(}1+\|u\|^{2}_{C_{T}L^{\infty}}\Big{)}\Big{(}1+\sup_{0<t \leq T}t^{\beta/2}\|\Phi^{\#}\|_{C^{\alpha+\beta-2}}\Big{)}. \tag{3.18}\]
The coefficient in front of the sup term in the right hand side does not allow a priori to absorb that term in the left hand side. We follow [11] and use a scaling argument to isolate the \(\Phi^{\#}\) terms. Let
\[(\Lambda^{\lambda}u)(t,x):=u(\lambda^{2}t,\lambda x)\]
and
\[\mathsf{T}^{2}_{\lambda}=\big{(}\mathsf{R}/(2\pi\lambda^{-1}\mathbf{Z})\big{)} ^{2}.\]
We have
\[(\partial_{t}-\Delta)\circ\Lambda^{\lambda}=\lambda^{2}\Lambda^{\lambda}\circ( \partial_{t}-\Delta)\]
and
\[\zeta^{\lambda}:=\lambda^{2-\alpha}\Lambda^{\lambda}\zeta,\quad\|\zeta^{ \lambda}\|_{\alpha-2}\simeq\|\zeta\|_{C^{\alpha-2}},\]
a deterministic estimate, and
\[u^{\lambda}:=\Lambda^{\lambda}u\]
is a solution of the equation
\[(\partial_{t}-\Delta)u^{\lambda}=\lambda^{\alpha}f(\mathbf{u}^{\lambda},\mu^{ \lambda})\zeta^{\lambda}+\lambda^{2}g(u^{\lambda},\mu^{\lambda}).\]
We now rewrite (3.18) for the rescaled equation, that is replacing \(f\) with \(\lambda^{\alpha}f\) and \(g\) with \(\lambda^{2}g\), the bound for \(\Phi^{\#}\) becomes for \(\lambda\leq 1\)
\[\|\Phi^{\#,\lambda}\|_{C^{\alpha+\beta-}} \lesssim(\lambda^{\alpha}+\lambda^{2})\big{(}1+\|u^{\lambda}\|_{C_ {T}L^{\infty}}^{2}\big{)}\big{(}1+\|u^{\#,\lambda}\|_{C^{\alpha}_{T}}+\|u^{\#, \lambda}\|_{C^{\alpha+\beta}}\big{)}\] \[\lesssim\lambda^{\alpha}\big{(}1+\|u^{\lambda}\|_{C_{T}L^{\infty} }^{2}\big{)}\big{(}1+\|u^{\#,\lambda}\|_{C^{\alpha}_{T}}+\|u^{\#,\lambda}\|_{C ^{\alpha+\beta}}\big{)},\]
so
\[\sup_{0\leq t\leq T/\lambda^{2}}t^{\beta/2}\|\Phi^{\#,\lambda}\|_{C^{\alpha+ \beta-2}}\lesssim\lambda^{\alpha}\Big{(}1+\|u\|_{C_{T}L^{\infty}}^{2}\Big{)} \Big{(}1+\sup_{0\leq t\leq T/\lambda^{2}}t^{\beta/2}\|\Phi^{\#,\lambda}\|_{C^{ \alpha+\beta-2}}\Big{)},\]
and choosing \(\lambda\) small enough we finally get after inverse scaling
\[\sup_{0\leq t\leq T}t^{\beta/2}\|\Phi^{\#}\|_{C^{\alpha+\beta-2}}\lesssim_{u_{ 0},\widehat{\zeta},\mu}1+\|u\|_{C_{T}L^{\infty}}^{2}.\]
In the end we obtain from the estimates (3.16) and (3.17) the bound
\[\|u^{\#}\|_{\mathscr{C}_{T}}+\sup_{0\leq t\leq T}t^{\beta/2}\|u^{\#}\|_{C^{ \alpha+\beta}}\lesssim_{u_{0},\widehat{\zeta},\mu}1+\|u\|_{C_{T}L^{\infty}}^{2}.\]
\(\rhd\)
We are thus looking now for a condition on \(f\) that ensures a good control of the \(L^{\infty}\) norm of the solution to equation (3.12). We follow Proposition 3.28 of Cannizzaro, Friz & Gassiat's work [7] and introduce the following assumption to control the \(L^{\infty}\) norm of the solution \(u\) to (4.5).
_Assumption (B) - There exists a positive constant \(C_{0}\) such that_
\[f(\pm C_{0},\mu)=0,\quad g(\pm C_{0},\mu)=0\]
_for all \(\mu\in\mathcal{P}_{p}(C^{\alpha})\)._
Examples of such functions can be constructed from functions \(F\) such that \(F(\cdot,\mu)\) is compactly supported with a support independent of \(\mu\). Alternatively one can think of functions of the form \(F(c,\mu)=F_{1}(c)F_{2}(\mu)\) with separate variables, with \(F_{1}(\pm C_{0})=0\). We now specialize the result of Proposition 11 to the case where \(\widehat{\zeta}\) is the random enhancement \(\widehat{\xi}\) of a random noise \(\xi\) provided by Theorem 6. We emphasize that point by writing \(\mathbf{u}_{\widehat{\xi}}\) for the solution to equation (3.12) in that case. Given \(\varepsilon_{n}>0\) set
\[c_{n}:=\mathbb{E}[X^{\varepsilon_{n}}\odot\xi^{\varepsilon_{n}}].\]
_13 - Lemma. There is a sequence \(\varepsilon_{n}>0\) converging to \(0\) such that one has_
\[u=\lim_{n+\infty}u_{n}\]
_where \(u_{n}\) stands for the well-defined solution in \(\big{[}0,T(\|u_{0}\|_{C^{\alpha}})\big{]}\) of the equation_
\[(\partial_{t}-\Delta)u_{n}=f(u_{n}(t),\mu_{t})\zeta_{t}^{\varepsilon_{n}}-c_{ n}f(u_{n}(t),\mu_{t})f(u_{n}(t),\mu_{t})^{\prime}+g(u_{n}(t),\mu_{t}) \tag{3.19}\]
_Proof -_ The enhanced noise \(\widehat{\zeta}\) is the limit in \(\mathfrak{N}\) of the sequence of enhanced smooth noises \(\widehat{\zeta}_{n}:=(\zeta_{n},(X\odot\zeta)_{n})\) where
\[(X\odot\zeta)_{n}:=\zeta_{n}\odot X_{n}-c_{n}.\]
So it follows from Proposition 11 that the function \(u\) is the limit in \(C^{\alpha}\) of the sequence \(\widetilde{u}_{n}\) where \(\widetilde{\mathbf{u}}_{n}\) is the solution to equation (3.12) with noise \(\widehat{\zeta}_{n}\). We have
\[f(\widetilde{\mathbf{u}}_{n},\mu)\zeta_{n}+g(\widetilde{u}_{n},\mu) =f(\widetilde{u}_{n},\mu)<\zeta_{n}+\zeta_{n}<f(\widetilde{u}_{n},\mu)+f(\widetilde{u}_{n},\mu)^{\#}\odot\zeta_{n}\] \[\quad+\big{(}f(\widetilde{u}_{n},\mu)^{\prime},X_{n},\zeta_{n})+f (\widetilde{u}_{n},\mu)\big{(}X\odot\zeta\big{)}_{n}+g(\widetilde{u}_{n},\mu)\] \[=f(\widetilde{u}_{n},\mu)\zeta_{n}-c_{n}(ff^{\prime})(\widetilde {u}_{n},\mu)+g(\widetilde{u}_{n},\mu),\]
so \(\widetilde{u}_{n}\) is a solution of the equation
\[(\partial_{t}-\Delta)\widetilde{u}_{n}=f(\widetilde{\mathbf{u}}_{n}(t),\mu_{t} )\zeta_{t}^{\varepsilon_{n}}-c_{n}(ff^{\prime})(\widetilde{u}_{n}(t),\mu_{t}) +g(\widetilde{u}_{n}(t),\mu_{t}),\]
and one has indeed \(u_{n}=\widetilde{u}_{n}\).
_._
* _- Proposition. Under the assumptions_ _(A\({}_{g}\)-B)__, if_ \(\|u_{0}\|_{L^{\infty}}\leq C_{0}\) _then_ \({\bf u}_{\widehat{\xi}}\) _is defined globally in time._
_Proof -_ For every \(n\in{\bf N}\) the constant function \(C_{0}\) is a sub-solution and \(-C_{0}\) is a super-solution of renormalized regularized equation (3.19). It follows from the classical comparison principle that one has
\[|u_{n}(t,x)|\leq C_{0}\]
for all \(t\leq T\) and \(x\in{\sf T}^{2}\). The local Lipschitz continuity of \({\bf u}_{\widehat{\xi}}\) as a function of \(\widehat{\xi}\) and the convergence in \({\mathfrak{N}}\) of \(\widehat{\xi}^{{}_{n}}\) ensure that \(u_{n}\) is converging to \(u\) in \(C_{T}L^{\infty}\). It follows that we have \(\|u(t)\|_{L^{\infty}}\leq C_{0}\) for all \(0\leq t\leq T\). The result of the statement follows from the explosion criterion of Lemma 12. \(\rhd\)
_- Proposition. If \(\|u_{0}\|_{L^{\infty}}\leq C_{0}\) the random variable \(\|{\bf u}\|_{{\cal D}^{\alpha,\beta}_{\sf T}}(\omega)\) has moments of any order._
_Proof -_ Following what was done in the proof of the Proposition 22 we have an estimate
\[\|{\bf u}\|_{{\cal D}^{\alpha,\beta}_{\sf T}}\lesssim_{\widehat{\zeta},u_{0}, \mu}1+\|u\|_{C_{T}L^{\infty}}^{2}\lesssim_{\widehat{\xi},u_{0},\mu}1+C_{0}\]
with an implicit multiplicative constant that is polynomial function in \(\|\widehat{\xi}\|_{{\mathfrak{N}}}\) of degree \(3\). \(\rhd\)
_- Theorem. Fix \(T_{0}>0\). Suppose that \(f\) and \(g\) satisfy assumptions **(A\({}_{g}\)-B)** and pick \(1\leq p<\infty\). There exists a positive time \(T\leq T_{0}\) with the following property. For every \(u_{0}\in C^{\alpha}\) there exists a unique solution to the mean field equation (3.10) in \(L^{p}(\Omega,{\mathscr{C}}^{\alpha}_{T_{0}})\). It is a locally Lipschitz continuous function of the initial condition \(u_{0}\) and the enhanced noise \(\widehat{\xi}\in L^{12p}(\Omega,{\mathfrak{N}})\). Furthermore \(u\) is the limit in \(L^{p}\big{(}\Omega,{\mathscr{C}}^{\alpha}_{T_{0}}\big{)}\) of the solutions \(u_{\varepsilon}\) of the renormalized equations_
\[(\partial_{t}-\Delta)u_{\varepsilon}=f\big{(}u_{\varepsilon},{\cal L}(u_{ \varepsilon}(t))\big{)}\zeta^{\varepsilon}-c_{\varepsilon}(t)(\partial_{1}ff )\big{(}u_{\varepsilon},{\cal L}(u_{\varepsilon}(t))\big{)}+g\big{(}u_{ \varepsilon},{\cal L}(u_{\varepsilon}(t))\big{)}.\]
_Proof -_ Pick \(0<T\leq T_{0}\). Write \({\bf u}^{\mu}_{\widehat{\xi},u_{0}}\) for the solution to equation (3.12). We define from Proposition 11 a map \(\Psi_{\widehat{\xi},u_{0}}\) from \(L^{p}(\Omega,{\mathscr{C}}^{\alpha}_{T})\) into itself setting
\[\Psi_{\widehat{\xi},u_{0}}(\mu)={\bf u}^{\mu}_{\widehat{\xi},u_{0}}.\]
One has from the estimate (3.14)
Integrating and using Cauchy-Schwarz inequality we get for \(\mathbb{E}\big{[}\|{\bf u}^{\mu}_{\widehat{\xi},u_{0}}\|_{{\cal D}^{\alpha, \beta}_{\sf T}}^{2p}\big{]}^{2}\) the upper bound
\[\|u_{0}\|_{C^{\alpha}}^{4p}+T^{4p\delta}\Big{(}1+\mathbb{E}\Big{[} \big{\|}\widehat{\xi}\big{\|}_{{\mathfrak{N}}}^{12p}\Big{]}\Big{)}\Big{\{} \mathbb{E}\big{[}\|{\bf u}^{\mu}_{\widehat{\xi},u_{0}}\|_{{\cal D}^{\alpha, \beta}_{\sf T}}^{2p}\Big{]}\Big{(}1+\|u_{0}\|_{C^{\alpha}}+{\cal W}_{p,{\mathscr{ C}}^{\alpha}_{T}}\big{(}\mu,\delta_{\bf 0}\big{)}\Big{)}^{6p}\] \[+{\cal W}_{p,{\mathscr{C}}^{\alpha}_{T}}\big{(}\mu,\delta_{\bf 0} \big{)}^{4p}\Big{\}}.\]
So for \(T=T\big{(}{\cal W}_{p,{\mathscr{C}}^{\alpha}_{T}}\big{(}\mu,\delta_{\bf 0} \big{)}\big{)}\) sufficiently small we have
\[\mathbb{E}\big{[}\big{\|}{\bf u}^{\mu}_{\widehat{\xi},u_{0}}\|_{{\cal D}^{ \alpha,\beta}_{\sf T}}^{2p}\big{]}^{\frac{1}{2p}}\lesssim\|u_{0}\|_{C^{\alpha} }+T^{\delta}\Big{(}1+\mathbb{E}\big{[}\big{\|}\widehat{\xi}\big{\|}_{{\mathfrak{ N}}}^{12p}\big{]}^{\frac{1}{4p}}\Big{)}\,{\cal W}_{p,{\mathscr{C}}^{\alpha}_{T}} \big{(}\mu,\delta_{\bf 0}\big{)}.\]
We have
\[\|u\|_{{\mathscr{C}}^{\alpha}_{T}}\lesssim(1+\|X\|_{{\mathscr{C}}^{\alpha}_{T}} )\|{\bf u}^{\mu}_{\widehat{\xi},u_{0}}\|_{{\cal D}^{\alpha,\beta}_{T}(X)},\]
so we have from Cauchy-Schwarz inequality
\[\mathbb{E}\big{[}\big{\|}\mathbf{u}_{\widehat{\xi},u_{0}}^{\mu} \big{\|}_{\mathscr{C}_{T}^{\alpha}}^{p}\big{]}^{\frac{1}{p}} \lesssim\mathbb{E}\big{[}\|\mathbf{u}_{\widehat{\xi},u_{0}}^{\mu} \|_{\mathcal{D}_{T^{\alpha},\beta}^{2p}}^{2p}\big{]}^{\frac{1}{2p}}\Big{(}1+ \mathbb{E}\big{[}\big{\|}\widehat{\xi}\big{\|}_{9{\mathfrak{I}}}^{2p}\big{]}^ {\frac{1}{2p}}\Big{)} \tag{3.20}\] \[\lesssim\Big{(}1+\|u_{0}\|_{C^{\alpha}}\Big{)}\Big{(}1+\mathbb{E} \big{[}\big{\|}\widehat{\xi}\big{\|}_{9{\mathfrak{I}}}^{12p}\big{]}^{\frac{1}{ 3p}}\Big{)}\Big{(}1+T^{\delta}\mathcal{W}_{p,\mathscr{C}_{T}^{\alpha}}\big{(} \mu,\delta_{{\mathfrak{0}}}\big{)}\Big{)}. \tag{3.21}\]
Pick \(A>0\). For \(M\) sufficiently big and \(T=T(M,A)\) even smaller, for every \(u_{0}\in C^{\alpha}\) with \(\|u_{0}\|_{C^{\alpha}}\leq A\), the map \(\Psi_{\widehat{\xi},u_{0}}\) sends the ball
\[\Big{\{}\mu\in L^{p}(\Omega,\mathscr{C}_{T}^{\alpha})\,;\,\mathcal{W}_{p, \mathscr{C}_{T}^{\alpha}}(\mu,\delta_{{\mathfrak{0}}})\leq M\Big{\}}\]
into itself. Now pick \(\mu_{1},\mu_{2}\) in \(L^{p}(\Omega,\mathscr{C}_{T}^{\alpha})\), two initial conditions \(u_{01},u_{02}\) in \(C^{\alpha}\) and \(\widehat{\xi}_{1},\widehat{\xi}_{2}\) in \(L^{12p}(\Omega,{\mathfrak{N}})\) such that one has
\[\mathbb{E}\big{[}\big{\|}\widehat{\xi}_{i}\big{\|}_{{\mathfrak{N}}_{1}}^{8p} \big{]}\vee\|u_{0i}\|_{C^{\alpha}}\leq A,\qquad\mathcal{W}_{p,\mathscr{C}_{T} ^{\alpha}}\big{(}\mu_{i},\delta_{{\mathfrak{0}}}\big{)}\leq M,\]
for \(1\leq i\leq 2\). Write \(\mathbf{u}_{i}\) for \(\Phi_{\widehat{\xi}_{i},u_{0i}}(\mu_{i})\) and define the random variable
\[R:=\big{\|}\widehat{\xi}_{1}\big{\|}_{{\mathfrak{N}}}+\big{\|}\widehat{\xi}_{ 2}\big{\|}_{{\mathfrak{N}}}.\]
We have from the Schauder estimates of Proposition 9
\[\mathrm{d}_{\mathcal{D}_{T}^{\alpha,\beta}}\big{(}\mathbf{u}_{1}, \mathbf{u}_{2}\big{)} \lesssim_{R}\|u_{01}-u_{02}\|_{C^{\alpha}}+T^{\delta}\bigg{\{}\big{\|} \widehat{\xi}_{1}-\widehat{\xi}_{2}\big{\|}_{{\mathfrak{N}}}+\mathrm{d}_{ \mathcal{D}_{T}^{\alpha,\beta}}(\mathbf{u}_{1},\mathbf{u}_{2})+\mathcal{W}_{p,\mathscr{C}_{T}^{\alpha}}\big{(}\mu^{1},\mu^{2}\big{)}\bigg{\}}\] \[\lesssim_{R}\|u_{01}-u_{02}\|_{C^{\alpha}}+T^{\delta}\bigg{\{} \big{\|}\widehat{\xi}_{1}-\widehat{\xi}_{2}\big{\|}_{{\mathfrak{N}}}+\mathrm{ d}_{\mathcal{D}_{T}^{\alpha,\beta}}(\mathbf{u}_{1},\mathbf{u}_{2})^{\frac{1}{2}}+ \mathcal{W}_{p,\mathscr{C}_{T}^{\alpha}}\big{(}\mu^{1},\mu^{2}\big{)}\bigg{\}},\]
for some implicit positive multiplicative constant that is a polynomial of \(R\), which is of degree \(5\), combining Proposition 9, Proposition 10 and Proposition 8. Integrating and using Cauchy-Schwarz inequality we obtain the estimate
\[\mathbb{E}\big{[}\mathrm{d}_{\mathcal{D}_{T}^{\alpha,\beta}}\big{(} \mathbf{u}_{1},\mathbf{u}_{2}\big{)}^{2p}\big{]}^{2}\lesssim \|u_{01}-u_{02}\|_{C^{\alpha}}^{4p}+\mathbb{E}\big{[}\big{\|} \widehat{\xi}_{1}-\widehat{\xi}_{2}\big{\|}_{{\mathfrak{N}}}^{4p}\big{]}\] \[+T^{4p\delta}\Big{\{}\mathbb{E}\big{[}\mathrm{d}_{\mathcal{D}_{T} ^{\alpha,\beta}}\big{(}\mathbf{u}_{1},\mathbf{u}_{2}\big{)}^{2p}\big{]}+ \mathcal{W}_{p,\mathscr{C}_{T}^{\alpha}}\big{(}\mu_{1},\mu_{2}\big{)}^{4p} \Big{\}},\]
so taking \(T>0\) deterministic, small enough, independently of \(u_{0i}\) and \(\widehat{\xi}_{i}\), ensures that we have
\[\mathbb{E}\big{[}\mathrm{d}_{\mathcal{D}_{T}^{\alpha,\beta}}\big{(}\mathbf{u}_{ 1},\mathbf{u}_{2}\big{)}^{2p}\big{]}^{2}\lesssim\|u_{01}-u_{02}\|_{C^{\alpha}} ^{4p}+\mathbb{E}\big{[}\|\widehat{\xi}_{1}-\widehat{\xi}_{2}\|^{4p}\big{]}+T^{4 p\delta}\mathcal{W}_{p,\mathscr{C}_{T}^{\alpha}}(\mu_{1},\mu_{2})^{4p}.\]
We have moreover
\[\|u_{1}-u_{2}\|_{\mathscr{C}_{T}^{\alpha}}\lesssim\big{(}1+\|X_{1}\|_{\mathscr{ C}_{T}^{\alpha}}\big{)}\,\mathrm{d}_{\mathcal{D}_{T}^{\alpha,\beta}}(\mathbf{u}_{1}, \mathbf{u}_{2})+\|X_{1}-X_{2}\|_{\mathscr{C}_{T}^{\alpha}}\|\mathbf{u}_{2}\|_{ \mathcal{D}_{T}^{\alpha,\beta}(X_{2})},\]
so we obtain from Cauchy-Schwarz inequality that
\[\mathbb{E}\big{[}\|u_{1}-u_{2}\|_{\mathscr{C}_{T}^{\alpha}}^{p}\big{]}^{2} \lesssim\big{(}1+\mathbb{E}[\|X_{1}\|_{\mathscr{C}_{T}^{\alpha}}]\big{)}\, \mathbb{E}\big{[}\mathrm{d}_{\mathcal{D}_{T}^{\alpha,\beta}}(\mathbf{u}_{1}, \mathbf{u}_{2})^{2p}\big{]}+\mathbb{E}\big{[}\|X_{1}-X_{2}\|_{\mathscr{C}_{T}^{ \alpha}}^{2p}\big{]}\mathbb{E}\big{[}\|\mathbf{u}_{2}\|_{\mathcal{D}_{T}^{ \alpha,\beta}(X_{2})}^{2p}\big{]},\]
hence
\[\mathcal{W}_{p,\mathscr{C}_{T}^{\alpha}}(\Psi(\mu_{1}),\Psi(\mu_{2}))\lesssim\|u_ {01}-u_{02}\|_{C^{\alpha}}^{4p}+\mathbb{E}\big{[}\|\widehat{\xi}_{1}-\widehat{ \xi}_{2}\|_{{\mathfrak{N}}_{1}}^{4p}\big{]}+T^{\delta}\mathcal{W}_{p,\mathscr{C}_{T }^{\alpha}}(\mu_{1},\mu_{2}).\]
We conclude that equation (1.3) has a unique local solution \(\mathbf{u}\) in \(\mathcal{P}_{p}(\mathscr{C}_{T}^{\alpha})\), and that the law \(\mathcal{L}(\mathbf{u})\in\mathcal{P}_{p}(\mathcal{D}_{T}^{\alpha,\beta}(X))\) of \(\mathbf{u}\) depends continuously on \(\widehat{\xi}\in L^{12p}(\Omega,{\mathfrak{N}})\) and on \(u_{0}\in C^{\alpha}\). \(\rhd\)
We remark that the integrability exponent \(12p\) in the condition \(\widehat{\xi}\in L^{12p}(\Omega,{\mathfrak{N}})\) in Proposition 15 comes from both the nonlinearity and the use of the Cauchy-Schwarz inequality when passing from \(\mathcal{D}_{T}^{\alpha,\beta}\) to \(\mathscr{C}_{T}^{\alpha}\). In the next section we obtain a better exponent \(8p\) as the last step is skipped, working directly in \(\mathcal{D}_{T}^{\alpha,\beta}.\) For the class of Gaussian noises of Theorem 6 we have \(\widehat{\xi}\in L^{q}(\Omega,{\mathfrak{N}})\) for all \(1\leq q<\infty\).
## 4 Mean field type singular SPDEs
We deal in this section with a large family of mean field type singular SPDEs (1.3). The enhancement of the noise needed to make sense of (1.3) is specific to the mean field setting and described in Section 4.1. The paracontrolled structure needed to make sense of (1.3) is described in Section 4.2. This structure is proved to be stable by a certain solution map to a fixed point equation (4.5) similar to (1.3) where the measure argument is frozen and has a particular structure. The proper statement and proof of item _(a)_ of Theorem 1 is done in Section 4.3.
### Mean field enhancement of the noise
We work here as above with the class of random Gaussian noises specified in Theorem 6. The random field \(\xi\) is initially defined on a probability space \((\Omega,\mathcal{F},\mathbb{P})\). We extend it canonically as a random variable defined on the probability space \(\left(\Omega^{2},\mathcal{F}^{\otimes 2},\mathbb{P}^{\otimes 2}\right)\) setting
\[\xi(\omega,\varpi)=\xi(\omega).\]
We also define
\[\overline{\xi}(\omega,\varpi):=\xi(\varpi);\]
this is under \(\mathbb{P}^{\otimes 2}\) an independent copy of \(\xi\). For a distribution \(\Lambda\) on \(\mathsf{T}^{2}\) and a positive regularization parameter \(\varepsilon\) set
\[\Lambda^{\varepsilon}:=\Lambda\circ e^{\varepsilon\Delta}\in C^{\infty}.\]
Recall \(T_{0}\) stands for the time horizon that we use in our definition of the space of enhanced noises \(\mathfrak{N}\) - the interval \([0,T_{0}]\) is our maximal interval of time. Pick \(1\leq p<\infty\). We define on \(\left(\Omega^{2},\mathcal{F}^{\otimes 2},\mathbb{P}^{\otimes 2}\right)\) the random variable
\[\overline{X}:=\mathscr{L}^{-1}(\overline{\xi}).\]
and denote by
\[\xi\odot\overline{X}\in L^{\mathfrak{N}p}(\mathbb{P}^{\otimes 2}),\]
the limit of the \(\xi^{\varepsilon}(\omega)\odot\mathscr{L}^{-1}(\overline{\xi}^{\varepsilon}( \varpi))\) as \(\varepsilon>0\) goes to \(0\). We have
\[\left\|\left(\xi\odot\overline{X}\right)(\omega,\cdot)\right\|_{L^{\mathfrak{ N}p}(\Omega,C_{T_{0}}C^{2\alpha-2})}<\infty\]
and
\[\left\|\left(\xi\odot\overline{X}\right)(\cdot,\varpi)\right\|_{L^{\mathfrak{ N}p}(\Omega,C_{T_{0}}C^{2\alpha-2})}<\infty\]
for \(\mathbb{P}\)-almost every \(\omega\in\Omega\) and \(\varpi\in\Omega\). We will use the notation \(\overline{\mathbb{E}}\) to denote the expectation operator with respect to \(\varpi\) on the product probability space.
17 - Definition: The **mean field enhancement of the random noise**\(\xi\) is the random variable defined on \(\left(\Omega^{2},\mathcal{F}^{2},\mathbb{P}^{\otimes 2}\right)\). We define on \((\Omega,\mathcal{F},\mathbb{P})\) the random variable
\[\begin{split}\langle\overline{\xi}^{+}\rrbracket_{\omega}& :=\|\xi(\omega)\|_{C_{T_{0}}C^{\alpha-2}}+\left\|\xi^{(2)}(\omega) \right\|_{C_{T_{0}}C^{2\alpha-2}}\\ &\quad+\overline{\mathbb{E}}\big{[}\|\overline{\xi}(\omega, \cdot)\|_{C_{T_{0}}C^{\alpha-2}}^{4}\big{]}^{\frac{1}{4}}+\overline{\mathbb{E} }\big{[}\|(\xi\odot\overline{X})(\omega,\cdot)\|_{C_{T_{0}}C^{2\alpha-2}}^{4} \big{]}^{\frac{1}{4}}.\end{split} \tag{4.1}\]
This is an element of \(L^{\mathfrak{N}p}(\Omega,\mathbf{R})\) - it actually has moments of any finite order.
### Paracontrolled structure for mean field singular SPDEs
The appropriate notion of paracontrolled structure for the study of a large class of mean field singular SPDEs is captured by the following definition.
18 - Definition: Pick an \(L^{2}\) random variable \(\Lambda:\Omega\to C^{\alpha}\). A \(C^{\alpha}\)**-valued random variable**\(v\) on \(\Omega\) is said to be \(\omega\)**-paracontrolled by**\(\Lambda\) if there are some random variables**
\[\delta_{\varepsilon}v:\Omega\to C^{\beta}\]
\[\delta_{\mu}v:\Omega\to L^{\frac{4}{3}}\big{(}\Omega,C^{\beta}\big{)}\]
_and_
\[v^{\sharp}:\Omega\to C^{\alpha+\beta}\]
_such that one has_
\[v(\omega)=(\delta_{z}v)(\omega)<\Lambda(\omega)+\overline{\mathbb{E}}\big{[}( \delta_{\mu}v)(\omega,\cdot)<\overline{\Lambda}(\cdot)\big{]}+v^{\sharp}(\omega) \tag{4.2}\]
_for \(\mathbb{P}\)-almost all \(\omega\in\Omega\), and_
\[\|\delta_{z}v\|_{L^{2}(\Omega)}+\|\delta_{\mu}v\|_{L^{2}(\Omega)}+\|v^{\sharp} \|_{L^{2}(\Omega)}<\infty.\]
We simply say that \(v\) is paracontrolled by \(\Lambda\). We first check that the datum of a mean field enhancement \(\widehat{\xi}^{+}\) of the random noise \(\xi\) comes with a natural definition of the product of \(\xi\) by a random function \(v\in C_{T}C^{\alpha}\) with the property that \(v_{t}\) is paracontrolled by \(X_{t}\) for each \(0<t\leq T\). To emphasize the fact that we use the paracontrolled structure of \(v\) to make sense of that product we write
\[\mathbf{v}_{t}\xi_{t},\]
using a bold letter \(\mathbf{v}\). Set then
\[(\mathbf{v}_{t}\xi_{t})(\omega):=v_{t}(\omega)<\xi_{t}(\omega)+(\mathbf{v}_{t }\xi_{t})^{\sharp}(\omega)\]
where
\[(\mathbf{v}_{t}\xi_{t})^{\sharp}(\omega) :=\xi_{t}(\omega)<v_{t}(\omega)+v_{t}^{\#}(\omega)\odot\xi_{t}(\omega)\] \[\quad+\mathsf{C}\big{(}(\delta_{z}v)(\omega),X(\omega),\xi_{t}( \omega)\big{)}+\overline{\mathbb{E}}\Big{[}\mathsf{C}\big{(}(\delta_{\mu}v)( \omega,\cdot),\overline{X}(\cdot),\xi_{t}(\omega)\big{)}\Big{]}\] \[\quad+(\delta_{z}v)(\omega)\xi_{t}^{(2)}(\omega)+\overline{ \mathbb{E}}\Big{[}(\delta_{\mu}v)(\omega,\cdot)\,\big{(}\xi\odot\overline{X} )(\omega,\cdot)\Big{]}.\]
The proof of the next statement comes from standard continuity estimates on paraproducts and correctors and from Holder inequality in the expectation \(\overline{\mathbb{E}}\); it is left to the reader.
19 - Proposition: One has \(\mathbb{P}\)-almost surely \(\mathbf{v}\xi\in C_{T}C^{\alpha-2}\) and
\[\|(\mathbf{v}_{t}\xi_{t})^{\#}(\omega)\|_{C^{\alpha+\beta-2}}\lesssim\big{(}1 +(\widehat{\xi}^{+})^{\sharp}_{\omega}\big{)}\bigg{(}\|(\delta_{z}v)(\omega) \|_{C^{\beta}}+\overline{\mathbb{E}}\big{[}\|\delta_{\mu}v\|_{C^{\beta}}^{ \frac{4}{3}}\big{]}^{\frac{3}{4}}+\|v^{\#}(\omega)\|_{C^{\alpha+\beta}}\bigg{)}.\]
Furthermore, for two enhanced noises \(\widehat{\xi}^{1+},\widehat{\xi}^{2+}\) in our class, and with \(v^{i}\in C_{T}C^{\alpha}\) with \(v^{i}_{t}\) paracontrolled by \(X^{i}_{t}\), for integers \(1\leq i\leq 2\), for each \(0<t\leq T\), one has
\[\|(\mathbf{v}_{t}^{1}\xi_{t})^{\#}(\omega)-(\mathbf{v}_{t}^{2}\xi _{t}^{2})^{\#}(\omega)\|_{C^{\alpha+\beta-2}}\] \[\lesssim(\star)_{12}(\omega)\bigg{(}\|\delta_{z}v^{1}-\delta_{z} v^{2}\|_{C^{\beta}}+\overline{\mathbb{E}}\big{[}\|\delta_{\mu}v^{1}-\delta_{ \mu}v^{2}\|_{C^{\beta}}^{\frac{4}{3}}\big{]}^{\frac{3}{4}}+\|v^{1\#}-v^{2\#}\|_ {C^{\alpha+\beta}}+(\widehat{\xi}^{+1}-\widehat{\xi}^{+2})_{\omega}\bigg{)},\]
where
\[(\star)_{12}(\omega)=P\Big{(}\max_{i\in\{1,2\}}\Big{\{}(\widehat{\xi}^{+i})_{ \omega},\|\delta_{z}v^{i}\|_{C^{\alpha}},\overline{\mathbb{E}}\big{[}\|\delta_ {\mu}v^{i}\|_{C^{\alpha}}^{\frac{4}{3}}\big{]}^{\frac{3}{4}},\|v^{\#i}\|_{C^{ \alpha+\beta}}\Big{\}}\Big{)},\]
for some quadratic polynomial \(P\).
For a noise \(\xi\in C_{T}C^{\alpha-2}\) in our class of noises we set
\[X:=\mathscr{L}^{-1}(\xi)\in\mathscr{C}^{\alpha}_{T}.\]
Fix \(t>0\). We prove now that the class of random functions on \(\mathsf{T}^{2}\) paracontrolled by \(X_{t}\) is stable by a certain family of nonlinear functions \(f:C^{\alpha}\times\mathcal{W}_{p}(C^{\alpha})\to C^{\alpha}\). This comes under the form of a paralinearization formula. Our primary goal is to give a useful description of the random variable \(f(v_{t},\mathcal{L}(v_{t}))\) when \(v_{t}\) is paracontrolled by \(X_{t}\). For that purpose it will be useful to lift any function \(f:C^{\alpha}\times\mathcal{W}_{p}(C^{\alpha})\to C^{\alpha}\) into a real valued function on \(C^{\alpha}\times L^{p}(\Omega,\overline{\mathbb{P}};C^{\alpha})\) setting, with a slight abuse of notation,
\[f(v,A):=f\big{(}v,\mathcal{L}(A)\big{)},\]
for \(A\in L^{p}(\Omega,\overline{\mathbb{F}};C^{\alpha}_{T})\). We assume in this work that \(f\) depends polynomially on its measure argument
\[f(u,\mu)(z)=\int F\big{(}u(z),v_{1}(z),\ldots,v_{m}(z)\big{)}\,\mu^{\otimes m}( dv_{1}\ldots dv_{m}) \tag{4.3}\]
for some integer \(m\geq 1\), for a function \(F:\mbox{\sf R}^{m+1}\to\mbox{\sf R}\) of class \(C^{3}_{b}\) - or is a linear combination of such monomials. With \(m=1\), and compared to the long range interaction (3.9) studied in Section 3.3, this function corresponds to a pointwise singular Dirac kernel
\[k(z,z^{\prime})=\delta_{z}(z^{\prime}).\]
It will be useful to work on the probability space \((\Omega^{m+1},\mathcal{F}^{\otimes(m+1)},\mathbb{P}^{\otimes(m+1)})\) and write
\[(\omega,\omega_{1},\ldots,\omega_{m})\]
for an element of \(\Omega^{m+1}\). We set \(\overline{\mathbb{E}}^{i}\) for the expectation operator with respect to the variable \(\omega_{i}\) and for \(I=(i_{1},\ldots,i_{k})\) a subset of the integer interval \([\![1,m]\!]\) we write \(\overline{\mathbb{E}}^{I}\) for the expectation operator with respect to the variables \((\omega_{i_{1}},\ldots,\omega_{i_{k}})\). In those terms, and for \(A\in L^{p}(\Omega,\overline{\mathbb{F}};C^{\alpha}_{T})\) and \(\mu=\mathcal{L}(A)\), one has
\[f(v,\mu)(z)=f(v,A)(z)=\overline{\mathbb{E}}^{[\![1,m]\!]}\Big{[}F\big{(}v(z),A (\omega_{1})(z),\ldots,A(\omega_{m})(z)\big{)}\Big{]}.\]
As \(F\in C^{3}_{b}\subset C^{1}_{b}\) one has
\[\|F(v,A(\omega_{1}),\ldots,A(\omega_{m})\|_{C^{\alpha}}\lesssim 1+\|v\|_{C^{ \alpha}}+\sum_{j=1}^{m}\|A(\omega_{j})\|_{C^{\alpha}},\]
and as \(A\in C^{\alpha}\) is integrable the function \(f(v,A)\) on \(\mathbb{T}^{2}\) is indeed an element of \(C^{\alpha}\). For \(i\in[\![1,m]\!]\) we set
\[\partial_{i}f(v,A)(z):=\overline{\mathbb{E}}^{[\![1,m]\!]}\Big{[}(\partial_{i }F)\big{(}v(z),A(\omega_{1})(z),\cdots,A(\omega_{m})(z)\big{)}\Big{]}.\]
_Proposition. Fix \(t>0\) and assume we are given two \(L^{\mathcal{B}p}(\Omega,\mathcal{D}^{\alpha}(X_{t}))\) random variables \((h^{\prime},h^{\sharp})\) and \((k^{\prime},k^{\sharp})\) with corresponding \(C^{\alpha}\) functions \(h,k\) on \(\mathbb{T}^{2}\). Then \(f(h,k)\) is paracontrolled by \(X_{t}\) in the sense of Definition 18, with_
\[(\delta_{z}f)(h,k)(\omega)=(\partial_{1}f)\big{(}h(\omega),k\big{)}h^{\prime}(\omega)\]
_and_
\[(\delta_{\mu}f)(h,k)(\omega,\varpi)\] \[\qquad=\sum_{j=1}^{m}\overline{\mathbb{E}}^{[\![1,m]\!]\setminus\{ j\}}\bigg{[}\big{(}\partial_{j+1}F\big{)}\Big{(}h(\omega),k(\omega_{1}), \cdots,k(\omega_{j-1}),k(\varpi),k(\omega_{j+1}),\cdots,k(\omega_{m})\Big{)} \bigg{]}k^{\prime}(\varpi),\]
_and_
\[\|f(h(\omega),k)^{\#}\|_{C^{\alpha+\beta}}\lesssim\bigg{(}1+\|X_{ t}(\omega)\|_{C^{\alpha}}^{2}+\overline{\mathbb{E}}\big{[}\|\overline{X}_{t}\|_{C^ {\alpha}}^{4}\big{]}^{\frac{1}{2}}\bigg{)}\] \[\qquad\qquad\qquad\qquad\qquad\times\bigg{(}1+\|h^{\prime}(\omega )\|_{C^{\beta}}+\|h^{\#}(\omega)\|_{C^{\alpha}}+\overline{\mathbb{E}}\big{[}\|k ^{\prime}\|_{C^{\beta}}^{4}\big{]}^{\frac{1}{4}}+\overline{\mathbb{E}}\big{[} \|k^{\#}\|_{C^{\alpha}}^{4}\big{]}^{\frac{1}{4}}\bigg{)}\] \[\qquad\qquad\qquad\qquad\times\bigg{(}1+\|h^{\prime}(\omega)\|_{C ^{\beta}}+\|h^{\#}(\omega)\|_{C^{\alpha+\beta}}+\overline{\mathbb{E}}\big{[} \|k^{\prime}\|_{C^{\beta}}^{4}\big{]}^{\frac{1}{4}}+\overline{\mathbb{E}} \big{[}\|k^{\#}\|_{C^{\alpha+\beta}}^{4}\big{]}^{\frac{1}{4}}\bigg{)}.\]
_Moreover for \(\widehat{\xi}^{+i}\in L^{\mathcal{B}p}(\Omega^{2},\mathfrak{V}^{2})\) and \(h\) and \(k\) in \(L^{\mathcal{B}p}(\Omega,\mathcal{D}^{\alpha}(X_{t}))\), for \(1\leq i\leq 2\), we have_
\[\|f\big{(}h^{1}(\omega),k^{1}\big{)}^{\#}-f\big{(}h^{2}(\omega),k^ {2}\big{)}^{\#}\|_{C^{\alpha+\beta}}\lesssim(\star)_{12}(\omega)\times\] \[\qquad\bigg{\{}\|X_{t}^{1}(\omega)-X_{t}^{2}(\omega)\|_{C^{ \alpha}}+\overline{\mathbb{E}}\big{[}\big{\|}\overline{X}_{t}^{1}-\overline{X} _{t}^{2}\big{\|}_{C^{\alpha}}^{4}\big{]}^{\frac{1}{4}}+d_{\mathcal{D}^{\beta}} \big{(}h^{1}(\omega),h^{2}(\omega)\big{)}+\overline{\mathbb{E}}\big{[}d_{ \mathcal{D}^{\beta}}\big{(}k^{1},k^{2}\big{)}^{4}\big{]}^{\frac{1}{4}}\bigg{\}}, \tag{4.4}\]
_where_
\[(\star)_{12}(\omega)=P\Big{(}\max_{i\in\{1,2\}}\Big{\{}\|X^{i}_{t}(\omega)\|_{C^{ \alpha}},\,\overline{\mathbb{E}}\big{[}\|X^{i}_{t}\|^{4}_{C^{\alpha}}\big{]}^{ \frac{1}{4}},\,\|h^{i}(\omega)\|_{\mathcal{D}^{\alpha}},\,\overline{\mathbb{E}} \big{[}\|k^{i}\|^{4}_{\mathcal{D}^{\alpha}}\big{]}^{\frac{1}{4}}\Big{\}}\Big{)},\]
_for some polynomial \(P\)._
Proof.: One has from paralinearisation
\[F\big{(}h(\omega),k(\omega_{1}),\ldots,k(\omega_{m})\big{)}\] \[\qquad=\partial_{1}F\big{(}h(\omega),k(\omega_{1}),\ldots,k( \omega_{m})\big{)}<h(\omega)+\sum_{j=1}^{m}\partial_{j+1}F\big{(}h(\omega),k( \omega_{1}),\ldots,k(\omega_{m})\big{)}<k(\omega_{j})\] \[\qquad\quad+R_{F}\big{(}h(\omega),k(\omega_{1}),\ldots,k(\omega_ {m})\big{)}\] \[\qquad=\big{(}\partial_{1}F\big{(}h(\omega),k(\omega_{1}),\ldots,k(\omega_{m})\big{)}h^{\prime}(\omega)\big{)}<X_{t}(\omega)\] \[\qquad\quad+\sum_{j=1}^{m}\Big{(}\partial_{j+1}F\big{(}h(\omega),k(\omega_{1}),\ldots,k(\omega_{m})\big{)}k^{\prime}(\omega_{j})\Big{)}< \overline{X}_{t}(\omega_{j})+R_{F}+R_{0}+\sum_{j=1}^{m}R_{j}\]
where \(R_{F}=R_{F}\big{(}h(\omega),k(\omega_{1}),\cdots,k(\omega_{m})\big{)}\in C^{ \alpha+\beta}\) and
\[R_{0}=\Big{\{}\partial_{1}F\big{(}h(\omega),k(\omega_{1}),\ldots,k(\omega_{m})\big{)}<\big{(}h^{\prime}(\omega)<X_{t}(\omega)\big{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\big{(} \partial_{1}F\big{(}h(\omega),k(\omega_{1}),\ldots,k(\omega_{m})\big{)}h^{ \prime}\big{)}<X_{t}(\omega)\Big{\}}\] \[\qquad\quad+\partial_{1}F\big{(}h(\omega),k(\omega_{1}),\cdots,k( \omega_{m})\big{)}<h^{\#}(\omega),\] \[R_{j}=\Big{\{}\partial_{j+1}F\big{(}h(\omega),k(\omega_{1}), \cdots,k(\omega_{m})\big{)}<\big{(}k^{\prime}(\omega_{j})<\overline{X}_{t}( \omega_{j})\big{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\big{(}\partial_{j+1} F\big{(}h(\omega),k(\omega_{1}),\cdots,k(\omega_{m})\big{)}k^{\prime}(\omega_{j}) \big{)}<\overline{X}_{t}(\omega_{j})\Big{\}}\] \[\qquad\quad+\partial_{j+1}F\big{(}h(\omega),k(\omega_{1}),\cdots,k (\omega_{m})\big{)}<k^{\#}(\omega_{j}).\]
From classical results in paradifferential calculus we have
\[\|R_{F}\|_{C^{\alpha+\beta}}\lesssim\|F\|_{C^{2}}\Big{(}1+\|h(\omega)\|^{2}_{ C^{\alpha}}+\sum_{j=1}^{m}\|k(\omega_{j})\|^{2}_{C^{\alpha}}\Big{)}\]
\[\lesssim\Big{(}1+\|X_{t}(\omega)\|^{2}_{C^{\alpha}}+\sum_{j=1}^{m}\|\overline {X}(\omega_{j})\|^{2}_{C^{\alpha}}\Big{)}\]
\[\qquad\qquad\qquad\times\Big{(}1+\|h^{\prime}(\omega)\|^{2}_{C^{\beta}}+\|h^{ \#}(\omega)\|^{2}_{C^{\alpha}}+\sum_{j=1}^{m}\|k^{\prime}(\omega_{j})\|^{2}_{C^ {\beta}}+\|k^{\#}(\omega_{j})\|^{2}_{C^{\alpha}}\Big{)},\]
and
\[\|R_{0}\|_{C^{\alpha+\beta}}\lesssim\|\partial_{1}F\big{(}h(\omega),k(\omega_ {1}),\cdots,k(\omega_{m})\big{)}\|_{C^{\alpha}}\Big{(}\|h^{\prime}(\omega)\|_{ C^{\beta}}\|X_{t}(\omega)\|_{C^{\alpha}}+\|h^{\#}(\omega)\|_{C^{\alpha+\beta}} \Big{)}\]
\[\lesssim\Big{(}1+\|h(\omega)\|_{C^{\alpha}}+\sum_{j=1}^{m}\|k(\omega_{j})\|_{ C^{\alpha}}\Big{)}\Big{(}\|h^{\prime}\|_{C^{\beta}}\|X_{t}(\omega)\|_{C^{ \alpha}}+\|h^{\#}(\omega)\|_{C^{\alpha+\beta}}\Big{)}\]
\[\lesssim\Big{(}1+\|X_{t}(\omega)\|^{2}_{C^{\alpha}}+\sum_{j=1}^{m}\|\overline {X}(\omega_{j})\|^{2}_{C^{\alpha}}\Big{)}\]
\[\qquad\times\Big{(}1+\|h^{\prime}(\omega)\|_{C^{\beta}}+\|h^{\#}(\omega)\|_{C^{ \alpha}}+\sum_{j=1}^{m}\|k^{\prime}(\omega_{j})\|_{C^{\beta}}+\|k^{\#}(\omega_{ j})\|_{C^{\alpha}}\Big{)}\]
\[\qquad\times\Big{(}1+\|h^{\prime}(\omega)\|_{C^{\beta}}+\|h^{\#}(\omega)\|_{C^{ \alpha+\beta}}+\sum_{j=1}^{m}\|k^{\prime}(\omega_{j})\|_{C^{\beta}}+\|k^{\#}( \omega_{j})\|_{C^{\alpha}}\Big{)},\]
and, for \(1\leq i\leq m\), we have for \(\|R_{i}\|_{C^{\alpha+\beta}}\) the upper bound
\[\Big{(}1 +\|X_{t}(\omega)\|_{C^{\alpha}}^{2}+\sum_{j=1}^{m}\|\overline{X}( \omega_{j})\|_{C^{\alpha}}^{2}\Big{)}\] \[\times\bigg{\{}1+\|h^{\prime}(\omega)\|_{C^{\beta}}+\|h^{\#}( \omega)\|_{C^{\alpha}}+\sum_{j=1}^{m}\|k^{\prime}(\omega_{j})\|_{C^{\beta}}+\| k^{\#}(\omega_{j})\|_{C^{\alpha}}\bigg{\}}\] \[\times\bigg{\{}1+\|h^{\prime}(\omega)\|_{C^{\beta}}+\|h^{\#}( \omega)\|_{C^{\alpha}}+\|k^{\#}(\omega_{i})\|_{C^{\alpha+\beta}}+\sum_{j=1}^{m }\|k^{\prime}(\omega_{j})\|_{C^{\beta}}+\|k^{\#}(\omega_{j})\|_{C^{\alpha}} \bigg{\}}.\]
So we have for \(\big{\|}R_{F}+\sum_{j=0}^{m}R_{j}\big{\|}_{C^{\alpha+\beta}}\) the bound
\[\Big{(}1+\|X_{t}(\omega)\|_{C^{\alpha}}^{2}+\sum_{j=1}^{m}\| \overline{X}_{t}(\omega_{j})\|_{C^{\alpha}}^{2}\Big{)}\Big{(}1+\|h^{\prime}\| _{C^{\beta}}+\|h^{\#}\|_{C^{\alpha}}+\sum_{j=1}^{m}\|k^{\prime}(\omega_{j})\| _{C^{\beta}}+\|k^{\#}(\omega_{j})\|_{C^{\alpha}}\Big{)}\] \[\times\Big{(}1+\|h^{\prime}\|_{C^{\beta}}+\|h^{\#}\|_{C^{\alpha+ \beta}}+\sum_{j=1}^{m}\|k^{\prime}(\omega_{j})\|_{C^{\beta}}+\|k^{\#}(\omega_{ j})\|_{C^{\alpha+\beta}}\Big{)}.\]
Taking the \(\overline{\mathbb{E}}^{\llbracket 1,m\rrbracket}\) expectation one gets
\[f(h(\omega),k) =\big{(}\partial_{1}f(h(\omega),k)h^{\prime}(\omega)\big{)}<X_{t }(\omega)\] \[\quad+\overline{\mathbb{E}}^{\llbracket 1,m\rrbracket}\bigg{[}\sum_{j=1}^ {m}\Big{(}\big{(}\partial_{j+1}F\big{)}(h(\omega),k(\omega_{1}),\ldots,k( \omega_{m}))k^{\prime}(\omega_{j})\Big{)}<\overline{X}_{t}(\omega_{j})\bigg{]}+ f(h(\omega),k)^{\#}\] \[=\big{(}\partial_{1}f(h(\omega),k)h(\omega)^{\prime}\big{)}<X_{t }(\omega)\,+\sum_{j=1}^{m}\] \[\overline{\mathbb{E}}\bigg{[}\overline{\mathbb{E}}^{\llbracket 1,m \rrbracket\setminus\{j\}}\bigg{[}\big{(}\partial_{j+1}F\big{)}\Big{(}h(\omega),k(\omega_{1}),\cdots,k(\omega_{j-1}),k(\varpi),k(\omega_{j+1}),\ldots,k( \omega_{m})\Big{)}k^{\prime}(\varpi)\bigg{]}<\overline{X}_{t}(\varpi)\bigg{]}\] \[\quad+f(h(\omega),k)^{\#},\]
with
\[\|f(h(\omega),k)^{\#}\|_{C^{\alpha+\beta}}\] \[\quad\lesssim\Big{(}1+\|X_{t}(\omega)\|_{C^{\alpha}}^{2}+\overline {\mathbb{E}}\big{[}\|\overline{X}_{t}\|_{C^{\alpha}}^{4}\big{]}^{\frac{1}{2}} \Big{)}\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \times\bigg{(}1+\|h^{\prime}(\omega)\|_{C^{\beta}}+\|h^{\#}(\omega)\|_{C^{ \alpha}}+\overline{\mathbb{E}}\big{[}\|k^{\prime}\|_{C^{\beta}}^{4}\big{]}^{ \frac{1}{4}}+\overline{\mathbb{E}}\big{[}\|k^{\#}\|_{C^{\alpha+\beta}}^{4} \big{]}^{\frac{1}{4}}\bigg{)}\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\
that the fixed measure dynamics is defined on a fixed interval, not on a small interval, as is typically given by fixed point arguments. Assumption _(B)_ guarantees the long time existence.
Recall from (3.4) the definition of the maps \(\mathsf{L}_{c}\), for \(c\in C([0,T_{0}],\mathsf{R})\), and the existence of functions \(c_{n}\in C([0,T_{0}],\mathsf{R})\) such that the random variables \(\mathsf{L}_{c_{n}}(\xi_{n})\) are converging in \(L^{\mathscr{B}}(\Omega,\mathsf{R})\) to the random variable \(\xi\odot\mathscr{L}^{-1}(\xi)\). We emphasize below in the product (4.5) of \(f(u,v)\) by \(\xi\) the fact that \(u\) is seen therein as a paracontrolled function by using the bold notation \(\mathbf{u}\).
_21 - Proposition. Fix \(0<T_{0}<\infty\). Assume the assumptions **(A\({}_{f}\)-A\({}_{g}\)-B)** hold true. For every \(\mathbf{v}\in L^{p}\big{(}\Omega,\mathcal{D}^{\alpha,\beta}_{T_{0}}(X)\big{)}\) and \(u_{0}\in C^{\alpha}\) there exists a positive random time_
\[T=T\big{(}\!\big{(}\!\big{(}\widehat{\xi^{+}}\!\big{)}\!\!\big{)}\!\!_{ \omega},\mathbf{v},u_{0}\big{)}\leq T_{0}\]
_and a unique solution in \(\mathbf{u}_{\widehat{\xi^{+}},u_{0},\mathbf{v}}\in\mathcal{D}^{\alpha,\beta}_ {T}(X)\) to the equation_
\[(\partial_{t}-\Delta)u=f(\mathbf{u},\mathbf{v})\,\xi+g(u,v), \tag{4.5}\]
_where \(\mathbf{u}\) is \(\omega-\)paracontrolled by \(X\) with null \(\delta_{\mu}\) derivative. This random solution \(\mathbf{u}_{\widehat{\xi^{+}},u_{0},\mathbf{v}}(\omega)\) satisfies the local Lipschitz property_
\[d_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(}\mathbf{u}_{\widehat{\xi^{+}}_{1},u _{0},\mathbf{v}_{1}}(\omega),\mathbf{u}_{\widehat{\xi^{+}}_{2},u_{0},\mathbf{ v}_{2}}(\omega)\big{)}\lesssim_{\omega}\|u_{01}-u_{02}\|_{C^{\alpha}}+\overline{\mathbb{E}} \big{[}\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{L^{p}(\Omega,\mathcal{D}^{\alpha, \beta}_{T})}\big{]}+(\widehat{\xi^{+}}_{1}-\widehat{\xi^{+}}_{2})\!\!\!_{ \omega}. \tag{4.6}\]
_The random function \(u(\omega)\in\mathscr{C}^{\alpha}_{T}\) associated with \(\mathbf{u}_{\widehat{\xi^{+}},u_{0},\mathbf{v}}\) is the limit in probability of the solutions \(u_{n}\) of the equations_
\[(\partial_{t}-\Delta)u_{n}=f(u_{n},v)\,\xi_{n}+g(u_{n},v)-c_{n}(t)(f\partial_{ 1}f)(u_{n},v), \tag{4.7}\]
_with initial condition \(u_{0}\)._
We should more properly write \(u(\omega),u^{\prime}(\omega),u^{\sharp}(\omega)\) rather than just \(u,u^{\prime},u^{\sharp}\). Also the randomness in \(\mathbf{u}_{\widehat{\xi^{+}},u_{0},\mathbf{v}}(\omega)\) only occurs via \(\widehat{\xi^{+}}(\omega)\).
_Proof -_ Rewrite equation (4.5) as the fixed point equation
\[u_{t}=P_{t}u_{0}+\int_{0}^{t}P_{t-s}\big{(}f(\mathbf{u}_{s},\mathbf{v}_{s}) \xi_{x}+g(u_{s},v_{s})\big{)}\,ds.\]
We get from Lemma 19 and Lemma 20 that \(f(\mathbf{u}_{s},\mathbf{v}_{s})\xi_{x}+g(u_{s},v_{s})\) is for each \(s\) an element of \(\mathcal{D}^{\alpha}(\xi_{s})\) with Gubinelli derivative \(f(\mathbf{u}_{s},\mathbf{v}_{s})\) and remainder \(\big{(}f(\mathbf{u}_{s},\mathbf{v}_{s})\xi\big{)}^{\#}+g(u_{s},v_{s})\). With Proposition 9 in mind we check that \(f(u,v)\in\mathscr{C}^{\alpha}_{T}\) and \(\big{(}f(\mathbf{u}_{s},\mathbf{v}_{s})\xi\big{)}^{\#}+g(u_{s},v_{s})\) satisfies (3.7). Recall from (4.1) the definition of the mixed pathwise/averaged random variable \((\!\big{(}\widehat{\xi^{+}}\!\big{)}\!\!_{\omega}\). Take \(\mathbf{u}\in\mathcal{D}^{\alpha,\beta}_{T}(X)\). First one has
\[\|f(u,v)\|_{\mathscr{C}^{\alpha}_{T}} \lesssim 1+\|u\|_{\mathscr{C}^{\alpha}_{T}}+\overline{\mathbb{E}} \big{[}\|v\|_{\mathscr{C}^{\alpha}_{T}}\big{]}\] \[\lesssim\]
Second, combining the estimates from Lemmas 19 and 20 one gets at some fixed time \(t\) the estimates
\[\|(f(\mathbf{u},\mathbf{v})\xi)^{\#}\|_{C^{\alpha+\beta-2}}\lesssim\big{(}1+( \widehat{\xi^{+}}\!\big{)}\!\!_{\omega}^{2}\big{)}\Big{(}\!\|\delta_{z}f(u,v)\|_ {C^{\beta}}+\overline{\mathbb{E}}\big{[}\|\delta_{\mu}f(u,v)\|_{C^{\beta}}^{ \frac{4}{3}}\big{]}^{\frac{3}{4}}+\|f(\mathbf{u},\mathbf{v})^{\#}\|_{C^{ \alpha+\beta}}\Big{)}\]
\[\lesssim\big{(}1+(\overleftarrow{\hat{\xi}^{+}})^{2}_{\omega}\big{)} \bigg{\{}\Big{(}1+\|u\|_{C^{\alpha}}+\overline{\mathbb{E}}\big{[}\|v\|_{C^{\alpha} }\big{]}\Big{)}\|u^{\prime}\|_{C^{\beta}}\] \[\qquad\qquad+\Big{(}1+\|u\|_{C^{\alpha}}+\overline{\mathbb{E}} \big{[}\|v\|_{C^{\alpha}}^{2}\big{]}^{\frac{1}{2}}\Big{)}\overline{\mathbb{E}} \big{[}\|v^{\prime}\|_{C^{\beta}}^{4}\big{]}^{\frac{1}{4}}+\|f({\bf u},{\bf v})^ {\#}\|_{C^{\alpha+\beta}}\bigg{\}}\] \[\lesssim\big{(}1+(\overleftarrow{\hat{\xi}^{+}})^{4}_{\omega} \big{)}\bigg{(}1+\|u^{\prime}\|_{C^{\beta}}+\|u^{\#}\|_{C^{\alpha}}+\overline{ \mathbb{E}}\big{[}\|v^{\prime}\|_{C^{\beta}}^{4}\big{]}^{\frac{1}{4}}+ \overline{\mathbb{E}}\big{[}\|v^{\#}\|_{C^{\alpha}}^{4}\big{]}^{\frac{1}{4}} \bigg{)}\] \[\qquad\qquad\times\bigg{(}1+\|u^{\prime}\|_{C^{\beta}}+\|u^{\#} \|_{C^{\alpha+\beta}}+\overline{\mathbb{E}}\big{[}\|v^{\prime}\|_{C^{\beta}}^{ 4}\big{]}^{\frac{1}{4}}+\overline{\mathbb{E}}\big{[}\|v^{\#}\|_{C^{\alpha+ \beta}}^{4}\big{]}^{\frac{1}{4}}\bigg{)},\]
so
\[\sup_{t\in(0,T]}t^{\beta/2}\|(f({\bf u}_{t},{\bf v}_{t})\xi_{t})^{\#}\|_{C^{ \alpha+\beta-2}}\lesssim\big{(}1+(\overleftarrow{\hat{\xi}^{+}})^{4}_{\omega} \big{)}\Big{(}1+\|{\bf u}\|_{{\cal D}_{T}^{\alpha,\beta}}^{2}+\overline{ \mathbb{E}}\big{[}\|{\bf v}\|_{{\cal D}_{T}^{\alpha,\beta}}^{4}\big{]}^{\frac{ 1}{2}}\Big{)}.\]
We have also
\[\sup_{t\in(0,T]}t^{\beta/2}\|g(u_{t},v_{t})\|_{C^{\alpha+\beta-2}} \lesssim\sup_{t\in(0,T]}t^{\beta/2}\Big{(}1+\|u_{t}\|_{C^{\alpha}} +\overline{\mathbb{E}}\big{[}\|v_{t}\|_{{\cal C}^{\alpha}}^{2}\big{]}^{\frac{ 1}{2}}\Big{)}\] \[\lesssim\big{(}1+(\overleftarrow{\hat{\xi}^{+}})_{\omega}\big{)} \Big{(}1+\|{\bf u}\|_{{\cal D}_{T}^{\alpha,\beta}}^{2}+\overline{\mathbb{E}} \big{[}\|{\bf v}\|_{{\cal D}_{T}^{\alpha,\beta}}^{4}\big{]}^{\frac{1}{2}} \Big{)},\]
so we have in the end the pathwise estimate
\[\sup_{t\in(0,T]}t^{\beta/2}\|(f({\bf u}_{t},{\bf v}_{t})\xi_{t})^{\#}+g(u_{t}, v_{t})\|_{C^{\alpha+\beta-2}}\lesssim\big{(}1+(\overleftarrow{\hat{\xi}^{+}})^{4}_{ \omega}\big{)}\Big{(}1+\|{\bf u}\|_{{\cal D}_{T}^{\alpha,\beta}}^{2}+\overline{ \mathbb{E}}\big{[}\|{\bf v}\|_{{\cal D}_{T}^{\alpha,\beta}}^{4}\big{]}^{1/2} \Big{)}.\]
It follows from Proposition 9 that the map
\[\Phi_{\overleftarrow{\hat{\xi}^{+}},u_{0},{\bf v}}:{\cal D}_{T}^{\alpha,\beta }(X(\omega))\to{\cal D}_{T}^{\alpha,\beta}(X(\omega))\]
which associates to \({\bf u}\in{\cal D}_{T}^{\alpha,\beta}(X(\omega))\) the solution \(w\) of the equation
\[(\partial_{t}-\Delta)w=f({\bf u},{\bf v})\xi+g(u,v)\]
with initial condition \(w_{0}=u_{0}\), is well-defined and satisfies the bound
\[\big{\|}\Phi_{\overleftarrow{\hat{\xi}^{+}},u_{0},{\bf v}}({\bf u})\big{\|}_{ {\cal D}_{T}^{\alpha,\beta}}\lesssim\|u_{0}\|_{C^{\alpha}}+T^{(\alpha-\beta)/2 }\big{(}1+(\overleftarrow{\hat{\xi}^{+}})^{4}_{\omega}\big{)}\Big{(}1+\|{ \bf u}\|_{{\cal D}_{T}^{\alpha,\beta}}^{2}+\overline{\mathbb{E}}\big{[}\|{\bf v }\|_{{\cal D}_{T}^{\alpha,\beta}}^{4}\big{]}^{1/2}\Big{)}.\]
Recall \(4\leq p<\infty\). One can then find some random positive constants
\[M=M\Big{(}\|u_{0}\|_{\alpha}\vee\,\overline{\mathbb{E}}\big{[}\|{\bf v}\|_{{ \cal D}_{T}^{\alpha,\beta}}^{p}\big{]}\vee\,(\overleftarrow{\hat{\xi}^{+}})_{\omega} \Big{)}\]
and
\[T=T\Big{(}\|u_{0}\|_{\alpha}\vee\,\overline{\mathbb{E}}\big{[}\|{\bf v}\|_{{ \cal D}_{T}^{\alpha,\beta}}^{p}\big{]}\vee\,(\overleftarrow{\hat{\xi}^{+}})_{\omega} \Big{)}\]
so that the map \(\Phi_{\overleftarrow{\hat{\xi}^{+}},u_{0},{\bf v}}\) sends the ball
\[\Big{\{}{\bf u}\in{\cal D}_{T}^{\alpha,\beta}(X(\omega))\,;\,\|{\bf u}\|_{{\cal D }_{T}^{\alpha,\beta}}\leq M\Big{\}}\]
into itself. Now, given \(\widehat{\xi}_{1}^{+},\widehat{\xi}_{2}^{+}\) in \(L^{8p}(\Omega^{2},\mathfrak{N}^{2})\), two initial conditions \(u_{01},u_{02}\) in \(C^{\alpha}\) and \({\bf v}_{1},{\bf v}_{2}\) in \(L^{p}\big{(}\Omega,{\cal D}_{T_{0}}^{\alpha,\beta}(X(\omega))\big{)}\), we define a random constant
\[M^{\prime}_{\omega}=M\Big{(}\max_{i=1,2}\Big{\{}\|u_{0i}\|_{C^{\alpha}}\sqrt{ \mathbb{E}}\big{[}\|{\bf v}_{i}\|_{{\cal D}_{T_{0}}^{\alpha,\beta}}^{p}\big{]} \vee(\overleftarrow{\hat{\xi}^{+}})_{\omega}\Big{\}}\Big{)}.\]
For \(\|{\bf u}\|_{{\cal D}_{T}^{\alpha,\beta}}\leq M^{\prime}_{\omega}\), Proposition 9 tells us that
\[\mathrm{d}_{{\cal D}_{T}^{\alpha,\beta}}\big{(}\Phi_{\overleftarrow{ \hat{\xi}^{+}},u_{01},{\bf v}_{1}}({\bf u}_{1}),\Phi_{\overleftarrow{\hat{\xi}^{+} },u_{02},{\bf v}_{2}}({\bf u}_{2})\big{)}\] \[\lesssim_{M^{\prime}_{\omega}}\|u_{01}-u_{02}\|_{C^{\alpha}}+T^{( \alpha-\beta)/2}\Big{\{}\mathrm{d}_{{\cal D}_{T}^{\alpha,\beta}}\big{(}{\bf u}_{1},{ \bf u}_{2}\big{)}+\overline{\mathbb{E}}\big{[}\|{\bf v}_{1}-{\bf v}_{2}\|_{L^{p}( \Omega,{\cal D}_{T}^{\alpha,\beta})}\big{]}+(\overleftarrow{\hat{\xi}^{+}_{1} }-\overleftarrow{\hat{\xi}^{+}_{2}})_{\omega}\Big{\}}.\]
So choosing
\[T\Big{(}\max_{i=1,2}\Big{\{}\|u_{0i}\|_{C^{\alpha}}\sqrt{\mathbb{E}}\big{[}\|{\bf v }_{i}\|_{L^{p}(\Omega,{\cal D}_{T}^{\alpha,\beta})}\big{]}\vee(\overleftarrow{ \hat{\xi}^{+}})_{\omega}\Big{\}}\Big{)}\]
small enough ensures that the map \(\Phi_{\widehat{\xi}^{+},u_{0},\mu}\) has a unique fixed point \(\mathbf{u}_{\widehat{\xi}^{+},u_{0},\mu}(\omega)\) which satisfies the local Lipschitz property
\[\mathrm{d}_{\mathcal{D}_{T}^{\alpha,\beta}}\big{(}\mathbf{u}_{\widehat{\xi}^{+} _{1},u_{0},\mathbf{v}_{1}}(\omega),\mathbf{u}_{\widehat{\xi}^{+}_{2},u_{0}, \mathbf{v}_{2}}(\omega)\big{)}\lesssim_{M^{\prime}_{\omega}}\|u_{01}-u_{02}\|_ {C^{\alpha}}+\mathbb{E}\big{[}\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{L^{p}( \Omega,\mathcal{D}_{T}^{\alpha,\beta})}\big{]}+(\widehat{\xi}^{+}_{1}-\widehat {\xi}^{+}_{2})_{\omega}.\]
Recall that \((\xi,X\odot\xi)\in\mathfrak{N}\) is the limit in any \(L^{q}(\Omega,\mathbb{P})\) space, \(1\leq q<\infty\), of the sequence of enhanced noises
\[\big{(}\xi_{n},\xi_{n}\odot X_{n}-c_{n}\big{)}=:(\xi_{n},(\xi\odot X)_{n})\]
for some diverging function \(c_{n}\), and that \(\xi\odot\overline{X}\) is the limit in \(L^{q}(\Omega^{2},\mathbb{P}^{\otimes 2})\) of \(\xi_{n}\odot\overline{X}_{n}\). We then have
\[f(\mathbf{u}_{n},\mathbf{v})\xi_{n}+g(u_{n},v) =f(u_{n},v)<\xi_{n}+\xi_{n}<f(u_{n},v)+f(u_{n},v)^{\#}\odot\xi_{n}\] \[\quad+\mathsf{C}\big{(}\delta_{z}f(u_{n},v),X_{n},\xi_{n})+ \overline{\mathbb{E}}\Big{[}\mathsf{C}\big{(}\delta_{\mu}f(u_{n},v), \overline{X}_{n},\xi_{n}\big{)}\Big{]}\] \[\quad+g(u_{n},v)\] \[=f(u_{n},v)\xi_{n}-c_{n}(f\partial_{1}f)(u_{n})+g(u_{n},v),\]
so the function \(u_{n}\) is a solution of the renormalized equation
\[(\partial_{t}-\Delta)u_{n}=f(u_{n},v)\xi_{n}-c_{n}(f\partial_{1}f)(u_{n})+g(u_ {n},v).\]
As we know that the solution \(\mathbf{u}_{\widehat{\xi}^{+},u_{0},\mathbf{v}}\in\mathcal{D}_{T}^{\alpha, \beta}(X)\) is a continuous function of \(\widehat{\xi}^{+}\in\mathfrak{N}^{2}\), and since \(\widehat{\xi}^{+}_{n}\) converges to \(\widehat{\xi}^{+}\) in probability, we see that \(\mathbf{u}_{\widehat{\xi}^{+},u_{0},\mathbf{v}}\) is the limit in probability in \(\mathcal{D}_{T}^{\alpha,\beta}\) of the sequence \((u_{n},f(u_{n},v))\in\mathcal{D}_{T}^{\alpha,\beta}(X_{n})\). \(\rhd\)
The following statement is the analogue of Lemma 12 in the present setting.
22 - Lemma. _For every \(R>0\), the solution \(\mathbf{u}_{\widehat{\xi}^{+},u_{0},\mathbf{v}}(\omega)\) to equation_ (4.5) _is defined up to the time_
\[T^{*}=\inf\big{\{}t\geq 0,\quad\|u(t)\|_{L^{\infty}}\geq R\big{\}}.\]
_Proof -_ The proof is a direct adaptation of the proof of Lemma 12. We give the details for the interested reader. To lighten the notations we write \(\mathbf{u}\) for \(\mathbf{u}_{\widehat{\xi}^{+},u_{0},\mathbf{v}}(\omega)\). Recall that the local well-posedness time from the Picard iteration argument for \(\mathbf{u}\) reads as a decreasing function
\[T=T\big{(}u_{0},\widehat{\xi}^{+},\overline{\mathbb{E}}[\|\mathbf{v}\|_{ \mathcal{D}_{T}^{\alpha,\beta}}]\big{)}.\]
If we fix \(\widehat{\xi}^{+}\) and \(v\), one ends up with a function \(T=T\big{(}\|u_{0}\|_{C^{\alpha}}\big{)}\), so that it is sufficient to obtain a bound for \(\|u\|_{C_{T}C^{\alpha}}\) that depends only on the constant \(R\). As \(\|u\|_{C_{T}C^{\alpha}}\lesssim_{\widehat{\xi}^{+}}\|\mathbf{u}\|_{\mathcal{D}_ {T}^{\alpha,\beta}}\) we actually show that
\[\|u\|_{\mathcal{D}_{T}^{\alpha,\beta}}\lesssim_{\widehat{\xi}^{+}}1+\|u\|_{C_{ T}L^{\infty}}^{2}+\mathbb{E}\big{[}\|v\|_{C_{T}L^{\infty}}^{4}\big{]}^{1/2}.\]
We proceed as follows. Since \(u^{\prime}_{t}=f(u_{t},v_{t})\), we have
\[\|u^{\prime}\|_{\mathscr{C}_{T}^{\beta}}\lesssim 1+\|u\|_{\mathscr{C}_{T}^{\beta}}+\|v\|_{ \mathscr{C}_{T}^{\beta}}.\]
Yet since \(u=u^{\prime}\prec X+u^{\#}\) where \(u^{\prime}\) appears as an \(L^{\infty}\) contribution we have
\[\|u^{\prime}\|_{\mathscr{C}_{T}^{\beta}}\lesssim_{\widehat{\xi}^{+},R}1+\|u^{ \#}\|_{\mathscr{C}_{T}^{\beta}}+\|v^{\#}\|_{\mathscr{C}_{T}^{\beta}}.\]
We now use the fact that
\[(\partial_{t}-\Delta)u^{\#}=\Phi^{\#} \tag{4.8}\]
where
\[\Phi^{\#}=\big{(}f(\mathbf{u},\mathbf{v})\xi-f(u,v)\prec\xi\big{)}+g(u,v).\]
The refined paralinearization lemma C.1 from [11] ensures here that
\[\big{\|}F\big{(}u^{\prime}\prec X+u^{\#}, v^{\prime}\prec\overline{X}+v^{\#}\big{)}-\] \[\nabla f\big{(}u^{\prime}\prec X+u^{\#},v\prec\overline{X}+v^{\#} \big{)}\prec\big{(}u^{\prime}\prec X+u^{\#},v^{\prime}\prec\overline{X}+v^{\#} \big{)}\big{\|}_{\alpha+\beta}\] \[\lesssim\big{(}1+\|u^{\prime}\prec X\|_{C^{\alpha}}^{2}+\|v^{ \prime}\prec\overline{X}\|_{C^{\alpha}}^{2}+\|u^{\#}\|_{L^{\infty}}^{2}+\|v^{ \#}\|_{L^{\infty}}^{2}\big{)}\big{(}1+\|u^{\#}\|_{C^{\alpha+\beta}}+\|v^{\#}\| _{C^{\alpha+\beta}}\big{)}\] \[\lesssim\big{(}1+\|X\|_{C^{\beta}}^{2}+\|\overline{X}\|_{C^{ \alpha}}^{2}\big{)}\big{(}1+\|u\|_{L^{\infty}}^{2}+\|v\|_{L^{\infty}}^{2} \big{)}\big{(}1+\|u^{\#}\|_{C^{\alpha+\beta}}+\|v^{\#}\|_{C^{\alpha+\beta}} \big{)},\]
so that using continuity relation 3.3 and estimate from Definition 7
\[\|\Phi^{\#}\|_{C^{\alpha+\beta-2}} \lesssim\overline{\mathbb{E}}\Big{[}\big{(}1+\|\widehat{\xi}^{+} \|^{3}\big{)}\big{(}1+\|u\|_{C_{T}L^{\infty}}^{2}+\|v\|_{C_{T}L^{\infty}}^{2} \big{)}\] \[\qquad\times\big{(}1+\|u\|_{\mathscr{C}^{\alpha}_{T}}+\|v\|_{ \mathscr{C}^{\alpha}_{T}}+\|u^{\#}\|_{C^{\alpha+\beta}}+\|v^{\#}\|_{C^{\alpha+ \beta}}\big{)}\Big{]}\] \[\lesssim\Big{(}1+\big{(}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
so choosing \(\lambda\) small enough we finally get
\[\sup_{0\leq t\leq T}t^{\beta/2}\|\Phi^{\#}\|_{C^{\alpha+\beta-2}} \lesssim_{\widehat{\xi}^{+}}\Big{(}1+\|u\|_{C_{T}L^{\infty}}^{2}+ \mathbb{E}\big{[}\|v\|_{C_{T}L^{\infty}}^{2}\big{]}^{1/2}\Big{)}\] \[\times\Big{(}1+\mathbb{E}\big{[}\|v^{\#}\|_{\widehat{\xi}^{+}_{T} }^{4}\big{]}^{1/4}+\mathbb{E}\big{[}\|v^{\#}\|_{C^{\alpha+\beta}}^{4}\big{]}^{1 /4}\Big{)}.\]
In the end we obtain from Proposition 4.9 and Proposition 4.10 the estimate
\[\|u^{\#}\|_{\widehat{\xi}^{+}_{T}}+\sup_{0\leq t\leq T}t^{\beta/2}\|u^{\#}\|_ {C^{\alpha+\beta}}\lesssim_{\widehat{\xi}^{+}}\Big{(}1+\|u\|_{C_{T}L^{\infty}}^ {2}+\mathbb{E}\big{[}\|v\|_{C_{T}L^{\infty}}^{2}\big{]}^{1/2}\Big{)}\] \[\times\Big{(}1+\mathbb{E}\big{[}\|v^{\#}\|_{\widehat{\xi}^{+}_{T} }^{4}\big{]}^{1/4}+\mathbb{E}\big{[}\|v^{\#}\|_{C^{\alpha+\beta}}^{4}\big{]}^{ 1/4}\Big{)},\]
\(\rhd\)
_23 - Proposition. Under assumptions **(A\({}_{f}\)-A\({}_{g}\)-B)**, if \(\|u_{0}\|_{L^{\infty}}\leq C_{0}\) then \(\mathbf{u}_{\widehat{\xi}^{+},u_{0},\mathbf{v}}\) is defined globally in time and \(\|\mathbf{u}\|_{\mathcal{D}^{\alpha,\beta}_{\mathbf{T}}}(\omega)\) has moments of order \(p\)._
_Proof -_ The global in time existence is a direct consequence of the explosion criterion of Lemma 14 and the maximum principle applied to the solution \(u_{n}\) of the renormalized equation (4.7). Following what is done in the proof of the Proposition 22 we have an estimate of the form
\[\|u\|_{\mathcal{D}^{\alpha,\beta}_{T}}\lesssim_{\widehat{\xi}^{+},u_{0}}1+\|u \|_{C_{T}L^{\infty}}^{2}\lesssim_{\widehat{\xi}^{+},u_{0}}1+C_{0}\]
with an implicit multiplicative constant that is polynomial function in \((\widehat{\xi}^{+})_{\omega}\) of degree \(3\). \(\rhd\)
### Solving equation (1.3)
The proof of well-posedness of equation (1.3) requires a second fixed point which is the object of the next statement. We fix as above \(4\leq p<\infty\).
_24 - Theorem. We assume that the assumptions **(A\({}_{f}\)-A\({}_{g}\)-B)** hold true. There exists a positive deterministic positive time \(T\leq T_{0}\) with the following property._ * _For every_ \(u_{0}\in C^{\alpha}\) _such that_ \(\|u_{0}\|_{L^{\infty}}\leq C_{0}\) _there exists a unique solution_ \(\mathbf{u}=(u^{\prime},u^{\sharp})\) _to the mean field equation (_1.3_) in_ \(L^{p}(\Omega,\mathcal{D}^{\alpha,\beta}_{T}(X))\)_. The law_ \(\mathcal{L}(\mathbf{u})\in\mathcal{P}_{p}(\mathcal{D}^{\alpha,\beta}_{T}(X))\) _of_ \(\mathbf{u}\) _depends continuously on_ \(\widehat{\xi}^{+}\in L^{8p}(\Omega^{2},\mathfrak{N}^{2})\) _and_ \(u_{0}\in C^{\alpha}\)_._ * _Write_ \(u=u^{\prime}<X+u^{\sharp}\)_. The function_ \(u\in\mathscr{C}^{\alpha}_{T}\) _is the limit in probability of the family of solutions of the renormalized equations_ \[(\partial_{t}-\Delta)u_{n}=f\big{(}u_{n},\mathcal{L}(u_{n}(t))\big{)}\xi_{n}- c_{n}(t)(f\partial_{1}f)\big{(}u_{n},\mathcal{L}(u_{n}(t))\big{)}+g\big{(}u_{n}, \mathcal{L}(u_{n}(t))\big{)}.\]
_Proof -_ Write here \(\mathbf{u}^{\mathbf{v}}_{\widehat{\xi}^{+},u_{0}}\) for \(\mathbf{u}_{\widehat{\xi}^{+},u_{0},\mathbf{v}}\). We define from Proposition 21 a map \(\Psi_{\widehat{\xi}^{+},u_{0}}\) from \(L^{p}(\Omega,\mathcal{D}^{\alpha,\beta}_{T}(X))\) into itself setting
\[\Psi_{\widehat{\xi}^{+},u_{0}}(\mathbf{v})=\mathbf{u}^{\mathbf{v}}_{\widehat{ \xi}^{+},u_{0}}.\]
One has from Proposition 9
\[\|\mathbf{u}^{\mathbf{v}}_{\widehat{\xi}^{+},u_{0}}\|_{\mathcal{D }^{\alpha,\beta}_{T}} \lesssim\|u_{0}\|_{C^{\alpha}}+T^{\delta}\big{(}1+(\widehat{\xi}^{+}) _{\omega}^{4}\big{)}\Big{(}1+\|\mathbf{u}\|_{\mathcal{D}^{\alpha,\beta}_{T}}^{2} +\overline{\mathbb{E}}\big{[}\|\mathbf{v}\|_{\mathcal{D}^{\alpha,\beta}_{T}}^{4 }\big{]}^{1/2}\Big{)}\] \[\lesssim\|u_{0}\|_{C^{\alpha}}+T^{\delta}\big{(}1+(\widehat{\xi} ^{+})_{\omega}^{4}\big{)}\bigg{(}\|\mathbf{u}\|_{\mathcal{D}^{\alpha,\beta}_{T}}^ {\frac{1}{2}}\Big{\{}1+\|u_{0}\|_{C^{\alpha}}+\overline{\mathbb{E}}\big{[}\| \mathbf{v}\|_{\mathcal{D}^{\alpha,\beta}_{T}}^{4}\big{]}^{\frac{1}{2}}\Big{\}} ^{\frac{3}{2}}\] \[+\overline{\mathbb{E}}\big{[}\|\mathbf{v}\|_{\mathcal{D}^{\alpha, \beta}_{T}}^{4}\big{]}^{\frac{1}{2}}\bigg{)}.\]
Integrating and using Cauchy-Schwarz inequality we get
\[\mathbb{E}\big{[}\|\mathbf{u}^{\mathbf{v}}_{\widehat{\xi}^{+},u_{ 0}}\|_{\mathcal{D}^{\alpha,\beta}_{T}}^{p}\big{]}^{2}\lesssim\|u_{0}\|_{C^{ \alpha}}^{2p}+T^{2p\delta}\big{(}1+\mathbb{E}\big{[}(\widehat{\xi}^{+})^{8p} \big{]}\big{)}\bigg{\{}\mathbb{E}\big{[}\|\mathbf{u}\|_{\mathcal{D}^{\alpha, \beta}_{T}}^{p}\big{]}\Big{(}1+\|u_{0}\|_{C^{\alpha}}+\overline{\mathbb{E}}\big{[} \|\mathbf{v}\|_{\mathcal{D}^{\alpha,\beta}_{T}}^{4}\big{]}^{\frac{1}{2}}\Big{)}^{ 3p}\] \[+\overline{\mathbb{E}}\big{[}\|\mathbf{v}\|_{\mathcal{D}^{\alpha, \beta}_{T}}^{p}\big{]}^{2}\bigg{\}}.\]
So for \(T=T\big{(}\overline{\mathbb{E}}\big{[}\|\mathbf{v}\|^{p}_{\mathcal{D}^{\alpha,\beta} _{T}}\big{]}\big{)}\) sufficiently small we have
\[\mathbb{E}\big{[}\|\mathbf{u}\|^{p}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{]}^{ \frac{1}{p}}\lesssim\|u_{0}\|_{C^{\alpha}}+T^{\delta}\Big{(}1+\mathbb{E}\big{[} (\widehat{\xi}^{+})^{8p}\big{]}^{\frac{1}{2p}}\Big{)}\mathbb{E}\big{[}\| \mathbf{v}\|^{p}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{]}^{\frac{1}{p}}.\]
Pick
\[A>C_{0}^{2}\vee 2\mathbb{E}\big{[}(\widehat{\xi}^{+})^{8p}\big{]}.\]
For \(M\) sufficiently big and \(T=T(M,A)\) even smaller, for every \(u_{0}\in C^{\alpha}\) with \(\|u_{0}\|_{C^{\alpha}}\leq A\) the map \(\Psi_{\widehat{\xi}^{+},u_{0}}\) sends the ball
\[\Big{\{}\mathbf{v}\in L^{p}(\Omega,\mathcal{D}^{\alpha,\beta}_{T}(X))\,;\, \|\mathbf{v}\|_{L^{p}(\Omega,\mathcal{D}^{\alpha,\beta}_{T})}\leq M\Big{\}}\]
into itself. Now pick \(\mathbf{v}_{1},\mathbf{v}_{2}\) in \(L^{p}(\Omega,\mathcal{D}^{\alpha,\beta}_{T}(X))\), two initial conditions \(u_{01},u_{02}\) in \(C^{\alpha}\) and \(\widehat{\xi}^{+}_{1},\widehat{\xi}^{+}_{2}\) in \(L^{8p}(\Omega^{2},\mathfrak{M}^{2})\) such that one has
\[\mathbb{E}\big{[}(\widehat{\xi}^{+}_{1})^{8p}\big{]}\vee\|u_{0i}\|_{C^{ \alpha}}\leq A,\qquad\overline{\mathbb{E}}\big{[}\|\mathbf{v}_{i}\|^{p}_{ \mathcal{D}^{\alpha,\beta}_{T}}\big{]}\leq M,\]
for \(1\leq i\leq 2\). Write \(\mathbf{u}_{i}\) for \(\Phi_{\widehat{\xi}^{+}_{i},u_{0i}}(\mathbf{v}_{i})\) and define the random variable
\[R_{\omega}:=(\widehat{\xi}^{+}_{1}]_{\omega}+(\widehat{\xi}^{+}_{2})_{\omega}.\]
We have
\[\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(}\mathbf{u}_{1}, \mathbf{u}_{2}\big{)}\] \[\lesssim_{R_{\omega}}\|u_{01}-u_{02}\|_{C^{\alpha}}\!+T^{\delta} \bigg{\{}(\widehat{\xi}^{+}_{1}-\widehat{\xi}^{+}_{2})_{\omega}+\mathrm{d}_{ \mathcal{D}^{\alpha,\beta}_{T}}(\mathbf{u}_{1},\mathbf{u}_{2})+\overline{ \mathbb{E}}\big{[}\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(}\mathbf{ v}^{1},\mathbf{v}^{2}\big{)}^{4}\big{]}^{\frac{1}{4}}\bigg{\}}\] \[\lesssim_{R_{\omega}}\|u_{01}-u_{02}\|_{C^{\alpha}}\!+\!T^{\delta} \bigg{\{}(\widehat{\xi}^{+}_{1}-\widehat{\xi}^{+}_{2})_{\omega}+\mathrm{d}_{ \mathcal{D}^{\alpha,\beta}_{T}}(\mathbf{u}_{1},\mathbf{u}_{2})^{\frac{1}{2}} \!+\overline{\mathbb{E}}\big{[}\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}} \big{(}\mathbf{v}^{1},\mathbf{v}^{2}\big{)}^{4}\big{]}^{\frac{1}{4}}\bigg{\}},\]
for some implicit positive multiplicative constant that is a polynomial of \(R_{\omega}\), which is of degree \(5\) combining Propositions 9.20 and 19. Integrating and using Cauchy-Schwarz inequality we obtain the estimate
\[\mathbb{E}\big{[}\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(} \mathbf{u}_{1},\mathbf{u}_{2}\big{)}^{p}\big{]}^{2}\lesssim \|u_{01}-u_{02}\|_{C^{\alpha}}^{2p}+\mathbb{E}\big{[}(\widehat{ \xi}^{+}_{1}-\widehat{\xi}^{+}_{2})^{2p}\big{]}\] \[+T^{2p\delta}\Big{\{}\mathbb{E}\big{[}\mathrm{d}_{\mathcal{D}^{ \alpha,\beta}_{T}}\big{(}\mathbf{u}_{1},\mathbf{u}_{2}\big{)}^{p}\big{]}+ \overline{\mathbb{E}}\big{[}\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(} \mathbf{v}_{1},\mathbf{v}_{2}\big{)}^{4}\big{]}^{\frac{p}{2}}\Big{\}},\]
so taking \(T>0\) deterministic, small enough, independently of \(u_{0i}\) and \(\widehat{\xi}^{+}_{i}\), ensures that we have
\[\mathbb{E}\big{[}\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_{T}}\big{(}\mathbf{u}_ {1},\mathbf{u}_{2}\big{)}^{p}\big{]}^{2}\lesssim\|u_{01}-u_{02}\|_{C^{\alpha}}^ {2p}+\mathbb{E}\big{[}(\widehat{\xi}^{+}_{1}-\widehat{\xi}^{+}_{2})^{2p}\big{]} +T^{2p\delta}\overline{\mathbb{E}}\big{[}\mathrm{d}_{\mathcal{D}^{\alpha,\beta}_ {T}}\big{(}\mathbf{v}_{1},\mathbf{v}_{2}\big{)}^{4}\big{]}^{\frac{p}{2}}.\]
As \(4\leq p<\infty\), we conclude that equation (1.3) has a unique local solution \(\mathbf{u}\) in \(\mathcal{P}_{p}(\mathcal{D}^{\alpha,\beta}_{T}(X))\), and that the law \(\mathcal{L}(\mathbf{u})\in\mathcal{P}_{p}(\mathcal{D}^{\alpha,\beta}_{T}(X))\) of \(\mathbf{u}\) depends continuously on \(\widehat{\xi}^{+}\in L^{2p}(\Omega^{2},\mathfrak{M}^{2})\) and on \(u_{0}\in C^{\alpha}\). \(\rhd\)
## 5 Propagation of chaos
Let now \((\xi^{i},u^{i}_{0})\) be a sequence of independent and identically distributed random variables with common law \(\mathcal{L}(\xi,u_{0})\), defined on the probability space \((\Omega,\mathcal{F},\mathbb{P})\). We fix \(\omega\in\Omega\) and an integer \(n\geq 1\) and study the dynamics
\[\begin{split}(\partial_{t}-\Delta)u^{i,n}(\omega)&=f\big{(} u^{i,n}(\omega),\mu^{n}_{t}\big{)}\xi^{i}(\omega)+g\big{(}u^{i,n}(\omega),\mu^{n}_{t}( \omega)\big{)},\qquad(1\leq i\leq n)\\ \mu^{n}_{t}(\omega)&:=\frac{1}{n}\sum_{i=1}^{n}\delta _{u^{i,n}_{t}(\omega)},\end{split} \tag{5.1}\]
with initial conditions \(\big{(}u^{1}_{0}(\omega),\dots,u^{n}_{0}(\omega)\big{)}\). We suppose that \(f\) and \(g\) satisfy the assumptions _(A\({}_{f}\)-A\({}_{g}\)-B)_. System (5.1) can either be understood as a multidimensional singular stochastic PDE driven by a multidimensional (enhanced) noise or as a mean field singular stochastic PDE.
We prove in paragraph _(a)_ that these two interpretations coincide and prove in paragraph _(b)_ that we have a propagation of chaos result for (5.1). We write \([\![1,n]\!]\) for the set of integers between \(1\) and \(n\).
_(a) Singular systems of interacting fields -_ To lighten the notations we consider here the case that the diffusivity \(f\) is linear in the measure argument - see (5.2) below. The polynomial case is treated similarly. One can see equation (5.1) as a single multidimensional singular stochastic equation
\[(\partial_{t}-\Delta)\mathsf{u}=\mathsf{f}(\mathsf{u})\xi^{[1,n]}+\mathsf{g}( \mathsf{u})\]
with unknown \(\mathsf{u}=\big{(}u^{1,n},\ldots,u^{n,n}\big{)}\) and noise \(\xi^{[1,n]}=\big{(}\xi^{1},\ldots,\xi^{n}\big{)}\), and where \(\mathsf{f}\) is \((f^{1},\ldots,f^{n})\) with
\[f^{i}:\big{(}u^{1,n},\ldots,u^{n,n}\big{)}\mapsto f\bigg{(}u^{i,n},\frac{1}{n }\sum_{j=1}^{n}\delta_{u^{j,n}}\bigg{)}=:f(u^{i,n},\mu^{n}),\]
with a similar definition of \(\mathsf{g}\). The noise \(\xi^{[1,n]}\) needs to be enhanced to make sense of the equation. The solution will be a tuple of paracontrolled functions
\[u^{i,n}=(u^{i,n})^{\prime}<X^{i}+(u^{i,n})^{\#}=f^{i}(u^{1,n},\ldots,u^{n,n})< X^{i}+(u^{i,n})^{\#}\]
so we will have from paralinearisation
\[f^{i}\big{(}u^{1,n},\cdots,u^{n,n}\big{)}=\sum_{j=1}^{n}\Big{(}\partial_{j}f^ {i}\big{(}u^{1,n},\ldots,u^{n,n}\big{)}(u^{j,n})^{\prime}\Big{)}<X^{j}+f^{i} \big{(}u^{1,n},\ldots,u^{n,n}\big{)}^{\#},\]
with
\[\partial_{j}f^{i}\big{(}u^{1,n},\ldots,u^{n,n}\big{)}=\delta_{i,j}\partial_{1 }f\big{(}u^{i,n},\mu^{n}\big{)}+\frac{1}{n}\partial_{2}F\big{(}u^{i,n},\mu^{n }\big{)},\]
since
\[f(u^{i,n},\mu^{n})=\frac{1}{n}\sum_{j=1}^{n}F\big{(}u^{i,n},u^{j,n}\big{)}. \tag{5.2}\]
The singular product in (5.1) then reads
\[\begin{split} f\big{(}u^{i,n},\mu^{n}\big{)}\xi^{i}& =f\big{(}u^{i,n},\mu^{n}\big{)}<\xi^{i}+\xi^{i}<f\big{(}u^{i,n}, \mu^{n}\big{)}+f\big{(}u^{i,n},\mu^{n}\big{)}^{\#}\odot\xi^{i}\\ &\quad+\mathsf{C}\Big{(}\partial_{1}f\big{(}u^{i,n},\mu^{n} \big{)}(u^{i,n})^{\prime},X^{i},\xi^{i}\Big{)}+\frac{1}{n}\sum_{j=1}^{n} \mathsf{C}\Big{(}\partial_{2}F(u^{i},\mu^{n})(u^{j,n})^{\prime},X^{j},\xi^{i }\Big{)}\\ &\quad+\partial_{1}f\big{(}u^{i,n},\mu\big{)}(u^{i,n})^{\prime} \big{(}\xi^{i}\odot X^{j}\big{)}+\frac{1}{n}\sum_{j=1}^{n}\partial_{2}F(u^{i },\mu)(u^{j,n})^{\prime}\big{(}\xi^{i}\odot X^{j}\big{)}.\end{split} \tag{5.3}\]
Our task is now to prove that (5.1) may also be understood as a mean field singular stochastic PDE with a suitable enhancement of the noise and that the two interpretations coincide. With the notations of Section 2.2, Tanaka's trick gives an interpretation of (5.1) as the mean field type equation
\[(\partial_{t}-\Delta)u^{i,n}(\omega)=f\Big{(}u^{i,n}(\omega),u^{U_{n}(\cdot),n }(\omega)\Big{)}\xi^{i}(\omega)+g\Big{(}u^{i,n}(\omega),u^{U_{n}(\cdot),n}( \omega)\Big{)} \tag{5.4}\]
studied in Section 4, but now set on the finite probability space \(([\![1,n]\!],2^{[\![1,n]\!]},\lambda_{n})\), with generic chance element \(i\). The enhanced noise from Definition 17 is then
\[\Big{\{}\xi^{i},\ \xi^{i}\odot X^{i},\ \xi^{j},\ \xi^{j}\odot X^{i}\Big{\}}_{1 \leq i,j\leq n},\]
where the index \(i\) plays the role of \(\omega\) and \(j\) the role of \(\varpi\). Let us now clarify the meaning of the singular product. We have
\[\delta_{z}f\big{(}u^{i,n},u^{U_{n}(\cdot)}\big{)}=\partial_{1}f\big{(}u^{i,n},u^{U_{n}(\cdot),n}\big{)}\big{(}u^{i,n}\big{)}^{\prime},\]
and
\[\delta_{\mu}f\big{(}u^{i,n},u^{U_{n}(\cdot)}\big{)}=\partial_{2}F\big{(}u^{i,n},v^{U_{n}(\cdot),n}\big{)}\big{(}u^{U_{n}(\cdot),n}\big{)}^{\prime}.\]
In the sense of Section 4.2 the singular product in Equation (5.4) is defined as
\[\begin{split} f\big{(}u^{i,n},u^{U_{n}(\cdot),n}\big{)}\xi^{i}& =f\big{(}u^{i,n},u^{U_{n}(\cdot),n}\big{)}<\xi^{i}+\xi^{i}<f\big{(} u^{i,n},u^{U_{n}(\cdot),n}\big{)}+f\big{(}u^{i,n},u^{U_{n}(\cdot),n}\big{)}^{ \#}\odot\xi^{i}\\ &\quad+\mathsf{C}\Big{(}\partial_{1}f\big{(}u^{i,n},u^{U_{n}( \cdot),n}\big{)}\big{(}u^{i,n}\big{)}^{\prime},X^{i},\xi^{i}\Big{)}+\partial_{1 }f\big{(}u^{i,n},u^{U_{n}(\cdot),n}\big{)}\big{(}u^{i,n}\big{)}^{\prime}\big{(} \xi\odot X\big{)}^{i}\\ &\quad+\frac{1}{n}\sum_{j=1}^{n}\mathsf{C}\Big{(}\partial_{2}F \big{(}u^{i,n},u^{j,n}\big{)}\big{(}u^{j,n}\big{)}^{\prime},X^{j},\xi^{i} \Big{)}\\ &\quad+\frac{1}{n}\sum_{j=1}^{n}\partial_{2}F\big{(}u^{i,n},u^{U _{n}(\cdot),n}\big{)}\big{(}u^{U_{n}(\cdot),n}\big{)}^{\prime}\big{(}\xi^{i} \odot X^{j}\big{)}.\end{split} \tag{5.5}\]
We conclude from (5.3) and (5.5) that the two formulations coincide as they amount to solving the same classical PDE for the remainders \((u^{i,n})^{\#}\).
_(b) Mean field limit -_ We know from the continuity result of Theorem 24 that the almost sure convergence of
\[\mathcal{W}_{p}\bigg{(}\frac{1}{n}\sum_{i=1}^{n}\delta_{(\widehat{\xi}^{i,+}, u^{i}_{0})(\omega)},\mathcal{L}(\widehat{\xi}^{+},u_{0})\bigg{)}\]
to \(0\) granted by the law of large numbers entails the convergence of \(\mathcal{W}_{p,C_{T}C^{\alpha}}\big{(}\frac{1}{n}\sum_{i=1}^{n}\delta_{u^{i,n} },\mathcal{L}(u)\big{)}\) to \(0\), where \(u\) is the function associated with the solution \(\mathbf{u}\) of the mean field dynamics (1.3). It follows then from Sznitman's Proposition 2.2 in [15] that there is propagation of chaos for the system (5.1) of interacting fields to the mean field limit dynamics (1.3).
_25 - Corollary._ _For any fixed integer \(k\), the law of \(\big{(}u^{1,n},\ldots,u^{k,n}\big{)}\) converges to \(\mathscr{L}(u)^{\otimes k}\) when \(n\) tends to \(\infty\)._
## Appendix A - Enhancing random noises
We prove Theorem 6 in this section. Recall from (3.5) the definition of the random variable \(X\odot\xi\). Write \(e_{k}\) for the function \(x\mapsto\exp(i(k,x))\) and \(\widehat{\xi}(k)\) for \((\xi,e_{k})\). Our noises satisfy the identity
\[\mathbb{E}\big{[}\widehat{\xi}_{t}(k)\widehat{\xi}_{s}(-k^{\prime})\big{]}= \mathbf{1}_{k=k^{\prime}}\,c(t,s)\,\widehat{\eta}(k).\] (A.1)
We denote below by \(\textsc{Var}(A)\) the variance of a random variable \(A\).
_26 - Lemma._ _There exists a positive constant \(\lambda\) such that on has for all \(\ell\in\textsf{N}\), \(s,t,a,b\in\textsf{R}_{+}\) and \(x\in\mathsf{T}^{2}\), the estimate_
\[\textsc{Var}\Big{(}\Delta_{\ell}\big{(}P_{t}\xi_{s}\odot\xi_{a}\big{)}(x)\Big{)} \lesssim\frac{2^{2\ell}2^{2\ell\eta}}{t}\,e^{-\lambda t2^{2\ell}}\big{(}c(s,s) \,c(a,a)+c(s,a)^{2}\big{)}\]
_and_
\[\textsc{Var}\Big{(}\Delta_{\ell}\Big{(}\big{(}(\mathrm{Id}-P_{b})P_{t}\xi_{s} \big{)}\odot\xi_{a}\Big{)}(x)\Big{)}\lesssim b\,\frac{2^{2\ell}2^{2\ell\eta}}{t }\,e^{-\lambda t2^{2\ell-1}}\big{(}c(s,s)\,c(a,a)+c(s,a)^{2}\big{)}.\]
_Proof -_ The proof follows closely the proof of Lemma 5.2 in [11]. We have
\[\begin{split}\Delta_{\ell}\big{(}P_{t}\xi_{s}\odot\xi_{a}\big{)}( x)&=(2\pi)^{-2}\sum_{k\in\mathbf{Z}^{2}}e^{i(k,x)}\rho_{\ell}(k) \mathcal{F}\big{(}P_{t}\xi_{s}\odot\xi_{a}\big{)}(k)\\ &=(2\pi)^{-4}\!\!\sum_{k_{1},k_{2}\in\mathbf{Z}^{2}}\sum_{|i-j| \leq 1}\!\rho_{\ell}(k_{1}+k_{2})\rho_{i}(k_{1})\,e^{-t|k_{1}|^{2}}\,\widehat{ \xi}_{s}(k_{1})\,\rho_{j}(k_{2})\,\widehat{\xi}_{a}(k_{2})\,e_{k_{1}+k_{2}}(x),\end{split}\]
then \(\textsc{Var}\Big{(}\Delta_{\ell}(P_{t}\xi_{s}\odot\xi_{a})(x)\Big{)}\) is equal to
\[(2\pi)^{-8}\sum_{k_{1},k_{2},k_{1}^{\prime},k_{2}^{\prime}}\sum_{|i -j|\leq 1}\sum_{|i^{\prime}-j^{\prime}|\leq 1}\rho_{\ell}(k_{1}+k_{2})\,\rho_{i}(k_{ 1})\,e^{-t|k_{1}|^{2}}\rho_{j}(k_{2})\] \[\times\rho_{\ell}(k_{1}^{\prime}+k_{2}^{\prime})\,\rho_{i^{\prime} }(k_{1}^{\prime})\,e^{-t|k_{1}^{\prime}|^{2}}\,\rho_{j^{\prime}}(k_{2}^{\prime })\,\mathrm{Cov}\Big{(}\widehat{\xi}_{s}(k_{1})\widehat{\xi}_{a}(k_{2}), \widehat{\xi}_{s}(k_{1}^{\prime})\widehat{\xi}_{a}(k_{2}^{\prime})\Big{)}e_{k_ {1}+k_{2}+k_{1}^{\prime}+k_{2}^{\prime}}(x).\]
Using Wick theorem and the identity A.1 one gets
\[\mathrm{Cov}\Big{(}\widehat{\xi}_{s}(k_{1})\widehat{\xi}_{a}(k_{ 2}),\widehat{\xi}_{s}(k_{1}^{\prime})\widehat{\xi}_{a}(k_{2}^{\prime})\Big{)}\] \[=\mathbb{E}\Big{[}\widehat{\xi}_{s}(k_{1})\widehat{\xi}_{a}(k_{2} )\widehat{\xi}_{s}(k_{1}^{\prime})\widehat{\xi}_{a}(k_{2}^{\prime})\Big{]}- \mathbb{E}\Big{[}\widehat{\xi}_{s}(k_{1})\widehat{\xi}_{a}(k_{2})\Big{]} \mathbb{E}\Big{[}\widehat{\xi}_{s}(k_{1}^{\prime})\widehat{\xi}_{a}(k_{2}^{ \prime})\Big{]}\] \[=\mathbb{E}\Big{[}\widehat{\xi}_{s}(k_{1})\widehat{\xi}_{s}(k_{1 }^{\prime})\Big{]}\mathbb{E}\Big{[}\widehat{\xi}_{a}(k_{2})\widehat{\xi}_{a}( k_{2}^{\prime})\Big{]}+\mathbb{E}\Big{[}\widehat{\xi}_{s}(k_{1})\widehat{\xi}_{a}(k_{ 2}^{\prime})\Big{]}\mathbb{E}\Big{[}\widehat{\xi}_{s}(k_{1}^{\prime})\widehat {\xi}_{a}(k_{2})\Big{]}\] \[=(2\pi)^{4}\,\widehat{\eta}(k_{1})\,\widehat{\eta}(k_{2})\Big{(} \mathbf{1}_{k_{1}=-k_{1}^{\prime},k_{2}=-k_{2}^{\prime}}\,c(s,s)\,c(a,a)+ \mathbf{1}_{k_{1}=-k_{2}^{\prime},k_{2}=-k_{1}^{\prime}}c(s,a)^{2}\Big{)},\]
consequently
\[\mathrm{Var}\Big{(}\Delta_{\ell}(P_{t}\xi_{s}\circ\xi_{a})(x) \Big{)}=\sum_{k_{1},k_{2}}\sum_{|i-j|\leq 1}\sum_{|i^{\prime}-j^{\prime}| \leq 1}(2\pi)^{4}\,\widehat{\eta}(k_{1})\,\widehat{\eta}(k_{2})\,\rho_{\ell} (k_{1}+k_{2})^{2}\,\rho_{i}(k_{1})\,\rho_{j}(k_{2})\] \[\qquad\qquad\qquad\times\Big{(}c(s,s)\,c(a,a)\,\rho_{i^{\prime} }(k_{1})\,\rho_{j^{\prime}}(k_{2})e^{-2t|k_{1}|^{2}}+c(s,a)^{2}\rho_{i^{ \prime}}(k_{2})\rho_{j^{\prime}}(k_{1})\,e^{-t|k_{1}|^{2}-t|k_{2}|^{2}}\Big{)}.\]
The factors \(\rho_{i}(k_{1})\,\rho_{i^{\prime}}(k_{1})\) and \(\rho_{i}(k_{1})\,\rho_{j^{\prime}}(k_{1})\) ensure that one can restrict the sum on \(i\) and \(i^{\prime}\) to couples \((i,i^{\prime})\) such that \(\frac{1}{\mu}|i|\leq|i^{\prime}|\leq\mu|i|\) for some constant \(\mu\), which will be denoted by \(i\sim i^{\prime}\). Likewise the factor \(\rho_{\ell}(k_{1}+k_{2})\) enables us to restrict the sum to \(|i|\geq\frac{1}{\mu^{\prime}}l\) for some \(\mu^{\prime}\). There exists some \(\lambda_{0}>0\) such that \(e^{-2t|k|^{2}}\lesssim e^{-t\lambda 2^{2i}}\) for \(k\in\mathrm{supp}(\rho_{i})\), so that for some \(\lambda>0\)
\[\mathrm{Var}\Big{(}\Delta_{\ell}(P_{t}\xi_{s}\circ\xi_{a})(x)\Big{)}\] \[\qquad\qquad\lesssim\big{(}c(s,s)\,c(a,a)+c(s,a)^{2}\big{)}\sum_ {i,i^{\prime},j,j^{\prime}}\mathbf{1}_{\ell\lesssim i}\mathbf{1}_{i\sim i^{ \prime}\sim j\sim j^{\prime}}\sum_{k_{1},k_{2}}\mathbf{1}_{\mathrm{supp}( \rho_{\ell})}(k_{1}+k_{2})\] \[\qquad\qquad\qquad\times\mathbf{1}_{\mathrm{supp}(\rho_{i})}(k_{ 1})\mathbf{1}_{\mathrm{supp}(\rho_{j})}(k_{2})2^{2i\eta}e^{-2t\lambda 2^{2i}}\] \[\qquad\qquad\lesssim\big{(}c(s,s)\,c(a,a)+c(s,a)^{2}\big{)}\sum_ {i,l\lesssim i}2^{2i}2^{2\ell}2^{2i\eta}e^{-2t\lambda 2^{2i}}\] \[\qquad\qquad\lesssim\big{(}c(s,s)\,c(a,a)+c(s,a)^{2}\big{)}\frac{2 ^{2\ell}2^{2\ell\eta}}{t}e^{-2t\lambda 2^{2\ell}},\]
hence the first estimate. For the second estimate we notice that the \(e^{-t|k_{1}|^{2}}\) is replaced by \((1-e^{-b|k_{1}|^{2}})\,e^{-t|k_{1}^{2}|}\) and that
\[(1-e^{-b|k_{1}|^{2}})\,e^{-t|k_{1}^{2}|}\leq b|k_{1}|^{2}e^{-t|k_{1}|^{2}} \lesssim ve^{-t|k_{1}|^{2}/2}\]
The remainder of the proof is the same as for the first estimate. \(\rhd\)
We can now prove Theorem 6. We will estimate \(\mathbb{E}\big{[}\big{\|}\big{(}X\odot\xi\big{)}(t)-\big{(}X\odot\xi\big{)}(s) \big{\|}_{B_{2\eta,2p}^{2p}}^{2p_{2\eta-2}}\big{]}\) in order to use Kolmogorov continuity criterion and Besov embedding. For \(0<s\leq t\), write
\[\int_{0}^{t}P_{t-a}(\xi_{a})\odot\xi_{t}\,da-\int_{0}^{s}P_{s-a}( \xi_{a})\odot\xi_{s}\,da\] \[\qquad\qquad=\int_{0}^{s}\bigl{(}(P_{t-s}-\mathrm{Id})P_{s-a}( \xi_{a})\bigr{)}\odot\xi_{t}\,da+\int_{0}^{s}\!\!P_{s-a}(\xi_{a})\odot(\xi_{t}- \xi_{s})\,da+\int_{s}^{t}\!\!P_{t-a}(\xi_{a})\odot\xi_{t}\,da\] \[\qquad\qquad=:\int_{0}^{s}\widetilde{A}_{1}(a)\,da+\int_{0}^{s} \widetilde{A}_{2}(a)\,da+\int_{s}^{t}\widetilde{A}_{3}(a)\,da,\]
and set
\[A_{i}:=\widetilde{A}_{i}-\mathbb{E}\big{[}\widetilde{A}_{i}\big{]}\qquad(i\in[1, 3]).\]
The quantity is equal to
\[\sum_{\ell\geq-1}2^{2p\ell(2\alpha-2)}\int_{\mathbb{T}^{2}}\mathbb{E}\Big{[} \Big{|}\Delta_{\ell}\Big{(}\big{(}X\odot\xi\big{)}(t)-\big{(}X\odot\xi\big{)}(s )\Big{)}\Big{|}^{2p}\Big{]}\]
From gaussian hypercontractivity we have
\[\mathbb{E}\bigg{[}\Big{|}\int_{0}^{s}\Delta_{\ell}A_{1}(a)\,da+ \int_{0}^{s}\Delta_{\ell}A_{2}(a)\,da+\int_{s}^{t}\Delta_{\ell}A_{3}(a)\,da \Big{|}^{2p}\bigg{]}\] \[\leq\mathbb{E}\bigg{[}\int_{0}^{s}\big{|}\Delta_{\ell}A_{1}(a) \big{|}\,da+\int_{0}^{s}\big{|}\Delta_{\ell}A_{2}(a)\big{|}\,da+\int_{s}^{t} \big{|}\Delta_{\ell}A_{3}(a)\big{|}\,da\bigg{]}^{2p}\] \[\lesssim\bigg{(}\int_{0}^{s}\mathbb{E}\big{[}\big{|}\Delta_{\ell }A_{1}(a)\big{|}^{2}\big{]}^{1/2}\,da\bigg{)}^{2p}+\bigg{(}\int_{0}^{s} \mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{2}(a)\big{|}^{2}\big{]}^{1/2}\,da \bigg{)}^{2p}+\bigg{(}\int_{s}^{t}\mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{3}(a )\big{|}^{2}\big{]}^{1/2}\,da\bigg{)}^{2p}\]
So that the bounds for becomes
\[\bigg{(}\sum_{\ell\geq-1}2^{2p\ell(2\alpha-2)}\int_{\mathbb{T}^{2 }}\mathbb{E}\bigg{[}\bigg{|}\int_{0}^{s}\Delta_{\ell}A_{1}(a)\,da+\int_{0}^{s} \Delta_{\ell}A_{2}(a)\,da+\int_{s}^{t}\Delta_{\ell}A_{3}(a)\,da\bigg{|}^{2p} \bigg{]}\bigg{)}^{\frac{1}{2p}}\] \[\lesssim\sum_{\ell\geq-1}2^{\ell(2\alpha-2)}\Big{(}\int_{0}^{s} \mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{1}(a)\big{|}^{2}\big{]}^{\frac{1}{2} }\,da+\int_{0}^{s}\mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{2}(a)\big{|}^{2} \big{]}^{\frac{1}{2}}\,da+\int_{s}^{t}\mathbb{E}\big{[}\big{|}\Delta_{\ell}A_ {3}(a)\big{|}^{2}\big{]}^{\frac{1}{2}}\,da\Big{)}\] \[=:S_{1}+S_{2}+S_{3}\]
We now use Lemma 26 to estimate the \(\mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{i}(a)\big{|}^{2}\big{]}\). First we have
\[\mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{1}(a)\big{|}^{2}\big{]} =\textsc{Var}\Big{(}\Delta_{\ell}\Big{(}\big{(}(P_{t-s}-\mathrm{Id })P_{s-a}\xi_{a}\big{)}\odot\xi_{t}\Big{)}\Big{)}\] \[\lesssim(t-s)\frac{2^{2\ell}2^{2\ell\eta}}{s-a}\,e^{-\lambda(s-a) 2^{2\ell}}\big{(}c(s,s)\,c(a,a)+c(s,a)^{2}\big{)}\]
and
\[\mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{2}(a)\big{|}^{2}\big{]} =\textsc{Var}\Big{(}\Delta_{\ell}\Big{(}\big{(}P_{s-a}\xi_{a} \big{)}\odot\big{(}\xi_{t}-\xi_{s}\big{)}\Big{)}\Big{)}\] \[\lesssim\frac{2^{2\ell}2^{2\ell\eta}}{t-a}\,e^{-\lambda(t-a)2^{2 \ell}}\Big{(}c(t,t)c(a,a)+c(t,a)^{2}\big{)}.\]
So, writing \(c_{st}\) for \(c(t,t)+c(s,s)-2c(s,t)\), we get
\[\int_{0}^{s}\mathbb{E}\Big{[}\big{|}\Delta_{\ell}A_{1}(a)\big{|}^ {2}\Big{]}^{1/2}\,da \lesssim(t-s)^{\frac{1}{2}}2^{2\ell}2^{2\ell\eta}\int_{0}^{s}e^{- \lambda(s-a)2^{2\ell-1}}\frac{da}{(s-a)^{1/2}}\] \[\int_{0}^{s}\mathbb{E}\Big{[}\big{|}\Delta_{\ell}A_{2}(a)\big{|}^ {2}\Big{]}^{1/2}\,da \lesssim c_{st}^{\frac{1}{2}}\,2^{\ell}2^{2\ell\eta}\int_{0}^{s}e^{- \lambda(s-a)2^{2\ell-1}}\frac{da}{(s-a)^{1/2}}\] \[\int_{s}^{t}\mathbb{E}\Big{[}\big{|}\Delta_{\ell}A_{3}(a)\big{|}^ {2}\Big{]}^{1/2}\,da \lesssim 2^{\ell}2^{2\ell\eta}\int_{s}^{t}e^{-\lambda(t-a)2^{2\ell-1}} \frac{da}{(t-a)^{1/2}}.\]
We have
\[S_{1} \lesssim(t-s)^{\frac{1}{2}}\sum_{\ell\geq-1}2^{\ell(2\alpha+2\eta-1) }\int_{0}^{s}e^{-\lambda(s-a)2^{2\ell-1}}\frac{da}{(s-a)^{1/2}}\] \[\lesssim(t-s)^{\frac{1}{2}}\int_{0}^{s}\int_{-1}^{+\infty}2^{x(2 \alpha+2\eta-1)}e^{-\lambda(s-a)2^{2x-1}}\frac{dx\,da}{(s-a)^{1/2}}\] \[\lesssim(t-s)^{\frac{1}{2}}\int_{0}^{s}\int_{0}^{+\infty}(s-a)^{- \alpha-\eta}y^{2\alpha+2\eta-2}e^{-\lambda y^{2}/2}\,dy\,da\]
and similarly
\[S_{2} \lesssim c_{st}^{\frac{1}{2}}\sum_{\ell\geq-1}2^{\ell(2\alpha-1) }\int_{0}^{s}e^{-\lambda(s-a)2^{2\ell-1}}\frac{da}{(s-a)^{1/2}}\] \[\lesssim c_{st}^{\frac{1}{2}}\int_{0}^{s}\int_{0}^{+\infty}(s-a)^{- \alpha-\eta}y^{2\alpha+2\eta-2}e^{-\lambda y^{2}/2}\,dy\,da\]
and
\[S_{3} \lesssim\sum_{\ell\geq-1}2^{\ell(2\alpha-1)}\int_{s}^{t}e^{- \lambda(t-a)2^{2\ell-1}}\frac{da}{(t-a)^{1/2}}\] \[\lesssim\int_{s}^{t}\int_{0}^{+\infty}(t-a)^{-\alpha-\eta}y^{2 \alpha+2\eta-2}e^{-\lambda y^{2}/2}\mathrm{d}y\,da.\]
Finally we see that
\[\mathbb{E}\Big{[}\|\big{(}X\odot\xi\big{)}(t)-\big{(}X\odot\xi \big{)}(s)\|_{B_{2p,2p}^{2\alpha-2}}^{2p}\Big{]} \lesssim\Big{(}(t-s)^{1/2}+c_{st}^{1/2}+(t-s)^{1-\alpha-\eta} \Big{)}^{2p}\] \[\lesssim|t-s|^{2pm},\]
with
\[m:=\min\big{\{}1/2,\ \ \delta/2,\ 1-\alpha-\eta\big{\}}.\]
From Kolmogorov continuity criterion and Besov embedding, for every \(\alpha<1\) and \(1\leq p<\infty\) the process \(X\odot\xi\) is almost surely an element of \(C_{T}^{m-1/p}C^{2\alpha-2-1/p}(\mathsf{T}^{2})\).
The mollifier approximation result in the statement of Theorem 6 comes from the same arguments and calculations writing
\[(X\odot\xi)(t)-\Big{(}(X^{\varepsilon}\odot\xi^{\varepsilon})(t) -\mathbb{E}\big{[}(X^{\varepsilon}\odot\xi^{\varepsilon})(t)\big{]} \Big{)}\] \[=\int_{0}^{t}\Big{(}P_{t-a}(\xi_{a}-\xi_{a}^{\varepsilon})\odot \xi_{t}-\mathbb{E}\big{[}P_{t-a}(\xi_{a}-\xi_{a}^{\varepsilon})\odot\xi_{t} \big{]}\Big{)}da\] \[\qquad+\int_{0}^{t}\Big{(}P_{t-a}\xi_{a}^{\varepsilon}\odot(\xi_ {t}-\xi_{t}^{\varepsilon})-\mathbb{E}\big{[}P_{t-a}\xi_{a}^{\varepsilon}\odot( \xi_{t}-\xi_{t}^{\varepsilon})\big{]}\Big{)}da.\]
If \(\varphi\) is the fourier transform of the mollifier, we have
\[\widehat{\xi}^{\varepsilon}(k)=\varphi(k\varepsilon)\,\widehat{\xi}(k),\]
and the same calculations as in the proof of Lemma 26 give
\[\mathrm{Var}\Big{(}\Delta_{\ell}(P_{t-a}(\xi_{a}-\xi_{a}^{ \varepsilon})\odot\xi_{t})(x)\Big{)}\] \[\lesssim\sum_{i,i^{\prime},j,j^{\prime}}\mathbf{1}_{l\lesssim i} \mathbf{1}_{i\sim i^{\prime}\sim j\sim j^{\prime}}\sum_{k_{1},k_{2}}(1-\varphi (k_{1}\varepsilon))\mathbf{1}_{\mathrm{supp}(\rho_{\ell})}(k_{1}+k_{2}) \mathbf{1}_{\mathrm{supp}(\rho_{i})}(k_{1})\mathbf{1}_{\mathrm{supp}(\rho_{j} )}(k_{2})2^{2i\eta}e^{-2t\lambda 2^{2i}}\] \[\lesssim\sum_{i}\mathbf{1}_{\ell\lesssim i}\sum_{k_{1},k_{2}}(1- \varphi(k_{1}\varepsilon))\mathbf{1}_{\mathrm{supp}(\rho_{\ell})}(k_{1}+k_{2}) \mathbf{1}_{\mathrm{supp}(\rho_{i})}(k_{1})\mathbf{1}_{\mathrm{supp}(\rho_{j} )}(k_{2})2^{2i\eta}e^{-2t\lambda 2^{2i}}.\]
For some integer \(N=N(\varepsilon)\), one can decompose the last sum as
\[\sum_{i\leq N}\mathbf{1}_{\ell\lesssim i}\sum_{k_{1},k_{2}}(1-\varphi (k_{1}\varepsilon))\mathbf{1}_{\mathrm{supp}(\rho_{\ell})}(k_{1}+k_{2})\mathbf{1 }_{\mathrm{supp}(\rho_{i})}(k_{1})\mathbf{1}_{\mathrm{supp}(\rho_{j})}(k_{2})2 ^{2i\eta}e^{-2t\lambda 2^{2i}}\] \[+\sum_{i>N}\mathbf{1}_{\ell\lesssim i}\sum_{k_{1},k_{2}}(1- \varphi(k_{1}\varepsilon))\mathbf{1}_{\mathrm{supp}(\rho_{\ell})}(k_{1}+k_{2} )\mathbf{1}_{\mathrm{supp}(\rho_{i})}(k_{1})\mathbf{1}_{\mathrm{supp}(\rho_{j} )}(k_{2})2^{2i\eta}e^{-2t\lambda 2^{2i}}\] \[\lesssim\sup_{|x|\leq N}\big{(}1-\varphi(x\varepsilon)\big{)}\frac {2^{2t}2^{2\ell\eta}}{t-a}\,e^{-\lambda(t-a)2^{2\ell}}+\frac{2^{2N}2^{2N\eta} }{t-a}\,e^{-\lambda(t-a)2^{2N}}\]
So that choosing \(N(\varepsilon)\) such that \(N(\varepsilon)\to\infty\) and \(\varepsilon N(\varepsilon)\to 0\) as \(\varepsilon\) goes to zero, one gets
\[\mathrm{Var}\Big{(}\Delta_{\ell}\Big{(}P_{t-a}(\xi_{a}-\xi_{a}^{\varepsilon} )\odot\xi_{t}(x)\Big{)}\Big{)}\lesssim\psi_{\ell}(\varepsilon)\,\frac{2^{2t(1 +\eta)}}{t-a}\,e^{-\lambda(t-a)2^{2\ell}},\]
where \(0\leq\psi_{\ell}(\varepsilon)\leq 1\) tends to \(0\) as \(\varepsilon>0\) goes to \(0\). Likewise one has
\[\mathrm{Var}\Big{(}\Delta_{\ell}\Big{(}P_{t-a}\xi_{a}^{\varepsilon}\odot(\xi_ {t}-\xi_{t}^{\varepsilon})(x)\Big{)}\Big{)}\lesssim\psi_{\ell}(\varepsilon)\, \frac{2^{2\ell(1+\eta)}}{t-a}\,e^{-\lambda(t-a)2^{2\ell}}.\]
The same calculations as above give for
\[\mathbb{E}\Big{[}\big{\|}(X\odot\xi)(t)-\big{(}(X^{\varepsilon}\odot\xi^{ \varepsilon})(t)-\mathbb{E}\big{[}(X^{\varepsilon}\odot\xi^{\varepsilon})(t) \big{]}\big{)}\big{\|}_{B^{2\alpha-2}_{2p,2p}}^{2p}\Big{]}\]
the bound
\[\sum_{\ell\geq 0}\psi_{\ell}(\varepsilon)\,2^{\ell(2\alpha+2\eta-2)}\int_{0}^ {t}\mathbb{E}\big{[}\big{|}\Delta_{\ell}A_{3}(a)\big{|}^{2}\big{]}^{\frac{1}{ 2}}da\]
The result follows from dominated convergence argument as the series
\[\sum_{\ell\geq 0}2^{\ell(2\alpha+2\eta-2)}\int_{0}^{t}\mathbb{E}\big{[}\big{|} \Delta_{\ell}A_{3}(a)\big{|}^{2}\big{]}^{\frac{1}{2}}da\]
is seen to be convergent.
|
2307.14618 | Comparison geometry for substatic manifolds and a weighted Isoperimetric
Inequality | Substatic Riemannian manifolds with minimal boundary arise naturally in
General Relativity as spatial slices of static spacetimes satisfying the Null
Energy Condition. Moreover, they constitute a vast generalization of
nonnegative Ricci curvature. In this paper we will prove various geometric
results in this class, culminating in a sharp, weighted Isoperimetric
inequality that quantifies the area minimizing property of the boundary. Its
formulation and proof will build on a comparison theory partially stemming from
a newly discovered conformal connection with $\mathrm{CD}(0, 1)$ metrics. | Stefano Borghini, Mattia Fogagnolo | 2023-07-27T04:22:40Z | http://arxiv.org/abs/2307.14618v1 | # Comparison geometry for substatic manifolds and a weighted isoperimetric inequality
###### Abstract.
Substatic Riemannian manifolds with minimal boundary arise naturally in General Relativity as spatial slices of static spacetimes satisfying the Null Energy Condition. Moreover, they constitute a vast generalization of nonnegative Ricci curvature. In this paper we will prove various geometric results in this class, culminating in a sharp, weighted Isoperimetric inequality that quantifies the area minimizing property of the boundary. Its formulation and proof will build on a comparison theory partially stemming from a newly discovered conformal connection with \(\mathrm{CD}(0,1)\) metrics.
MSC (2020): 49Q10, 53C21, 53E10,
Keywords: substatic manifolds, comparison geometry, isoperimetric inequality.
## 1. Introduction
In this paper we are interested in the study of triples \((M,g,f)\), where \((M,g)\) is a Riemannian manifold of dimension \(n\geq 3\) with (possibly empty) compact boundary \(\partial M\) and \(f:M\to\mathbb{R}\) is a smooth function that is positive in the interior of \(M\) and zero on \(\partial M\), satisfying the following inequality
\[f\mathrm{Ric}-\nabla^{2}f+(\Delta f)g\geq 0, \tag{1.1}\]
where \(\mathrm{Ric}\) is the Ricci tensor of the metric \(g\), \(\nabla^{2}\) is the Hessian and \(\Delta=\mathrm{tr}\nabla^{2}\) is the Laplace-Beltrami operator with respect to the Levi-Civita connection \(\nabla\) of \(g\). We will refer to such triples \((M,g,f)\) as _substatic triples_ or simply _substatic manifolds_. We say that a substatic manifold has _horizon boundary_ if \(\partial M\) is either empty or it is a minimal hypersurface and \(|\nabla f|\neq 0\) on \(\partial M\).
Condition (1.1) arises naturally in the study of static spacetimes satisfying the Null Energy Condition, as already observed in [10]. More precisely, a Lorentzian manifold \((L,\mathfrak{g})\) of the form
\[L=\mathbb{R}\times M\,,\qquad\mathfrak{g}\,=\,-f^{2}dt\otimes dt+g,\]
happens to be a solution to the Einstein Field Equation
\[\mathrm{Ric}_{\mathfrak{g}}\,+\,\left(\Lambda-\frac{1}{2}\mathrm{R}_{ \mathfrak{g}}\right)\mathfrak{g}\,=\,T\,,\]
subject to \(T(X,X)\geq 0\) for any vector field \(X\) satisfying \(\mathfrak{g}(X,X)=0\), exactly when \(f\) and \(g\) satisfy (1.1). A minimal boundary represents, in this framework, the event horizon of a black hole. For the reader's sake, we included the computations in Appendix A.1. The class of substatic manifolds obviously includes the very large and thoroughly studied class of manifolds with nonnegative Ricci curvature, where \(f\) is just constant, and consequently the minimal boundary is empty. However, even considering explicit model warped products only, a whole new zoo of examples arises.
As an example, we recall that the following family of triples \((M,g,f)\) is in fact a family of substatic triples:
\[M=I\times\Sigma\,,\qquad g=\frac{dr\otimes dr}{f^{2}}+r^{2}g_{\Sigma},\qquad f =\sqrt{1-\frac{2\Lambda}{n(n-1)}r^{2}-\frac{2m}{r^{n-2}}+\frac{q^{2}}{r^{2n- 4}}}, \tag{1.2}\]
where \((\Sigma,g_{\Sigma})\) is a closed \((n-1)\)-dimensional Riemannian manifold satisfying \(\mathrm{Ric}_{g_{\Sigma}}\geq(n-2)g_{\Sigma}\), \(\Lambda,q\in\mathbb{R}\), \(m\geq 0\) and \(I\subseteq[0,+\infty)\) is the maximal interval such that the quantity in square root in (1.2) is nonnegative for all \(r\in I\). According to the sign of \(\Lambda\), the case \(m=q=0\) corresponds to the space forms. If instead \(m>0\), \(q=0\), one obtains the families of the Schwarzschild,
Schwarzschild-de Sitter and Schwarzschild-Anti de Sitter black holes, again with respect to \(\Lambda\) being vanishing, positive or negative. If \(m>0\) and \(q\neq 0\), one gets the Reissner-Nordstrom versions of these last spaces. From a physical point of view, \(\Lambda\) is the cosmological constant, \(m\) is the mass and \(q\) is the charge of the black hole.
We will always tacitly assume that \((M,g)\) is complete as a metric space. This holds true for the models (1.2), provided the absolute value of the charge \(q\) is not too big. For instance, for \(\Lambda=0\), the solution has a singularity at \(r=0\) when \(|q|>m\).
The main achievement of the present work is the following sharp Isoperimetric Inequality, taking place in a relevant subclass of substatic triples. It is saturated by warped product metrics only, such as the ones in (1.2).
**Theorem A** (Substatic \(f\)-isoperimetric inequality).: _Let \((M,g,f)\) be a substatic triple of dimension \(n\leq 7\), with horizon boundary and one uniform \(f\)-complete end. Assume there exists an exhaustion of nonmimimal outward minimizing hypersurfaces homologous to the boundary. Then, for any bounded domain \(\Omega_{\Sigma}\) with smooth boundary \(\partial\Omega_{\Sigma}=\partial M\sqcup\Sigma\) it holds_
\[|\Sigma|^{\frac{n}{n-1}}-|\partial M|^{\frac{n}{n-1}}\geq n\left[\mathrm{AVR }(M,g,f)|\mathbb{S}^{n-1}|\right]^{\frac{1}{n-1}}|\Omega_{\Sigma}|_{f}. \tag{1.3}\]
_Moreover, in the case \(\mathrm{AVR}(M,g,f)>0\),_
* _if_ \(\partial M\neq\O\)_, the equality holds in (_1.3_) if and only if_ \(\partial M\) _is connected and_ \((M,g)\) _is isometric to_ \[\left([\overline{s},+\infty)\times\partial M,\,\frac{ds\otimes ds}{f(s)^{2}} +\frac{s^{2}}{\overline{s}^{2}}\,g_{\partial M}\right),\] (1.4) _where_ \(g_{\partial M}\) _is the metric induced by_ \(g\) _on_ \(\partial M\) _and_ \(\Sigma\) _is a level set of_ \(s\)_. In particular,_ \(f=f(s)\) _is a function of_ \(s\) _alone._
* _If_ \(\partial M=\O\)_, the equality holds in (_1.3_) if and only if_ \((M,g)\) _is isometric to_ \[\left([0,+\infty)\times\mathbb{S}^{n-1},\,\frac{ds\otimes ds}{f(s)^{2}}+\frac {s^{2}}{f(x)^{2}}\,g_{\mathbb{S}^{n-1}}\right),\] _where_ \(g_{\mathbb{S}^{n-1}}\) _is the round metric on the_ \((n-1)\)_-dimensional sphere_ \(\mathbb{S}^{n-1}\) _and_ \(x\in\Omega_{\Sigma}\)_. In this case,_ \(\Sigma\) _is a level set of_ \(s\) _homothetic to the round sphere. The function_ \(f=f(s)\) _depends on_ \(s\) _alone also in this case._
The asymptotic assumptions entering in the above statement will be better understood in the next Subsection, in connection with the comparison results presented below. Concerning the quantities appearing in (1.3), we have denoted by \(|\Omega_{\Sigma}|_{f}\) the weighted volume \(\int_{\Omega_{\Sigma}}f\,d\mu\), whereas \(\mathrm{AVR}(M,g,f)\) is a suitable substatic generalization of the classical Asymptotic Volume Ratio for nonnegative Ricci curvature, see (1.9). When it is nonzero, inequality (1.3) in particular yields a quantitative information about the minimal boundary being in fact area minimizing, in terms of a suitable weighted volume. Observe that a priori the boundary is not even assumed to be area minimizing at all. From a more analytical point of view, formula (1.3) constitutes a nonstandard weighted isoperimetric inequality, as the perimeter is actually unweighted. The geometric intuition behind it will be given by the end of the following Subsection. One can interpret the very thoroughly recently studied Isoperimetric Inequality in nonnegative Ricci curvature [1, 1, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] as a special case of (1.3), obtained when the boundary is empty and \(f\) is constant. The rigidity statement accordingly generalizes the one of the nonnegative Ricci curvature case.
We point out that Theorem A is particularly meaningful and perfectly sharp already in the above recalled Reissner-Nordstrom and Schwarzschild metrics, consisting in (1.2) for \(\Lambda=0\), \(m>0\), \(|q|<m\), and more generally in _asymptotically flat_ substatic manifolds. With asymptotically flat we mean that the manifold converges to the Euclidean space (in a very weak sense) and that \(f\) goes to \(1\) at infinity, see Definition 4.8. From the definition, it follows that an asymptotically flat end is automatically uniform and \(f\)-complete, it possesses a natural exhaustion of coordinate spheres and it is possible to compute \(\mathrm{AVR}(M,g,f)=1\). Thus, the above statement simplifies significantly.
**Corollary 1.1**.: _Let \((M,g,f)\) be a substatic triple of dimension \(n\leq 7\), with horizon boundary and one asymptotically flat end. Then, for any bounded domain \(\Omega_{\Sigma}\) with smooth boundary \(\partial\Omega_{\Sigma}=\partial M\sqcup\Sigma\) it holds_
\[|\Sigma|^{\frac{n}{n-1}}-|\partial M|^{\frac{n}{n-1}}\geq n|\mathbb{S}^{n-1}|^ {\frac{1}{n-1}}|\Omega_{\Sigma}|_{f}. \tag{1.5}\]
_The same rigidity statement as in Theorem A applies in case of equality._
To our knowledge, even in the model cases, inequality (1.5) was never observed before, and does not seem to be inferable from the characterization of classical isoperimetric sets resulting from the work of Brendle [10], or the earlier [11, 12] about the Schwarzschild case.
### Substatic comparison geometry
Our analysis begins with the aim of working out a satisfactory substatic comparison theory, inspired by the classical nonnegative Ricci case. While, in such case, the model to be compared with is \(\mathbb{R}^{n}\), or more generally a cone, in the substatic generalization the model should be constituted by the large family of substatic warped products in fact appearing in the rigidity statement of Theorem A.
To pursue our goal, an initial step consists in comparing the mean curvature of geodesic spheres with that of the models. Interestingly, in order to obtain a manageable Riccati equation ruling such comparison, we are led to work in the metric \(\tilde{g}=g/f^{2}\). This is no accident: the metric \(\tilde{g}\) happens to fulfil the \(\mathrm{CD}(0,1)\) condition, consisting in a metric subject to
\[\mathrm{Ric}_{\tilde{g}}+\widetilde{\nabla}^{2}\psi+\frac{1}{n-1}d\psi\otimes d \psi\geq 0 \tag{1.6}\]
for some smooth function \(\psi\). To our knowledge, such explicit conformal relation was not pointed out in literature yet. However, a remarkable link is described by Li-Xia [14]: they come up with a family of connections with Ricci curvatures interpolating between the tensor in the left-hand side of (1.6) and the tensor in the left-hand side of (1.1). We discuss the \(\mathrm{CD}(0,1)\) conformal change and Li-Xia connections in more details in Appendix A.3. We also point out that the conformal metric \(\tilde{g}\) has a natural physical interpretation in the context of static spacetimes, where it is referred to as _optical metric_. We give some more details on this point at the end of Appendix A.1.
We will denote by \(\rho\) the \(\tilde{g}\)-distance from a point \(p\in M\), or the signed \(\tilde{g}\)-distance from a smooth strictly mean-convex hypersurface \(\Sigma\) homologous to the boundary. We give some more details on this second case, which is slightly less classical but will be crucial for the Willmore-type inequality (1.10) discussed below and in turn for the proof of Theorem A. With homologous to the boundary we mean that there exists a compact domain \(\Omega\) with boundary \(\partial\Omega=\partial M\sqcup\Sigma\), and by strictly mean-convex we understand that \(\Sigma\) has pointwise positive \(g\)-mean curvature \(\mathrm{H}\) with respect to the normal pointing towards infinity. We always choose the signed distance \(\rho\) to be positive in the noncompact region \(M\setminus\Omega\), that is,
\[\rho(x)\,=\,\begin{cases}\mathrm{d}_{\tilde{g}}(x,\Sigma)&\text{if }x\not\in \Omega,\\ -\mathrm{d}_{\tilde{g}}(x,\Sigma)&\text{if }x\in\Omega.\end{cases}\]
Both in the case of the distance from a point and in the case of the signed distance from a hypersurface, through an analysis of the evolution of the mean curvature of the level sets of \(\rho\), we come up (see Theorem 2.5 and Proposition 2.7) with the following inequality
\[0\,<\,\frac{\mathrm{H}}{f}\,=\,\Delta\rho+\frac{1}{f}\langle\nabla f\,|\, \nabla\rho\rangle\,\leq\,\frac{n-1}{\eta}\,, \tag{1.7}\]
where \(\mathrm{H}\) denotes the \(g\)-mean curvature of a level set of the \(\tilde{g}\)-distance \(\rho\), and \(\eta\) denotes an useful auxiliary function that will be called _reparametrized distance_. It is defined by the first order PDE (2.7), when the distance from a point is concerned, and in (2.13) when \(\rho\) is the \(\tilde{g}\)-distance from a hypersurface. The function \(\eta\) represents the distance along the radial geodesics computed with respect to the metric \(\overline{g}=f^{2}g=f^{4}\tilde{g}\). This third conformal metric will not play a prominent role in the paper, but we will take some advantage from this along the proof of Theorem A. More details on this point and further comments on \(\eta\) (in particular its relation with the weighted connection introduced by Li-Xia) may be found in Remark 2.4.
We remark that (1.7) could be derived also from [20, Theorem 3.2], rewriting it in the substatic setting thanks to the conformal relation with \(\operatorname{CD}(0,1)\)-metrics. Nevertheless, we have preferred to include a full proof of it, in order to emphasize the role of \(\eta\) and to show the substatic point of view.
A main consequence we draw out of the Laplacian Comparison Theorem above is a Bishop-Gromov Monotonicity Theorem. We state here a version substantially gathering Theorem 2.9 and Theorem 2.11 below.
**Theorem B** (Substatic Bishop-Gromov).: _Let \((M,g,f)\) be a substatic triple. Suppose that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}=g/f^{2}\). Let \(\rho\) be the \(\tilde{g}\)-distance function from a point or the signed \(\tilde{g}\)-distance function from a strictly mean-convex hypersurface \(\Sigma\) homologous to \(\partial M\) and disjoint from it. Let \(\eta\) be the corresponding reparametrized distance, defined by (2.7) or by (2.13), and let \(\operatorname{Cut}^{\tilde{g}}\) be the cut locus of the point/hypersurface. Then, for any \(k>0\), the functions_
\[A(t)\,=\,\frac{1}{|\mathbb{S}^{n-1}|}\int_{\{\rho=t\}\setminus\operatorname{ Cut}^{\tilde{g}}}\frac{1}{\eta^{n-1}}d\sigma\,,\qquad V(t)\,=\,\frac{1}{| \mathbb{B}^{n}|t^{k}}\int_{\{0\leq\rho\leq t\}}\frac{\rho^{k-1}}{f\eta^{n-1}} d\mu\,,\]
_are well defined and monotonically nonincreasing. Furthermore:_
* _if_ \(A(t_{1})=A(t_{2})\) _for_ \(0<t_{1}<t_{2}\)_, then the set_ \(\{t_{1}\leq\rho\leq t_{2}\}\) _is isometric to_ \([t_{1},t_{2}]\times\Sigma\)_, for some_ \((n-1)\)_-dimensional manifold_ \((\Sigma,g_{0})\)_, with metric_ \[g=f^{2}\,d\rho\otimes d\rho+\eta^{2}g_{0}\,;\]
* _if_ \(V(t_{1})=V(t_{2})\) _for_ \(0<t_{1}<t_{2}\)_, then the set_ \(\{0\leq\rho\leq t_{2}\}\) _is isometric to_ \([0,t_{2}]\times\Sigma\)_, for some_ \((n-1)\)_-dimensional manifold_ \((\Sigma,g_{0})\)_, with metric_ \[g=f^{2}\,d\rho\otimes d\rho+\eta^{2}g_{0}\,;\] _in the case where_ \(\rho\) _is the distance from a point_ \(x\)_, then_ \(f\) _and_ \(\eta\) _are functions of_ \(\rho\) _only and_ \(g_{0}=f(x)^{-2}g_{\mathbb{S}^{n-1}}\)_._
A first \(\operatorname{CD}(0,1)\)-version of Bishop-Gromov monotonicity has been obtained by Wylie-Yeroshkin [20]. Various further generalizations then arose; dropping any attempt to be complete, we mention [10, 11, 12, 13], and leave the interested reader to the references therein and to Ohta's monograph [15]. However, even appealing to the conformal change \(g/f^{2}\), formally relating the \(\operatorname{CD}(0,1)\) to the substatic condition, it does not seem straightforward to deduce a Bishop-Gromov statement in the form above, that will be ruling the Willmore-type inequality (1.10) below and in turn (1.3). To the authors' knowledge, the rigidity statements contained in Theorem B have not been considered in literature yet.
In the case where \(f\) is constant equal to \(1\), then both \(\rho\) and \(\eta\) coincide with the \(g\)-distance, hence we recover the standard Bishop-Gromov monotonicity for nonnegative Ricci tensor. Remarkably, in contrast with the standard Bishop-Gromov, in our setting we do not have to require the boundary \(\partial M\) to be empty. This is because, since we have \(f=0\) at \(\partial M\), the boundary becomes an end with respect to the conformal metric \(\tilde{g}=g/f^{2}\), see Lemma 3.4. As a consequence, we do not have issues when the geodesic spheres \(\{\rho=t\}\) intersect the boundary, simply because the boundary is at infinite \(\tilde{g}\)-distance so the intersection is always empty. On the other hand, in general \(\tilde{g}\)-geodesics may have finite length when going towards the ends of \(M\). To avoid this, we added the assumption of \(\tilde{g}\)-geodesic completeness in the statement. The main type of ends considered in this paper (the \(f\)-complete ends introduced just below) will be \(\tilde{g}\)-geodesically complete by definition.
Proceeding in analogy with the nonnegative Ricci curvature case, we are interested in defining a suitable Asymptotic Volume Ratio, motivated by the Bishop-Gromov monotonicity above. In order to get a satisfactory notion, we first aim at understanding basic properties of the ends of substatic manifolds, at least under asymptotic assumptions on the potential of \(f\). A fundamental tool for this kind of study in the classical theory is Cheeger-Gromoll Splitting Theorem [12].
Wylie [20] in fact exploited the Laplacian Comparison Theorem to prove a splitting theorem in the \(\operatorname{CD}(0,1)\) setting, that will in turn provide surprising pieces of information in the conformal
substatic setting. For our main geometric goals, the kind of end that we will be mostly interested in is that of \(f\)-complete ends. We say that an end is _\(f\)-complete_ if for any \(g\)-unit speed curve \(\gamma:[0,+\infty)\to M\) going to infinity along the end it holds
\[\lim_{t\to+\infty}\rho(\gamma(t))\,=\,+\infty\,,\qquad\int_{0}^{+\infty}f( \gamma(t))dt\,=\,+\infty\,. \tag{1.8}\]
The first condition is essentially asking that the end remains an end even with respect to the conformal metric \(\tilde{g}=g/f^{2}\). In other words, the first condition is equivalent to the requirement that the end is geodesically complete with respect to the metric \(\tilde{g}\). The second condition instead is a way to ensure that the reparametrized distance \(\eta\) diverges to \(+\infty\) and is connected to an analogous definition given in [21] in the \(\mathrm{CD}(0,1)\) framework. Further discussion and comments on this definition can be found after Definition 3.1. It is easy to check that an end is \(f\)-complete whenever there exists a constant \(c\) such that \(cr^{-k}<f<cr^{k}\) for \(0<k<1\) at sufficiently large distances, where \(r\) is the \(g\)-distance from a point (see Proposition 3.2).
We provide here the full statement of the substatic Splitting Theorem.
**Theorem C** (Substatic Splitting Theorem).: _Let \((M,g,f)\) be a substatic triple with ends that are all \(f\)-complete. If there is more than one end, then \((M,g)\) is isometric to_
\[(\mathbb{R}\times\Sigma,\,f^{2}\,ds\otimes ds+g_{\Sigma})\,,\]
_for some \((n-1)\)-dimensional Riemannian manifold \((\Sigma,g_{\Sigma})\). In particular, if \(\partial M\) is nonempty, then \((M,g,f)\) has only one end._
Finally, we point out that similar arguments can be performed also for a different kind of ends, known as conformally compact ends, see Theorems 3.7 and 3.8. In particular we will show in Theorem 3.7 that conformally compact substatic manifolds necessarily have connected conformal infinity, generalizing a known result in the literature of _static vacuum solutions_[13]. Static vacuum solutions are in fact substatic triples such that (1.1) is satisfied with equality on the whole space.
Focusing now for simplicity on \(f\)-complete substatic triples that have only one end, one would then be led to define the _Asymptotic Volume Ratio_ as
\[\mathrm{AVR}(M,g,f)\,=\,\frac{1}{[\mathbb{S}^{n-1}]}\lim_{t\to+\infty}\int_{ \{\rho=t\}}\frac{1}{\eta^{n-1}}d\sigma\,=\,\frac{1}{|\mathbb{B}^{n}|}\lim_{t \to+\infty}\frac{1}{t^{n}}\int_{\{\rho\leq t\}}\frac{\rho^{n-1}}{f\eta^{n-1}} d\mu\,. \tag{1.9}\]
with \(\rho\) denoting the \(g/f^{2}\)-distance from a mean-convex hypersurface homologous to \(\partial M\) or from a point, if the boundary is empty. The fact that both limits above give the same result is easy to establish. However, we have to make sure that such quantity is independent of the initial hypersurface. We accomplish this task under the assumption of _uniformity of the end_, meaning that the quotient \(\eta_{x}/\eta_{y}\) of the reparametrized distances with respect to two different points \(x,y\) converges uniformly to \(1\) at infinity. Again, such condition is immediately checked to be fulfilled in the asymptotically flat regime. More generally, it can in fact be inferred under a natural decay condition on the gradient of \(f\) only, see Proposition 4.3.
Exploiting the global features of our Bishop-Gromov Theorem we obtain the following Willmore-like inequality for mean-convex hypersurfaces homologous to \(\partial M\).
**Theorem D** (Substatic Willmore inequality).: _Let \((M,g,f)\) be a substatic triple with one uniform \(f\)-complete end. Let \(\Sigma\) be a hypersurface homologous to the boundary. Suppose that the mean curvature \(\mathrm{H}\) of \(\Sigma\) with respect to the normal pointing towards infinity satisfies \(\mathrm{H}>0\) pointwise. Then_
\[\int_{\Sigma}\left[\frac{\mathrm{H}}{(n-1)f}\right]^{n-1}d\sigma\,\geq\, \mathrm{AVR}(M,g,f)\,|\mathbb{S}^{n-1}|\,. \tag{1.10}\]
_Furthermore, if the equality holds, then the noncompact connected component \(U\) of \(M\setminus\Sigma\) is isometric to \([0,+\infty)\times\Sigma\) with metric_
\[g\,=\,f^{2}d\rho\otimes d\rho+\eta^{2}g_{0}\,,\]
_where \(g_{0}\) is a metric on \(\Sigma\)._
Notice that if there are multiple ends, the above inequality is trivial, since in this case the Asymptotic Volume Ratio of each end is zero as pointed out in Lemma 4.7.
In the classical nonnegative Ricci curvature setting, the above result was obtained in [1]. However, the more elementary proof we propose displays more resemblances with the alternative argument of Wang [16].
The validity of the above Willmore-type inequality very naturally suggests the isoperimetric inequality (1.3). Indeed, assume that smooth area minimizers exist among hypersurfaces \(\Sigma_{V}\) enclosing a given weighted volume \(\int_{\Omega}fd\mu=V\) with \(\partial M\), for any given value \(V\). We will call such \(\Sigma_{V}\)\(f\)-isoperimetric. Then, through standard variation formulas, there must exist a Lagrange multiplier \(\lambda\in\mathbb{R}\) such that
\[\int_{\Sigma}(\mathrm{H}-\lambda f)\varphi d\sigma=0\]
for any \(\varphi\in\mathscr{C}_{c}^{\infty}(\Sigma)\), implying that the mean curvature of \(\Sigma\) satisfies \(\mathrm{H}/f=\lambda\). Letting \(I_{f}(V)=|\Sigma_{V}|\), one has that \((I_{f})^{\prime}(V)\) is proportional to \(\lambda\). If this multiplier happens to positive, it is sharply estimated in terms of \(I_{f}(V)\) by (1.10). The resulting differential inequality leads to
\[|\Sigma_{V}|^{\frac{n}{n-1}}-|\Sigma_{V_{0}}|^{\frac{n}{n-1}}\geq n(\mathrm{ AVR}(M,g,f)|\mathbb{S}^{n-1}|)^{\frac{1}{n-1}}(V-V_{0}) \tag{1.11}\]
for any \(V_{0}<V\). Now, if \(\partial M\) happens to be, in addition to minimal, also area-minimizing, (1.11) directly implies (1.3) for \(\Sigma_{V}\) in the limit as \(V_{0}\to 0^{+}\). But \(\Sigma_{V}\) being the best competitor, it holds for any \(\Sigma\) as in the claimed statement.
The assumptions of \(f\)-completeness and uniformity at infinity are obviously added in order to count on the validity of Theorem D. The additional requirement of existence of a nonminimal outward minimizing exhaustion (see Section 5) is ultimately added in order to overcome the problem of the possible nonexistence of \(f\)-isoperimetric \(\Sigma_{V}\)'s. We follow the general strategy devised by Kleiner [10], and reinterpreted in the nonnegative Ricci curvature setting in [13], that consists in considering \(f\)-isoperimetric sets constrained in mean-convex boxes, in our case given by the exhaustion. However, some new geometric difficulties arise, mostly given by the new portion of boundary \(\partial M\). They will be overcome by exploiting the fact that such boundary is in turn _a priori_ outermost area-minimizing, a new piece of information that we obtain through an argument involving the Mean Curvature Flow of the outward minimizing exhaustion (see Proposition 5.1), and by discovering that the constrained \(f\)-isoperimetric sets crucially never touch \(\partial M\), see Theorem 5.2. We will in the end have all the tools at hand to run the argument sketched above, under the usual dimensional threshold ensuring the constrained \(f\)-isoperimetric sets to be regular enough. The strong rigidity statement contained in Theorem A will stem from the fact that in case of equality all of the \(\Sigma_{V}\) must satisfy the equality in (1.10). The rigidity statement of Theorem D will be thus complemented with the additional information given by \(\mathrm{H}=\lambda f\), forcing the metric to split as (1.4).
### Further directions
The results presented in this paper raise a number of natural questions, especially out of our main geometric inequalities, Theorem A and Theorem D. The Willmore-type inequality in nonnegative Ricci curvature [1, Theorem 1.1] has been first obtained with a completely different technique, involving the evolution of the initial hypersurface along the level sets of a harmonic potential function. Understanding a version of such a route in the substatic context may have various interests. First of all, it may allow to remove the mean-convexity assumption on \(\Sigma\) we have in Theorem D in favour of the absolute value of the mean curvature in (1.10). Secondly, and more interestingly, it would suggest the viability of a suitable version of the analysis through \(p\)-harmonic functions performed in nonnegative curvature in [1], likely leading to a new substatic Minkowski inequality, potentially stronger than our Willmore-type. Moreover, studying the behaviour of such substatic \(p\)-harmonic functions may have implications in the existence of the weak Inverse Mean Curvature Flow [14] in the substatic regime, furnishing a vast extension of the important existence results in nonnegative Ricci curvature [15].
Recalling the outward minimizing properties of the evolving hypersurfaces [11, Minimizing Hull Property 1.4], the existence of the weak IMCF would imply the a priori existence of the outward minimizing exhaustion requested in Theorem A. In the special case of asymptotically flat static vacuum solutions, the weak IMCF has already been introduced and employed to prove Minkowski-type inequalities in [10, 11, 12]. Such inequalities are lower bound on the integral of \(f\)H, and, as such, do not seem related to (1.10).
It would also be rather interesting to explore other approaches for the proof of the \(f\)-Isoperimetric Inequality (1.3) as well, possibly allowing to remove the dimensional threshold. Antonelli-Pasqualeto-Pozzetta-Semola [1, Theorem 1.1] provided a natural and very strong proof in the nonnegative Ricci curvature case taking advantage of a generalized compactness result [13, 14, 15] for isoperimetric minimizing sequences in the nonsmooth RCD setting. This immediately invites to study the nonsmooth counterpart of the substatic condition. A possible key for this may lie in the recent optimal transport equivalent definition of \(\mathrm{CD}(0,1)\) given in [16].
Another, completely different approach one may undertake consists in Brendle's [1], building on the ABP method applied to a torsion problem with Neumann condition. A substatic version of such approach promises to deal with the PDE considered in [1, 15] in relation with the Heintze-Karcher inequality. We also point out that both these alternative approaches should have consequences in going beyond the dimensional threshold we imposed.
From the comparison geometry point of view, the validity of the Splitting Theorem and of the Bishop-Gromov monotonicity strongly suggests that other classical results, such as the Cheng eigenvalue estimate and Cheng-Yau gradient estimate, should have analogues in the substatic setting. A promising advance in this direction has been obtained in the \(\mathrm{CD}(0,1)\) setting [10].
It may also be interesting to study compact substatic triples. Important models for this class of manifolds are given by static solutions with positive cosmological constant, most notably the Schwarzschild-de Sitter and Reissner-Nordstrom-de Sitter spacetimes, corresponding to (1.2) with \(\Lambda\) positive and \((\Sigma,g_{\Sigma})\) a round sphere. Another natural direction is to investigate what can be said for the more general problem of studying triples \((M,g,f)\) satisfying
\[f\mathrm{Ric}-\nabla^{2}f+(\Delta f)g\geq-\mu g\,,\quad\mu\in\mathbb{R}\,. \tag{1.12}\]
The case \(\mu=0\) corresponds to the substatic condition. The case \(\mu\neq 0\) is also of interest: triples that saturate (1.12) for \(\mu\neq 0\) are called \(V\)-static and are connected with the critical point equation and the Besse conjecture, see [14, 1] and references therein for more details on these topics. We mention that inequality (1.12) has been considered in [10], where an almost-Schur inequality has been proved and exploited to generalize results in [1, 1].
### Structure of the paper
In Section 2 we compute the evolution of the mean curvature of geodesic spheres, leading to the aforementioned Laplacian Comparison Theorem, formula (1.7) (see Theorem 2.5). Building on it, we prove Theorem B, first for the functional \(A\) (Theorem 2.9) and then for the functional \(V\) (Theorem 2.11). Section 3 is dedicated to the proof of the Splitting Theorem. We first analyze the most important case of \(f\)-complete ends and prove Theorem C (Subsection 3.2), then we discuss analogous results for conformally compact ends as well, see Theorems 3.7 and 3.8. In Section 4 we introduce the notions of uniform ends (Definition 4.1) and Asymptotic Volume Ratio (Definition 4.4) and prove the Willmore Inequality (see Theorem 4.10). Finally, in Section 5 we prove Theorem A. We include an Appendix encompassing the physical motivation for the substatic condition, the conformal relation with the \(\mathrm{CD}(0,1)\) curvature-dimension condition and some additional comments.
### Acknowledgements
The work was initiated during the authors' participation at the conference_Special Riemannian Metrics and Curvature Functionals_ held at _Centro De Giorgi_ in Pisa in 2022. A substantial part of the work has been carried out during the authors' attendance to the _Thematic Program on Nonsmooth Riemannian and Lorentzian Geometry_ that took place at the
Fields Institute in Toronto. They warmly thank the staff, the organizers and the colleagues for the wonderful atmosphere and the excellent working conditions set up there.
During the preparation of the work, M. F. was supported by the European Union - NextGenerationEU and by the University of Padova under the 2021 STARS Grants@Unipd programme "QuASAR".
The authors are members of Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA), which is part of the Istituto Nazionale di Alta Matematica (INdAM), and are partially funded by the GNAMPA project "Problemi al bordo e applicazioni geometriche".
The authors are grateful to Lucas Ambrozio, Gioacchino Antonelli, Luca Benatti, Camillo Brena, Philippe Castillion, Nicola Gigli, Marc Herzlich, Lorenzo Mazzieri, Marco Pozzetta and Eric Woolgar for their interest in their work and for various useful conversations.
## 2. Riccati comparison and Bishop-Gromov Theorem
### Evolution of the mean curvature
Let \((M,g,f)\) be a substatic solution. Since \(f\) does not vanish inside \(M\) by definition, the metric \(\tilde{g}=g/f^{2}\) is well defined in \(M\setminus\partial M\). Let \(\rho\) be the distance function from a point \(p\in M\setminus\partial M\) with respect to the metric \(\tilde{g}\) and consider Riemannian polar coordinates \((\rho,\theta^{1},\dots,\theta^{n-1})\) in a neighborhood \(U\) of \(p\). In this subsection we focus our computation in \(U\) and we assume that \(U\) does not contain points in the cut locus of \(p\). This guarantees that \(\rho\) is smooth in \(U\), that \(|d\rho|_{\tilde{g}}=1\) at all points of \(U\) and that the metric \(\tilde{g}\) has the following form:
\[\tilde{g}\,=\,d\rho\otimes d\rho+\tilde{g}_{ij}(\rho,\theta^{1},\dots,\theta^ {n-1})d\theta^{i}\otimes d\theta^{j}\,. \tag{2.1}\]
We denote by \(\widetilde{\nabla}\) the Levi-Civita connection with respect to \(\tilde{g}\). It is well known that the Hessian of a distance function satisfies the inequality
\[\big{|}\widetilde{\nabla}^{2}\rho\big{|}_{\tilde{g}}^{2}\geq\frac{(\Delta_{ \tilde{g}}\rho)^{2}}{n-1}\,. \tag{2.2}\]
We now write down this inequality in terms of the original metric. To this end, we first compute
\[\begin{split}|\nabla\rho|^{2}&\,=\,\frac{1}{f^{2}} \,,\\ \widetilde{\nabla}^{2}\rho&\,=\,\nabla^{2}\rho\,+\, \frac{1}{f}\,(d\rho\otimes df+df\otimes d\rho-\langle\nabla\rho\,|\,\nabla f \rangle g)\,\\ \Delta_{\tilde{g}}\rho&\,=\,f^{2}\Delta\rho-(n-2)f \langle\nabla\rho\,|\,\nabla f\rangle\,.\end{split} \tag{2.3}\]
Using these identities, with some computations we can rewrite (2.2) in terms of the original metric as
\[|\nabla^{2}\rho|^{2}\,\geq\,\frac{(\Delta\rho)^{2}}{n-1}-\frac{n-2}{n-1} \frac{1}{f^{2}}\langle\nabla\rho\,|\,\nabla f\rangle^{2}+\frac{2}{n-1}\frac{1 }{f}\Delta\rho\langle\nabla\rho\,|\,\nabla f\rangle+\frac{2}{f^{4}}|\nabla f |^{2}\,. \tag{2.4}\]
From this formula onwards we just focus on the original metric \(g\). From (2.1), with respect to the coordinates \((\rho,\theta^{1},\dots,\theta^{n-1})\), we have
\[g\,=\,f^{2}\,d\rho\otimes d\rho+g_{ij}(\rho,\theta^{1},\dots,\theta^{n-1})d \theta^{i}\otimes d\theta^{j}\,.\]
We are interested in the evolution of the mean curvature H of the level sets of \(\rho\) with respect to the metric \(g\). We have
\[\text{H}\,=\,\frac{\Delta\rho}{|\nabla\rho|}\,-\,\frac{\nabla^{2}\rho(\nabla \rho,\nabla\rho)}{|\nabla\rho|^{3}}\,=\,f\Delta\rho\,-\,\frac{1}{2}f^{3} \langle\nabla f\,|\,\nabla\rho\rangle\,=\,f\Delta\rho\,+\,\langle\nabla f\,| \,\nabla\rho\rangle\,.\]
On the other hand, using the fact that \(|\nabla\rho|=1/f\) and the Bochner formula we compute
\[\frac{6}{f^{4}}|\nabla f|^{2}\,-\,\frac{2}{f^{3}}\Delta f\,=\,\Delta|\nabla\rho |^{2}\,=\,2\,|\nabla^{2}\rho|^{2}\,+\,2\,\text{Ric}(\nabla\rho,\nabla\rho)\,+\, 2\,\langle\nabla\Delta\rho\,|\,\nabla\rho\rangle\,.\]
Combining this with (1.1), some computations lead to
\[\langle\nabla\Delta\rho\,|\,\nabla\rho\rangle\,\leq\,\Delta\rho\langle\nabla \rho\,|\,\nabla f\rangle-\frac{1}{f}\nabla^{2}f(\nabla\rho,\nabla\rho)\,-\,| \nabla^{2}\rho|^{2}\]
We can use this information to find the evolution of \(\mathrm{H}\):
\[\left\langle\nabla\mathrm{H}\,|\,\nabla\rho\right\rangle \leq\,-f|\nabla^{2}\rho|^{2}\,+\,\frac{2}{f^{3}}|\nabla f|^{2}\,+\, \Delta\rho\langle\nabla\rho\,|\,\nabla f\rangle\] \[\leq\,-f\frac{(\Delta\rho)^{2}}{n-1}\,+\,\frac{n-3}{n-1}\Delta \rho\langle\nabla\rho\,|\,\nabla f\rangle\,+\,\frac{n-2}{n-1}\frac{1}{f}\left( \left\langle\nabla\rho\,|\,\nabla f\right\rangle\right)^{2}\] \[=\,-\frac{1}{n-1}\frac{1}{f}\mathrm{H}^{2}\,+\,\frac{1}{f} \mathrm{H}\langle\nabla\rho\,|\,\nabla f\rangle\,,\]
where in the second inequality we have used estimate (2.4). This formula can be rewritten as
\[\left\langle\nabla\left(\frac{\mathrm{H}}{f}\right)\,\left|\,\nabla\rho\right \rangle\,\leq\,-\frac{1}{n-1}\left(\frac{\mathrm{H}}{f}\right)^{2}\,.\]
In other words, we have found the following formula for the evolution of the mean curvature of the level sets of \(\rho\).
**Lemma 2.1**.: _In the notations above, at any point of \(U\) the evolution of the mean curvature \(\mathrm{H}\) along \(\rho\) satisfies_
\[\frac{\partial}{\partial\rho}\left(\frac{\mathrm{H}}{f}\right)\,\leq\,-\frac{ 1}{n-1}\,\mathrm{H}^{2}\,. \tag{2.5}\]
### Bounds on the mean curvature and Laplacian comparison
Let \(p\in M\setminus\partial M\) and \(\rho\) be the distance function from \(p\) with respect to the metric \(\tilde{g}=g/f^{2}\). We assume that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}\). We denote by \(\mathrm{Cut}^{\tilde{g}}_{p}(M)\) the cut locus of \(p\), again with respect to \(\tilde{g}\). The function \(\rho\) is smooth in \(U=M\setminus(\mathrm{Cut}^{\tilde{g}}_{p}(M)\cup\{p\})\), where \(\mathrm{Cut}^{\tilde{g}}_{p}(M)\) is the cut locus of \(p\) with respect to the metric \(\tilde{g}\).
For every \(\theta\in\mathbb{S}^{n-1}\subset T_{p}M\), we denote by \(\sigma_{\theta}\) the geodesic starting from \(p\) in the direction \(\theta\) and by \(\tau(\theta)\in(0,+\infty]\) the smallest positive value such that \(\sigma_{\theta}(\tau(\theta))\in\mathrm{Cut}^{\tilde{g}}_{p}(M)\). We recall from [1, Proposition 2.9, Chapter 13] that \(\tau:\mathbb{S}^{n-1}\to(0,+\infty]\) is a continuous function. Notice that there is a diffeomorphism between \(U\) and the set
\[\{(\rho,\theta)\in(0,+\infty)\times\mathbb{S}^{n-1}\,:\,\rho<\tau(\theta)\}\,,\]
hence we can use \(\rho,\theta\) as coordinates in \(U\).
We can now exploit Lemma 2.1 to find a bound for \(\mathrm{H}\) in \(U\). To this end, given a positive function \(\eta\in\mathscr{C}^{\infty}\) we use (2.5) to compute
\[\frac{\partial}{\partial\rho}\left(\frac{f}{\mathrm{H}}-\frac{\eta}{n-1}\right) \,=\,-\frac{f^{2}}{\mathrm{H}^{2}}\frac{\partial}{\partial\rho}\left(\frac{ \mathrm{H}}{f}\right)-\frac{1}{n-1}\frac{\partial\eta}{\partial\rho}\,\geq\, \frac{1}{n-1}\left(f^{2}-\frac{\partial\eta}{\partial\rho}\right)\,. \tag{2.6}\]
We then choose \(\eta\) so that the the right hand side vanishes pointwise. Since \(f\) is smooth, the equation
\[\frac{\partial\eta}{\partial\rho}\,=\,f^{2}\]
can be solved and yields a unique solution once we fix its value on a level set of \(\rho\).
**Proposition 2.2**.: _There exists a unique function \(\eta\in\mathscr{C}^{\infty}(U)\) satisfying_
\[\begin{cases}\frac{\partial\eta}{\partial\rho}\,=\,f^{2}\,,\\ \lim_{\rho\to 0^{+}}\eta\,=\,0\,.\end{cases} \tag{2.7}\]
**Remark 2.3**.: _It should be remarked that \(\eta\) is not even necessarily continuous outside \(U\). In fact, if there are two minimizing geodesics from \(p\) to \(q\in\mathrm{Cut}^{\tilde{g}}_{p}(M)\), the function \(\eta\) may behave differently on the two geodesics, hence the limit of \(\eta\) as we approach \(q\) along the two different geodesics would give different results._
Proof.: For every \(\varepsilon>0\), consider the function \(\eta_{\varepsilon}\) defined by
\[\begin{cases}\frac{\partial\eta_{\varepsilon}}{\partial\rho}\,=\,f^{2}&\text{ in } \{\rho>\varepsilon\}\cap U\,,\\ \eta_{\varepsilon}\,=\,0&\text{ on }\{\rho=\varepsilon\}\,.\end{cases} \tag{2.8}\]
Since \(f\) is smooth, it is well known (see for instance [14, Section 3.2.4]) that (2.8) can be solved and yields a unique \(\mathscr{C}^{2}\) solution. Furthermore, by differentiating the first equation we find that the first derivatives \(\partial_{\alpha}\eta\) also solve a first order PDE, namely \(\partial_{\rho}\partial_{\alpha}\eta=\partial_{\alpha}f^{2}\). Since \(\partial_{\alpha}f^{2}\) is smooth, it follows that the derivatives \(\partial_{\alpha}\eta\) are also \(\mathscr{C}^{2}\). Proceeding this way, we deduce that \(\eta_{\varepsilon}\) is smooth.
It is now sufficient to pass to the limit as \(\varepsilon\to 0\) using Ascoli-Arzela. Since the functions \(\eta_{\varepsilon}\) (and their derivatives as well) are uniformly continuous and uniformly bounded on any compact domain inside \(U\), it follows that \(\eta_{\varepsilon}\) converge to a smooth function \(\eta\), defined on the whole \(U\).
Concerning uniqueness, if there were two different solutions \(\eta_{1},\eta_{2}\) of (2.7), then the difference \(\eta_{1}-\eta_{2}\) would have derivative equal to zero along the direction \(\partial/\partial\rho\). Since the limit as \(\rho\to 0\) is zero, one immediately obtains \(\eta_{1}-\eta_{2}=0\).
**Remark 2.4** (The reparametrized distance \(\eta\)).: _The function \(\eta\) is also called reparametrized distance. The reason for this terminology is that the radial \(\tilde{g}\)-geodesics from our point \(p\), reparametrized with respect to \(\eta\), are geodesics for the weighted connection_
\[\mathrm{D}_{X}Y=\nabla_{X}Y+\frac{1}{f}g(X,Y)\nabla f\,.\]
_The significance of such connection is that the Ricci tensor associated to \(\mathrm{D}\) is nonnegative if and only if the substatic condition is satisfied. More details on this can be found in Appendix A.3 and in [10] (see also [11, 12] for further discussions in the conformally related \(\mathrm{CD}(0,1)\) setting). Alternatively, as mentioned in the Introduction, \(\eta\) can also be seen to represent the distance along radial \(\tilde{g}\)-geodesics with respect to the metric \(\overline{g}=f^{2}g\). In fact, if \(\sigma:[0,S]\to M\) is a radial \(\tilde{g}\)-geodesic with \(\sigma(0)=p\) and \(\dot{\sigma}=\partial/\partial\rho\), we have \(|\dot{\sigma}(s)|_{\tilde{g}}=1\) and so_
\[\mathrm{d}_{\overline{g}}(\gamma(S),p)\,=\,\int_{0}^{S}|\dot{\sigma}(s)|_{ \overline{g}}ds\,=\,\int_{0}^{S}f^{2}(\sigma(s))ds\,=\,\eta(S)\,.\]
We are now in a position to prove the following crucial bound on the mean curvature of the level sets of \(f\). This bound can be naturally interpreted as the Laplacian Comparison for the conformal distance function \(\rho\). This result corresponds to [12, Theorem 3.2] in the \(\mathrm{CD}(0,1)\) framework.
**Theorem 2.5** (Laplacian Comparison).: _Let \((M,g,f)\) be a substatic triple. Suppose that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}=g/f^{2}\). Let \(\rho\) be the distance function to a point \(p\in M\setminus\partial M\) with respect to the metric \(\tilde{g}=g/f^{2}\) and \(\eta\) be the solution to (2.7). Then the mean curvature \(\mathrm{H}\) of the level sets of \(\rho\) with respect to the metric \(g\) satisfies_
\[0\,<\,\frac{\mathrm{H}}{f}\,=\,\Delta\rho+\frac{1}{f}\langle\nabla f\,|\, \nabla\rho\rangle\,\leq\,\frac{n-1}{\eta}\]
_in the classical sense in the open dense set \(U=M\setminus(\mathrm{Cut}_{p}^{\tilde{g}}(M)\cup\{p\})\)._
Proof.: Let us first prove the thesis working inside the open set \(U\). From the definition of \(\eta\) and (2.6), we immediately deduce
\[\frac{\partial}{\partial\rho}\left(\frac{f}{\mathrm{H}}-\frac{\eta}{n-1} \right)\,\geq\,0\,.\]
In other words, the function \(f/\mathrm{H}-\eta/(n-1)\) is nondecreasing.
We then estimate its value near the point \(p\). It is well known that, for small \(\rho\)'s, the mean curvature \(\mathrm{H}_{\tilde{g}}\) of the geodesic balls grows as the geodesic balls in Euclidean space, namely \(\mathrm{H}_{\tilde{g}}=(n-1)/\rho+o(1/\rho)\) when \(\rho\) is sufficiently small. Here we have denoted by \(\mathrm{H}_{\tilde{g}}\) the mean curvature
with respect to the metric \(\tilde{g}\). This is related to the mean curvature with respect to \(g\) by \(\mathrm{H}=\mathrm{H}_{\tilde{g}}/f+(n-1)\langle\nabla\rho\,|\,\nabla f\rangle\). Since \(\langle\nabla\rho\,|\,\nabla f\rangle\) is bounded in a neighborhood of \(p\), we obtain
\[\mathrm{H}=\frac{n-1}{f(p)\rho}+o(1/\rho)\,,\quad\text{as }\rho\to 0\,.\]
In particular, \(f/\mathrm{H}\to 0\) as \(\rho\to 0\). Since \(\eta\to 0\) as well by definition, we have obtained that \(f/\mathrm{H}-\eta/(n-1)\to 0\) when \(\rho\to 0\). From the monotonicity of \(f/\mathrm{H}-\eta/(n-1)\) we then deduce \((n-1)f/\mathrm{H}\geq\eta\) on the whole \(U\). Since \(\eta\) is positive on the whole manifold by construction, the conclusion follows.
A first important observation is that there is a more effective version of the above inequality. We now argue that the Laplacian comparison obtained in Theorem 2.5 and Proposition 2.7 gives us a vector with nonnegative divergence, which is
\[X\,=\,\frac{f}{\eta^{n-1}}\nabla\rho\,, \tag{2.9}\]
defined on the open dense set \(U=M\setminus(\mathrm{Cut}_{p}^{\tilde{g}}(M)\cup\{p\})\). In fact:
\[\mathrm{div}\,X =\,\frac{f}{\eta^{n-1}}\left[\Delta\rho\,+\,\frac{1}{f}\langle \nabla f\,|\,\nabla\rho\rangle\right]\,-\,(n-1)\frac{f}{\eta^{n}}\langle\nabla \eta\,|\,\nabla\rho\rangle\] \[\leq\,(n-1)\frac{f}{\eta^{n}}\,-\,(n-1)\frac{f}{\eta^{n}}\frac{ 1}{f^{2}}\frac{\partial\eta}{\partial\rho}\,=\,0\,.\]
The inequality \(\mathrm{div}\,X\leq 0\) holds then in the classical sense in the whole \(U\). It is crucial that this inequality actually holds in the distributional sense in the whole manifold.
**Theorem 2.6**.: _Let \((M,g,f)\) be a substatic triple. Suppose that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}=g/f^{2}\). Let \(\rho\) be the distance function to a point \(p\) with respect to the metric \(\tilde{g}=g/f^{2}\) and \(\eta\) be the solution to (2.7). Then the vector \(X\) defined in (2.9) has nonpositive divergence in the weak sense in the whole \(M\setminus\{p\}\). Namely, for every nonnegative test function \(\chi\in\mathscr{C}_{c}^{\infty}(M)\) with \(\chi\equiv 0\) in a neighborhood of \(p\), it holds_
\[\int_{M}\langle X\,|\,\nabla\chi\rangle d\sigma\,\geq\,0\,.\]
Proof.: Let \(\sqrt{g}\) be the volume element of the metric \(g\) with respect to the coordinates \((\rho,\theta)\). We first observe that at all points of \(U=M\setminus(\mathrm{Cut}_{p}^{\tilde{g}}(M)\cup\{p\})\) it holds
\[\frac{\partial}{\partial\rho}\left(\frac{\sqrt{g}}{f\eta^{n-1}}\right) =\,\frac{\partial}{\partial\rho}\left(\frac{f^{n-1}\sqrt{g}}{ \eta^{n-1}}\right)\] \[=\,\left[\frac{f^{n-1}\mathrm{H}_{\tilde{g}}}{\eta^{n-1}}\,-\,(n- 1)\frac{f^{n+1}}{\eta^{n}}\,+\,(n-1)\frac{f^{n-2}}{\eta^{n-1}}\frac{\partial f }{\partial\rho}\right]\sqrt{\tilde{g}}\] \[=\,\frac{f^{n}}{\eta^{n-1}}\left[\mathrm{H}-(n-1)\frac{1}{f^{2}} \frac{\partial f}{\partial\rho}\,-\,(n-1)\frac{f}{\eta}\,+\,(n-1)\frac{1}{f^ {2}}\frac{\partial f}{\partial\rho}\right]\sqrt{\tilde{g}}\] \[\leq\,0\,, \tag{2.10}\]
where the last inequality follows from the Laplacian comparison. Recalling that in polar coordinates \((\rho,\theta)\) the set \(U\) is diffeomorphic to the set of pairs \((\rho,\theta)\) with \(\rho<\tau(\theta)\) for a suitable
continuous function \(\tau:\mathbb{S}^{n-1}\to(0,+\infty]\), we can then compute
\[\int_{M}\langle X\,|\,\nabla\chi\rangle d\mu =\,\int_{\mathbb{S}^{n-1}}\int_{0}^{\tau(\theta)}\frac{f}{\eta^{n- 1}}\langle\nabla\rho\,|\,\nabla\chi\rangle\sqrt{g}d\rho d\theta\] \[=\,\int_{\mathbb{S}^{n-1}}\int_{0}^{\tau(\theta)}\frac{\partial \chi}{\partial\rho}\frac{\sqrt{g}}{f\eta^{n-1}}d\rho d\theta\] \[=\,-\int_{\mathbb{S}^{n-1}}\!\!\int_{0}^{\tau(\theta)}\chi\, \frac{\partial}{\partial\rho}\left(\frac{\sqrt{g}}{f\eta^{n-1}}\right)\!d \rho d\theta+\!\int_{\{\theta\in\mathbb{S}^{n-1}:\,\tau(\theta)<\infty\}} \left[\chi\frac{\sqrt{g}}{f\eta^{n-1}}\right]\!(\tau(\theta),\theta)d\theta\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\lim_{ \varepsilon\to 0^{+}}\int_{\mathbb{S}^{n-1}}\left[\chi\frac{\sqrt{g}}{f\eta^{n-1}} \right]\!(\varepsilon,\theta)d\theta\,. \tag{2.11}\]
In the last identity, the first integral is nonnegative thanks to (2.10) whereas the second integral is nonnegative by construction. Concerning the third and final integral, notice that \(\eta\) behaves near \(p\) as \(f^{2}(p)\rho\) by definition, hence the integral is easily seen to converge to \(\chi(p)|\mathbb{S}^{n-1}|/f^{2n-1}(p)\) as \(\varepsilon\to 0\). Since we are assuming that \(\chi\) vanishes in a neighborhood of \(p\), the third integral goes to zero as \(\varepsilon\to 0\). The conclusion follows.
### Evolution of the mean curvature of hypersurfaces
Let \(\Sigma\) be a compact smooth hypersurface. We are interested to the case where \(\Sigma\) is homologous to the boundary: namely, there exists a compact domain \(\Omega\) such that \(\partial\Omega=\partial M\sqcup\Sigma\). Let \(\rho\) be the \(\tilde{g}\)-distance from \(\Sigma\) in \(M\setminus\Omega\). If \(\Sigma\) is smooth, then so is \(\rho\) in a collar of \(\Sigma\). Under the usual assumption of \(\tilde{g}\)-geodesic completeness of \(M\setminus\partial M\), it is known from [14, Proposition 4.6] that the open set \(U=(M\setminus\Omega)\setminus\mathrm{Cut}_{\Sigma}^{\tilde{g}}(M)\) of the points where \(\rho\) is smooth is dense in \(M\setminus\Omega\). In particular, there is a function \(\tau:\Sigma\to(0,+\infty]\) such that the gradient flow of \(\partial/\partial\rho\) gives a diffeomorphism between \(U\) and
\[\{(\rho,x)\,:\,x\in\Sigma\,,\ \rho\in(0,\tau(x))\}\,. \tag{2.12}\]
It is convenient to estimate the evolution of the geometry of \(\Sigma\) with respect to these coordinates. If \(\mathrm{H}>0\) at the point \(x\in\Sigma\), the evolution of the mean curvature \(\mathrm{H}(0,x)\) is quite similar to the one described for geodesic spheres in Subsection 2.1 and 2.2. One can then define the function \(\eta\in\mathscr{C}^{\infty}(U)\) as the solution to
\[\begin{cases}\frac{\partial\eta}{\partial\rho}(\rho,x)\,=\,f^{2}(\rho,x)\,,\\ \eta(0,x)\,=\,(n-1)\frac{f(0,x)}{\mathrm{H}(0,x)}\,.\end{cases} \tag{2.13}\]
As in Proposition 2.2, one shows that \(\eta\) is well defined and smooth in \(U\). We can then replicate the proof of Theorem 2.5 to find \(\mathrm{H}/f(\rho,x)\leq(n-1)/\eta(\rho,x)\) for any \(0\leq\rho<\tau(x)\). In this case, the proof is actually even easier, since \(f/\mathrm{H}-\eta/(n-1)\) vanishes at \(x\) by construction, without the need of proving it.
Notice that from (2.5) we can get interesting information on the evolution of the mean curvature also in the case where \(\mathrm{H}\) is nonpositive at a point \(x\in\Sigma\). If \(\mathrm{H}<0\) at \(x\), we deduce from (2.5) that \(\mathrm{H}\) must remain negative for all times, whereas if \(\mathrm{H}=0\) then \(\mathrm{H}\) remains nonpositive. Furthermore, even when \(\mathrm{H}\) is nonpositive one can still define \(\eta\) by (2.13) and find \(\eta/(n-1)\leq f/\mathrm{H}<0\) at the points \((\rho,x)\). From this fact we can obtain even more information under the assumption that the end is \(f\)-complete. We recall that an end is \(f\)-complete if (1.8) is satisfied, see also Definition 3.1 below for a more extensive discussion about this notion. The main feature of \(f\)-complete ends is that the end is complete with respect to the metric \(\tilde{g}\) (so that \(\rho\) goes to \(+\infty\)) and \(\eta\) grows to \(+\infty\) along the end. Since \(\eta(x)\) is negative when the mean curvature at \(x\) is negative, in particular \(\eta\) must reach the value zero, at which point the bound \(\eta/(n-1)\leq f/\mathrm{H}<0\) fails. This implies that the line \(\rho\mapsto(\rho,x)\) must reach the cut locus before \(\eta\) hits zero. Finally, if \(\mathrm{H}=0\) then from (2.5) we would get that \(\mathrm{H}\) remains nonpositive. If at some point \(\mathrm{H}\) becomes negative, then the previous argument applies and the line \(\rho\mapsto(\rho,x)\) must reach the cut locus. Summarizing, we have obtained the following:
**Proposition 2.7**.: _Let \((M,g,f)\) be a substatic triple. Suppose that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}=g/f^{2}\). Let \(\rho\) be the distance from an hypersurface \(\Sigma\) homologous to the boundary with respect to the metric \(\tilde{g}=g/f^{2}\), and let \(\eta\) be the solution to (2.13). Finally, let \(x\in\Sigma\) and consider the evolution \(\mathrm{H}(\rho,x)\) of the mean curvature in the direction of the end._
* _If_ \(\mathrm{H}(0,x)>0\) _then for any_ \(0<\rho<\tau(x)\) _it holds_ \[0\,<\,\frac{\mathrm{H}}{f}(\rho,x)\,\leq\,\frac{n-1}{\eta(\rho,x)}\,.\]
* _If_ \(\mathrm{H}(0,x)<0\)_, then for every_ \(0<\rho<\tau(x)\) _it holds_ \[\frac{\mathrm{H}}{f}(\rho,x)\,\leq\,\frac{n-1}{\eta(\rho,x)}\,<\,0\,.\] _Furthermore, if the ends of_ \(M\) _are_ \(f\)_-complete then_ \(\tau(x)<+\infty\)_._
* _If_ \(\mathrm{H}(0,x)=0\)_, then_ \(\mathrm{H}(\rho,x)\leq 0\) _for every_ \(0<\rho<\tau(x)\)_. Furthermore, if the ends of_ \(M\) _are_ \(f\)_-complete and_ \(\tau(x)=+\infty\)_, then_ \(\mathrm{H}(\rho,x)=0\) _for all_ \(\rho\geq 0\)_._
In the following we will focus on the case where the hypersurface \(\Sigma\) is homologous to \(\partial M\) and strictly mean-convex, meaning that \(\Sigma\) has positive mean curvature \(\mathrm{H}\) with respect to the normal pointing outside \(\Omega\). In this case, Proposition 2.7-\((i)\) tells us that the bound \(\mathrm{H}/f\leq(n-1)/\eta\) is in place on the whole \(U=M\setminus(\Omega\cup\mathrm{Cut}^{\tilde{g}}_{\Sigma})\). Furthermore, the vector field (2.9), that we recall here for convenience,
\[X\,=\,\frac{f}{\eta^{n-1}}\nabla\rho\,, \tag{2.14}\]
is also well defined on \(U\). We can now proceed exactly as in the proof of Theorem 2.6 to show that the vector field \(X\) has nonnegative divergence in the weak sense.
**Theorem 2.8**.: _Let \((M,g,f)\) be a substatic triple. Suppose that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}=g/f^{2}\). Let \(\Sigma\) be a strictly mean-convex hypersurface homologous to \(\partial M\) and disjoint from it. Suppose that the mean curvature \(\mathrm{H}\) of \(\Sigma\) with respect to the normal pointing towards infinity satisfies \(\mathrm{H}>0\) pointwise. Let \(\rho\) be the \(\tilde{g}\)-distance function from \(\Sigma\) and \(\eta\) be the solution to (2.13). Then the vector \(X\) defined in (2.14) has nonpositive divergence in the weak sense in the whole \(M\setminus\Omega\). Namely, for every nonnegative test function \(\chi\in\mathscr{C}_{c}^{\infty}(M\setminus\Omega)\) it holds_
\[\int_{M\setminus\Omega}\langle X\,|\,\nabla\chi\rangle d\sigma\,\geq\,0\,.\]
### Growth of weighted areas and volumes
In this subsection, we exploit the monotonicity of the mean curvature of the level sets to deduce a Bishop-Gromov-type theorem for the behaviour of areas and volumes of geodesic spheres. We first study the monotonicity of the following functional
\[A(t)\,=\,\frac{1}{|\mathbb{S}^{n-1}|}\int_{\{\rho=t\}\setminus\mathrm{Cut}^{ \tilde{g}}}\frac{1}{\eta^{n-1}}d\sigma\,,\]
where \(\rho\) is the distance function from a point or the signed distance from a strictly mean-convex hypersurface homologous to the boundary, with respect to the metric \(\tilde{g}\), whereas \(\mathrm{Cut}^{\tilde{g}}\) is the cut locus of the point/hypersurface with respect to \(\tilde{g}\). It is important that we remove the cut locus from the domain of the integral, as we have observed in Remark 2.3 that the function \(\eta\) is not well defined on it. When \(\rho\) is the distance from a point, the function \(A\) can be written in polar coordinates as
\[A(t)\,=\,\frac{1}{|\mathbb{S}^{n-1}|}\int_{\{\theta\in\mathbb{S}^{n-1}\,:\, \tau(\theta)>t\}}\frac{\sqrt{g}(t,\theta)}{\eta^{n-1}(t,\theta)}d\theta\,, \tag{2.15}\]
where we recall that \(\tau(\theta)\) is the minimum value of \(\rho\) such that the point with coordinate \((\rho,\theta)\) belongs to the cut locus. An analogous definition can of course be given for the distance from an hypersurface using coordinates (2.12). Notice that \(\tau\) is a continuous function, hence the domain of the integral in (2.15) is measurable, meaning that the function \(A\) is well defined for all \(t\in(0,+\infty)\)
The domain of the integral shrinks as \(t\) increases, whereas the integrand is positive and continuous, hence it is easily seen that for all values \(a\in(0,+\infty)\) it holds
\[\liminf_{t\to a^{-}}A(t)\,\geq\,A(a)\,\geq\,\limsup_{t\to a^{+}}A(t)\,. \tag{2.16}\]
Furthermore, notice that if the cut locus intersects \(\{\rho=a\}\) in a set with positive \(\mathscr{H}^{n-1}\)-measure, then the first inequality is strict, that is \(\liminf_{t\to a^{-}}A(t)\,\geq\,A(a)\). We are finally ready to state the first main result of this subsection.
**Theorem 2.9**.: _Let \((M,g,f)\) be a substatic triple. Suppose that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}=g/f^{2}\). Let \(\rho\) be the \(\tilde{g}\)-distance function from a point or the signed \(\tilde{g}\)-distance function from a strictly mean-convex hypersurface \(\Sigma\) homologous to \(\partial M\) and disjoint from it. Let \(\eta\) be the corresponding reparametrized distance, defined by \((\ref{eq:1})\) or by \((\ref{eq:1})\), and let \(\mathrm{Cut}^{\tilde{g}}\) be the cut locus of the point/hypersurface. Then the function_
\[A(t)\,=\,\frac{1}{|\mathbb{S}^{n-1}|}\int_{\{\rho=t\}\setminus\mathrm{Cut}^{ \tilde{g}}}\frac{1}{\eta^{n-1}}d\sigma\]
_is monotonically nonincreasing._
_Furthermore, if \(A(t_{1})=A(t_{2})\) for any \(0<t_{1}<t_{2}\), then the set \(U=\{t_{1}\leq\rho\leq t_{2}\}\) is isometric to \([t_{1},t_{2}]\times\Sigma\) with metric_
\[g\,=\,f^{2}d\rho\otimes d\rho+\eta^{2}g_{0}\,,\]
_where \(g_{0}\) is a metric on the level set \(\Sigma\). In \(U\) the functions \(f\) and \(\eta\) satisfy_
\[\frac{1}{\eta}\frac{\partial\eta}{\partial\theta^{i}}\,=\,\psi\eta-\varphi \frac{1}{\eta^{n-1}}\,,\qquad\frac{1}{f}\frac{\partial f}{\partial\theta^{i}} \,=\,\psi\eta+\frac{n-2}{2}\varphi\frac{1}{\eta^{n-1}}\,, \tag{2.17}\]
_where \(\varphi,\psi\) are independent of \(\rho\)._
**Remark 2.10**.: _If we set \(f\equiv 1\) then \(\rho\) is the distance function with respect to \(g\) and \(\eta=\rho\), hence \(A(t)=|\{\rho=t\}\setminus\mathrm{Cut}^{\tilde{g}}|/(|\mathbb{S}^{n-1}|t^{n-1})\) and the above monotonicity becomes completely analogous to the standard Bishop-Gromov monotonicity of the areas of geodesic spheres when \(\mathrm{Ric}\geq 0\), which concerns the function \(|\partial\{\rho\leq t\}|/(|\mathbb{S}^{n-1}|t^{n-1})\). Clearly, the two functions coincide almost everywhere, except at the values \(t\) such that \(\{\rho=t\}\cap\mathrm{Cut}^{\tilde{g}}\) has nonzero \(\mathscr{H}^{n-1}\)-measure. Notice that the number of values for which this may happen is necessarily countable. A way to see this is to observe that, as mentioned below formula \((\ref{eq:1})\), every value such that \(\{\rho=t\}\cap\mathrm{Cut}^{\tilde{g}}\) has nonzero \(\mathscr{H}^{n-1}\)-measure must correspond to a jump of \(A\), and these are at most countable since \(A\) has bounded variation (this is shown in the proof below)._
Proof.: If the cut locus were empty, the proof of the monotonicity of \(A\) would follow easily by integrating the inequality \(\mathrm{div}\,X\leq 0\) between two level sets of \(\rho\), where \(X\) is the vector field defined in \((\ref{eq:1})\), and then applying the Divergence Theorem. In the general case, in order to take into account the lack of smoothness of \(\rho\) at the cut locus, we will need a more refined analysis, that we now discuss.
Since \(\rho\) is a \(\tilde{g}\)-distance function, in particular it is locally Lipschitz, hence its gradient is well defined almost everywhere. Furthermore, as highlighted in [1, Proposition 2.1], \(\rho\) being Lipschitz also implies that the coarea formula can be applied to the level sets of \(\rho\). In particular, for any \(0<a<b<+\infty\) we have
\[\int_{a}^{b}A(t)dt\,=\,\int_{a}^{b}\left(\int_{\{\rho=t\}\setminus\mathrm{Cut }^{\tilde{g}}}\frac{1}{\eta^{n-1}}d\sigma\right)dt\,=\,\int_{\{a<\rho<b\}} \frac{1}{f\eta^{n-1}}d\mu\,<\,+\infty\,.\]
In the last integral we did not have to specify that we are not integrating on \(\mathrm{Cut}^{\tilde{g}}\), since \(\mathrm{Cut}^{\tilde{g}}\) is negligible when integrating on a volume. The above tells us that \(A\) is locally integrable. Consider now a test function \(\chi\in\mathscr{C}_{c}^{\infty}((0,+\infty))\) and let \(X\) be the vector field defined by \((\ref{eq:1})\). We then
compute
\[\int_{M}\langle\nabla(\chi\circ\rho)\,|\;X\rangle\,d\mu = \int_{M}\chi^{\prime}(\rho)\langle X\,|\,\nabla\rho\rangle d\mu \tag{2.18}\] \[= \int_{M}\chi^{\prime}(\rho)\frac{f}{\eta^{n-1}}|\nabla\rho|^{2}d\mu\] \[= \int_{0}^{+\infty}\int_{\{\rho=t\}}\chi^{\prime}(t)\frac{f}{\eta^ {n-1}}|\nabla\rho|\,d\sigma\,dt\] \[= |\mathbb{S}^{n-1}|\int_{0}^{+\infty}\chi^{\prime}(t)A(t)\,dt\,.\]
On the other hand, Theorem 2.6 (when \(\rho\) is the distance from a point) and Theorem 2.8 tells us that the first integral in the above chain of identities is nonnegative whenever the test function \(\chi\) is nonnegative. More precisely, from (2.11), since \(\chi(0)=0\), we have
\[\int_{M}\langle\nabla(\chi\circ\rho)\,|\,X\rangle d\mu\,=-\int_{ \mathbb{S}^{n-1}}\!\!\int_{0}^{\tau(\theta)}\!\!\!\chi(\rho)\,\frac{\partial} {\partial\rho}\bigg{(}\frac{\sqrt{g}}{f\eta^{n-1}}\bigg{)}\,d\rho d\theta\\ +\int_{\{\theta\in\mathbb{S}^{n-1}:\,\tau(\theta)<\infty\}} \bigg{[}\chi\frac{\sqrt{g}}{f\eta^{n-1}}\bigg{]}(\tau(\theta),\theta)d\theta \,\geq\,0\,, \tag{2.19}\]
where as usual \(\tau(\theta)\) is the minimum value of \(\rho\) such that the point with coordinate \((\rho,\theta)\) belongs to the cut locus. Combining (2.19) with the above chain of identities we have obtained \(\int_{0}^{+\infty}\chi^{\prime}(t)A(t)dt\geq 0\) for any nonnegative test function. If we knew \(A\) to be weakly differentiable, this would force its weak derivative to be nonpositive thus proving that \(A\) is nonincreasing. However, we have no information on the regularity of the function \(A\) at the moment. In the following we will show that \(A\) has bounded variation, which will be enough to infer the desired monotonicity.
If we fix \(0<a<b<+\infty\), for any \(\chi\in\mathscr{C}^{\infty}([a,b])\) with \(\|\chi\|_{\infty}=1\) we have from (2.19) the following bound
\[\int_{M}\langle\nabla(\chi\circ\rho)\,|\,X\rangle d\mu \leq -\int_{\{\theta\in\mathbb{S}^{n-1}:\,\tau(\theta)>a\}}\!\!\int_{a }^{\min\{\tau(\theta),b\}}\!\!\frac{\partial}{\partial\rho}\bigg{(}\frac{ \sqrt{g}}{f\eta^{n-1}}\bigg{)}\,d\rho d\theta\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\int_{\{ \theta\in\mathbb{S}^{n-1}:\,a\leq\tau(\theta)\leq b\}}\!\!\left[\frac{\sqrt{g} }{f\eta^{n-1}}\right]\!\!\!(\tau(\theta),\theta)\,d\theta\] \[= \int_{\{\theta\in\mathbb{S}^{n-1}:\,\tau(\theta)>a\}}\bigg{[}\frac {\sqrt{g}}{f\eta^{n-1}}\bigg{]}\!\!(a,\theta)\,d\theta\,-\int_{\{\theta\in \mathbb{S}^{n-1}:\,\tau(\theta)>b\}}\bigg{[}\frac{\sqrt{g}}{f\eta^{n-1}} \bigg{]}\!\!(b,\theta)\,d\theta\,.\]
In other words, the quantity \(\int_{M}\langle\nabla(\chi\circ\rho)\,|\,X\rangle d\mu\), and thus also \(\int_{0}^{+\infty}\chi^{\prime}(t)A(t)\,dt\), is bounded from above by a constant that depends on \(a\), \(b\), \(f\) and \(g\), but not on \(\chi\). It follows that \(A\) has bounded variation in \([a,b]\). As a consequence, the signed finite Radon measure \(\mu\) on \((a,b)\) defined by \(\mu((c,d))=\lim_{t\to d^{-}}A(t)-\lim_{t\to c^{+}}A(t)\) for any \(a<c<d<b\), is such that
\[\int_{0}^{+\infty}\chi^{\prime}(t)A(t)\,dt\,=\,-\int_{0}^{+\infty}\chi(t)\mu( dt)\,.\]
Since we have already shown that \(\int_{0}^{+\infty}\chi^{\prime}(t)A(t)\,dt\geq 0\) for any nonnegative test function \(\chi\), it follows that the measure \(\mu\) is nonpositive. From the definition of \(\mu\) and (2.16), we deduce then that the function \(A\) is monotonically nonincreasing in \((a,b)\). Since this should hold for any \(0<a<b<+\infty\), it must necessarily hold on the whole \((0,+\infty)\). This proves that \(A\) is monotonically nonincreasing.
It remains to prove the rigidity statement. If \(A(t_{1})=A(t_{2})\), then thanks to the discussion above it follows \(A(t)=A(t_{1})\) for all \(t_{1}<t<t_{2}\). As a consequence, for any test function \(\chi\) supported in \([t_{1},t_{2}]\), we get that the last line in the computation (2.18) vanishes, that is, \(\int_{M}\langle\nabla(\chi\circ\rho)\,|\;X\rangle\,d\mu=0\). On the other hand, this integral can also be computed as in (2.19). From the fact that \(\chi\geq 0\) and from (2.10), we know that the two terms on the right hand side of (2.19) are both nonnegative,
hence they must both vanish for all \(\chi\in\mathscr{C}^{\infty}_{c}([0,+\infty))\). This implies that \(\tau(\theta)\) never belongs to \((t_{1},t_{2})\), meaning that the cut locus does not intersect \(\{t_{1}<\rho<t_{2}\}\) and that equality is achieved in (2.10). In other words, the following holds
\[\frac{\mathrm{H}}{f}\,=\,\Delta\rho+\frac{1}{f}\langle\nabla f\,|\,\nabla\rho \rangle\,=\,\frac{n-1}{\eta}\qquad\text{ in }\{t_{1}\leq\rho\leq t_{2}\}\,. \tag{2.20}\]
This identity in turn triggers the equality in the estimates made in Subsection 2.1, namely
\[\left|\widetilde{\nabla}^{2}\rho\right|_{\tilde{g}}^{2}\,=\,\frac{(\Delta_{ \tilde{g}}\rho)^{2}}{n-1}\,,\qquad\text{Ric}\big{(}\widetilde{\nabla}\rho, \widetilde{\nabla}\rho\big{)}\,=\,(n-1)\left[\frac{1}{f}\widetilde{\nabla}^{2} f\big{(}\widetilde{\nabla}\rho,\widetilde{\nabla}\rho\big{)}-\frac{2}{f^{2}} \big{\langle}\widetilde{\nabla}f\,|\,\widetilde{\nabla}\rho\big{\rangle}^{2} \right]\,, \tag{2.21}\]
where we recall that \(\widetilde{\nabla}\) is the Levi-Civita connection with respect to \(\tilde{g}\). Notice that, since \(|\widetilde{\nabla}\rho|_{\tilde{g}}=1\), for any vector \(X\) it holds
\[\widetilde{\nabla}^{2}\rho\big{(}\widetilde{\nabla}\rho,X\big{)}\,=\,\big{ \langle}\widetilde{\nabla}|\widetilde{\nabla}\rho|_{\tilde{g}}^{2}\,|\,X \big{\rangle}_{\tilde{g}}\,=\,0\,.\]
It follows immediately from this and the first equation in (2.21) that, in the coordinates in which \(\tilde{g}\) has the form (2.1), for any \(i,j=1,\dots,n-1\) it holds
\[\widetilde{\nabla}_{ij}^{2}\rho\,=\,\frac{\Delta_{\tilde{g}}\rho}{n-1}\tilde{ g}_{ij}\,=\,\left(\frac{f^{2}}{\eta}-\frac{1}{f}\frac{\partial f}{\partial \rho}\right)\tilde{g}_{ij}\,.\]
where the latter identity makes use of (2.3). On the other hand, from the definition of Hessian we have \(\widetilde{\nabla}_{ij}^{2}\rho=-\Gamma_{ij}^{\rho}=\partial_{\rho}\tilde{g}_ {ij}/2\), hence
\[\frac{\partial\tilde{g}_{ij}}{\partial\rho}\,=\,2\left(\frac{f^{2}}{\eta}- \frac{1}{f}\frac{\partial f}{\partial\rho}\right)\tilde{g}_{ij}\,=\,2\frac{ \partial}{\partial\rho}\left(\log\eta-\log f\right)\tilde{g}_{ij}\,.\]
This identity can be solved explicitly, yielding
\[\tilde{g}_{ij}\,=\,\frac{\eta^{2}}{f^{2}}(g_{0})_{ij}\,,\]
where \((g_{0})_{ij}\) does not depend on \(\rho\). Comparing with (2.1) and recalling \(g=f^{2}\tilde{g}\), we have obtained
\[g\,=\,f^{2}d\rho\otimes d\rho+\eta^{2}g_{0}\,. \tag{2.22}\]
The functions \(f\) and \(\eta\) may be functions of both the radial coordinate \(\rho\) and of \(\{\theta^{1},\dots,\theta^{n-1}\}\). Any metric \(g\) having the form (2.22) satisfies the substatic condition with equality in the radial direction, that is:
\[f\mathrm{R}_{\rho\rho}\,-\,\nabla_{\rho\rho}^{2}f\,+\,(\Delta f)g_{\rho\rho}\, =\,0\,.\]
From this identity and the substatic condition we find out that, for any vector \(X=\partial/\partial\rho+\lambda\partial/\partial_{i}\), it holds
\[0 \leq\,\big{[}f\text{Ric}\,-\,\nabla^{2}f-(\Delta f)g\big{]}\,(X,X)\] \[=\,\lambda^{2}\,\left[f\text{R}_{ii}\,-\,\nabla_{ii}^{2}f\,+\,( \Delta f)g_{ii}\right]\,+\,2\lambda\,\left[f\text{R}_{i\rho}\,-\,\nabla_{i\rho }^{2}f\,+\,(\Delta f)g_{i\rho}\right]\,.\]
Since this inequality holds pointwise for any \(\lambda\in\mathbb{R}\), it follows that
\[f\text{R}_{i\rho}\,-\,\nabla_{i\rho}^{2}f\,+\,(\Delta f)g_{i\rho}\,=\,0\,. \tag{2.23}\]
Recalling the expression (2.22) for the metric, a direct computation gives us that (2.23) is equivalent to
\[(n-2)\partial_{i\rho}^{2}G\,+\,\partial_{i\rho}^{2}F-(n-1)\partial_{\rho}G \partial_{i}F\,=\,0\,, \tag{2.24}\]
where \(F=\log f\), \(G=\log\eta\). On the other hand, since \(\partial_{\rho}\eta=f^{2}\), we have \(\partial_{\rho}G=e^{2F-G}\), from which we compute
\[\partial_{i\rho}^{2}G\,-\,2\partial_{i}F\partial_{\rho}G\,+\,\partial_{i}G \partial_{\rho}G\,=\,0\,. \tag{2.25}\]
We will now combine (2.24) and (2.25) in two different ways.
On the one hand, if we subtract \((n-1)\) times equation (2.25) from equation (2.24), we obtain
\[\partial_{i\rho}^{2}F\,+\,(n-1)\partial_{i}F\partial_{\rho}G\,=\,\partial_{i \rho}^{2}G\,+\,(n-1)\partial_{\rho}G\partial_{i}G\,,\]
which can be rewritten as
\[\partial_{\rho}\left(e^{(n-1)G}\partial_{i}F\right)\,=\,\partial_{\rho}\left(e^{( n-1)G}\partial_{i}G\right)\,.\]
In other words, we have found
\[e^{(n-1)G}(\partial_{i}F-\partial_{i}G)=\varphi(\theta)\,,\,\,\,\,\forall i=1, \ldots,n-1\,. \tag{2.26}\]
On the other hand, subtracting \((n/2-1)\) times equation (2.25) from (2.24), we obtain
\[\partial_{i\rho}^{2}F\,-\,\partial_{i}F\partial_{\rho}G\,=\,-\frac{n-2}{2} \partial_{i\rho}^{2}G\,+\,\frac{n-2}{2}\partial_{\rho}G\partial_{i}G\,,\]
which can be rewritten as
\[\partial_{\rho}\left(e^{-G}\partial_{i}F\right)\,=\,-\frac{n-2}{2}\partial_{ \rho}\left(e^{-G}\partial_{i}G\right)\,,\]
that gives
\[e^{-G}\left(\partial_{i}F-\frac{n-2}{2}\partial_{i}G\right)=\psi(\theta)\,, \,\,\,\,\forall i=1,\ldots,n-1\,. \tag{2.27}\]
Writing (2.26) and (2.27) in terms of \(\eta\) and \(f\), with straightforward computations we obtain formulas (2.17).
An immediate consequence of the above result is that we can find a bound for the area functional \(A(t)\) by taking its limit as \(t\to 0\). In the case where \(\rho\) is the \(\tilde{g}\)-distance from a hypersurface, then \(A(0)\) is in fact well defined and we have \(A(t)\leq A(0)\). This will be exploited later, in Subsection 4.2 to prove Theorem D.
We focus now briefly on the case where \(\rho\) is the \(\tilde{g}\)-distance from a point \(x\). As \(\rho\to 0\) we have \(\partial\eta/\partial\rho=f(x)^{2}+o(1)\), hence \(\eta=f(x)^{2}\rho+o(\rho)\). Furthermore, \(d\sigma=(1+o(1))f(x)^{n-1}d\sigma_{\tilde{g}}=(1+o(1))f(x)^{n-1}\rho^{n-1}d \sigma_{\mathbb{S}^{n-1}}\). It follows that
\[\lim_{t\to 0^{+}}A(t)\,=\,\frac{1}{f(x)^{n-1}}\,.\]
As a consequence of the monotonicity of \(A\) we then deduce that, for every \(t>0\), it holds
\[A(t)\,\leq\,\frac{1}{f(x)^{n-1}}\,. \tag{2.28}\]
If the equality holds, then the rigidity statement in Theorem 2.9 applies in \(\{\rho\leq t\}\).
Building on Theorem 2.9, we can also show the following volumetric version of the Bishop-Gromov monotonicity theorem.
**Theorem 2.11**.: _Let \((M,g,f)\) be a substatic triple. Suppose that \(M\setminus\partial M\) is geodesically complete with respect to the metric \(\tilde{g}=g/f^{2}\). Let \(\rho\) be the \(\tilde{g}\)-distance function from a point or the signed \(\tilde{g}\)-distance function from a strictly mean-convex hypersurface \(\Sigma\) homologous to \(\partial M\) and disjoint from it. Let \(\eta\) be the corresponding reparametrized distance, defined by (2.7) or by (2.13). Then, for any \(k>0\), the function_
\[V(t)\,=\,\frac{1}{|\mathbb{B}^{n}|t^{k}}\int_{\{0\leq\rho\leq t\}}\frac{\rho^ {k-1}}{f\eta^{n-1}}d\mu \tag{2.29}\]
_is well defined and monotonically nonincreasing._
_Furthermore, if \(V(t_{1})=V(t_{2})\) for \(0<t_{1}<t_{2}\), then the set \(U=\{0\leq\rho\leq t_{2}\}\) is isometric to \([0,t_{2}]\times\Sigma\) with metric_
\[g\,=\,f^{2}d\rho\otimes d\rho+\eta^{2}g_{0}\,,\]
_where \(g_{0}\) is a metric on the level sets \(\Sigma\). In \(U\) the functions \(f\) and \(\eta\) satisfy_
\[\frac{1}{\eta}\frac{\partial\eta}{\partial\theta^{i}}\,=\,\psi\eta-\varphi \frac{1}{\eta^{n-1}}\,,\qquad\frac{1}{f}\frac{\partial f}{\partial\theta^{i}} \,=\,\psi\eta+\frac{n-2}{2}\varphi\frac{1}{\eta^{n-1}}\,, \tag{2.30}\]
_where \(\varphi,\psi\) are independent of \(\rho\). If \(\rho\) is the distance from a point \(x\), then \(g_{0}=f(x)^{-2}g_{\mathbb{S}^{n-1}}\) and \(\varphi=\psi=0\) in the whole \(U\), that is, both \(f\) and \(\eta\) are functions of \(\rho\) in \(U\)._
**Remark 2.12**.: _Recall that \(\eta\) is smooth outside the cut locus. Since the cut locus has finite \(\mathscr{H}^{n-1}\)-measure, the functional \(V\) in the statement is well posed. When \(k=n\) we have recovered the standard Bishop-Gromov monotonicity for volumes of geodesic spheres for \(\operatorname{Ric}\geq 0\). Indeed if one sets \(f=1\) in the statement above, so that \(\eta=\rho\), one gets \(V(t)=|\{\rho\leq t\}|/(|\mathbb{B}^{n}|t^{n})\)._
Proof.: We start by observing that the coarea formula (together with the fact that \(|\nabla\rho|=1/f\)) gives the following relation between the functionals \(A\) and \(V\):
\[V(t)\,=\,\frac{n}{|\mathbb{S}^{n-1}|t^{k}}\int_{0}^{t}\left(\int_{\{\rho=\tau \}}\frac{\tau^{k-1}}{\eta^{n-1}}d\sigma\right)d\tau\,=\,\frac{n}{t^{k}}\int_{ 0}^{t}\tau^{k-1}A(\tau)d\tau\,. \tag{2.31}\]
From Theorem 2.9, we know that for almost every \(\tau\leq t\) the area integral \(A(\tau)\) is well defined and that it is nonincreasing in \(\tau\). In the case where \(\rho\) is the distance from a point \(x\), we have observed in (2.28) that \(A(\tau)\) is bounded by the constant \(1/f(x)^{n-1}\). If instead \(\rho\) is the distance from a strictly mean-convex hypersurface, then \(A(0)\) is well defined and we have \(A(\tau)\leq A(0)\). In both cases, it holds \(A(\tau)\leq\mathrm{C}\) for some constant \(\mathrm{C}\), hence from (2.31), recalling \(k>0\), we compute
\[V(t)\,\leq\,\frac{\mathrm{C}}{t^{k}}\int_{0}^{t}\tau^{k-1}d\tau\,=\,\frac{ \mathrm{C}}{k}\,.\]
As a consequence, \(V(t)\) is well defined. Furthermore, the monotonicity of \(A\) also implies \(A(t)\leq A(\tau).\) Plugging this information in (2.31) gives
\[V(t)\,\geq\,\frac{n}{t^{k}}A(t)\int_{0}^{t}\tau^{k-1}d\tau\,=\,\frac{n}{k}A(t)\,.\]
With this information at hand, we are ready to compute the derivative of \(V(t)\):
\[V^{\prime}(t) =\,\lim_{\varepsilon\to 0}[V(t+\varepsilon)-V(t)]/\varepsilon\] \[=\,\lim_{\varepsilon\to 0}\frac{1}{\varepsilon|\mathbb{B}^{n}|} \left(\frac{1}{(t+\varepsilon)^{k}}-\frac{1}{t^{k}}\right)\int_{\{\rho\leq t +\varepsilon\}}\frac{\rho^{k-1}}{f\eta^{n-1}}d\mu\,+\,\lim_{\varepsilon\to 0 }\frac{1}{t^{k}}\frac{1}{\varepsilon|\mathbb{B}^{n}|}\int_{\{t\leq\rho\leq t +\varepsilon\}}\frac{\rho^{k-1}}{f\eta^{n-1}}d\mu\] \[=\,\lim_{\varepsilon\to 0}\frac{t^{k}-(t+\varepsilon)^{k}}{ \varepsilon t^{k}}V(t+\varepsilon)\,+\,\lim_{\varepsilon\to 0}\frac{n}{t^{k}} \frac{1}{\varepsilon}\int_{t}^{t+\varepsilon}\tau^{k-1}A(\tau)d\tau\] \[=\,-\frac{k}{t}V(t)+\frac{n}{t}A(t)\] \[\leq\,0\,.\]
We now prove the rigidity statement. If \(V(t_{1})=V(t_{2})\) for two values \(0<t_{1}<t_{2}\), then retracing our computations we find out that \(A(\tau)=A(t)\) for all \(0<\tau<t_{2}\). From the rigidity statement of Theorem 2.9 we then deduce that in \(\{0\leq\rho\leq t\}\) the metric writes as
\[g\,=\,f^{2}d\rho\otimes d\rho+\eta^{2}g_{0}\,,\]
and \(f,\eta\) satisfy formulas (2.30).
Finally, we suppose now that \(\rho\) is the distance from a point \(x\) and we prove that \(f,\eta\) must necessarily depend on \(\rho\) only. To this end, notice that formulas (2.30) must hold up to \(\rho=0\) (that is, up to the point \(x\)), hence at the limit as \(\rho\) goes to zero, the derivative \(\partial f/\partial\theta^{i}\) goes to zero. Since \(\eta\) goes to \(0\) as \(\rho\to 0\), it follows then from the second formula in (2.30) that \(\varphi\) must vanish identically.
As a consequence, the first formula in (2.30) can be rewritten as
\[\frac{1}{\eta^{2}}\frac{\partial\eta}{\partial\theta^{i}}\,=\,\psi\,. \tag{2.32}\]
In particular, taking the derivative with respect to \(\rho\) we deduce that \(\partial^{2}(1/\eta)/\partial\rho\partial\theta=0\). In other words, \(1/\eta=\alpha+\beta\), where \(\alpha\) is a function of \(\rho\) and \(\beta\) is a function of the \(\theta^{i}\)'s. Substituting
this expression for \(\eta\) in (2.32), we deduce that
\[\frac{\partial\beta}{\partial\theta^{i}}\,=\,-\psi\]
On the other hand, taking the difference of the two formulas in (2.30) we have
\[\frac{\partial}{\partial\theta^{i}}\,(\log\eta-\log f)\,=\,0\,.\]
In other words, the quantity \(\eta/f\) must be a function of \(\rho\). Recalling the decomposition \(1/\eta=\alpha+\beta\) shown right above, it follows then that \(1/f=\lambda(\alpha+\beta)\), where \(\lambda\) is a function of \(\rho\).
Notice now that, when \(\rho\) goes to zero, the limit of \(f\) must go to the value of \(f\) at the point \(x\), so that in particular the limit of \(1/f\) as \(\rho\to 0\) must not depend on \(\theta^{i}\). It follows that \(\beta=0\), hence \(\psi=-\partial\beta/\partial\theta^{i}\) vanishes as well. We have proved that both \(\varphi\) and \(\psi\) in (2.30) vanish, hence both \(f\) and \(\eta\) must be functions of the sole \(\rho\) in the whole \(\{\rho\leq t\}\). Since the metric
\[\tilde{g}=d\rho\otimes d\rho+\frac{\eta^{2}}{f^{2}}g_{0}\]
is smooth at the point \(x\), it follows that \((\eta^{2}/f^{2})g_{0}\) should be close to \(\rho^{2}g_{\mathbb{S}^{n-1}}\) near \(x\). From the definition of \(\eta\) it follows \(\eta=f(x)^{2}\rho+o(\rho)\) close to \(x\), hence \(g_{0}=f(x)^{-2}g_{\mathbb{S}^{n-1}}\) and we conclude the rigidity statement.
## 3. Wylie's Splitting Theorem for substatic manifolds
### \(f\)-complete and conformally compact ends
From now on we will study noncompact manifolds with some special behaviour at infinity, focusing mainly on \(f\)-complete ends.
**Definition 3.1**.: _We say that an end is \(f\)-complete if for any \(g\)-unit speed curve \(\gamma:[0,+\infty)\to M\) going to infinity along the end it holds_
\[\lim_{t\to+\infty}\rho(\gamma(t))\,=\,+\infty\,,\qquad\int_{0}^{+\infty}f( \gamma(t))dt\,=\,+\infty\,, \tag{3.1}\]
_where \(\rho\) is the distance from a point with respect to \(\tilde{g}=g/f^{2}\)._
It is clear from the triangle inequality that the definition above does not depend on the point we are taking the distance \(\rho\) from. It also would not change if we replace the distance from a point with the distance from a hypersurface.
For all the arguments that follows it would actually be enough to require (3.1) only along \(\tilde{g}\)-geodesics. More precisely, it is enough to require the end to be \(\tilde{g}\)-complete and satisfying the second condition in (3.1) along any \(\tilde{g}\)-geodesic. In fact, the above definition is analogous to the one given in [20, Definition 6.2] in the CD\((0,1)\) framework: there, a triple \((M,\tilde{g},\psi)\) satisfying the CD\((0,1)\) condition is said to be \(\psi\)-complete if for any \(\tilde{g}\)-geodesic \(\sigma:[0,+\infty)\to M\) going to infinity along the end it holds
\[\int_{0}^{+\infty}e^{-\frac{2\psi(\sigma(t))}{n-1}}dt\,=\,+\infty\,.\]
Recalling the relations \(\tilde{g}=g/f^{2}\) and \(\psi=-(n-1)\log f\) between the CD\((0,1)\) and substatic setting (see Appendix A.2), it is easily seen that this integrability condition is equivalent to the second requirement in (3.1). As already observed in [20], this integrability condition can be interpreted as completeness with respect to the metric \(f^{2}g\) or, alternatively, as completeness with respect to the weighted connection introduced in [20] and [10] (see Appendix A.3). For what concerns this paper however, the only relevance of the second condition in (3.1) is that it implies that the reparametrized distance \(\eta\) defined in Section 2 goes to infinity along the end. This is easy to show as follows. Let \(\rho\) be the \(\tilde{g}\)-distance to a point or hypersurface and let \(\eta\) be defined by (2.7) or (2.13) depending on whether we are taking the distance from a point or
hypersurface. Let \(\sigma:[0,+\infty)\to M\) be a \(\tilde{g}\)-geodesic with \(\dot{\sigma}=\widetilde{\nabla}\rho\) and let \(\gamma:[0,+\infty)\to M\) be the reparametrization of \(\sigma\) that has \(g\)-length constant and equal to \(1\). We then have
\[\eta(\gamma(t))-\eta(\gamma(0))\,=\,\int_{0}^{t}f^{2}(\gamma(t))|\dot{\gamma}( t)|_{\tilde{g}}dt\,=\,\int_{0}^{t}f(\gamma(t))|\dot{\gamma}(t)|_{g}dt\,=\,\int_{0}^{t} f(\gamma(t))dt\,,\]
hence if the second condition in (3.1) holds then \(\eta\) goes to \(+\infty\).
The family of \(f\)-complete ends includes a number of interesting examples. Most notably, _asymptotically flat_ ends are \(f\)-complete. We say that \((M,g,f)\) is asymptotically flat if there exists a compact set \(K\) such that \(M\setminus K\) is diffeomorphic to \(\mathbb{R}^{n}\) minus a ball, the metric \(g\) converges to the Euclidean metric and \(f\) goes to \(1\) at infinity along the end. A precise definition of asymptotic flatness is given below, see Definition 4.8. A notable example of asymptotically flat substatic solution is the Reissner-Nordstrom solution, corresponding to (1.2) with \(\Lambda=0\). In fact, the family of \(f\)-complete ends is quite more general: for instance, it is sufficient to require a suitable behaviour of \(f\) at infinity, without any assumption on the topology and geometry of the end, as clarified by the following proposition.
**Proposition 3.2**.: _Let \((M,g,f)\) be a substatic triple and let \(r\) be the \(g\)-distance from a point (or more generally from a compact domain). If there exist a compact set \(K\supset\partial M\) and constants \(0<c<C\), \(0<k<1\) such that_
\[cr^{-k}<f<Cr^{k} \tag{3.2}\]
_at all points in \(M\setminus K\), then all ends are \(f\)-complete._
Proof.: Let \(\gamma:[0,+\infty)\to M\) be a \(g\)-unit speed curve going to infinity along the end and let \(\delta\) be the \(g\)-distance between \(\gamma(0)\) and the point \(p\) we are taking the distance \(r\) from (if \(r\) is the distance from a compact domain instead, it is sufficient to choose \(\delta\) as the maximum distance between \(\gamma(0)\) and the points of the domain; the rest of the proof is easy to adapt). It is also convenient to assume that \(\gamma(0)\) and \(p\) belong to \(K\) and that \(K\) is geodesically convex with respect to the metric \(\tilde{g}=g/f^{2}\) (this can of course always be achieved by possibly enlarging \(K\)).
Since \(\gamma\) has unit speed, we have \(d(\gamma(0),\gamma(t))<t\), hence by triangle inequality
\[r(\gamma(t))<t+\delta\,.\]
If we then denote by \(T\) the maximum value of \(t\) such that \(\gamma(t)\in K\), estimate (3.2) tells us that for any \(t>T\) it holds
\[c\,(t+\delta)^{-k}\,<\,f(\gamma(t))\,<\,C\,(t+\delta)^{k}\,.\]
In particular, since \(0<k<1\) we have
\[\int_{T}^{+\infty}f(\gamma(t))dt\,>\,c\,\int_{T}^{+\infty}(t+\delta)^{-k}dt\, >\,c\lim_{t\to+\infty}\frac{(t+\delta)^{1-k}-(T+\delta)^{1-k}}{1-k}\,=\,+ \infty\,.\]
To conclude, it is sufficient to show that the \(\tilde{g}\)-distance \(\rho\) of \(\gamma(t)\) from a fixed point (we will take \(\gamma(0)\) for simplicity) also goes to \(+\infty\) as \(t\to+\infty\). For any fixed \(t>0\), let \(\sigma_{t}\) be the unit-speed \(\tilde{g}\)-geodesic from \(\gamma(0)\) to \(\gamma(t)\). We reparametrize \(\sigma_{t}\) so that it has speed \(1\) with respect to the metric \(g\). With a slight abuse of notation, we still denote by \(\sigma_{t}\) the reparametrized curve. We will have \(\sigma_{t}:[0,\tau]\to M\), with \(\tau\geq t\). Since we have chosen \(K\) to be \(\tilde{g}\)-geodesically convex, there will exist a value \(T_{t}\) such that \(\sigma_{t}(s)\in K\) for all \(s\leq T_{t}\) and \(\sigma_{t}(s)\not\in K\) for all \(s>T_{t}\). Clearly \(\tau-T_{t}>\mathrm{d}_{g}(K,\gamma(t))\). Furthermore, since \(\sigma_{t}\) restricted to \([0,T_{t}]\) is \(\tilde{g}\)-minimizing and both \(\sigma_{t}(0)=\gamma(0)\) and \(\sigma_{t}(T_{t})\) belong to \(K\), we have
\[\mathrm{diam}_{\tilde{g}}(K)\,>\,\int_{0}^{T_{t}}|\dot{\sigma}_{t}(s)|_{ \tilde{g}}ds\,=\,\int_{0}^{T_{t}}\frac{1}{f(\sigma_{t}(s))}ds>\frac{T_{t}}{ \max_{K}f}\,.\]
We are now ready to estimate the \(\tilde{g}\)-distance as follows:
\[\rho(\gamma(t))=\int_{0}^{\tau}|\dot{\sigma}_{t}(s)|_{\tilde{g}}ds\,=\,\int_{ 0}^{\tau}\frac{1}{f(\sigma_{t}(s))}|\dot{\sigma}_{t}(s)|_{g}ds\,=\,\int_{0}^ {\tau}\frac{1}{f(\sigma_{t}(s))}ds\,>\,\int_{T_{t}}^{\tau}\frac{r(\sigma_{t}(s ))^{-k}}{\sqrt{C}}ds\,.\]
Since \(r(\sigma_{t}(s))<\mathrm{d}_{g}(\sigma_{t}(s),\gamma(0))+\delta<s+\delta\), we then have
\[\rho(\gamma(t)) >\frac{(\tau+\delta)^{1-k}-(T_{t}+\delta)^{1-k}}{\sqrt{C}(1-k)}\] \[>\,\frac{(T_{t}+\delta+\mathrm{d}_{g}(\gamma(t),K))^{1-k}-(T_{t}+ \delta)^{1-k}}{\sqrt{C}(1-k)}\] \[>\,\frac{(\mathrm{d}_{g}(\gamma(t),K))^{1-k}-(\mathrm{diam}_{ \tilde{g}}(K)\max_{K}f)^{1-k}}{\sqrt{C}(1-k)}\,.\]
Since \(K\) is fixed, the distance \(\mathrm{d}_{g}(\gamma(t),K)\) is going to \(+\infty\), hence \(\rho(\gamma(t))\to+\infty\) as \(t\to+\infty\) as wished.
Another well studied family of ends is the following:
**Definition 3.3**.: _We say that an end of a substatic triple \((M,g,f)\) is conformally compact if a neighborhood \(E\) of it is the interior of a compact manifold \(\overline{E}\) with boundary \(\partial\overline{E}\) and the metric \(\tilde{g}=g/f^{2}\) extends to the boundary of \(\overline{E}\) in \(\mathscr{C}^{3}\)-fashion. We denote by \(\partial E_{\infty}=\partial\overline{E}\setminus M\) the conformal boundary of the end. Finally, we require \(f^{-1}\) to extend to a \(\mathscr{C}^{3}\)-function on \(\overline{E}\) in such a way that \(f^{-1}=0\) and \(d(f^{-1})\neq 0\) on \(\partial E_{\infty}\)._
It is clear that on conformally compact ends the \(\tilde{g}\)-distance function \(\rho\) does not grow to infinity along the end. On the other hand, it is easily seen that \(\eta\) goes to infinity along any ray going into a conformally compact end. An example of this behaviour is given by the Schwarzschild-Anti de Sitter solution.
The two families of \(f\)-complete ends and conformally compact ends enclose the model solutions we are interested in. Furthermore, following [20], we can prove a splitting theorem for both these types of ends. The proof, given below, makes substantial use of the conformal metric \(\tilde{g}\). It is then convenient to remark that our manifold remains complete with respect to this conformal metric.
**Lemma 3.4**.: _Let \((M,g,f)\) be a substatic triple with ends that are either \(f\)-complete or conformally compact and let \(\partial M_{\infty}\) be the (possibly empty) conformal infinity (namely, the union of the conformal infinities of the conformally compact ends). Then the manifold \((M\cup\partial M_{\infty})\setminus\partial M\) is complete with respect to the metric \(\tilde{g}=g/f^{2}\)._
**Remark 3.5**.: _We specify that we are referring here to completeness as a metric space, not to geodesic completeness. Clearly geodesic completeness fails in presence of a conformal boundary, since \(\tilde{g}\)-geodesics may end at \(\partial M_{\infty}\)._
Proof.: The fact that the manifold is complete near the conformal boundary is immediate from the definition of conformally compact ends. The \(\tilde{g}\)-completeness of \(f\)-complete ends has already been discussed after Definition 3.1. It remains to prove \(\tilde{g}\)-completeness near \(\partial M\). To do this, we show that the boundary components become ends with respect to the conformal metric \(\tilde{g}\). Let \(\gamma:[0,\ell]\to\mathbb{R}\) be a \(g\)-unit speed curve with \(\gamma(0)\in\partial M\) and \(\gamma(t)\) in the interior of \(M\) for any \(t>0\). It is enough to show that its \(\tilde{g}\)-length is infinite. From the mean value theorem we know that for any \(t\in(0,\ell]\) there exists \(\xi\in(0,t)\) such that
\[f(\gamma(t))\,=\,f(\gamma(t))-f(\gamma(0))\,=\,\langle\nabla f(\gamma(\xi)) \,|\,\dot{\gamma}(t)\rangle\,t\,\leq\,|\nabla f|(\gamma(\xi))\,t\,.\]
If \(K\) is a compact collar neighborhood of \(\partial M\) containing \(\gamma\), we then compute
\[\int_{0}^{\ell}|\dot{\gamma}(t)|_{\tilde{g}}dt\,=\,\int_{0}^{\ell}\frac{dt}{f( \gamma(t))}\,\geq\,\int_{0}^{\ell}\frac{1}{(\max_{K}|\nabla f|)\,t}dt=+\infty\,,\]
as wished.
### Splitting Theorem for \(f\)-complete ends
In [20], the author proves a Splitting Theorem in the \(\operatorname{CD}(0,1)\) framework. Here we translate this result in the substatic setting, obtaining Theorem C. The proof is essentially the one in [20], but we prefer to show it for completeness. The strategy is that of exploiting the Laplacian Comparison given by Theorem 2.5 together with standard techniques for the Busemann function, to be coupled with a refined splitting argument.
Proof of Theorem C.: The first part of the proof follows quite closely the usual proof of the classical Splitting Theorem for manifolds with nonpositive Ricci curvature, so we avoid to give all the technical details, that can be found in any standard Riemannian geometry book (see for instance [19, Theorem 7.3.5]).
We are assuming that there are at least two \(f\)-complete ends and we know from Lemma 3.4 that \(M\setminus\partial M\) is complete with respect to \(\tilde{g}\), hence we can take points arbitrarily far away in the two ends and connect them via a \(\tilde{g}\)-minimizing geodesic. At the limit, we thus produce a globally minimizing geodesic \(\sigma:[-\infty,+\infty]\) going from one end to the other. For a given \(t\in\mathbb{R}\) we then consider the functions
\[\beta_{t}(x)=\operatorname{d}_{\tilde{g}}(x,\sigma(t))-t\]
and the Busemann functions
\[\beta_{+}(x)=\lim_{t\to+\infty}\beta_{t}(x)\,,\qquad\beta_{-}(x)=\lim_{t\to- \infty}\beta_{t}(x)\,.\]
Under the assumption of \(f\)-completeness, we know that the \(\tilde{g}\)-distance goes to infinity as we approach the end, hence the limits are well defined. Theorem 2.5 tells us that \(\Delta\beta_{t}+(1/f)\langle\nabla f\,|\,\nabla\beta_{t}\rangle\leq 1/\eta\) for every \(t\). Again using the fact that we are assuming \(f\)-completeness of the ends, we have \(\eta\to+\infty\) at infinity. Standard arguments then tell us that \(\beta_{\pm}\) satisfy the inequality
\[\Delta\beta_{\pm}+\frac{1}{f}\langle\nabla f\,|\,\nabla\beta_{\pm}\rangle\leq 0\]
in the barrier sense. In particular, \(\beta_{-}+\beta_{+}\) satisfies the same elliptic inequality. Furthermore \(\beta_{-}+\beta_{+}=0\) on \(\sigma\) by construction and a simple application of the triangle inequality shows that \(\beta_{-}+\beta_{+}\geq 0\) on the whole \(M\). In follows then by maximum principle that \(\beta_{+}+\beta_{-}=0\). We then conclude that \(\beta=\beta_{+}\) solves
\[\Delta\beta+\frac{1}{f}\langle\nabla f\,|\,\nabla\beta\rangle\,=\,0 \tag{3.3}\]
in the barrier sense. Standard regularity theory tells us that \(\beta\) is in fact smooth, hence solves (3.3) in the classical sense as well.
We denote by \(\widetilde{\nabla}\) the Levi-Civita connection with respect to the metric \(\tilde{g}\). Our next step is to prove that \(\widetilde{\nabla}\beta\) is in fact the splitting direction. We first exploit (3.3) to compute
\[\Delta_{\tilde{g}}\beta\,=\,-(n-1)\frac{1}{f}\langle\widetilde{\nabla}f\,|\, \widetilde{\nabla}\beta\rangle_{\tilde{g}}\]
Furthermore, rewriting the substatic condition (1.1) in terms of the new metric, we find
\[\operatorname{Ric}_{\tilde{g}}\,\geq\,(n-1)\frac{1}{f}\widetilde{\nabla}^{2}f -2(n-1)\frac{1}{f^{2}}df\otimes df\]
The above formulas can be applied in combination with the Bochner formula to obtain
\[\Delta_{\tilde{g}}\big{|}\widetilde{\nabla}\beta\big{|}_{\tilde{g }}^{2} =\,\big{|}\widetilde{\nabla}^{2}\beta\big{|}_{\tilde{g}}^{2}+2 \operatorname{Ric}_{\tilde{g}}\big{(}\widetilde{\nabla}\beta,\widetilde{\nabla }\beta\big{)}+2\big{(}\widetilde{\nabla}\Delta_{\tilde{g}}\beta\,|\, \widetilde{\nabla}\beta\big{)}_{\tilde{g}}\] \[\geq\,2\,\left[|\widetilde{\nabla}^{2}\beta|_{\tilde{g}}^{2}- \frac{(\Delta_{\tilde{g}}\beta)^{2}}{n-1}\right]-(n-1)\frac{1}{f}\langle \widetilde{\nabla}\big{|}\widetilde{\nabla}\beta\big{|}_{\tilde{g}}^{2}\,| \,\widetilde{\nabla}f\rangle_{\tilde{g}}\]
A standard estimate for the Hessian tells us that
\[\big{|}\widetilde{\nabla}^{2}\beta\big{|}_{\tilde{g}}^{2}-\frac{(\Delta_{ \tilde{g}}\beta)^{2}}{n-1}\,\geq\,-\frac{1}{n-1}\frac{\Delta_{\tilde{g}}\beta }{|\widetilde{\nabla}\beta|_{\tilde{g}}^{2}}\langle\widetilde{\nabla}\big{|} \widetilde{\nabla}\beta\big{|}_{\tilde{g}}^{2}\,|\,\widetilde{\nabla}\beta \rangle_{\tilde{g}}\,.\]
Substituting this in the previous inequality, we get
\[\Delta_{\tilde{g}}\big{|}\widetilde{\nabla}\beta\big{|}_{\tilde{g}}^{2}\,\geq\, \left\langle\widetilde{\nabla}\big{|}\widetilde{\nabla}\beta\big{|}_{\tilde{g}} ^{2}\,\bigg{|}\,\frac{2}{f}\frac{\langle\widetilde{\nabla}f\,|\,\widetilde{ \nabla}\beta\rangle_{\tilde{g}}}{|\widetilde{\nabla}\beta|_{\tilde{g}}^{2}} \widetilde{\nabla}\beta+(n-1)\frac{1}{f}\widetilde{\nabla}f\right\rangle_{ \tilde{g}}\]
By Cauchy-Schwarz the vector on the right-hand side is bounded on any compact set. In particular, \(|\widetilde{\nabla}\beta|_{\tilde{g}}^{2}\) satisfies the maximum principle. To conclude from this that \(|\widetilde{\nabla}\beta|_{\tilde{g}}\) is constant, we first observe via triangle inequality that \(\beta(x)-\beta(y)\leq\mathrm{d}_{\tilde{g}}(x,y)\) for any two points \(x,\,y\). This immediately implies that \(|\widetilde{\nabla}\beta|_{\tilde{g}}\leq 1\) on \(M\). On the other hand, since \(\beta(\sigma(\tau))=\tau\), we conclude that \(|\widetilde{\nabla}\beta|_{\tilde{g}}=1\) on \(\sigma\). The strong maximum principle then implies that \(|\widetilde{\nabla}\beta|_{\tilde{g}}=1\) on the whole manifold. In particular, the previous inequalities must be equalities, namely
\[\mathrm{Ric}_{\tilde{g}}\big{(}\widetilde{\nabla}\beta,\widetilde {\nabla}\beta\big{)} =\,(n-1)\frac{1}{f}\widetilde{\nabla}^{2}f\big{(}\widetilde{\nabla }\beta,\widetilde{\nabla}\beta\big{)}-2(n-1)\frac{1}{f^{2}}\big{\langle} \widetilde{\nabla}f\,\big{|}\,\widetilde{\nabla}\beta\big{\rangle}_{\tilde{g} }\,,\] \[\big{|}\widetilde{\nabla}^{2}\beta\big{|}_{\tilde{g}}^{2} =\,\frac{(\Delta_{\tilde{g}}\beta)^{2}}{n-1}\,.\]
Furthermore, the fact that \(|\widetilde{\nabla}\beta|_{\tilde{g}}=1\) on the whole manifold grants us that we can use \(\beta\) as a coordinate and that the manifold \(M\) is diffeomorphic to \(\mathbb{R}\times\Sigma\), for some \((n-1)\)-dimensional manifold \(\Sigma\). With respect to coordinates \(\{\beta,\theta^{1},\dots,\theta^{n-1}\}\), the conformal metric writes as
\[\tilde{g}\,=\,d\beta\otimes d\beta+\tilde{g}_{ij}d\theta^{i}\otimes d\theta^{ j}\,.\]
Again, since \(|\widetilde{\nabla}\beta|_{\tilde{g}}=1\), for any vector \(X\) it holds
\[\widetilde{\nabla}^{2}\beta\big{(}\widetilde{\nabla}\beta,X\big{)}\,=\, \big{\langle}\widetilde{\nabla}\big{|}\widetilde{\nabla}\beta\big{|}_{\tilde{ g}}^{2}\,\big{|}\,X\big{\rangle}_{\tilde{g}}\,=\,0\,.\]
It follows immediately from this and the identity \(|\widetilde{\nabla}^{2}\beta|_{\tilde{g}}^{2}=(\Delta_{\tilde{g}}\beta)^{2}/ (n-1)\) that, in the coordinates in which \(\tilde{g}\) has the form (2.1), for any \(i,j=1,\dots,n-1\) it holds
\[\widetilde{\nabla}_{ij}^{2}\beta\,=\,\frac{\Delta_{\tilde{g}}\beta}{n-1} \tilde{g}_{ij}\,=\,-\frac{1}{f}\frac{\partial f}{\partial\beta}\tilde{g}_{ij}\,,\]
where the latter identity makes use of (2.3). On the other hand, from the definition of Hessian we have \(\widetilde{\nabla}_{ij}^{2}\beta=-\Gamma_{ij}^{\beta}=\partial_{\beta}\tilde {g}_{ij}/2\), hence
\[\frac{\partial\tilde{g}_{ij}}{\partial\beta}\,=\,-\frac{2}{f}\frac{\partial f }{\partial\beta}\tilde{g}_{ij}\,.\]
This identity can be solved explicitly, yielding
\[\tilde{g}_{ij}\,=\,\frac{1}{f^{2}}(g_{0})_{ij}\,,\]
where \((g_{0})_{ij}\) does not depend on \(\rho\). Comparing with (2.1) and recalling \(g=f^{2}\tilde{g}\), we have obtained
\[g\,=\,f^{2}d\beta\otimes d\beta+g_{0}\,.\]
Finally, we remark again that in this proof we did not have to ask for the boundary of \(M\) to be empty since, as observed in Lemma 3.4, with respect to the conformal metric \(\tilde{g}\) the boundary components become ends, hence they cannot obstruct minimizing geodesics. Therefore, the argument to produce a line between two \(f\)-complete ends goes through and the manifold splits. But then this would imply \(\partial M=(-\infty,+\infty)\times\partial\Sigma\), which contradicts our initial assumption that the boundary is compact. It follows that the boundary must be empty if there is more than one \(f\)-complete end.
**Remark 3.6**.: _We point out that it is actually possible to obtain a stronger thesis in Theorem C above. In fact, proceeding as in the proof of Theorem 2.9, one can also show that identity (2.23) is in force, and from there deduce that \(f=f_{1}f_{2}\), where \(f_{1}\) is a function of \(s\) whereas \(f_{2}\) does not depend on \(s\). We do not give the details on this computation, which has already been performed in the conformal \(\mathrm{CD}(0,1)\) framework [23, Proposition 2.2]. Recalling the relation between \(\mathrm{CD}(0,1)\)
and substatic discussed in Appendix A.2, one can easily translate this result in our setting. One may also write down explicitly the substatic condition in the directions tangential to the cross section, to obtain some information on the triple \((\Sigma,g_{0},f_{2})\). Again, in the \(\operatorname{CD}(0,1)\) setting, this has been done in [23, Proposition 2.3], where it is shown that the triple \((\Sigma,g_{0},-(n-1)\log f_{2})\) satisfies the \(\operatorname{CD}(0,1)\) condition (in fact, it is even \(\operatorname{CD}(0,0)\)). It is not immediately clear whether this fact translates nicely in our setting. These refinements of the thesis of Theorem C will not be needed in the rest of the paper._
### Splitting theorem for conformally compact ends
We now discuss conformally compact ends. For such ends, by definition the metric extends to the conformal infinity sufficiently smoothly so that the mean curvature \(\operatorname{H}_{\tilde{g}}\) of the conformal infinity \(\partial E_{\infty}\) is well defined. On the other hand, the mean curvatures \(\operatorname{H}\) and \(\operatorname{H}_{\tilde{g}}\) of a hypersurface with respect to the two different metrics can be seen to be related by
\[\operatorname{H}_{\tilde{g}}\,=\,f\operatorname{H}-(n-1)\langle\nabla f\,|\, \nu\rangle\,.\]
Alternatively, setting \(\varphi=1/f\) we can write
\[\operatorname{H}\,=\,\varphi\operatorname{H}_{\tilde{g}}-(n-1)\big{\langle} \widetilde{\nabla}\varphi\,\big{|}\,\nu_{\tilde{g}}\big{\rangle}_{\tilde{g}}\,.\]
By definition of conformal compactness we know that \(\varphi\) extend in a \(\mathscr{C}^{3}\) fashion to the conformal boundary by setting \(\varphi=0\) on \(\partial E_{\infty}\). In particular \(|\widetilde{\nabla}\varphi|_{\tilde{g}}\) is bounded, which implies that the quantity \(\operatorname{H}/f\) can be extended to zero in a continuous fashion on \(\partial E_{\infty}\). Taking now as \(\rho\) the \(\tilde{g}\)-distance from \(\partial E_{\infty}\), recalling the Riccati equation (2.5) we have that
\[\frac{\partial}{\partial\rho}\left(\frac{\operatorname{H}}{f}\right)\,\leq\,- \frac{1}{n-1}\operatorname{H}^{2}\,,\qquad\frac{\operatorname{H}}{f}_{|_{\rho =0}}=0\,.\]
The assumption of \(\mathscr{C}^{3}\)-regularity of the conformal boundary made in Definition 3.3 was needed precisely to make sense of the \(\rho\)-derivative of \(\operatorname{H}\). Proceeding exactly as in Subsection 2.3, from this formula on the evolution of the mean curvature we obtain the Laplacian comparison
\[\frac{\operatorname{H}}{f}\,=\,\Delta\rho+\frac{1}{f}\langle\nabla f\,|\, \nabla\rho\rangle\,\leq\,0\,. \tag{3.4}\]
This is the main ingredient to prove the Splitting Theorem in the conformally compact setting:
**Theorem 3.7**.: _Let \((M,g,f)\) be a substatic triple with conformally compact ends. Then there is at most one end._
Proof.: Again, a \(\operatorname{CD}(0,1)\)-version of this argument can be found in [23, Theorem 5.1]. The proof follows closely the one in [16, Theorem B-(1)], where a splitting theorem for compact manifolds is discussed. Suppose by contradiction that the conformal infinity has at least two connected components. Let \(S_{-},S_{+}\) be the two components with least distance. Then there exists a \(\tilde{g}\)-geodesic \(\sigma\) minimizing the distance between them.
Let \(\beta_{-}\) (resp. \(\beta_{+}\)) be the distance from \(S_{-}\) (resp. \(S_{+}\)) with respect to \(\tilde{g}\). The discussion above grants us that both \(\beta_{-}\) and \(\beta_{+}\) satisfy the Laplacian comparison (3.4). In particular, so does \(\beta_{-}+\beta_{+}\). Since by construction \(\beta_{-}+\beta_{+}\) reaches its minimum value \(\operatorname{dist}(S_{-},S_{+})\) on the geodesic \(\sigma\), we then conclude by the strong maximum principle that \(\beta_{-}+\beta_{+}\) is constant and equal to \(\operatorname{dist}(S_{-},S_{+})\) on the whole manifold. It follows immediately that \(\beta=\beta_{+}\) satisfies
\[\Delta\beta+\frac{1}{f}\langle\nabla f\,|\,\nabla\beta\rangle\,=\,0\]
in the barrier sense. We are now exactly in the same situation reached in the proof of the Splitting Theorem for \(f\)-complete ends. We can then proceed exactly as after formula (3.3) to conclude that \((M,g)\) is isometric to a twisted product
\[\big{(}(a,b)\times\Sigma,\,f^{2}\,ds\otimes ds+g_{\Sigma}\big{)}\.\]
On the other hand, such a manifold is not conformally compact, as the metric \(\tilde{g}=g/f^{2}\) is degenerate as \(s\) approaches \(a\) or \(b\). We have thus reached a contradiction, implying that there were not multiple conformally compact ends in the first place.
This result generalizes [10, Theorem I.1], where the same thesis is obtained for conformally compact vacuum static solutions with negative cosmological constant. It is interesting to notice that the proof proposed in [10] also makes use of the conformal metric \(\tilde{g}=g/f^{2}\), which is exploited to invoke a spacetime censorship result from [1, Theorem 2.1].
### Splitting Theorem for mixed ends
For completeness, we include here the case where there are ends with different behaviours. This case was not considered in [20] but the proof is similar.
**Theorem 3.8**.: _Let \((M,g,f)\) be a substatic triple with ends that are either conformally compact or \(f\)-complete. If there is at least one \(f\)-complete end, then there cannot be any conformally compact end._
Proof.: This time the proof follows [16, Theorem C-(2)]. Suppose that there is a conformally compact end and an \(f\)-complete end. Then, one constructs a globally minimizing \(\tilde{g}\)-geodesic \(\sigma\) starting at a connected component \(S\) of the conformal boundary and reaching infinity. Let \(\beta_{-}\) be the distance from \(S\) and \(\beta_{+}\) be the Busemann function relative to \(\sigma\). As in the previous cases, from the Laplacian comparisons for both \(\beta_{-}\) and \(\beta_{+}\) and the fact that \(\beta_{-}+\beta_{+}\) achieves its minimum value \(0\) on \(\sigma\), we deduce that \(\beta=\beta_{+}\) satisfies
\[\Delta\beta+\frac{1}{f}\langle\nabla f\,|\,\nabla\beta\rangle\,=\,0\]
in the barrier sense. We now proceed as in the other cases to show that the manifold must be a twisted product
\[\left((0,+\infty)\times\Sigma,\,f^{2}\,ds\otimes ds+g_{\Sigma}\right)\,.\]
Again as in the proof of Theorem 3.7, we observe that the end corresponding to \(s=0\) cannot be conformally compact as the metric \(\tilde{g}=g/f^{2}\) becomes degenerate as \(s\to 0\). We have thus reached a contradiction, meaning that it is impossible to have an \(f\)-complete end and a conformally compact end at the same time.
This theorem, together with the other results in this Section (Theorem C and Theorem 3.7) strongly narrows the acceptable configurations of ends for a substatic triple. We sum up the topological information we have collected in the following statement.
**Corollary 3.9**.: _Let \((M,g,f)\) be a substatic triple with ends that are either conformally compact or \(f\)-complete. If there is more than one end, then there are exactly two ends, both \(f\)-complete, and \(\partial M=\mathcal{O}\)._
## 4. Asymptotic Volume Ratio and Willmore-type inequality
In this section we focus on \(f\)-complete ends and we introduce the notion of asymptotic volume ratio (AVR), in analogy with the classical case of nonpositive Ricci curvature, as the limit of the Bishop-Gromov monotonic quantity. In order to have a well defined AVR, we will need to focus our attention to the special case of uniform ends. Building on the notion of AVR we will finally prove the Willmore-type inequality mentioned in the introduction.
### Uniform \(f\)-complete ends
Here we introduce and comment the notion of uniformity of an \(f\)-complete end. For convenience, instead of working on the whole \(M\), we focus our attention on the end only. In other words, starting from the next definition and for most of this subsection, instead of working on the whole substatic triple \((M,g,f)\), we just consider a neighborhood \(E\) of our end and we focus our attention on the restriction \((E,g,f)\), which we refer to as a substatic \(f\)-complete end. It is easy to show that the definitions and statements below do not depend on the choice of the neighborhood \(E\) of our end.
**Definition 4.1**.: _Let \((E,g,f)\) be a substatic \(f\)-complete end. We say that \((E,g,f)\) is uniform, if, for any two compact hypersurfaces \(\Sigma_{1}\), \(\Sigma_{2}\) contained in the interior of \(E\) and every \(\delta>0\), there exists a compact set \(K\supset\partial E\) such that for any two unit speed \(\tilde{g}\)-geodesics \(\sigma_{1}\), \(\sigma_{2}\) minimizing the distance between \(\Sigma_{1}\), \(\Sigma_{2}\) and a point \(p=\sigma_{1}(t_{1})=\sigma_{2}(t_{2})\) outside \(K\), it holds_
\[\left|\frac{\int_{0}^{t_{1}}f^{2}(\sigma_{1}(t))dt}{\int_{0}^{t_{2}}f^{2}( \sigma_{2}(t))dt}\,-\,1\right|\,\leq\,\delta\,. \tag{4.1}\]
While the definition above is slightly technical, we point out that there are natural cases in which uniformity is guaranteed. We give here a couple of easily described families of uniform \(f\)-complete ends. The following result for instance guarantees us that an end is uniform as long as \(f\) goes to one at infinity.
**Proposition 4.2**.: _Let \((E,g,f)\) be a substatic end. If \(f\to 1\) at infinity, then \((E,g,f)\) is \(f\)-complete and uniform._
Proof.: The fact that the ends are \(f\)-complete has already been shown in far greater generality in Proposition 3.2. Let now \(\Sigma_{1}\), \(\Sigma_{2}\) be two hypersurfaces. Since \(f\to 1\) at infinity, for every \(\varepsilon\) the set \(K_{\varepsilon}=\{|f-1|>\varepsilon\}\) is compact. In particular \(1-\varepsilon<f<1+\varepsilon\) outside \(K_{\varepsilon}\). We consider now a \(\tilde{g}\)-geodesically convex compact set \(K\) containing \(K_{\varepsilon}\) and the two hypersurfaces \(\Sigma_{1}\), \(\Sigma_{2}\).
Let \(\sigma_{1}\), \(\sigma_{2}\) be two unit speed \(\tilde{g}\)-geodesics minimizing the distance between \(\Sigma_{1}\), \(\Sigma_{2}\) and a point \(p=\sigma_{1}(t_{1})=\sigma_{2}(t_{2})\) outside \(K_{\varepsilon}\). For \(j=1,2\), let \(0<T_{j}<t_{j}\) be the largest number such that \(\sigma_{j}(T_{j})\in K_{\varepsilon}\). We then have, for \(j=1,2\),
\[0\,<\,\int_{0}^{T_{j}}f^{2}(\sigma_{j}(t))dt\,<\,T_{j}\max_{K}f^{2}\,,\qquad( 1-\varepsilon)^{2}(t_{j}-T_{j})\,<\,\int_{T_{j}}^{t_{j}}f^{2}(\sigma_{j}(t)) dt\,<\,(1+\varepsilon)^{2}(t_{j}-T_{j})\,.\]
As a consequence, we estimate:
\[\frac{(1-\varepsilon)^{2}(t_{1}-T_{1})}{T_{2}\max_{K}f^{2}+(1-\varepsilon)^{2 }(t_{2}-T_{2})}\,<\,\frac{\int_{0}^{t_{1}}f^{2}(\sigma_{1}(t))dt}{\int_{0}^{t _{2}}f^{2}(\sigma_{2}(t))dt}\,<\,\frac{T_{1}\max_{K}f^{2}+(1+\varepsilon)^{2}( t_{1}-T_{1})}{(1-\varepsilon)^{2}(t_{2}-T_{2})}\,.\]
Since \(f\) is bounded at infinity, the quantity \(\max_{K}f^{2}\) is bounded by a constant independent of \(K\). Furthermore, we have \(T_{j}<\operatorname{diam}_{\tilde{g}}K_{\varepsilon}\), for \(j=1,2\), by construction. On the other hand, if we take the compact set \(K\) to be much larger than \(K_{\varepsilon}\), we can make \(t_{1}\) and \(t_{2}\) arbitrarily large. The uniformity estimate (4.1) follows then easily.
Another case in which uniformity is guaranteed is under the assumption that the norm of the gradient of \(f\) decays sufficiently fast.
**Proposition 4.3**.: _Let \((E,g,f)\) be a substatic \(f\)-complete end and let \(\rho\) be the \(\tilde{g}\)-distance from a point, where \(\tilde{g}=g/f^{2}\). If for some \(\varepsilon>0\) there exist a compact set \(K\supset\partial E\) and a constant \(C>0\) such that_
\[|\nabla f|<C\rho^{-1-\varepsilon}\]
_outside \(K\), then \((E,g,f)\) is uniform._
Proof.: Fix the compact hypersurfaces \(\Sigma_{1}\), \(\Sigma_{2}\), the point \(x\) and the constant \(\varepsilon>0\). Let \(K\supset\partial E\) be the compact set such that \(|\nabla f|<C\rho^{-1-\varepsilon}\) outside \(K\), where \(\rho\) is the \(\tilde{g}\)-distance from \(x\). Up to enlarging \(K\), we can suppose that \(x\), \(\Sigma_{1}\) and \(\Sigma_{2}\) are inside \(K\).
Let \(p\) be a point outside \(K\). For \(i=1,2\), consider the unit speed \(\tilde{g}\)-geodesic \(\sigma_{i}:[0,t_{i}]\to M\) minimizing the distance between \(\Sigma_{i}\) and \(p\) and such that \(\sigma_{i}(0)\in\Sigma_{i}\), \(\sigma_{i}(t_{i})=p\). We compare the value of \(f\) at a point \(\sigma_{i}(\tau)\) and at the point \(p\). Integrating along the geodesic, we find
\[\log f(p)\,=\,\log f(\sigma_{i}(\tau))+\int_{\tau}^{t_{i}}\big{\langle}\tilde{ \nabla}\log f\,\big{|}\,\dot{\sigma}_{i}\big{\rangle}_{\tilde{g}}(\sigma_{i}(t ))\,dt\,,\]
hence
\[\left|\log\left(\frac{f(\sigma_{i}(\tau))}{f(p)}\right)\right|\,\leq\,\int_{ \tau}^{t_{i}}\frac{1}{f}\big{|}\tilde{\nabla}f\big{|}_{\tilde{g}}(\sigma_{i}( t))\,dt=\,\int_{\tau}^{t_{i}}|\nabla f|(\sigma_{i}(t))\,dt\,. \tag{4.2}\]
We now exploit our hypothesis. Assume that the segment \(\sigma_{i_{|}_{[r,t_{i}]}}\) is outside \(K\). Then \(|\nabla f|\leq C\rho^{-1-\varepsilon}\) at the points of \(\sigma_{i_{|}_{[r,t_{i}]}}\), where \(\rho\) is the distance from \(x\). Notice that for all \(\tau\leq t\leq t_{i}\), \(\sigma_{i}(t)\) is at distance \(t\) from \(\Sigma_{i}\), so that by triangle inequality, for any \(y\in K\) it holds
\[t-\max_{y\in\Sigma_{i}}\mathrm{d}_{\tilde{g}}(x,y)\leq\rho(\sigma_{i}(t))\leq t +\max_{y\in\Sigma_{i}}\mathrm{d}_{\tilde{g}}(x,y)\,.\]
If we then take
\[t\geq 2\max_{y\in\Sigma_{i}}\mathrm{d}_{\tilde{g}}(x,y) \tag{4.3}\]
we get
\[|\nabla f|(\sigma_{i}(t))\,\leq\,C\rho^{-1-\varepsilon}(\sigma_{i}(t))\,\leq \,C(t-\max_{y\in\Sigma_{i}}\mathrm{d}_{\tilde{g}}(x,y))^{-1-\varepsilon}\, \leq\,C\left(\frac{t}{2}\right)^{-1-\varepsilon}=2^{1+\varepsilon}Ct^{-1- \varepsilon}\,.\]
If we then suppose that \(\tau\geq 2\max_{y\in\Sigma_{i}}\mathrm{d}_{\tilde{g}}(x,y)\), we deduce from (4.2) that
\[\left|\log\left(\frac{f(\sigma_{i}(\tau))}{f(p)}\right)\right|\,\leq\,2^{1+ \varepsilon}C\frac{\tau^{-\varepsilon}-t_{i}^{-\varepsilon}}{\varepsilon}\]
that is,
\[e^{-\frac{2^{1+\varepsilon}C}{\varepsilon}\tau^{-\varepsilon}}\,\leq\,\exp \left[2^{1+\varepsilon}C\frac{t_{i}^{-\varepsilon}-\tau^{-\varepsilon}}{ \varepsilon}\right]\,\leq\,\frac{f(\sigma_{i}(\tau))}{f(p)}\,\leq\,\exp\left[ 2^{1+\varepsilon}C\frac{\tau^{-\varepsilon}-t_{i}^{-\varepsilon}}{\varepsilon }\right]\,\leq\,e^{\frac{2^{1+\varepsilon}C}{\varepsilon}\tau^{-\varepsilon}}\,. \tag{4.4}\]
Let now \(\kappa>0\) be a large number and consider the compact set
\[K_{\kappa}\,=\,B_{\kappa}^{\tilde{g}}(K)\,:=\,\left\{y\in M\,:\,\mathrm{d}_{ \tilde{g}}(K,y)\leq\kappa\right\}.\]
Since \(\Sigma_{i}\subset K\), notice that if \(\sigma_{i}(t)\not\in K_{\kappa}\) then \(t>\kappa\). It follows that, up to taking \(\kappa\) large enough, inequality (4.3) holds and in particular \(|\nabla f|(\sigma_{i}(t))\leq 2^{1+\varepsilon}Ct^{-1-\varepsilon}\) for any \(t\) such that \(\sigma_{i}(t)\not\in K_{\kappa}\).
Conversely, also notice by triangle inequality that if
\[t>\kappa+\max_{y\in\Sigma_{i},z\in\partial K\setminus\partial E}\mathrm{d}_{ \tilde{g}}(y,z)\,,\]
then \(\sigma_{i}(t)\not\in K_{\kappa}\). Up to taking \(\kappa\) large enough, we can then also suppose that \(\sigma_{i}(t)\not\in K_{\kappa}\) for every \(t>2\kappa\). In particular, for any \(\tau>2\kappa\) we can apply estimate (4.4), obtaining
\[e^{-\frac{2C}{\varepsilon}\kappa^{-\varepsilon}}\,\leq\,\frac{f(\sigma_{i}( \tau))}{f(p)}\,\leq\,e^{\frac{2C}{\varepsilon}\kappa^{-\varepsilon}}\]
We are finally ready to prove uniformity at infinity with respect to the compact set \(K_{\kappa}\), for \(\kappa\) sufficiently large. As already noticed, \(\sigma_{i}(t)\in K_{2\kappa}\) for \(t<2\kappa\), hence
\[\frac{\int_{0}^{t_{1}}f^{2}(\sigma_{1}(t))dt}{\int_{0}^{t_{2}}f^{2}(\sigma_{2} (t))dt}\,\leq\,\frac{2\kappa\max_{K_{2\kappa}}f^{2}+\int_{2\kappa}^{t_{1}}f^{ 2}(\sigma_{1}(t))dt}{2\kappa\min_{K_{2\kappa}}f^{2}+\int_{2\kappa}^{t_{2}}f^{ 2}(\sigma_{2}(t))dt}\,\leq\,\frac{2\kappa\max_{K_{2\kappa}}f^{2}+(t_{1}-2 \kappa)f(p)^{2}e^{\frac{2C}{\varepsilon}\kappa^{-\varepsilon}}}{2\kappa\min_{K _{2\kappa}}f^{2}+(t_{2}-2\kappa)f(p)^{2}e^{-\frac{2C}{\varepsilon}\kappa^{- \varepsilon}}}\,.\]
Notice that \(t_{1}\) and \(t_{2}\) are comparable (their difference is bounded via the triangle inequality by the maximum of the distance between points of \(\Sigma_{1}\) and \(\Sigma_{2}\)). Therefore, for any \(\tilde{\varepsilon}>0\) arbitrarily small, we can find \(\tilde{\kappa}\) much larger than \(\kappa\) so that, assuming \(p\) is outside \(K_{\tilde{\kappa}}\) (in particular \(t_{1},t_{2}\) are also much larger than \(\kappa\)) it holds
\[\frac{\int_{0}^{t_{1}}f^{2}(\sigma_{1}(t))dt}{\int_{0}^{t_{2}}f^{2}(\sigma_{2} (t))dt}\,\leq\,(1+\tilde{\varepsilon})e^{\frac{4C}{\varepsilon}\kappa^{- \varepsilon}}\,.\]
Up to choosing \(\kappa\) large enough, we can also make the exponential term in the inequality above as close to \(1\) as necessary. Of course, exchanging the roles of \(\sigma_{1}\) and \(\sigma_{2}\) we also find the opposite bound. This proves uniformity.
Equipped with the notion of uniformity of the ends, we are now ready to define the substatic version of the asymptotic volume ratio.
**Definition 4.4**.: _Let \((M,g,f)\) be a substatic solution and let \(E\) be a uniform \(f\)-complete end. Let \(\rho\) be the distance function to a point or a hypersurface with respect to the metric \(\tilde{g}=g/f^{2}\) and \(\eta\) be the solution to (2.7) or (2.13), respectively. The Asymptotic Volume Ratio \(\operatorname{AVR}(E,g,f)\) of \(E\) is defined as_
\[\operatorname{AVR}(E,g,f)\,=\,\frac{1}{|\mathbb{S}^{n-1}|}\lim_{t\to+\infty} \int_{\{\rho=t\}\cap E}\frac{1}{\eta^{n-1}}d\sigma\,.\]
_If \((M,g,f)\) has a unique end \(E\), we refer to \(\operatorname{AVR}(E,g,f)\) with \(\operatorname{AVR}(M,g,f)\)._
The following basic fact motivates the introduction of the notion of uniform ends.
**Proposition 4.5**.: _The substatic Asymptotic Volume Ratio is well-defined on any uniform \(f\)-complete end. In other words, its definition does not depend on the choice of the point/hypersurface we are taking the distance \(\rho\) from._
Proof.: Let \(\rho\) be the \(\tilde{g}\)-distance from a point or a hypersurface. We consider the functional \(V(t)\) defined in (2.29), that we recall here for the reader's convenience:
\[V(t)\,=\,\frac{1}{|\mathbb{B}^{n}|t^{k}}\int_{\{0\leq\rho\leq t\}}\frac{\rho^ {k-1}}{f\eta^{n-1}}d\mu\,,\]
where \(k>0\) is a constant. A simple application of L'Hopital's rule tells us immediately that
\[\lim_{t\to+\infty}V(t) =\,\frac{1}{|\mathbb{B}^{n}|}\lim_{t\to+\infty}\frac{1}{t^{k}} \int_{\{\rho\leq t\}}\frac{\rho^{k-1}}{f\eta^{n-1}}d\mu\] \[=\,\frac{n}{k|\mathbb{S}^{n-1}|}\lim_{t\to+\infty}\int_{\{\rho=t \}}\frac{1}{\eta^{n-1}}d\sigma\] \[=\,\frac{n}{k}\lim_{t\to+\infty}A(t)\,. \tag{4.5}\]
In order to conclude the proof, it is then enough to show that \(\lim_{t\to+\infty}V(t)\) is independent of the choice of the point/hypersurface. In the rest of the proof, it is convenient to set \(k=1\) in our functional \(V\). Let \(\eta_{1}\), \(\eta_{2}\) be reparametrized distances with respect to two different points (resp. two different hypersurfaces) and let \(\delta\) be the distance between the two points (resp. the maximum distance between points of the two hypersurfaces) with respect to the metric \(\tilde{g}=g/f^{2}\). By triangle inequality we have the inclusion \(\{\rho_{1}\leq t-\delta\}\subset\{\rho_{2}\leq t\}\), therefore
\[V_{1}(t)-V_{2}(t) =\frac{1}{|\mathbb{B}^{n}|t}\int_{\{\rho_{1}\leq t\}}\frac{1}{f \eta_{1}^{n-1}}d\mu-\frac{1}{|\mathbb{B}^{n}|t}\int_{\{\rho_{2}\leq t\}} \frac{1}{f\eta_{2}^{n-1}}d\mu\] \[\leq\frac{1}{|\mathbb{B}^{n}|t}\int_{\{\rho_{1}\leq t\}}\frac{1}{ f\eta_{1}^{n-1}}d\mu-\frac{1}{|\mathbb{B}^{n}|t}\int_{\{\rho_{1}\leq t- \delta\}}\frac{1}{f\eta_{2}^{n-1}}d\mu\] \[\leq\frac{1}{|\mathbb{B}^{n}|t}\int_{\{t-\delta\leq\rho_{1}\leq t \}}\frac{1}{f\eta_{1}^{n-1}}d\mu+\frac{1}{|\mathbb{B}^{n}|t}\int_{\{\rho_{1} \leq t-\delta\}}\frac{1}{f\eta_{1}^{n-1}}\left(1-\frac{\eta_{1}^{n-1}}{\eta_ {2}^{n-1}}\right)d\mu\,.\]
Concerning the first integral, applying again L'Hopital's rule we find that its limit is the same as the limit of \(nA(t)-nA(t-\delta)\) at \(t\to+\infty\), where \(A\) is the usual area functional with respect to the distance \(\rho_{1}\). Since \(A(t)\) has a finite limit at infinity and \(\delta\) is fixed, this limit is zero. From the uniformity of the end and the fact that \(V(t)\) is bounded, we deduce that the second integral also goes to zero. Hence, we have found that \(\lim_{t\to+\infty}[V_{1}(t)-V_{2}(t)]\leq 0\). Switching the roles of \(V_{1}\) and \(V_{2}\) we find that the opposite inequality is also in place, hence the limits of \(V_{1}(t)\) and \(V_{2}(t)\) are the same, as wished.
**Remark 4.6**.: _As noted in Remark 2.4, \(\eta\) represents the distance along radial \(\tilde{g}\)-geodesics with respect to the metric \(\overline{g}=f^{2}g\). Providing a suitable Bishop-Gromov-type Theorem in terms of the \(\overline{g}\)-distance in place of \(\eta\) may be useful to cook a notion of Asymptotic Volume Ratio that does not need the notion of uniformity to be well defined._
The following is a basic yet fundamental consequence of the Splitting Theorem C.
**Lemma 4.7**.: _Let \((M,g,f)\) be a substatic triple with \(f\)-complete ends. If there is more than one uniform \(f\)-complete end, then all ends have vanishing asymptotic volume ratio._
Proof.: Suppose that there is more than one uniform \(f\)-complete end. Then the Splitting Theorem C implies that the manifold splits as a twisted product
\[(\mathbb{R}\times\Sigma,\,f^{2}\,ds\otimes ds+g_{\Sigma})\,,\]
for some \((n-1)\)-dimensional Riemannian manifold \((\Sigma,g_{\Sigma})\). Let \(\rho\) be the \(\tilde{g}\)-distance from the cross section \(\{s=0\}\) and \(\eta\) be defined by (2.13), as usual. Notice that the level sets of \(\rho\) and \(\eta\) are also level sets of \(s\), hence in particular the metric induced on any level set of \(\rho\) is \(g_{\Sigma}\). It follows then that
\[\operatorname{AVR}(E,g,f)\,=\,\frac{1}{|\mathbb{S}^{n-1}|}\lim_{t\to+\infty} \int_{\{\rho=t\}\cap E}\frac{1}{\eta^{n-1}}d\sigma\,=\,\lim_{t\to+\infty}\frac {|\Sigma|}{|\mathbb{S}^{n-1}|\eta^{n-1}_{\{\rho=t\}}}\,.\]
Since the end is \(f\)-complete, we have \(\eta\to+\infty\) at infinity, hence the above limit vanishes.
In light of the above Lemma, our main geometric inequalities (1.3) and (1.10) will only involve one end and the global \(\operatorname{AVR}(M,g,f)\).
In this framework, we now discuss some cases in which we are able to give more precise estimates for the Asymptotic Volume Ratio. A first simple estimate, in the case where the boundary is empty, is obtained from (2.28). Taking the limit of this formula as \(t\to+\infty\), assuming that the boundary is empty (so that the term \(A(t)\) appearing in that formula converges to the asymptotic volume ratio) we find the following
\[\operatorname{AVR}(M,g,f)\,f(p)^{n-1}\,\leq\,1\,.\]
This must hold for any point \(p\in M\). In particular, it follows that if \(\partial M=\O\) and \(f\) is not bounded then the Asymptotic Volume Ratio must vanish.
An important family of substatic manifolds having nonzero AVR is that of asymptotically flat triples, that we now define precisely.
**Definition 4.8**.: _A substatic triple \((M,g,f)\) is said to be asymptotically flat if_
* _there exists a compact domain_ \(K\supset\partial M\) _and a diffeomorphism (called_ chart at infinity_) between_ \(M\setminus K\) _and_ \(\mathbb{R}^{n}\) _minus a ball._
* _in the chart at infinity, it holds_ \(|g_{ij}-\delta_{ij}|=o(1)\) _and_ \(|f-1|=o(1)\) _as_ \(|x|\to+\infty\)_._
We remark that the usual definition of asymptotic flatness requires a higher degree of convergence of the metric \(g\) to the Euclidean one. However, the above definition is sufficient to compute precisely the asymptotic volume ratio.
**Proposition 4.9**.: _Let \((M,g,f)\) be an asymptotically flat substatic triple. Then \(\operatorname{AVR}(M,g,f)=1\)._
Proof.: The fact that the end is \(f\)-complete and uniform follows from Proposition 4.2. Let \(K\) be a compact set as in Definition 4.8 and let \(S=\{|x|=R\}\) be a large coordinate sphere contained in the chart at infinity. From Proposition 4.5 we know that the asymptotic volume ratio does not depend on the hypersurface we are taking the distance from. It is then convenient to work with the \(\tilde{g}\)-distance \(\rho\) from the coordinate sphere \(S\). As it follows from (4.5), we can also compute the AVR via the following limit
\[\operatorname{AVR}(M,g,f)\,=\,\frac{1}{|\mathbb{B}^{n}|}\lim_{t\to+\infty} \frac{1}{t^{n}}\int_{\{0\leq\rho\leq t\}}\frac{\rho^{n-1}}{f\eta^{n-1}}d\mu\,. \tag{4.6}\]
If the radius \(R\) of the coordinate sphere \(S\) is large, then we can assume \(|f-1|<\varepsilon\) and \(|g_{ij}-\delta_{ij}|<\varepsilon\) in \(\{|x|>R\}\) for some fixed small \(\varepsilon\). It is then easily seen that there exists \(\delta=\delta(\varepsilon)\) such that
\[\{R\leq|x|\leq(R+t)(1-\delta)\}\subset\{0\leq\rho\leq t\}\subset\{R\leq|x| \leq(R+t)(1+\delta)\}\]
for all \(t\). Since \(\eta\) grows as \(f^{2}\) along \(\tilde{g}\)-geodesics, we have \((1-\varepsilon)^{2}\rho<\eta<(1+\varepsilon)^{2}\rho\). It follows that the integral in (4.6) grows as the Euclidean volume of the annulus \(\{R<|x|<R+t\}\), or more explicitly as:
\[[(R+t)^{n}-R^{n}]\ |\mathbb{B}^{n}|\,\cong\,|\mathbb{B}^{n}|t^{n}\,.\]
The wished result follows easily.
### Willmore inequality
As a consequence of our definition of AVR and the Bishop-Gromov monotonicity of the area functional \(A(t)\) (Theorem 2.9), we obtain the Willmore inequality for hypersurfaces with nonnegative mean curvature of Theorem D. The following statement provides more details about the equality case.
**Theorem 4.10** (Willmore inequality).: _Let \((M,g,f)\) be a substatic solution with a uniform \(f\)-complete end. Let \(\Sigma\) be a hypersurface that is homologous to the boundary. Suppose that the mean curvature \(\mathrm{H}\) of \(\Sigma\) with respect to the normal pointing towards infinity satisfies \(\mathrm{H}>0\) pointwise. Then_
\[\int_{\Sigma}\left[\frac{\mathrm{H}}{(n-1)f}\right]^{n-1}d\sigma\,\geq\, \mathrm{AVR}(M,g,f)\,|\mathbb{S}^{n-1}|\,. \tag{4.7}\]
_If the equality holds, then the set \(U=\{\rho>0\}\) is isometric to \([0,+\infty)\times\Sigma\) with metric_
\[g\,=\,f^{2}d\rho\otimes d\rho+\eta^{2}g_{0}\,,\]
_where \(g_{0}\) is a metric on the level set \(\Sigma\). Furthermore, in \(U\) the functions \(f\) and \(\eta\) satisfy_
\[\eta\,=\,(\alpha+\beta)^{\frac{1}{n-1}}\,,\qquad f^{2}\,=\,\frac{\dot{\alpha} }{(n-1)(\alpha+\beta)^{\frac{n-2}{n-1}}}\]
_where \(\alpha\) is a function of \(\rho\) and \(\beta\) is a function on \(\Sigma\)._
Proof.: We recall from Theorem 2.9 that \(A(t)\) is monotonically nonincreasing. Taking the limit as \(t\to+\infty\) we then get
\[\frac{1}{|\mathbb{S}^{n-1}|}\int_{\Sigma}\left[\frac{\mathrm{H}}{(n-1)f} \right]^{n-1}d\sigma\,=\,A(0)\,\geq\,\lim_{t\to+\infty}A(t)\,=\,\mathrm{AVR}(M,g,f)\,.\]
This proves the inequality.
Furthermore, if the equality holds, then from the rigidity statement in Theorem 2.9 it follows
\[\frac{1}{\eta}\frac{\partial\eta}{\partial\theta^{i}}\,=\,\psi\eta-\varphi \frac{1}{\eta^{n-1}}\,,\qquad\frac{1}{f}\frac{\partial f}{\partial\theta^{i}} \,=\,\psi\eta+\frac{n-2}{2}\varphi\frac{1}{\eta^{n-1}}\,,\]
where \(\varphi\) and \(\psi\) are functions on \(\Sigma\). From the first of these equations, in particular we get
\[\frac{\partial}{\partial\theta^{i}}\left(\frac{1}{\eta}\right)\,=\,\psi- \varphi\frac{1}{\eta^{n}}\,.\]
We now focus on this identity near infinity: since our end is \(f\)-complete, we know that \(\eta\) is going to \(+\infty\). Furthermore, from the uniformity at infinity we can also prove that \(1/\eta\) goes to zero uniformly at infinity. To prove that, it is enough to apply the uniformity at infinity property to the two hypersurfaces \(\Sigma\) and \(\Sigma_{\delta}=\{\rho=\delta\}\). Notice that then the function \(\eta_{\delta}\) associated to \(\Sigma_{\delta}\) differs from \(\eta\) just by a constant \(k\), that is \(\eta_{\delta}=\eta+k\). Then uniformity at infinity implies precisely that for any \(\varepsilon\) there exists a compact set such that
\[\left|\frac{k}{\eta}\right|\,=\,\left|\frac{k+\eta}{\eta}-1\right|\,\leq\, \varepsilon\,.\]
Hence, \(1/\eta\) goes to \(0\) uniformly at infinity, as wished.
Given \(\varepsilon>0\), fix \(R>0\) big enough so that \(1/\eta<\varepsilon\) in \([R,+\infty)\times V\). If \(\psi\) is not everywhere vanishing, then there is an open set \(V\subset\Sigma\) such that \(\psi>\delta\) in \(V\) (the case \(\psi<-\delta\) is done in the exact same way). Therefore we would get
\[\frac{\partial}{\partial\theta^{i}}\left(\frac{1}{\eta}\right)\,>\,\delta-| \varphi|\,\varepsilon^{n}\,\geq\,\delta-\varepsilon^{n}\max_{\Sigma}|\varphi |=\tilde{\delta}\]
in \([R,+\infty)\times V\). Up to taking \(\varepsilon\) small enough, we can assume that \(\tilde{\delta}>0\). But then, for any two points \(p_{\rho}=(\rho,\theta^{1},\ldots,\theta^{i},\ldots,\theta^{n})\), \(q_{\rho}=(\rho,\theta^{1},\ldots,\theta^{i}+\lambda,\ldots,\theta^{n})\) belonging to \([R,+\infty)\times V\), we would deduce
\[\frac{1}{\eta}(p_{\rho})\,=\,\frac{1}{\eta}(q_{\rho})+\int_{0}^{\lambda}\frac{ \partial}{\partial\theta^{i}}\bigg{(}\frac{1}{\eta}\bigg{)}_{\mid_{(\rho, \theta^{1},\ldots,\theta^{i}+t,\ldots,\theta^{n})}}\,dt\,\geq\,\lambda\tilde{ \delta}\,,\]
which in turn would imply \(\lim_{\rho\to+\infty}(1/\eta)(p_{\rho})\geq\lambda\tilde{\delta}>0\), contradicting the fact that \(1/\eta\to 0\) at infinity. It follows then that
\[\psi=0\,.\]
Our constraints on \(f\) and \(\eta\) then become
\[\frac{1}{\eta}\frac{\partial\eta}{\partial\theta^{i}}\,=\,-\varphi\frac{1}{ \eta^{n-1}}\,,\qquad\frac{1}{f}\frac{\partial f}{\partial\theta^{i}}\,=\, \frac{n-2}{2}\varphi\frac{1}{\eta^{n-1}}\,. \tag{4.8}\]
The first equation can be rewritten as
\[\frac{\partial}{\partial\theta^{i}}\eta^{n-1}\,=\,-(n-1)\varphi\,.\]
Since \(\varphi\) does not depend on \(\rho\), it follows then that
\[\eta^{n-1}\,=\,\alpha+\beta\,,\]
where \(\alpha\) is a function of \(\rho\) and \(\beta\) is a function on \(\Sigma\). Taking the derivative with respect to \(\rho\) of this formula and using the fact that \(\partial_{\rho}\eta=f^{2}\), we then deduce
\[f^{2}\,=\,\frac{\dot{\alpha}}{(n-1)(\alpha+\beta)^{\frac{n-2}{n-1}}}\]
It is easy to check that for any \(f\) and \(\eta\) of this form (that is, for any choice of \(\alpha\) and \(\beta\)), formulas (4.8) are satisfied.
## 5. Isoperimetric Inequality for Substatic manifolds
In this Section, we focus our attention on substatic manifolds \((M,g,f)\) admitting an _exhaustion of outward minimising hypersurfaces_ homologous to \(\partial M\). A hypersurface \(\Sigma\) homologous to \(\partial M\) is outward minimizing if, denoting by \(\Omega\) the compact domain with \(\partial\Omega=\Sigma\sqcup\partial M\), we have
\[P(\Omega)\leq P(F)\]
for any bounded set \(F\supset\Omega\). We say that a sequence \((S_{j})_{j\in\mathbb{N}}\) of hypersurfaces homologous to \(\partial M\) exhaust \(M\) if, given a compact set \(K\subset M\), there exists an element \(S\) in the sequence such that \(K\subset\Omega\), for \(\Omega\) satisfying \(\partial\Omega=S\sqcup\partial M\). Conditions ensuring the existence of such an exhaustion are discussed in [13].
We start from showing that \(\partial M\) is a priori area minimizing. In showing so, we also derive that \(\partial M\) is _outermost_, that is, there exist no minimal submanifolds homologous to \(\partial M\) other than the boundary itself. These facts are the first main reason why we require the existence of (nonminimal) outward minimizing sets homologous to \(\partial M\). Since the following auxiliary result does not need any a priori growth at infinity assumption, we think it may have an independent interest.
**Proposition 5.1** (The boundary is outermost area-minimizing).: _Let \((M,g,f)\) be a substatic triple with horizon boundary. Assume that there exists an outward minimizing smooth hypersurface \(S\) homologous to \(\partial M\). Then, the horizon is outward minimizing, meaning that_
\[|\Sigma|\geq|\partial M| \tag{5.1}\]
_for any hypersurface \(\Sigma\) homologous to \(\partial M\). Moreover, it is outermost, that is there exists no other minimal hypersurfaces homologous to \(\partial M\)._
Proof.: Let \(\Omega\) be such that \(\partial\Omega=S\sqcup\partial M\). Indeed \(S\) being outward minimizing is mean-convex, and consequently the Maximum Principle implies it is disjoint from \(\partial M\) (see e.g. [11, Corollary 4.2]; it is a consequence of the strong comparison principle for quasilinear equations). We flow \(\Omega\) by weak Mean Curvature Flow, referring to the notion considered in [10]. In particular the analysis carried out by White [12] applies. Moreover, observe that the mean curvature of \(S\) is necessarily nonnegative, and in particular \(\Omega\) is mean-convex in the sense of [12, Section 3]. Since \(\partial M\) constitutes itself a (steady) MCF, the well-known [10, Inclusion Property 5.3] ensures that the possibly singular evolving sets \(\partial\Omega_{t}\setminus\partial M\) remain homologous to the horizon. By [12, Theorem 11.1], \(\partial\Omega_{t}\) must converge smoothly to a minimal hypersurface \(\Sigma\), necessarily homologous to \(\partial M\). We show that \(\Sigma\) can be the horizon only. Indeed, if this were not the case, \(\Sigma\) would be detached from \(\partial M\) by the Maximum Principle, and \((iii)\) in Proposition 2.7 would apply, foliating an outer neighbourhood of \(\Sigma\) with hypersurfaces of nonpositive mean curvature. But this is a contradiction, through the Maximum Principle for the mean curvature operator, applied on tangency points, with the smooth mean-convex Mean Curvature Flow smoothly approaching \(\Sigma\). Then, the Mean Curvature Flow of \(\Omega\) converges smoothly to \(\partial M\). Observe that this also implies that no minimal hypersurface homologous to \(\partial M\) contained in \(\Omega\) can exist. Indeed, if there were one, it would obviously remain fixed under MCF, and thus \(\Omega_{t}\) converging to \(\partial M\) would eventually go beyond it, contradicting [10, Inclusion Property 5.3]. The nonminimal outward minimizing sets forming an exhaustion, we in particular proved that \(\partial M\) is outermost.
Finally, recall that the outward minimizing property of the initial set is preserved along the flow, as it can be easily checked applying [12, One-Sided Minimization Theorem 3.5] (see [10, Lemma 5.6] for a proof in the smooth flow setting). Then, \(\partial M\) being one-sided limit of outward minimizing hypersurfaces homologous to the boundary, is outward minimizing as well.
As already pointed out in the Introduction, our proof of Theorem A ultimately builds on the application of the Willmore-type inequality (4.7) on hypersurfaces homologous to \(\partial M\) bounding a set that is isoperimetric with respect to the volume weighted by \(f\). In order to bypass the lack of existence, we will consider _constrained_ isoperimetric sets. We find convenient to extend \((M,g)\) over the horizon, letting \((N,g_{N})\) be the extended Riemannian manifold. This can be obtained through gluing another copy of \(M\) along its boundary, and endowing it with a smooth metric that coincides with \(g\) on the original manifold. The existence of such metric is ensured by [13, Theorem A]. Let \(S\) be homologous to \(\partial M\) and disjoint from it, and let \(\Omega\subset M\) have boundary \(S\sqcup\partial M\). Extend \(\Omega\) too, so to find \(\Omega_{N}\subset N\) satisfying \(\Omega_{N}\cap M=\Omega\), and let
\[B_{\varepsilon}^{N\setminus M}(\partial M)=\{p\in N\setminus M\,|\,\mathrm{d} _{N}(p,\partial M)\leq\varepsilon\},\]
for \(\varepsilon>0\) such that \(B_{\varepsilon}^{N\setminus M}(\partial M)\subset\Omega_{N}\), where \(\mathrm{d}_{N}\) is the distance induced by the metric \(g_{N}\). We are going to consider sets of finite perimeter \(E_{V}\) in \((N,g_{N})\) satisfying
\[|E_{V}\cap M|_{f}=V\qquad\qquad P(E_{V})=\inf\Big{\{}P(F)\,|\,B_{\varepsilon} ^{N\setminus M}(\partial M)\subset F\subset\Omega_{N},|F\cap M|_{f}=V\Big{\}} \tag{5.2}\]
for \(V<|\Omega|_{f}\), where we recall that given \(E\subset M\) we defined
\[|E|_{f}=\int_{E}fd\mu.\]
The following result gathers the main properties these constrained isoperimetric sets satisfy.
**Theorem 5.2** (Existence and structure of constrained \(f\)-isoperimetric sets).: _Let \((M,g,f)\) be a substatic triple with horizon boundary, of dimension \(n\leq 7\). Let \(S\) be a strictly mean-convex outward minimizing hypersurface homologous to \(\partial M\), and let \((N,g_{N}),\Omega,\Omega_{N}\) and \(B_{\varepsilon}^{N}(\partial M)\) as above, for \(\varepsilon>0\). Then, for any \(V<|\Omega|_{f}\), there exists \(E_{V}\subset\Omega_{N}\) satisfying (5.2). Moreover,_
1. \(\partial E_{V}\cap\partial M=\O\)_. Moreover,_ \(\partial(E_{V}\cap M)=\Sigma\sqcup\partial M\)_, where_ \(\Sigma\) _is a_ \(\mathscr{C}^{1,1}\)_-hypersurface._
2. _The set_ \(\Sigma\setminus S\) _is a smooth hypersurface. Moreover, there exists a_ positive _constant_ \(\lambda\) _such that_ \(\mathrm{H}(x)=\lambda f(x)\) _for any_ \(x\in\Sigma\setminus S\)
_._
3. _We have_ \[\lambda\geq\frac{\mathrm{H}}{f}(x)>0\] (5.3) _for_ \((n-1)\)_-almost any_ \(x\in\Sigma\)_._
Proof.: The existence of \(E_{V}\) directly follows from the Direct Method. Indeed, let \((F_{j})_{j\in\mathbb{N}}\) be a minimizing sequence for (5.2). Then, by compactness, up to subsequences it converges to a set \(E_{V}\subset\Omega_{N}\) in \(L^{1}\). In particular, we have
\[\lim_{j\to+\infty}\int_{M}f|\chi_{E_{V}}-\chi_{F_{j}}|d\mu\leq\lim_{j\to+\infty }\sup_{\Omega}f|(E_{V}\bigtriangleup F_{j})\cap M|=0.\]
So, \(|E_{V}\cap M|=V\). By the convergence almost everywhere one also deduces that \(B_{\varepsilon}^{N\setminus M}(\partial M)\subset E_{V}\subset\Omega_{N}\) is satisfied too. The lower semicontinuity of the perimeter also ensures that the infimum in (5.2) is attained by \(E_{V}\).
As far as the regularity of \(\partial(E_{V}\cap M)\) is concerned, let us first crucially observe that \(E_{V}\cap M\) is (constrained) isoperimetric in \(M\) endowed with the conformal metric \(\overline{g}=f^{2}g\) with respect to a perimeter and volume with the same weight, namely with respect to
\[P(E)=\int_{\partial^{*}E}f^{1-n}d\sigma_{\overline{g}},\quad|E|_{f}=\int_{E}f ^{1-n}d\mu_{\overline{g}},\]
where \(d\sigma_{\overline{g}}\) and \(d\mu_{\overline{g}}\) denote the area and volume measure induced by \(\overline{g}\) respectively. In particular, away from \(S\) and \(\partial M\), where \(\overline{g}\) becomes singular, we have that classical regularity for the weighted isoperimetric problem applies [11, Section 3.10], and implies the claimed smoothness. In order to prove the global \(\mathscr{C}^{1,1}\)-regularity, we mainly follow the nice exposition in [10, Section 6], taking advantage also of [13, Section 17]. We first show that \(E_{V}\cap M\) is an almost minimizer for the perimeter. This amounts to say that there exists \(r_{0}\) such that for every \(x\in\Omega\) and every \(r<r_{0}\)
\[P(E_{V}\cap M)\leq P(F)+\mathrm{C}r^{n} \tag{5.4}\]
holds for any \(F\) such that \((E_{V}\cap M)\bigtriangleup F\Subset B(x,r)\), for some constant \(\mathrm{C}\) independent of \(x\) and \(r\). Observe that \(B(x,r)\) can intersect \(\Omega_{N}\setminus\Omega\), and this is the main reason we did extend our substatic manifold. Let for simplicity \(E=E_{V}\cap M\), consider two small enough balls \(B_{1}\) and \(B_{2}\) centered on \(\partial E\setminus\partial M\) with \(B_{1},B_{2}\Subset M\setminus\partial M\), and let \(X_{1}\) and \(X_{2}\) be variation vector fields compactly supported in \(B_{1}\) and \(B_{2}\) respectively. Let \(E_{t}^{i}=\psi_{t}^{i}(E)\), where \(\psi_{t}^{i}\) is the flow of \(X_{i}\) at time \(t\), for \(i=1,2\). By [13, Proposition 17.8], we have
\[|E_{t}^{i}|_{f}=|E|+t\int_{\partial E}f\langle X_{i}|\nu_{E}\rangle d\sigma+O( t^{2}) \tag{5.5}\]
as \(t\to 0\), where \(\nu_{E}\) is a unit normal for \(E\). If \(f>c>0\) uniformly on \(B_{1}\cup B_{2}\), we deduce
\[\big{|}|E_{t}^{i}|-|E|\big{|}\geq\mathrm{C}|t| \tag{5.6}\]
for \(t\) in some small neighbourhood of \(0\) and for some uniform constant \(\mathrm{C}\). Moreover, the perimeter satisfies the usual expansion [13, Theorem 17.5]
\[P(E_{t}^{i})=P(E)+t\int_{\partial E}\mathrm{div}_{\partial E}X_{i}d\sigma+O(t^ {2})\]
as \(t\to 0\), where \(\mathrm{div}_{\partial E}\) denotes the tangential divergence \(\mathrm{div}_{\partial E}X_{i}=\mathrm{div}(X_{i})-\langle X_{i}|\nu_{E}\rangle\). Thus,
\[\big{|}|P(E_{t}^{i})|-|P(E)|\big{|}\leq\mathrm{C}|t|, \tag{5.7}\]
again for \(t\) in some small neighbourhood of \(0\) and for some uniform constant \(C\). We can now conclude the proof of (5.4) for the suitable competitors \(F\) as done for the proof of [10, Lemma 6.3]. Namely, letting \(F\) as above, we recover the possibly lost or gained \(f\)-volume \(\delta\) by the competitor \(F\cap\Omega\) by slightly deforming \(E\) inside \(B_{i}\) with \(i\) chosen so that \((F\bigtriangleup E)\cap B_{i}=\O\). Observe that
\(|\delta|\leq\mathrm{C}r^{n}\) for some suitable constant. Exploiting (5.6) and (5.7) we get a set \(\tilde{F}\) with \(|E|_{f}=|\tilde{F}|_{f}\) such that
\[P(\tilde{F})\leq P(F\cap\Omega)+\mathrm{C}|\delta|\leq P(F\cap\Omega)+\mathrm{C }r^{n}\]
for some suitable \(\mathrm{C}>0\), uniform for any \(r<r_{0}\), with \(r_{0}\) small enough. Since \(E\) is constrained \(f\)-isoperimetric, we have then
\[P(E)\leq P(\tilde{F})\leq P(F)+P(\Omega)-P(F\cup\Omega)+\mathrm{C}r^{n}. \tag{5.8}\]
Observe that \(F\cup\Omega\) may intersect \(\Omega_{N}\setminus\Omega\). On the other hand, it is easy to notice that sets with smooth boundary are in fact automatically almost minimizers for the perimeter (see e.g. the derivation of [13, (6-9)]), and so \(P(\Omega)\leq P(F\cup\Omega)+\mathrm{C}r^{n}\). Plugging it into (5.8) concludes the proof of \(E=E_{V}\cap M\) being almost minimizing. From this crucial property, one deduces that \(\partial E\) is \(C^{1,1/2}\) in a neighbourhood of \(\partial E\cap\partial\Omega\) exactly as exposed in [13, Proof of Proposition 6.1].
To establish the optimal \(\mathscr{C}^{1,1}\) regularity, we first take advantage of \((ii)\), which we proceed to prove. In order to make the comparison with the references easier, along this proof we are going to assume that \(\nu_{E}\) is the interior unit normal to \(E\), in the extended manifold. Let \(Y\) be a vector field supported in some small ball centered at some point of \(\partial E\), with a flow \(\psi\) such that \(\psi_{t}(E)\subset\Omega\), for \(t\) small enough. Let now \(X\) satisfy the same assumptions, with the additional requirement to be supported around the smooth part of \(\partial E\setminus\partial\Omega\) and such that \(\mathrm{supp}X\cap\mathrm{supp}Y=\O\). Assume also that the composition of the two flows gives a \(f\)-volume preserving diffeomorphism. Then, the first variation formula gives
\[0\leq\int_{\partial E}\mathrm{div}_{\partial E}(Y+X)d\sigma=\int_{\partial E} \mathrm{div}_{\partial E}Yd\sigma-\int_{\partial E}\mathrm{H}\langle X|\nu_{E }\rangle d\sigma. \tag{5.9}\]
Assume for the time being that \(Y\) is supported around a point where \(\partial E\setminus\partial\Omega\) is smooth. Then we can integrate by parts also in the integrand involving \(Y\), and obtain
\[0\leq-\int_{\partial E}\mathrm{H}\langle Y+X|\nu_{E}\rangle d\sigma.\]
Repeating the argument with \(-X\) and \(-Y\) in place of \(X\) and \(Y\), that is possible in the present case since these vector fields are supported away from \(\partial\Omega\), we actually get
\[0=\int_{\partial E}\mathrm{H}\langle Y+X|\nu_{E}\rangle d\sigma. \tag{5.10}\]
Moreover, the \(f\)-volume being preserved entails, by (5.5),
\[0=\int_{\partial E}f\langle Y+X|\nu_{E}\rangle d\sigma. \tag{5.11}\]
Choose now \(X\) and \(Y\) so that \(X=\alpha\nu_{E}\) and \(Y=-\beta\nu_{E}\) on \(\partial E\), with \(\alpha\) and \(\beta\) being smooth functions compactly supported on \(\partial E\). Combining (5.11) with (5.10) with this choice of \(X\) and \(Y\), we obtain
\[\frac{\int_{\partial E}\left(\frac{\mathrm{H}}{f}\right)f\alpha d\sigma}{\int _{\partial E}f\alpha d\sigma}=\frac{\int_{\partial E}\left(\frac{\mathrm{H}}{ f}\right)f\beta d\sigma}{\int_{\partial E}f\alpha d\sigma}=\frac{\int_{ \partial E}\left(\frac{\mathrm{H}}{f}\right)f\beta d\sigma}{\int_{\partial E }f\beta d\sigma}.\]
Then, since the support of \(\alpha\) and \(\beta\) on \(\partial E\) can be chosen arbitrarily close to any two points in the smooth part of \(\partial E\setminus\partial\Omega\), we conclude that there exists \(\lambda\in\mathbb{R}\) such that \(\mathrm{H}=\lambda f\) in the smooth part of \(\partial E\setminus\partial\Omega\). We plug this information into (5.9), for a vector field \(Y\) that is now supported in a ball centered on a point of \(\partial E\cap\partial\Omega\). Coupling with (5.11), this yields
\[\int_{\partial E}\mathrm{div}_{\partial E}Yd\sigma+\lambda\int_{\partial E}f \langle Y|\nu_{E}\rangle d\sigma\geq 0. \tag{5.12}\]
Writing (5.12) in local coordinates in a neighbourhood of a point in \(\partial E\cap\partial\Omega\), where \(\partial E\) is given by the graph of a function \(u:B\to\mathbb{R}\) and \(\partial\Omega\) as the graph of a function \(\psi:B\to\mathbb{R}\) with \(B\subset\mathbb{R}^{n-1}\)
and \(Y\) as a normal vector field cut off with a function \(\varphi\), it is a routine computation to check that, for some quasilinear elliptic operator \(L\), it holds
\[\int_{B}\varphi[-Lu+\lambda f(\cdot,u(\cdot))]d\mu\geq 0, \tag{5.13}\]
where \(\varphi\in\mathscr{C}_{c}^{\infty}(B)\) and \(\varphi\geq 0\) in \(\{u=\psi\}\). We address the interested reader to [14, Section 6C] and [13, Section 4.1] for details of these computations. In particular, the function \(u\) succumbs to the regularity theory for obstacle problems, that is \(u\in\mathscr{C}^{1,1}\), see [13, Theorem 3.8]. As a consequence, \(\partial E\) has a notion of mean curvature defined almost everywhere. Observe that the quasilinear elliptic operator \(Lu\) provides the mean curvature of \(\partial E\) at the point \((x,u(x))\), with \(x\in B\). We are going to take advantage also of the basic step leading to the regularity result recalled above. Namely, as nicely presented in [13, Proposition 3.2], the variational property (5.13) implies that \(u\) is also solution to the Euler-Lagrange equation
\[\int_{B}\varphi[-Lu+\lambda f(\cdot,u(\cdot))]d\mu=\int_{B}\xi\,d\mu, \tag{5.14}\]
given any \(\varphi\in\mathscr{C}_{c}^{\infty}(B)\), where
\[0\leq\xi\leq[-L\psi+\lambda f(\cdot,\psi(\cdot))]^{+}\chi_{\{u=\psi\}}.\]
We now proceed to show that \(\lambda>0\), that \(\partial E_{V}\) is disjoint from \(\partial M\) and that (5.3) holds, completing thus the proof. Let \(Y\) in (5.12) be supported on a neighbourhood of a point \(\partial E\cap S\). Integrating by parts the first summand in (5.12), and letting \(Y=\alpha\nu_{E}\) for some compactly supported nonnegative test function \(\alpha\), we get
\[\int_{\partial E}(\lambda f-\mathrm{H})\alpha\,d\sigma\geq 0.\]
The arbitrariness of \(\alpha\) implies that
\[\lambda\geq\frac{\mathrm{H}}{f}(x)\qquad\text{for $(n-1)$-almost any $x\in\partial E\setminus\partial M$.} \tag{5.15}\]
Since \(S\) is mean-convex, if the \((n-1)\)-induced measure of \(\partial E\cap S\) is strictly positive, then the (weak) mean curvature of such region is strictly positive [14, Lemma 6.10], and consequently (5.15) directly implies \(\lambda>0\) and (5.3). If instead the intersection is \((n-1)\)-negligible, then (5.14) implies that \(Lu=\lambda f\) holds in the weak sense outside of \(\partial M\). In particular, classical regularity theory implies that \(\partial E\setminus\partial M\) is smooth and that its mean curvature is given by \(\lambda f\). Assume by contradiction that \(\lambda<0\). Hence, by the Maximum Principle, \(\partial E\cap S=\O\). Then, \(S\) being outward minimizing acts as a barrier to minimize the perimeter among sets homologous to \(\partial M\) containing \(E\)[12, Theorem 2.10]. We call \(E^{*}\) such a minimizer. The boundary of such set is \(\mathscr{C}^{1,1}\)[11], and obviously it is outward minimizing, so that its (weak) mean curvature is nonnegative. Having assumed that \(\lambda<0\), and since \(\partial M\) is minimal, we deduce that \(\partial E^{*}\) is minimal itself, and by the Maximum Principle disjoint from \(\partial M\). However, this is a contradiction with \(\partial M\) being outermost, proved in Proposition 5.1. We established that \(\lambda\geq 0\), and that \(\lambda>0\) if \(\partial E\cap S\) has positive \((n-1)\)-measure. We focus our attention to (5.14) again, considering a neighbourhood of a point where \(\partial E\) meets \(\partial M\). Crucially observe that by the minimality of \(\partial M=\{f=0\}\) the right hand side of (5.14) in this case vanishes, and that thus by classical regularity \(\partial E\) is smooth in a neighbourhood of \(\partial M\). Since its mean curvature \(\mathrm{H}=\lambda f\geq 0\), the Maximum Principle implies that \(\partial E\) is disjoint from \(\partial M\). The possibility that \(\lambda=0\) in the case of negligible intersection \(\partial E\cap S\) is finally ruled out by Proposition 5.1 again, and (5.3) becomes simply (5.15).
The Willmore-type inequality (1.10) and the description of \(f\)-isoperimetric sets provided by Theorem 5.2 allow to carry out the proof of the Isoperimetric Inequality of Theorem A.
Proof of Theorem A.: If there is more than one end, then by Lemma 4.7 all ends have vanishing asymptotic volume ratio. In this case, (1.3) reduces to the just proved (5.1). Obviously, the same
is true for the one-ended case if \(\operatorname{AVR}(M,g,f)=0\). The core of the Theorem then lies in the one-ended case with \(\operatorname{AVR}(M,g,f)>0\).
We first carry out the proof in the more involved and relevant case of nonempty boundary. Let \(S\) be one of the outward minimizing hypersurfaces in the outward minimizing exhaustion, and let \(\Omega\) be such that \(\partial\Omega=\partial M\sqcup S\). For \(V<|\Omega|_{f}\), consider an \(f\)-isoperimetric set \(E_{V}\) constrained in \(\Omega\) with \(f\)-volume equal to \(V\), that is, satisfying (5.2). \(E_{V}\) exists and is subject to the properties described in Theorem 5.2. In particular, \(\partial E_{V}=\partial M\sqcup\Sigma_{V}\), with \(\Sigma_{V}\) a \(\mathscr{C}^{1,1}\) hypersurface. Varying \(V\), we also define \(I_{f}:(0,|\Omega|_{f})\to(0,+\infty)\) by \(I_{f}(V)=|\Sigma_{V}|\), the \(f\)-isoperimetric profile of \(\Omega\). It is argued as in the classical case that \(I_{f}\) is continuous. Indeed, \(E_{V+\varepsilon}\) converges in \(L^{1}\) to some \(\tilde{E}\), that in particular satisfies \(|\tilde{E}|_{f}=V\). We have that, for a fixed \(\delta>0\), by lower semicontinuity, for any \(\varepsilon\) close enough to zero such that
\[I_{f}(V)\leq P(\tilde{E})\leq P(E_{V+\varepsilon})+\delta=I_{f}(V+\varepsilon )+\delta\leq P(E_{V})+P(B_{\varepsilon})+\delta=I_{f}(V)+P(B_{\varepsilon})+\delta.\]
In the above inequality, \(B_{\varepsilon}\) is chosen so that \(|F\cup B_{\varepsilon}|_{f}=V+\varepsilon\) or \(|F\setminus B_{\varepsilon}|_{f}=V+\varepsilon\), according to the sign of \(\varepsilon\). Letting first \(\varepsilon\to 0\), and then \(\delta\to 0^{+}\) establishes the continuity of \(I_{f}\). Let now \(\varepsilon>0\). Let \(\Sigma^{\varepsilon}\) be an inward variation of \(\Sigma_{V}\) supported in \(\Sigma_{V}\setminus S\) such that \(|E_{V}^{\varepsilon}|_{f}=V-\varepsilon\), where \(E_{V}^{\varepsilon}\) is such that \(\partial E_{V}^{\varepsilon}=\partial\Sigma^{\varepsilon}\sqcup\partial M\). We have
\[\liminf_{\varepsilon\to 0^{+}}\frac{I_{f}^{\frac{n}{n-1}}(V)-I_{f}^{\frac{n}{n-1}}( V-\varepsilon)}{\varepsilon}\geq\liminf_{\varepsilon\to 0^{+}}\frac{|\Sigma|^{\frac{n}{n-1}} -|\Sigma^{\varepsilon}|^{\frac{n}{n-1}}}{\varepsilon} \tag{5.16}\]
Assume now that \(\Sigma^{\varepsilon}\) is obtained through a normal variation field coinciding with \(\varphi\nu\) on \(\Sigma\), with \(\varphi\in\mathscr{C}_{c}^{\infty}(\Sigma)\). Since the first variation of \(f\)-volume is given by the \(f\)-weighted area, the right hand side is computed as
\[\liminf_{\varepsilon\to 0^{+}}\frac{|\Sigma|^{\frac{n}{n-1}}-|\Sigma^{ \varepsilon}|^{\frac{n}{n-1}}}{\varepsilon}=\frac{n}{n-1}|\Sigma_{V}|^{\frac {1}{n-1}}\frac{\int_{\Sigma}\mathrm{H}\varphi d\sigma}{\int_{\Sigma}f\varphi d \sigma}=\frac{n}{n-1}|\Sigma_{V}|^{\frac{1}{n-1}}\lambda, \tag{5.17}\]
where \(\mathrm{H}=\lambda f\) on the support of \(\varphi\) is due to Theorem 5.2. Letting now \(W\) be the infimum of \(\int_{\Sigma}\mathrm{H}/fd\sigma\) taken among strictly mean-convex smooth hypersurfaces \(\Sigma\) homologous to \(\partial M\), we actually have, by the Substatic Willmore-type inequality (1.10)
\[\operatorname{AVR}(M,g,f)|\mathbb{S}^{n-1}|\leq W\leq\frac{1}{(n-1)^{n-1}}\int _{\Sigma_{V}}\left(\frac{\mathrm{H}}{f}\right)^{n-1}d\sigma\leq\frac{1}{(n-1 )^{n-1}}|\Sigma_{V}|\lambda^{n-1}, \tag{5.18}\]
since \(0<\mathrm{H}/f\leq\lambda\) by Theorem 5.2 for \((n-1)\)-almost any point on \(\Sigma_{V}\). The above inequality holds for the \(\mathscr{C}^{1,1}\)\(\Sigma_{V}\) since we can approximate it with smooth, strictly mean-convex hypersurfaces through Mean Curvature Flow, see [11, Lemma 5.6]. Observe that this is possible since, by Theorem 5.2, \(\Sigma_{V}\) is disjoint from \(\partial M\). Combining (5.16), (5.17) and (5.18) yields
\[\liminf_{\varepsilon\to 0^{+}}\frac{I_{f}^{\frac{n}{n-1}}(V)-I_{f}^{\frac{n}{n-1}} (V-\varepsilon)}{\varepsilon}\geq n\left[\operatorname{AVR}(M,g,f)|\mathbb{S}^{ n-1}|\right]^{\frac{1}{n-1}}. \tag{5.19}\]
Comparing with the reference warped product \(f\)-isoperimetric profile given by
\[J_{f}(V)=n^{\frac{n-1}{n}}\left[\operatorname{AVR}(M,g,f)|\mathbb{S}^{n-1}| \right]^{\frac{1}{n}}V^{\frac{n-1}{n}}, \tag{5.20}\]
whose derivative equals the right-hand side of (5.19), we deduce at once that the continuous function \(I_{f}^{n/(n-1)}-J_{f}^{n/(n-1)}\) has nonnegative Dini derivative, and is thus monotone nondecreasing. Hence, for \(V_{0}<V\), we get
\[I_{f}^{\frac{n}{n-1}}(V)\geq n\left[\operatorname{AVR}(M,g,f)|\mathbb{S}^{n-1}| \right]^{\frac{1}{n-1}}V+I_{f}^{\frac{n}{n-1}}(V_{0})-n\left[\operatorname{AVR} (M,g,f)|\mathbb{S}^{n-1}|\right]^{\frac{1}{n-1}}V_{0}. \tag{5.21}\]
Recall now that \(I_{f}(V_{0})=|\Sigma_{V_{0}}|\) for some \(\Sigma_{V_{0}}\) homologous to \(\partial M\). The boundary being minimizing, by Proposition 5.1, implies then that \(I_{f}(V_{0})\geq|\partial M|\). Plugging it into (5.21), and then letting \(V_{0}\) go to \(0\), leaves us with
\[I_{f}^{\frac{n}{n-1}}(V)-|\partial M|^{\frac{n}{n-1}}\geq n\left[\operatorname{ AVR}(M,g,f)|\mathbb{S}^{n-1}|\right]^{\frac{1}{n-1}}V.\]
By definition of Isoperimetric profile, the above inequality implies (1.3) for any hypersurface homologous to \(\partial M\) inside \(\Omega\), enclosing a set of volume \(V\). But the volumes being arbitrary and the outward minimizing envelopes forming an exhaustion, the proof of (1.3) is actually complete.
We are left to characterize the situation when some smooth \(\Sigma\) homologous to \(\partial M\) fulfils the equality in (1.3). Let \(V\) be the \(f\)-volume subtended by \(\Sigma\). Let \(\Omega=\partial M\sqcup S\), with \(S\) strictly mean-convex outward minimizing and \(\Sigma\subset\Omega\). As above, let \(I_{f}\) be the \(f\)-isoperimetric profile of \(\Omega\) and \(J_{f}\) the reference warped product \(f\)-isoperimetric profile defined in (5.20). By approximation, we observe that any other \(f\)-isoperimetric set constrained in \(\Omega\) of \(f\)-volume \(V_{0}\) satisfies the Isoperimetric inequality (1.3). Hence, by (5.21), we have
\[|\partial M|^{\frac{n}{n-1}}=I_{f}^{\frac{n}{n-1}}(V)-J_{f}^{\frac{n}{n-1}}(V )\geq I_{f}^{\frac{n}{n-1}}(V_{0})-J_{f}^{\frac{n}{n-1}}(V_{0})\geq|\partial M |^{\frac{n}{n-1}}.\]
As a consequence, any \(f\)-isoperimetric set of volume \(V_{0}\leq V\) satisfies the equality in the \(f\)-Isoperimetric inequality (1.3). Observe now, that by approximation with smooth sets in the possibly extended Riemannian manifold \((N,g_{N})\), this implies that such constrained \(f\)-Isoperimetric sets of volume \(V_{0}\) are in fact _globally_\(f\)-Isoperimetric, and consequently the regularity observed in Theorem 5.2 implies that any \(\Sigma_{V_{0}}\) is smooth. Retracing the steps that lead to (5.21), we have that the smooth hypersurface \(\Sigma_{V_{0}}\) satisfies the equality in the Willmore-type inequality in Theorem D. This triggers the rigidity stated there, and yields, for \(\Omega_{V_{0}}\) the domain enclosed between \(\Sigma_{V_{0}}\) and \(\partial M\), the isometry between \((M\setminus\Omega_{V_{0}},g)\) and \([s_{0},+\infty)\times\Sigma\) endowed with
\[g=f^{2}d\rho\otimes d\rho+\eta^{2}g_{\Sigma_{V_{0}}}. \tag{5.22}\]
In particular, since \(M\) has one end, the hypersurface \(\Sigma_{V_{0}}\) is necessarily connected. We now observe that, again due to the global \(f\)-isoperimetry of \(\Sigma_{V_{0}}\), the value of \(\mathrm{H}/f\) is constant on such hypersurface. But then, retracing the computations that lead to the isometry with (5.22), more precisely coupling (2.20) with (4.8), we deduce that \(f\) and \(\eta\) in (5.22) depend only on \(\rho\). Introduce now a new coordinate \(s\) defined by \(f^{2}(\rho)d\rho=ds\). Recall that \(\eta\) satisfies \(\partial_{\rho}\eta=f^{2}\), and thus \(\partial_{s}\eta=1\). Possibly translating the variable \(s\), we thus have
\[g=\frac{ds\otimes ds}{f^{2}(s)}+s^{2}g_{\Sigma_{V_{0}}}, \tag{5.23}\]
for any \(V_{0}\leq V\), and \(s\geq s_{0}\) for some \(s_{0}>0\).
Since we have proved that \(f\) is a function of the distance \(\rho\) from \(\Sigma_{V_{0}}\) only, in particular we have shown that \(f\) must be constant on \(\Sigma_{V_{0}}\). This must hold for all \(V_{0}\leq V\). Moreover, the (Hausdorff) distance between \(\Sigma_{V_{0}}\) and \(\partial M\) goes to \(0\) as \(V_{0}\to 0\), because otherwise the volume enclosed along the (sub)sequence would be necessarily bounded away from \(0\). But then, the level sets of \(f\) forming a regular foliation of a neighbourhood of \(\partial M\), we deduce that \(\Sigma_{V_{0}}\) must actually be a level set of \(f\), in particular diffeomorphic to \(\partial M\), for \(V_{0}\) small enough. Letting \(V_{0}\) to zero we thus extend the expression (5.23) to the whole manifold, that is (1.4). The connectedness of \(\partial M\) is again a consequence of \((M,g)\) being one ended.
As far as the characterization of \(\Sigma\) is concerned, we already showed that \(f\) is constant on it. If, by contradiction, \(s\) were not constant on \(\Sigma\), then, letting \(s_{\min}=\min\{s(p):p\in\Sigma\}\) and \(s_{\max}=\max\{s(p):p\in\Sigma\}\), \(\Sigma\) would lie in the region \([s_{\min},s_{\max}]\times\partial M\), where, \(f\) being constant, by (1.4) has the metric of a truncated cone. By (5.23) for \(V_{0}=V\), \(\Sigma\) is a totally umbilical constantly mean-curved hypersurface in such cone. Moreover, the constancy of \(f\) on such region makes the substatic condition simplify to nonnegative Ricci curvature. By [14, Lemma 3.8], \(\Sigma\) could then only be a level set of \(s\) or bound a flat round ball. The first possibility gives a contradiction with the initial assumption that \(s\) were not constant on \(\Sigma\), the second one with \(\Sigma\) being homologous to \(\partial M\). This concludes the proof of \(\Sigma\) being a level set of \(s\), and of Theorem A in the nonempty-boundary case.
We finally discuss the empty-boundary-case. It is immediately checked that the \(f\)-Isoperimetric inequality (1.3) follows with a pure simplification of the proof given above. When a hypersurface
\(\Sigma\) satisfies with equality the \(f\)-Isoperimetric inequality, arguing as done above for (5.23) we reach for an isometry between \((M\setminus\Omega_{\Sigma},g)\) and \(I=[\overline{s},+\infty)\times\Sigma\) endowed with
\[g=\frac{ds\otimes ds}{f^{2}(s)}+s^{2}g_{\Sigma}, \tag{5.24}\]
for \(\Omega_{\Sigma}\) enclosed by \(\Sigma\). Again, \(\Sigma\) must be connected, since \(M\) is one ended by Lemma 4.7. Now, we claim that \(\Sigma\) satisfies
\[\frac{n-1}{n}\int_{\Sigma}\frac{f}{\mathrm{H}}d\sigma=\int_{\Omega_{\Sigma}}fd\mu, \tag{5.25}\]
in fact saturating the substatic Heintze-Karcher inequality [13, Theorem 1.3] (see also [11, Theorem 3.6]) in boundaryless substatic manifolds. The analysis of the equality case worked out in [1, Theorem 3.1-\((ii)\)] then provides us with an isometry between \((\Omega_{\Sigma},g)\) and \(I\times\mathbb{S}^{n-1}\) endowed with
\[g=\frac{ds\otimes ds}{f^{2}(s)}+\left(\frac{s}{f(x)}\right)^{2}g_{\mathbb{S}^ {n-1}},\]
for \(x\in\Omega_{\Sigma}\) with \(\Sigma\) becoming a level set of \(s\). Coupled with (5.24) on the complement of \(\Omega\), this yields the desired rigidity statement.
In order to check (5.25), just observe that, since as above one has that \(\Sigma\) is \(f\)-isoperimetric, \(\mathrm{H}/f\)_is constant on such hypersurface_. Moreover, since it satisfies equality in the Willmore-type inequality (1.10), one has
\[\frac{\mathrm{H}}{f}\,|\Sigma|^{\frac{1}{n-1}}=(n-1)\left[\mathrm{AVR}(M,g,f) |\mathbb{S}^{n-1}|\right]^{\frac{1}{n-1}}.\]
Coupling with
\[|\Sigma|^{\frac{n}{n-1}}=n\left[\mathrm{AVR}(M,g,f)|\mathbb{S}^{n-1}|\right]^ {\frac{1}{n-1}}|\Omega_{\Sigma}|_{f},\]
it is straightforwardly seen that (5.25) holds, completing the proof.
## Appendix A Comments on the substatic condition
### Physical motivation
Here we give a physical interpretation of substatic triples, following [16, Lemma 3.8]. Let
\[L=\mathbb{R}\times M\,,\qquad\mathfrak{g}\,=\,-f^{2}dt\otimes dt+g\]
be a static spacetime satisfying the Einstein Field Equation
\[\mathrm{Ric}_{\mathfrak{g}}\,+\,\left(\Lambda-\frac{1}{2}\mathrm{R}_{ \mathfrak{g}}\right)\mathfrak{g}\,=\,T\,,\]
where \(T\) is the stress-energy tensor and \(\Lambda\in\mathbb{R}\) is the cosmological constant. Using standard formulas to express the Ricci tensor of a warped product, we find out that
\[\mathrm{Ric}_{\mathfrak{g}}(\partial_{t},\partial_{t})\,=\,f\Delta f\,,\qquad \mathrm{Ric}_{\mathfrak{g}}(\partial_{i},\partial_{j})\,=\,\mathrm{Ric}( \partial_{i},\partial_{j})-\frac{1}{f}\nabla^{2}f(\partial_{i},\partial_{j})\,.\]
In particular a simple computation gives
\[\mathrm{R}_{\mathfrak{g}}\,=\,\mathrm{R}-\frac{2}{f}\Delta f\,.\]
Putting these pieces of information inside the Einstein Field Equation, we get
\[T_{tt} \,=\,\left(-\Lambda+\frac{\mathrm{R}}{2}\right)f^{2}\,,\] \[T_{it} \,=\,0\,,\] \[T_{ij} \,=\,\mathrm{R}_{ij}-\frac{1}{f}\nabla^{2}_{ij}f+\left(\Lambda- \frac{\mathrm{R}}{2}+\frac{\Delta f}{f}\right)g_{ij}\,.\]
We now assume that the Null Energy Condition is satisfied. Namely, for any vector \(X=\partial_{t}+Y^{i}\partial_{i}\) with \(\mathfrak{g}(X,X)=0\) (that is, \(g(Y,Y)=f^{2}\)), we require \(T(X,X)=T_{tt}+T_{ij}Y^{i}Y^{j}\geq 0\). Using the above identities, this hypothesis tells us
\[0\,\leq\,T_{tt}+T_{ij}Y^{i}Y^{j}\,=\,\left(-\Lambda+\frac{\mathrm{R}}{2} \right)f^{2}+\left(\mathrm{Ric}-\frac{1}{f}\nabla^{2}f+\frac{\Delta f}{f}g \right)(Y,Y)+\left(\Lambda-\frac{\mathrm{R}}{2}\right)g(Y,Y)\,.\]
Recalling that \(g(Y,Y)=f^{2}\), we have obtained
\[\mathrm{NEC~{}holds} \Leftrightarrow \left(\mathrm{Ric}-\frac{1}{f}\nabla^{2}f+\frac{\Delta f}{f}g \right)(Y,Y)\geq 0\ \ \mathrm{for~{}all~{}}Y\ \mathrm{with}\ g(Y,Y)=f^{2}.\]
By rescaling of \(Y\), we then conclude that the Null Energy Condition on static spacetimes is equivalent to
\[\mathrm{Ric}-\frac{1}{f}\nabla^{2}f+\frac{\Delta f}{f}g\,\geq\,0\,.\]
In other words, a static spacetime satisfies the Null Energy Condition if and only if its spacelike slices are substatic.
Finally, we briefly discuss the physical interpretation of the conformal metric \(\tilde{g}=g/f^{2}\). In the context of static spacetimes, this metric is usually referred to as optical metric and has the property that \(\tilde{g}\)-geodesics lift to null geodesics in the spacetime metric \(\mathfrak{g}\). This follows easily from the fact that the trajectories of null geodesics do not change under a conformal change of metric, hence the null geodesics of \(\mathfrak{g}\) are the same as the null geodesics of \(f^{2}\mathfrak{g}=-dt\otimes dt+\tilde{g}\).
### Relation between \(\mathrm{CD}(0,1)\) and substatic condition
Let \((M,g,f)\) be a substatic triple and let \(\tilde{g}=g/f^{2}\). We want to show that \((M,\tilde{g},\psi)\) satisfies the \(\mathrm{CD}(0,1)\) condition, where \(\psi=-(n-1)\log f\). To this end, we need to rewrite the substatic condition in terms of the conformal metric. We start from the following formulas:
\[\widetilde{\nabla}^{2}f =\,\nabla^{2}f+\frac{1}{f}\left(2df\otimes df-|\nabla f|^{2}g \right)\,,\] \[\Delta_{\tilde{g}}f =\,f^{2}\Delta f-(n-2)f|\nabla f|^{2}\,,\]
In particular
\[\frac{1}{f}\nabla^{2}f-\frac{1}{f}\Delta fg =\,\frac{1}{f}\widetilde{\nabla}^{2}f-\frac{1}{f^{2}}\left(2df \otimes df-|\nabla f|^{2}g\right)-\frac{1}{f}\Delta_{\tilde{g}}f\,\tilde{g}-( n-2)|\nabla f|^{2}\tilde{g}\] \[=\,\frac{1}{f}\widetilde{\nabla}^{2}f-\frac{2}{f^{2}}df\otimes df -\frac{1}{f}\Delta_{\tilde{g}}f\tilde{g}-(n-3)\frac{1}{f^{2}}\big{|} \widetilde{\nabla}f\big{|}_{\tilde{g}}^{2}\,\tilde{g}\]
On the other hand, it is well known that the Ricci tensor \(\mathrm{Ric}\) of \(g\) and the Ricci tensor \(\mathrm{Ric}_{\tilde{g}}\) of the conformal metric \(\tilde{g}\) are related as follows
\[\mathrm{Ric}\,=\,\mathrm{Ric}_{\tilde{g}}-\frac{n-2}{f}\widetilde{\nabla}^{2}f +\frac{2(n-2)}{f^{2}}df\otimes df-\left(\frac{1}{f}\Delta_{\tilde{g}}f+\frac{ n-3}{f^{2}}|\widetilde{\nabla}f|_{\tilde{g}}^{2}\right)\tilde{g}\,.\]
Putting together the above formulas, we get
\[\mathrm{Ric}-\frac{1}{f}\nabla^{2}f+\frac{1}{f}\Delta fg =\,\mathrm{Ric}_{\tilde{g}}-\frac{n-1}{f}\widetilde{\nabla}^{2}f +\frac{2(n-1)}{f^{2}}df\otimes df\] \[=\,\mathrm{Ric}_{\tilde{g}}-(n-1)\widetilde{\nabla}^{2}(\log f)+(n-1)d\log f\otimes d\log f\] \[=\,\mathrm{Ric}_{\tilde{g}}+\widetilde{\nabla}^{2}\psi+\frac{1}{n -1}d\psi\otimes d\psi\,.\]
It follows then that, if \((M,g,f)\) satisfies the substatic condition, then \((M,\tilde{g},\psi)\) satisfies the \(\mathrm{CD}(0,1)\) condition.
### Li-Xia connections
In [10], Li and Xia consider the family of connections \(\mathrm{D}^{u\alpha\gamma}\), where \(u\in\mathscr{C}^{\infty}(M)\), \(\alpha,\gamma\in\mathbb{R}\), defined by
\[\mathrm{D}^{u\alpha\gamma}_{X}Y=\nabla_{X}Y+\alpha\left[X(u)Y+Y(u)X\right]+ \gamma\,g(X,Y)\nabla u\,.\]
They then compute the Ricci tensor \(\mathrm{Ric}^{u\alpha\gamma}\) induced by a connection \(\mathrm{D}^{u\alpha\gamma}\), showing that it is related to the usual Ricci tensor by
\[\mathrm{Ric}^{u\alpha\gamma}=\mathrm{Ric}-\left[(n-1)\alpha+\gamma\right] \nabla^{2}u+\left[(n-1)\alpha^{2}-\gamma^{2}\right]du\otimes du+\left[\gamma \Delta u+\left(\gamma^{2}+(n-1)\alpha\gamma\right)|\nabla u|^{2}\right]g\,.\]
When \(\alpha=0\) and \(\gamma=1\), in particular we have
\[\mathrm{Ric}^{u01}\,=\,\mathrm{Ric}-\nabla^{2}u-du\otimes du+\left[\Delta u+| \nabla u|^{2}\right]g\,,\]
which can be rewritten as follows by setting \(u=\log f\):
\[\mathrm{Ric}^{u01}\,=\,\mathrm{Ric}-\frac{1}{f}\nabla^{2}f+\frac{\Delta f}{f}g\,.\]
It is then clear that the condition \(\mathrm{Ric}^{u01}\geq 0\) is equivalent to the substatic condition.
Choosing instead \(\alpha=1/(n-1)\), \(\gamma=0\), setting \(v=-u\) one gets
\[\mathrm{Ric}^{u\frac{1}{n-1}0}=\mathrm{Ric}+\nabla^{2}v+\frac{1}{n-1}dv\otimes dv\,,\]
hence \(\mathrm{Ric}^{u\frac{1}{n-1}0}\geq 0\) gives the \(\mathrm{CD}(0,1)\) condition. In fact, the connection \(\mathrm{D}^{u\frac{1}{n-1}0}\) had already been considered in the work [11] that was focused on the \(\mathrm{CD}(0,1)\) case only.
Here we show that the two connections \(\mathrm{D}^{u01}\) and \(\mathrm{D}^{u\frac{1}{n-1}0}\) are in fact conformally related: let \((M,g)\) be a Riemannian manifold and let \(\nabla,\widetilde{\nabla}\) be the Levi-Civita connections corresponding to the metrics \(g\), \(\tilde{g}=g/f^{2}\), respectively. It is easy to show that \(\nabla\) and \(\widetilde{\nabla}\) are related a follows
\[\nabla_{X}Y\,=\,\widetilde{\nabla}_{X}Y+\frac{1}{f}\left[X(f)Y+Y(f)X-g(X,Y) \nabla f\right]\,=\,\widetilde{\nabla}_{X}Y+X(u)Y+Y(u)X-g(X,Y)\nabla u\,,\]
hence, setting \(\psi=-(n-1)u\):
\[\mathrm{D}^{u01}_{X}Y =\,\nabla_{X}Y+g(X,Y)\nabla u\] \[=\,\widetilde{\nabla}_{X}Y+X(u)Y+Y(u)X\] \[=\,\widetilde{\nabla}_{X}Y+\frac{1}{n-1}\left[X(-\psi)Y+Y(-\psi) X\right]\,.\]
This is then precisely \(\mathrm{D}^{-\psi\frac{1}{n-1}0}\) using \(\widetilde{\nabla}\) as the Levi-Civita connection in place of \(\nabla\).
Notice that in this subsection \(\psi\) and \(f\) have been introduced as the functions satisfying \(u=\log f\) and \(\psi=-(n-1)u\), hence they are related by \(\psi=-(n-1)\log f\), in agreement with Subsection A.2.
|
2310.17906 | Data-scientific study of Kronecker coefficients | We take a data-scientific approach to study whether Kronecker coefficients
are zero or not. Motivated by principal component analysis and kernel methods,
we define loadings of partitions and use them to describe a sufficient
condition for Kronecker coefficients to be nonzero. The results provide new
methods and perspectives for the study of these coefficients. | Kyu-Hwan Lee | 2023-10-27T05:52:43Z | http://arxiv.org/abs/2310.17906v2 | # Data-Scientific study of Kronecker coefficients
###### Abstract.
We take a data-scientific approach to study whether Kronecker coefficients are zero or not. Motivated by principal component analysis and kernel methods, we define _loadings_ of partitions and use them to describe a sufficient condition for Kronecker coefficients to be nonzero. The results provide new methods and perspectives for the study of these coefficients.
## 1. Introduction
For the last several years, it has been much discussed how AI and machine learning will change mathematics research (e.g. [2, 3, 4]). There is no doubt that machine learning has exceptional capability to recognize patterns in mathematical datasets (e.g. [5, 1], HLQ, HK, HLOa, HLOb, JKP). Nonetheless, the recent discovery [1] of a new phenomenon, called _murmation_, shows that considering mathematical objects in the framework of data-science already has great potential for new developments without regard to use of machine learning. All these circumstances seem to call us _to regard mathematics as a study of datasets1_.
Footnote 1: This viewpoint is not new. For instance, the Prime Number Theorem and the Birch–Swinnerton-Dyer Conjecture are results of this viewpoint.
In the previous article [1], where we refer the reader for the backgrounds of Kronecker coefficients, we applied standard machine learning tools to datasets of the Kronecker coefficients, and observed that the trained classifiers attained high accuracies (\(>98\%\)) in determining whether Kronecker coefficients are zero or not. The outcomes clearly suggest that further data-scientific analysis may reveal new structures in the datasets of the Kronecker coefficients. In this paper, we indeed pursue that direction; more precisely, we adopt ideas from principal component analysis (PCA) and kernel methods to define the _similitude_ matrix and the _difference_ matrix for the set \(\mathcal{P}(n)\) of partitions of \(n\). Then we introduce _loadings_ of the partitions in terms of eigenvectors associated to the largest eigenvalues of these matrices, and use the loadings to describe a sufficient condition for the Kronecker coefficients to be nonzero. This condition can be used very effectively. See (4.1) and Example 4.2 below it.
The observations made in this paper are purely data-scientific and experimental, and no attempts are undertaken to prove them using representation theory. Rigorous proofs will appear elsewhere. Also, it should be noted that our sufficient condition does not cover the _middle part_ where loadings for zero and nonzero Kronecker coefficients overlap. Since our method is a variation of PCA, it is essentially linear. In order to cover the middle part, it is likely that one needs to adopt some nonlinear methods. The aforementioned high accuracies reported in [1] indicate that we can go much deeper into the middle part using such methods.
After this introduction, in Section 2, we define the similitude and difference matrices and the loadings of partitions. In Section 3, we investigate the probabilistic distributions of loadings. In the final section, we consider the minimum values of the loadings to determine the Kronecker coefficients are zero or nonzero. In Appendix, we tabulate the loadings of partition in \(\mathcal{P}(n)\) for \(6\leq n\leq 12\).
### Acknowledgments
The author is grateful to Alex Davies and Been Kim for helpful discussions. He would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, and the Center for Quantum Structures in Modules and Spaces, Seoul, for support and hospitality. This work was partially supported by EPSRC grant #EP/R014604/1 and by a grant from the Simons Foundation (#712100).
## 2. Similitude and difference matrices
Let \(\mathfrak{S}_{n}\) be the symmetric group of degree \(n\) and consider representations of \(\mathfrak{S}_{n}\) over \(\mathbb{C}\). The irreducible representations \(S_{\lambda}\) of \(\mathfrak{S}_{n}\) are parametrized by partitions \(\lambda\in\mathcal{P}(n)\). Consider the tensor product of two irreducible representations \(S_{\lambda}\) and \(S_{\mu}\) for \(\lambda,\mu\in\mathcal{P}(n)\). Then the tensor product is decomposed into a sum of irreducible representations:
\[S_{\lambda}\otimes S_{\mu}=\bigoplus_{\nu\vdash n}g_{\lambda,\mu}^{\nu}S_{\nu} \quad(g_{\lambda,\mu}^{\nu}\in\mathbb{Z}_{\geq 0}).\]
The decomposition multiplicities \(g_{\lambda,\mu}^{\nu}\) are called the _Kronecker coefficients_.
There are symmetries among \(g_{\lambda,\mu}^{\nu}\).
**Lemma 2.1**.: [FH, p.61] Let \(\lambda,\mu,\nu\vdash n\). Then the Kronecker coefficients \(g_{\lambda,\mu}^{\nu}\) are invariant under the permutations of \(\lambda,\mu,\nu\). That is, we have
\[g_{\lambda,\mu}^{\nu}=g_{\mu,\lambda}^{\nu}=g_{\lambda,\nu}^{\mu}=g_{\nu, \lambda}^{\mu}=g_{\mu,\nu}^{\lambda}=g_{\nu,\mu}^{\lambda}.\]
For a partition \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots)\) of \(n\), define \(d_{\lambda}\coloneqq n-\lambda_{1}\), called the _depth_ of \(\lambda\). The following theorem provides a necessary condition for the Kronecker coefficient \(g_{\lambda,\mu}^{\nu}\) to be nonzero. Other necessary conditions for \(g_{\lambda,\mu}^{\nu}\neq 0\), which generalize Horn inequalities, can be found in [Res]. We will describe a sufficient condition for for \(g_{\lambda,\mu}^{\nu}\neq 0\) in this paper.
**Theorem 2.2**.: [JK, Theorem 2.9.22] If \(g_{\lambda,\mu}^{\nu}\neq 0\) then
\[|d_{\lambda}-d_{\mu}|\leq d_{\nu}\leq d_{\lambda}+d_{\mu}. \tag{2.1}\]
Now, for \(n\in\mathbb{Z}_{>0}\), let \(\mathcal{P}(n)\) be the set of partitions of \(n\) as before. We identify each element \(\lambda\) of \(\mathcal{P}(n)\) with a sequence of length \(n\) by adding \(0\)-entries as many as needed. For example, when \(n=6\), we have
\[\mathcal{P}(6)=\{ (6,0,0,0,0,0),(5,1,0,0,0,0),(4,2,0,0,0,0),(4,1,1,0,0,0),\] \[(3,3,0,0,0,0),(3,2,1,0,0,0),(3,1,1,1,0,0),(2,2,2,0,0,0),\] \[(2,2,1,1,0,0),(2,1,1,1,1,0),(1,1,1,1,1,1)\}.\]
We consider \(\mathcal{P}(n)\) as an ordered set by the lexicographic order as in the above example.
When there is no peril of confusion, we will skip writing \(0\)'s in the sequence. For instance, we write \((5,1)\) for \((5,1,0,0,0,0)\). Moreover, when the same part is repeated multiple times we may abbreviate it into an exponent. For example, \((2,1,1,1,1,1)\) may be written as \((2,1^{5})\). The size of the set \(\mathcal{P}(n)\) will be denoted by \(p(n)\), and the set of triples \(\mathbf{t}=(\lambda,\mu,\nu)\) of partitions of \(n\) will be denote by \(\mathcal{P}(n)^{3}\coloneqq\mathcal{P}(n)\times\mathcal{P}(n)\times\mathcal{ P}(n)\). A partition is depicted by a collection of left-justified rows of boxes. For example, partition \((5,4,1)\) is depicted by. The _conjugate_ or _transpose_ of a partition is defined to be the flip of the original diagram along the main diagonal. Hence the conjugate of \((5,4,1)\) is \((3,2,2,2,1)\) as you can see below:
Let \(\mathsf{P}_{n}\) be the \(p(n)\times n\) matrix having elements of \(\mathcal{P}(n)\) as rows, and define the \(p(n)\times p(n)\) symmetric matrix
\[\mathsf{Y}_{n}\coloneqq\mathsf{P}_{n}\mathsf{P}_{n}^{\top}.\]
The matrix \(\mathsf{Y}_{n}\) will be called the _similitude_ matrix of \(\mathcal{P}(n)\). For example, we have
\[\mathsf{P}_{6}=\begin{bmatrix}6&0&0&0&0&0\\ 5&1&0&0&0&0\\ 4&2&0&0&0&0\\ 4&1&1&0&0&0\\ 3&3&0&0&0&0\\ 3&2&1&0&0&0\\ 3&1&1&1&0&0\\ 2&2&2&0&0&0\\ 2&2&1&1&1&0\\ 1&1&1&1&1&1\end{bmatrix}\quad\text{and}\quad\quad\mathsf{Y}_{6}=\begin{bmatrix}3 6&30&24&24&18&18&18&12&12&12&6\\ 30&26&22&21&18&17&16&12&12&11&6\\ 24&22&20&18&18&16&14&12&12&10&6\\ 24&21&18&18&15&15&14&12&11&10&6\\ 18&18&18&15&18&15&12&12&12&9&6\\ 18&17&16&15&15&14&12&12&11&9&6\\ 18&16&14&14&12&12&12&10&10&9&6\\ 12&12&12&12&12&10&12&10&8&6\\ 12&12&12&11&12&11&10&10&10&8&6\\ 12&11&10&10&9&9&9&8&8&8&6\\ 6&6&6&6&6&6&6&6&6&6&6&6\\ \end{bmatrix}.\]
Note that an entry \(y_{\lambda,\mu}\) of \(\mathsf{Y}_{n}=[y_{\lambda,\mu}]\) is indexed by \(\lambda,\mu\in\mathcal{P}(n)\).
**Definition 2.3**.: Let \(\mathbf{v}=(v_{\lambda})_{\lambda\in\mathcal{P}(n)}\) be an eigenvector of the largest eigenvalue of \(\mathsf{Y}_{n}\) such that \(v_{\lambda}>0\) for all \(\lambda\in\mathcal{P}(n)\). Denote by \(v_{\max}\) (resp. \(v_{\min}\)) a maximum (resp. minimum) of \(\{v_{\lambda}\}_{\lambda\in\mathcal{P}(n)}\). Define
\[r_{\lambda}\coloneqq 100\times\frac{v_{\lambda}-v_{\min}}{v_{\max}-v_{\min}} \quad\text{for $\lambda\in\mathcal{P}(n)$}.\]
The value \(r_{\lambda}\) is called the _\(r\)-loading_ of partition \(\lambda\in\mathcal{P}(n)\).
**Remark 2.4**.: An efficient algorithm to calculate an eigenvector \(\mathbf{v}\) in Definition 2.3 is the _power iteration_: Let \(\mathbf{v}_{0}=(1,0,\ldots,0)^{\top}\) be the first standard column vector. Inductively, for \(k=0,1,2,\ldots,\) define
\[\mathbf{v}_{k+1}=\frac{\mathsf{Y}_{n}\mathbf{v}_{k}}{\|\mathsf{Y}_{n}\mathbf{ v}_{k}\|_{2}},\]
where \(\|(x_{1},x_{2},\ldots,x_{n})^{\top}\|_{2}=(\sum_{i=1}^{n}x_{i}^{2})^{1/2}\). Then the limit
\[\mathbf{v}=\lim_{k\to\infty}\mathbf{v}_{k}\]
is an eigenvector of the largest eigenvalue of \(\mathsf{Y}_{n}\).
For example, when \(n=6\), we have
\[\mathbf{v}_{1} =(0.5203,0.4336,0.3468,0.3468,0.2601,0.2601,0.2601,0.1734,0.1734,0. 1734,0.0867)^{\top}\] \[\mathbf{v}_{2} =(0.4514,0.4022,0.3530,0.3377,0.3038,0.2885,0.2670,0.2240,0.2178,0. 1934,0.1188)^{\top}\] \[\mathbf{v}_{3} =(0.4441,0.3985,0.3530,0.3366,0.3074,0.2910,0.2678,0.2291,0.2222,0.1957,0.1225)^{\top}\] \[\mathbf{v}_{4} =(0.4434,0.3982,0.3529,0.3365,0.3077,0.2913,0.2678,0.2296,0.2226,0. 1960,0.1229)^{\top}\] \[\mathbf{v}_{5} =(0.4433,0.3981,0.3529,0.3365,0.3077,0.2913,0.2678,0.2297,0.2226,0. 1960,0.1229)^{\top}\] \[\mathbf{v}_{6} =(0.4433,0.3981,0.3529,0.3365,0.3077,0.2913,0.2678,0.2297,0.2227,0. 1960,0.1229)^{\top},\]
where equality means approximation. Thus we can take as an approximation
\[\mathbf{v}=(0.4433,0.3981,0.3529,0.3365,0.3077,0.2913,0.2678,0.2297,0.2227,0. 1960,0.1229)^{\top},\]
and the \(r\)-loadings are given by
\[(r_{\lambda})_{\lambda\in\mathcal{P}(n)}=(100.00,85.89,71.79,66.66,57.68,52.55,45.23,33.32,31.12,22.81,0.00).\]
In this case of \(n=6\), we see that the \(r\)-loadings are compatible with the lexicographic order. In particular, the partition (6) has \(r\)-loading \(100\) and \((1,1,1,1,1,1)\) has \(r\)-loading \(0\). However, in general, the \(r\)-loadings are _not completely_ compatible with the lexicographic order though they are strongly correlated. For instance, when \(n=9\), the partition \((5,1,1,1,1)\) has \(r\)-loading \(55.32\), while \((4,4,1)\) has \(56.55\). See Appendix A for the values of \(r\)-loadings.
Define a \(p(n)\times p(n)\) symmetric matrix \(\mathsf{Z}_{n}=[z_{\lambda,\mu}]_{\lambda,\mu\in\mathcal{P}(n)}\) by
\[z_{\lambda,\mu}=\|\lambda-\mu\|_{1}\coloneqq\sum_{i=1}^{n}|\lambda_{i}-\mu_{i}|\]
for \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n})\in\mathcal{P}(n)\). The matrix \(\mathsf{Z}_{n}\) will be called the _difference_ matrix of \(\mathcal{P}(n)\). For example, we have
\[\mathsf{Z}_{6}=\left[\begin{array}{cccccccccc}0&2&4&4&6&6&6&8&8&8&10\\ 2&0&2&2&4&4&4&6&6&6&8\\ 4&2&0&2&2&2&4&4&4&6&8\\ 4&2&2&0&4&2&2&4&4&4&6\\ 6&4&2&4&0&2&4&4&4&6&8\\ 6&4&2&2&2&0&2&2&2&4&6\\ 6&4&4&2&4&2&0&4&2&2&4\\ 8&6&4&4&4&2&4&0&2&4&6\\ 8&6&4&4&4&2&2&0&2&4\\ 8&6&4&4&2&2&0&2&4\\ 8&6&6&4&6&4&2&4&2&0&2\\ 10&8&8&6&8&6&4&6&4&2&0\end{array}\right].\]
**Definition 2.5**.: Let \(\mathbf{w}=(w_{\lambda})_{\lambda\in\mathcal{P}(n)}\) be an eigenvector of the largest eigenvalue of \(\mathsf{Z}_{n}\) such that \(w_{\lambda}>0\) for all \(\lambda\in\mathcal{P}(n)\). Denote by \(w_{\max}\) (resp. \(w_{\min}\)) a maximum (resp. minimum) of \(\{w_{\lambda}\}_{\lambda\in\mathcal{P}(n)}\). Define
\[b_{\lambda}\coloneqq 100\times\frac{w_{\lambda}-w_{\min}}{w_{\max}-w_{\min}} \quad\text{for $\lambda\in\mathcal{P}(n)$}.\]
The value \(b_{\lambda}\) is called the _\(b\)-loading_ of partition \(\lambda\in\mathcal{P}(n)\).
The power iteration in Remark 2.4 works equally well to compute \(\mathbf{w}\): Let \(\mathbf{w}_{0}=(1,0,\ldots,0)^{\top}\) and define
\[\mathbf{w}_{k+1}=\frac{\mathsf{Z}_{n}\mathbf{w}_{k}}{\|\mathsf{Z}_{n}\mathbf{ w}_{k}\|_{2}}.\]
Then the limit
\[\mathbf{w}=\lim_{k\to\infty}\mathbf{w}_{k}\]
is an eigenvector of the largest eigenvalue of \(\mathsf{Z}_{n}\).
For example, when \(n=6\), we have
\[\mathbf{w}_{1} =(0.0000,0.0958,0.1916,0.1916,0.2873,0.2873,0.2873,0.3831,0.3831,0. 3831,0.4789)^{\top}\] \[\mathbf{w}_{2} =(0.5177,0.3705,0.2992,0.2565,0.3087,0.2042,0.2042,0.2517,0.1947,0.2280,0.3277)^{\top}\] \[\vdots\] \[\mathbf{w}_{10} =(0.4046,0.2962,0.2662,0.2394,0.3061,0.2318,0.2393,0.3060,0.2662,0. 2961,0.4044)^{\top}\] \[\mathbf{w}_{11} =(0.4045,0.2961,0.2662,0.2393,0.3061,0.2318,0.2393,0.3061,0.2662,0. 2962,0.4045)^{\top}\] \[\mathbf{w}_{12} =(0.4045,0.2961,0.2662,0.2393,0.3061,0.2318,0.2393,0.3061,0.2662,0. 2961,0.4045)^{\top},\]
where equality means approximation. Thus we can take as an approximation
\[\mathbf{w}=(0.4045,0.2961,0.2662,0.2393,0.3061,0.2318,0.2393,0.3061,0.2662,0. 2961,0.4045)^{\top},\]
and the \(b\)-loadings are given by
\[(b_{\lambda})_{\lambda\in\mathcal{P}(n)}=(100.00,37.25,19.93,4.36,43.01,0.00,4.36,43.01,19.93,37.25,100.00).\]
Notice that the partitions \((6,0,0,0,0,0)\) and \((1,1,1,1,1,1)\) both have \(b\)-loading \(100\) and the partition \((3,2,1,0,0,0)\) has \(b\)-loading \(0\). In general, we observe that
\[\text{if $\lambda$ and $\mu$ are conjugate in $\mathcal{P}(n)$, then their $b$-loadings are the same, i.e., $b_{\lambda}=b_{\mu}$.} \tag{2.2}\]
**Remark 2.6**.: It would be interesting to combinatorially characterize the loadings of \(\lambda\in\mathcal{P}(n)\).
For \(\mathbf{t}=(\lambda,\mu,\nu)\in\mathcal{P}(n)^{3}\), we will write
\[g(\mathbf{t})\coloneqq g_{\lambda,\mu}^{\nu}.\]
**Definition 2.7**.: Let \(\mathbf{t}=(\lambda,\mu,\nu)\in\mathcal{P}(n)^{3}\). Define the _\(r\)-loading_ of \(\mathbf{t}\), denoted by \(r(\mathbf{t})\), to be the sum of the \(r\)-loadings of \(\lambda,\mu\) and \(\nu\), i.e.,
\[r(\mathbf{t})\coloneqq r_{\lambda}+r_{\mu}+r_{\nu}.\]
Similarly, define the _\(b\)-loading_ of \(\mathbf{t}\), denoted by \(b(\mathbf{t})\), to be the sum of the \(b\)-loadings of \(\lambda,\mu\) and \(\nu\), i.e.,
\[b(\mathbf{t})\coloneqq b_{\lambda}+b_{\mu}+b_{\nu}.\]
### Connections to PCA and kernel method
The definitions of similitude and difference matrices are closely related to PCA and kernel methods (see, e.g., [11]), respectively. Indeed, we look at the matrix \(\mathsf{P}_{n}^{\top}\) as a data matrix. For example, when \(n=6\), we have
\[\mathsf{P}_{6}^{\top}=\left[\begin{array}{cccccccccccc}6&5&4&4&3&3&3&2&2&2&1\\ 0&1&2&1&3&2&1&2&2&1&1\\ 0&0&0&1&0&1&1&2&1&1&1\\ 0&0&0&0&0&0&1&0&1&1&1\\ 0&0&0&0&0&0&0&0&1&1\\ 0&0&0&0&0&0&0&0&0&1\end{array}\right],\]
and consider this as a data matrix of \(6\) data points with \(11\) features.
Since the average of each column is \(1\) for \(\mathsf{P}_{n}^{\top}\), the covariance matrix of the data matrix \(\mathsf{P}_{n}^{\top}\) is \((\mathsf{P}_{n}-\mathbb{1})(\mathsf{P}_{n}-\mathbb{1})^{\top}\), where \(\mathbb{1}\) is the matrix with all entries equal to \(1\). As there seems to be no meaningful difference in computational results, we take the similitude matrix \(\mathsf{Y}_{n}=\mathsf{P}_{n}\mathsf{P}_{n}^{\top}\) to be a replacement of the covariance matrix. Then an eigenvector of the largest eigenvalue of \(\mathsf{Y}_{n}\) is nothing but a weight vector of the first principal component, and this leads to the definition of \(r\)-loadings.
The idea of a kernel method is to embed a dataset into a different space of (usually) higher dimension. In order to utilize this idea, we consider the matrix \(\mathsf{P}_{n}\) as a data matrix with \(p(n)\) data points and \(n\) features. Then we map a partition \(\lambda\), which is an \(n\)-dimensional row vector of \(\mathsf{P}_{n}\), onto the \(p(n)\)-dimensional vector \((\|\lambda-\mu\|_{1})_{\mu\in\mathcal{P}(n)}\), and the resulting new matrix is exactly the difference matrix \(\mathsf{Z}_{n}\). For example, when \(n=6\), we have
\[\mathsf{P}_{6}=\begin{bmatrix}6&0&0&0&0&0\\ 5&1&0&0&0&0\\ 4&2&0&0&0&0\\ 4&1&1&0&0&0\\ 3&3&0&0&0&0\\ 3&2&1&0&0&0\\ 3&1&1&1&0&0\\ 2&2&2&0&0&0\\ 2&2&1&1&1&0\\ 2&1&1&1&1&0\\ 1&1&1&1&1&1\end{bmatrix}\mapsto\quad\mathsf{Z}_{6}=\left[\begin{array}{ ccccccccc}0&2&4&4&6&6&6&8&8&8&10\\ 2&0&2&2&4&4&4&6&6&8\\ 4&2&2&0&4&2&2&4&4&4&6\\ 6&4&2&4&0&2&4&4&4&6\\ 6&4&2&2&2&0&2&2&4&6\\ 6&4&4&2&4&2&0&4&2&2&4\\ 8&6&4&4&4&2&4&0&2&4&6\\ 8&6&4&4&4&2&2&0&2&4\\ 8&6&6&4&6&4&2&4&2&0&2\\ 10&8&8&6&8&6&4&6&4&2&0\end{array}\right].\]
Since the difference matrix \(\mathsf{Z}_{n}\) is a symmetric matrix, we consider an eigenvector of the largest eigenvalue of \(\mathsf{Z}_{n}\) to obtain the direction of largest variations in the differences. This leads to the definition of \(b\)-loadings.
## 3. Distributions of loadings
In this section, we present the histograms of loadings and describe the corresponding distributions. First, we consider all the triples of \(\mathbf{t}\in\mathcal{P}(n)^{3}\), and after that, separate them according to whether \(g(\mathbf{t})\neq 0\) or \(=0\).
Figure 1 has the histograms of \(r\)-loadings of \(\mathbf{t}\in\mathcal{P}(n)^{3}\) for \(n=14,15,16\). According to what the histograms suggest, we conjecture that _the distribution of the \(r\)-loadings of \(\mathbf{t}\) converges to a normal distribution as \(n\to\infty\)_, and sketch the curves of normal distributions on the histograms.
Here we note that the mean is not exactly \(150\). Actually, the mean values of the \(r\)-loadings are \(\approx 148.86,148.15,147.65\) for \(n=14,15,16\), respectively.
Similarly, Figure 2 shows the histograms of \(b\)-loadings of \(\mathbf{t}\in\mathcal{P}(n)^{3}\) for \(n=14,15,16\), and we conjecture that _the distribution of the \(b\)-loadings of \(\mathbf{t}\) is a gamma distribution as \(n\to\infty\)_, and draw the curves of gamma distributions on the histograms. The mean values of the \(b\)-loadings are \(\approx 72.07,66.71,63.48\) for \(n=14,15,16\), respectively.
When \(n=14,15,16\), the histograms of loadings of partitions \(\lambda\in\mathcal{P}(n)\) do not have enough number of points to tell which distributions they follow. (Note that \(p(16)=231\).) Nonetheless, it seems reasonable to expect that the \(r\)-loadings of \(\lambda\) follow a normal distribution and that the \(b\)-loadings of \(\lambda\) follow a gamma distribution. Then the loadings of \(\mathbf{t}\in\mathcal{P}(n)^{3}\) will have the distributions given as a sum of three independent distributions. (Recall Definition 2.7.) Figure 3 has the histograms of loadings of \(\lambda\) and \(\mathbf{t}\) when \(n=20\), which seem to be consistent with this expectation.
## 4. Separation of \(g(\mathbf{t})\neq 0\) from \(g(\mathbf{t})=0\)
In this section, we consider the distributions of loadings according to whether the Kronecker coefficients \(g(\mathbf{t})\) are zero or nonzero. Using minimum values of loadings in each case, we will obtain vertical lines which separate the distributions of these two cases.
In Figures 4-7, we present the ranges and histograms of loadings of \(\mathbf{t}\in\mathcal{P}(n)^{3}\) for \(n=10,11,12,13\) according to whether \(g(\mathbf{t})\neq 0\) (red) or \(=0\) (blue). As one can see, the ranges and histograms do not vary much as \(n\) varies. The separation between the regions corresponding to \(g(\mathbf{t})\neq 0\) (red) and
Figure 1. Histograms of \(r\)-loadings of \(\mathbf{t}\in\mathcal{P}(n)^{3}\) for \(n=14,15,16\) from left to right along with curves (red) of normal distributions
Figure 2. Histograms of \(b\)-loadings of \(\mathbf{t}\in\mathcal{P}(n)^{3}\) for \(n=14,15,16\) from left to right along with curves (red) of gamma distributions
\(=0\) (blue) is more distinctive in the case of \(b\)-loadings. It is clear that we may use the minimum values of loadings to obtain vertical lines that separate the red regions from the blue ones.
With this in mind, define
\[r_{\star} \coloneqq\min\{r(\mathbf{t}):g(\mathbf{t})\neq 0,\mathbf{t}\in \mathcal{P}(n)^{3}\}\quad\text{ and }\] \[b_{\star} \coloneqq\min\{b(\mathbf{t}):g(\mathbf{t})=0,\mathbf{t}\in \mathcal{P}(n)^{3}\}.\]
Then, for \(\mathbf{t}\in\mathcal{P}(n)^{3}\),
\[\text{if }r(\mathbf{t})<r_{\star}\text{ then }g(\mathbf{t})=0\text{ \ and \ }\boxed{if }b(\mathbf{t})<b_{\star}\text{ then }g(\mathbf{t})\neq 0\text{ }. \tag{4.1}\]
This provides sufficient conditions for \(g(\mathbf{t})=0\) and \(g(\mathbf{t})\neq 0\), respectively, once we know the values of \(r_{\star}\) and \(b_{\star}\). However, the values \(r_{\star}\) do not turn out to be very useful for bigger \(n\) in distinguishing \(g(\mathbf{t})=0\) from \(g(\mathbf{t})\neq 0\), though they are interesting for their own sake and can be useful for further analysis. See Example 4.2 (2).
**Remark 4.1**.: It appears that the \(b\)-loadings of \(\mathbf{t}\) with \(g(\mathbf{t})\neq 0\) is a gamma distribution by itself. See the histogram and the curve of a gamma distribution when \(n=13\) in Figure 8.
Figure 5. Histograms of \(r\)-loadings for \(n=10\) (top-left), \(11\) (top-right), \(12\) (bottom-left) and \(13\) (bottom-right). The red (resp. blue) region represents the numbers of \(\mathbf{t}\) such that \(g(\mathbf{t})\neq 0\) (resp. \(g(\mathbf{t})=0\)).
Figure 6. Ranges of \(b\)-loadings for \(n=10,11,12,13\) from top to bottom. A red (resp. blue) dot at \((x,1)\) (resp. \((x,0)\)) corresponds to \(\mathbf{t}\in\mathcal{P}(n)^{3}\) with \(b(\mathbf{t})=x\) and \(g(\mathbf{t})\neq 0\) (resp. \(g(\mathbf{t})=0\)).
In the rest of this section, computational results of the values of \(r_{\star}\) and \(b_{\star}\) for \(6\leq n\leq 20\) will be presented along with some conjectures. This information can be used very effectively as illustrated in the example below.
**Example 4.2**.:
1. When \(n=18\), we obtain \(b_{\star}\approx 44.18\). Now that the \(b\)-loading of \[\mathbf{t}=((12,4,2),(8,4,2,2,1,1),(5,4,3,3,1,1,1))\] is readily computed to be approximately \(41.07<b_{\star}\), we immediately conclude that \(g(\mathbf{t})\neq 0\) by (4.1).
2. When \(n=20\), there are \(246,491,883\) triples \(\mathbf{t}\in\mathcal{P}(20)\). Among them, \(78,382,890\) triples satisfy \(b(\mathbf{t})<b_{\star}\approx 43.74\) so that \(g(\mathbf{t})\neq 0\). The percentage of these triples is about \(31.8\%\). In contrast, \(909,200\) triples satisfy \(r(\mathbf{t})<r_{\star}\approx 70.88\) and the percentage is only \(0.37\%\).
### \(r\)-loadings results
We compute and record \(r_{\star}\) and \(\mathbf{t}=(\lambda,\mu,\nu)\) such that \(r_{\star}=r(\mathbf{t})\) and \(\lambda\geq\mu\geq\nu\) lexicographically, for \(6\leq n\leq 20\) in Table 1. We do not consider \(n\leq 5\) because they seem to be too small for statistical analysis.
Figure 8. Histogram and curve (red) of a gamma distribution when \(n=13\)
Figure 7. Histograms of \(b\)-loadings for \(n=10\) (top-left), \(11\) (top-right), \(12\) (bottom-left) and \(13\) (bottom-right). The red (resp. blue) region represents the numbers of \(\mathbf{t}\) such that \(g(\mathbf{t})\neq 0\) (resp. \(g(\mathbf{t})=0\)).
Based on the results of \(n=8,12,16,20\) as written in blue in Table 1, we make the following conjecture.
**Conjecture 4.3**.: When \(n=4k\) (\(k\geq 2\)), the values \(r_{\star}\) are attained by \(\mathbf{t}=((k^{4}),(2^{2k}),(2^{2k}))\).
As an exhaustive computation for all possible triples becomes exponentially expensive, we assume that Conjecture 4.3 is true and continue computation. The results are in Table 2. Since we know \(\mathbf{t}\) exactly under Conjecture 4.3, we could calculate \(r_{\star}\) for \(n\) much bigger than those \(n\) in the case of \(b_{\star}\) that will be presented in Table 4.
**Remark 4.4**.: The values of \(r_{\star}\) seem to keep decreasing though slowly. However, it is not clear whether \(r_{\star}\) becomes bounded or not as \(n\to\infty\).
Notice that we have a sufficient condition for \(g(\mathbf{t})=0\) by taking the contrapositive of (2.1):
\[d_{\nu}<|d_{\lambda}-d_{\mu}|\quad\text{ or }\quad d_{\nu}>d_{\lambda}+d_{\mu} \quad\Longrightarrow\quad g(\mathbf{t})=0. \tag{4.2}\]
As \(r_{\star}\) provides another sufficient condition for \(g(\mathbf{t})=0\) in (4.1), one may be curious about their relationship. As a matter of fact, we observe that
\[r_{\star}<r(\mathbf{t})\quad\text{ for any $\mathbf{t}$ satisfying the condition in \eqref{eq:d_t
Based on the results in Table 3--in particular, on the results of \(n=6,9,12,15,18\) as written in blue--we make the following conjecture.
**Conjecture 4.5**.: For \(n\geq 6\), the values \(b_{\star}\) are attained by \(\mathbf{t}=(\lambda,\mu,\nu)\) such that \(\lambda=\mu\) or \(\mu=\nu\). Moreover, when \(n=3k\), \(k\geq 2\), the values \(b_{\star}\) are attained by \(\mathbf{t}=(\lambda,\mu,\nu)\) such that \(\lambda=\mu=\nu\).
As an exhaustive computation for all possible triples becomes exponentially expensive, we assume that Conjecture 4.5 is true for \(n=3k\) and continue computation. The results are in Table 4.
**Remark 4.6**.: The values of \(b_{\star}\) seem to be fluctuating with decreasing amplitudes as \(n\) increases. However, it is not clear if \(b_{\star}\) converges as \(n\to\infty\).
## Appendix A Table of Loadings
We tabulate the \(r\)-loading \(r_{\lambda}\) and \(b\)-loading \(b_{\lambda}\) of each partition \(\lambda\in\mathcal{P}(n)\) for \(6\leq n\leq 12\).
\begin{tabular}{|c|c|c|} \hline \(\lambda\) & \(r_{\lambda}\) & \(b_{\lambda}\) \\ \hline \((6,0,0,0,0,0)\) & 100.0 & 100.0 \\ \((5,1,0,0,0,0)\) & 85.8934 & 37.252 \\ \((4,2,0,0,0,0)\) & 71.7868 & 19.9271 \\ \((4,1,1,0,0,0)\) & 66.6591 & 4.363 \\ \((3,3,0,0,0,0)\) & 57.6803 & 43.005 \\ \((3,2,1,0,0,0)\) & 52.5256 & 0.0 \\ \((3,1,1,1,0,0)\) & 45.2311 & 4.363 \\ \((2,2,2,0,0,0)\) & 33.3183 & 43.005 \\ \((2,2,1,1,0,0)\) & 31.1245 & 19.9271 \\ \((2,1,1,1,1,0)\) & 22.8133 & 37.252 \\ \((1,1,1,1,1,1)\) & 0.0 & 100.0 \\ \hline \((7,0,0,0,0,0,0)\) & 100.0 & 100.0 \\ \((6,1,0,0,0,0,0)\) & 88.302 & 47.507 \\ \((5,2,0,0,0,0,0)\) & 76.604 & 26.483 \\ \((5,1,1,0,0,0,0)\) & 72.8338 & 13.1061 \\ \((4,3,0,0,0,0,0)\) & 64.906 & 36.928 \\ \((4,2,1,0,0,0,0)\) & 61.1358 & 0.0 \\ \((4,1,1,1,0,0,0)\) & 55.5306 & 1.81 \\ \((3,3,1,0,0,0,0,0)\) & 49.4378 & 21.735 \\ \((3,2,2,0,0,0,0,0)\) & 45.6676 & 21.735 \\ \((3,2,1,1,0,0,0)\) & 43.8236 & 0.0 \\ \((3,1,1,1,1,0,0)\) & 37.3978 & 13.1061 \\ \((2,2,2,1,0,0,0,0)\) & 28.3644 & 36.928 \\ \((2,2,1,1,1,0,0)\) & 25.6998 & 26.483 \\ \((2,1,1,1,1,0)\) & 18.7933 & 47.507 \\ \((1,1,1,1,1,1)\) & 0.0 & 100.0 \\ \hline \((8,0,0,0,0,0,0)\) & 100.0 & \((7,1,1,0,0,0,0,0)\) & 72.6788 \\ \((7,1,0,0,0,0,0,0)\) & 90.5921 & 58.055 \\ \((6,2,0,0,0,0,0,0)\) & 81.1842 & 35.198 \\ \((6,1,1,0,0,0,0,0)\) & 77.6539 & 28.246 \\ \((5,3,0,0,0,0,0,0)\) & 71.7763 & 34.854 \\ \((5,2,1,0,0,0,0,0)\) & 68.2461 & 9.7331 \\ \((5,1,1,1,0,0,0,0)\) & 63.194 & 12.637 \\ \((4,4,0,0,0,0,0,0,0)\) & 62.3685 & 48.552 \\ \((4,3,1,0,0,0,0,0)\) & 58.8382 & 12.506 \\ \((4,2,2,0,0,0,0,0)\) & 55.3079 & 15.265 \\ \((4,2,1,1,0,0,0,0)\) & 53.7861 & 0.0 \\ \((4,1,1,1,1,0,0,0)\) & 47.8449 & 12.637 \\ \((3,3,2,0,0,0,0,0,0)\) & 45.9 & 30.531 \\ \((3,3,1,1,0,0,0,0)\) & 44.3782 & 15.265 \\ \((3,2,2,1,0,0,0,0,0)\) & 40.8479 & 15.265 \\ \((3,2,1,1,1,0,0,0,0)\) & 38.4379 & 9.7331 \\ \((3,1,1,1,1,1,0,0)\) & 38.437 & 9.7331 \\ \((3,1,1,1,1,1,0,0)\) & 32.0837 & 28.246 \\ \((2,2,2,0,0,0,0,0)\) & 26.3879 & 48.552 \\ \((2,2,2,1,1,0,0,0)\) & 25.4988 & 34.854 \\ \((2,2,1,1,1,1,0,0)\) & 22.6758 & 35.198 \\ \((2,1,1,1,1,1,0)\) & 16.0886 & 58.055 \\ \((1,1,1,1,1,1,1,0)\) & 100.0 \\ \hline \((9,0,0,0,0,0,0,0)\) & 100.0 & 100.0 \\ \((8,1,0,0,0,0,0,0)\) & 91.876 & 62.802 \\ \((7,2,0,0,0,0,0,0)\) & 83.7521 & 39.559 \\ \((7,1,1,0,0,0,0,0)\) & 80.9205 & 33.587 \\ \((6,3,0,0,0,0,0,0)\) & 75.6281 & 34.591 \\ \((6,2,1,0,0,0,0,0)\) & 72.7965 & 13.273 \\ \((6,1,1,1,0,0,0,0)\) & 68.4825 & 16.425 \\ \((5,4,0,0,0,0,0,0)\) & 64.906 & 36.928 \\ \((2,2,2,1,1,0,0,0)\) & 61.1358 & 45.12 \\ \((2,2,2,2,0,0,0,0,0)\) & 61.1358 & 45.12 \\ \((2,2,2,2,1,0,0,0,0)\) & 62.3685 & 45.12 \\ \((2,2,2,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((2,2,2,1,1,0,0,0)\) & 62.3685 & 45.12 \\ \((4,1,1,1,1,0,0,0)\) & 62.3685 & 45.12 \\ \((3,3,2,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((4,2,1,1,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,0,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,0,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,0,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,0,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,0,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,0,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,1,1,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,0,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,1,1,1,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,0,0,0,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,1,1,1,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,1,1,1,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,1,1,1,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,1,1,1,1,0,0)\) & 62.3685 & 45.12 \\ \((5,3,1,0,0,0,0,0,0)\) & 62.3685 & 45.1
\begin{tabular}{|c|c|c|} \hline \(\lambda\) & \(r_{\lambda}\) & \(b_{\lambda}\) \\ \hline \((2,2,2,1,1,1,1,0,0,0)\) & 20.5405 & 36.947 \\ \((2,2,1,1,1,1,1,1,0,0)\) & 17.8237 & 45.12 \\ \((2,1,1,1,1,1,1,1,1,0)\) & 12.3875 & 67.7441 \\ \((1,1,1,1,1,1,1,1,1,1,1,1)\) & 0.0 & 100.0 \\ \hline \((11,0,0,0,0,0,0,0,0,0)\) & 100.0 & 100.0 \\ \((10,1,0,0,0,0,0,0,0,0,0)\) & 93.8295 & 71.265 \\ \((9,2,0,0,0,0,0,0,0,0,0)\) & 87.6591 & 49.697 \\ \((9,1,1,0,0,0,0,0,0,0)\) & 85.397 & 46.624 \\ \((8,3,0,0,0,0,0,0,0,0,0)\) & 81.4886 & 39.924 \\ \((8,2,1,0,0,0,0,0,0,0)\) & 79.2265 & 26.3731 \\ \((8,1,1,1,0,0,0,0,0,0,0)\) & 75.8034 & 28.711 \\ \((7,4,0,0,0,0,0,0,0,0)\) & 75.3182 & 39.780 \\ \((7,3,1,0,0,0,0,0,0,0)\) & 73.0561 & 18.329 \\ \((7,2,0,0,0,0,0,0,0)\) & 70.794 & 18.329 \\ \((7,2,1,0,0,0,0,0,0)\) & 69.6329 & 10.1901 \\ \((7,1,1,1,0,0,0,0,0,0)\) & 65.58 & 17.872 \\ \((6,5,0,0,0,0,0,0,0,0)\) & 69.1477 & 45.913 \\ \((6,4,1,0,0,0,0,0,0,0,0)\) & 66.8856 & 20.801 \\ \((6,3,2,0,0,0,0,0,0,0)\) & 64.6236 & 12.90 \\ \((6,3,1,1,0,0,0,0,0,0,0)\) & 63.4625 & 4.762 \\ \((6,2,1,0,0,0,0,0,0,0)\) & 61.2004 & 4.762 \\ \((6,2,1,1,0,0,0,0,0,0)\) & 61.2004 & 4.762 \\ \((6,2,1,1,1,0,0,0,0,0,0)\) & 59.4095 & 1.967 \\ \((6,1,1,1,1,0,0,0,0,0)\) & 54.969 & 14.412 \\ \((5,5,1,0,0,0,0,0,0,0)\) & 58.4531 & 18.834 \\ \((5,4,1,1,0,0,0,0,0,0,0)\) & 57.292 & 10.694 \\ \((5,3,3,0,0,0,0,0,0,0)\) & 56.191 & 21.014 \\ \((5,3,2,1,0,0,0,0,0,0,0)\) & 55.0299 & 2.795 \\ \((5,3,1,1,1,0,0,0,0,0)\) & 53.2391 & 0.0 \\ \((5,2,2,0,0,0,0,0,0,0)\) & 51.6068 & 10.694 \\ \((5,2,2,0,0,0,0,0,0,0)\) & 51.6068 & 10.694 \\ \((5,2,2,1,1,0,0,0,0,0,0)\) & 50.977 & 0.0 \\ \((5,2,1,1,1,1,0,0,0,0,0)\) & 48.7986 & 1.967 \\ \((5,1,1,1,1,1,0,0,0,0,0)\) & 44.1465 & 17.872 \\ \((4,4,3,0,0,0,0,0,0,0)\) & 50.0206 & 31.70 \\ \((4,4,2,1,0,0,0,0,0,0)\) & 48.8595 & 13.4 \\ \((4,4,1,1,1,0,0,0,0,0,0)\) & 47.0686 & 10.694 \\ \((4,3,3,1,0,0,0,0,0,0)\) & 46.5974 & 15.6 \\ \((4,3,2,2,0,0,0,0,0,0,0)\) & 45.4363 & 13.4 \\ \((4,3,2,2,0,0,0,0,0,0)\) & 44.8065 & 2.795 \\ \((4,3,1,1,1,0,0,0,0,0,0)\) & 42.6281 & 4.762 \\ \((4,2,2,2,1,0,0,0,0,0,0)\) & 41.3834 & 10.694 \\ \((4,2,1,1,1,0,0,0,0,0)\) & 40.366 & 4.762 \\ \((4,2,1,1,1,1,0,0,0,0)\) & 40.366 & 4.762 \\ \((4,2,1,1,1,1,0,0,0,0)\) & 40.37961 & 10.1901 \\ \((4,1,1,1,1,1,1,0,0,0,0)\) & 33.1883 & 28.711 \\ \((3,3,3,2,0,0,0,0,0,0)\) & 37.0038 & 31.70 \\ \((3,3,1,1,0,0,0,0,0,0)\) & 36.374 & 21.014 \\ \((3,3,2,2,1,0,0,0,0,0,0)\) & 35.2129 & 18.834 \\ \((3,3,2,1,1,1,0,0,0,0,0)\) & 34.1956 & 12.90 \\ \((3,3,2,1,1,1,0,0,0,0)\) & 34.1956 & 12.90 \\ \((3,3,1,1,1,1,0,0,0,0)\) & 31.8056 & 18.329 \\ \((3,2,2,2,0,0,0,0,0,0)\) & 31.1599 & 30.394 \\ \((3,2,2,1,1,0,0,0,0,0)\) & 30.7724 & 20.801 \\ \((3,2,2,1,1,1,0,0,0,0)\) & 29.5435 & 18.329 \\ \((3,2,1,1,1,1,0,0,0,0)\) & 27.0179 & 26.3731 \\ \((3,1,1,1,1,1,1,0,0,0)\) & 22.1582 & 46.624 \\ \((2,2,2,2,1,0,0,0,0,0,0)\) & 20.545913 & (4,1,1,1,1,1,1,0,0,0,0)\) & 30.1207 \\ \((2,2,2,1,1,1,0,0,0,0,0)\) & 20.5459 & 45.913 \\ \((2,2,2,1,1,1,0,0,0,0)\) & 19.9499 & 39.780 \\ \((2,2,2,1,1,1,1,0,0,0)\) & 18.5853 & 39.924 \\ \((2,2,1,1,1,1,1,0,0,0)\) & 19.878 \\ \((2,1,1,1,1,1,1,1,1,0)\) & 11.0873 & 71.265 \\ \((1,1,1,1,1,1,1,1,1,1)\) & 0.0 & 100.0 \\ \hline \((12,0,0,0,0,0,0,0,0,0)\) & 100.0 & 100.0 \\ \((11,1,0,0,0,0,0,0,0)\) & 100.0 & 100.0 \\ \((11,1,0,0,0,0,0,0,0,0)\) & 94.5958 & 74.832 \\ \((3,3,1,1,1,1,0,0,0,0,0)\) & 29.2906 & 23.775 \\ \((3,2,2,2,1,1,0,0,0,0,0)\) & 19.9499 & 39.780 \\ \((2,2,2,1,1,1,1,0,0,0,0)\) & 18.5853 & 39.924 \\ \((2,2,1,1,1,1,1,0,0,0,0)\) & 19.878 \\ \((2,1,1,1,1,1,1,1,0,0)\) & 11.0873 & 71.265 \\ \((1,1,1,1,1,1,1,1,1,1)\) & 0.0 & 100.0 \\ \hline \((12,0,0,0,0,0,0,0,0,0)\) & 100.00 & 100.0 \\ \((11,1,0,0,0,0,0,0,0,0)\) & 100 |
2307.03364 | Distilled Pruning: Using Synthetic Data to Win the Lottery | This work introduces a novel approach to pruning deep learning models by
using distilled data. Unlike conventional strategies which primarily focus on
architectural or algorithmic optimization, our method reconsiders the role of
data in these scenarios. Distilled datasets capture essential patterns from
larger datasets, and we demonstrate how to leverage this capability to enable a
computationally efficient pruning process. Our approach can find sparse,
trainable subnetworks (a.k.a. Lottery Tickets) up to 5x faster than Iterative
Magnitude Pruning at comparable sparsity on CIFAR-10. The experimental results
highlight the potential of using distilled data for resource-efficient neural
network pruning, model compression, and neural architecture search. | Luke McDermott, Daniel Cummings | 2023-07-07T03:07:28Z | http://arxiv.org/abs/2307.03364v3 | # Distilled Pruning: Using Synthetic Data to Win the Lottery
###### Abstract
This work introduces a novel approach to pruning deep learning models by using distilled data. Unlike conventional strategies which primarily focus on architectural or algorithmic optimization, our method reconsiders the role of data in these scenarios. Distilled datasets capture essential patterns from larger datasets, and we demonstrate how to leverage this capability to enable a computationally efficient pruning process. Our approach can find sparse, trainable subnetworks (a.k.a. lottery tickets) up to 5x faster than Iterative Magnitude Pruning at comparable sparsity on CIFAR-10. The experimental results highlight the potential of using distilled data for resource-efficient neural network pruning, model compression, and neural architecture search.
## 1 Introduction
As prevalent types of deep learning models continue to grow in size and scale, the study of model compression techniques continues to be vitally essential as it addresses the issues of cost-effectiveness, limited computational resources, and model complexity or latency. One key capability in this field, neural network pruning (Lecun et al., 1989; Han et al., 2015), has naturally risen in popularity as it aims to prune or cut out unnecessary parameters in models. Early pruning literature believed that, while dense, overparameterized models are important for training, they are not necessary for inference. This led pruning to be viewed as a post-training procedure, focusing on efficiency of models at inference. Frankle and Carbin (2019) have shown that this is not the case, emphasizing the potential for pruning at initialization. The Lottery Ticket Hypothesis states that sparse, trainable subnetworks exist at initialization within these dense, overparameterized neural networks. To find these subnetworks or lottery tickets, Iterative Magnitude Pruning (IMP) is augmented with weight rewinding. The IMP process iterates by training a network, pruning the lowest magnitude weights, and rewinding the weights to their initial values or to some point early in training. This repeats until the desired sparsity1 is achieved. With weight rewinding, IMP requires the use of post-training information to find optimal masks at initialization. This algorithm enables the study of sparse neural architecture (Paul et al., 2022; Chen et al., 2021; Ma et al., 2021; Frankle et al., 2020), providing a way for researchers to consistently find "lucky" lottery tickets.
Footnote 1: We denote sparsity as percentage of parameters pruned.
Even as a fundamental research tool, IMP is largely inefficient due to the extensive retraining process. To achieve some sparse model with IMP, one must retrain some network numerous times over to achieve the mask, then retrain one final time to validate the sparsity mask. To address this issue, we employ the same framework as IMP, but instead use distilled data (Wang et al., 2020), essentially a summarized version of our training data, in the inner training loop to approximate trained weights. As a result, sparsity masks can be generated in considerably less time while still being capable of achieving full accuracy when trained with real training data. We show in our setting that distilled data can pick winning tickets.
Previous work, such as Paul et al. (2022), demonstrated that subsets of the training data are sufficient for finding lottery tickets. We improve upon this idea by distilling the essential features of a class into a few synthetic images. Data distillation condenses a dataset into a small, synthetic sample, which, when used for training, yields similar performance to training on the real dataset.
Often, this means reducing a dataset to 1, 10, or 50 images per class. This topic has seen rapidly growing interest due to the benefits of lower computational overhead for model training and can broadly be separately into the subcategories of meta-model matching, gradient matching, distribution matches, and trajectory matching (Sachdeva and McAuley, 2023; Zhou et al., 2022; Cazenavette et al., 2022; Loo et al., 2023; Nguyen et al., 2021).
A downside of state-of-the-art data distillation methods is that they require significant memory overhead which limits their ability to scale to larger model sizes (and thus lack of cross-architecture generalizability). Recent works like Loo et al. (2023) and Zhou et al. (2022) have explored the transferrability of datasets generated by such methods on ResNet (He et al., 2015) and VGG (Simonyan and Zisserman, 2015), but since these results leave a lot of room for improvement, we focus our work to more computationally tractable convolutional networks. Despite concerns of distillation methods, in our unique setting with heavy retraining in IMP, poorly-generalizing distillation methods still show substantial utility in improving the retraining process since we only have to optimize the distilled data for one model family.
In this paper, we introduce data distillation as a means to accelerate retraining in iterative pruning methods, while still accurately identifying winning tickets for the original dataset. We emphasize the use for distilled pruning as a means of rapid experimentation in pruning and NAS research, taking advantage of the efficiency/performance trade off. Data distillation and neural network nruning can be viewed as orthogonal approaches to computational efficiency, so data distillation provides additional speed up in retraining that can be used with other efficient pruning methods, not just IMP.
2. **Method** Formally, the Lottery Ticket Hypothesis (Frankle and Carbin, 2019) conjectures that for some randomly initialized, dense neural network \(f(x;\theta)\), there exists a non-trivial binary mask \(m\in\{0,1\}^{|\theta|}\), such that when trained in isolation on some training data \(D_{\text{train}}\), the subnetwork \(f(x;\text{train}(\theta\odot m,D_{\text{train}}))\) achieves similar performance to \(f(x;\text{train}(\theta,D_{\text{train}}))\). We denote \(\odot\) as elementwise multiplication and assume there exists some sufficient SGD-based train function, \(\text{\emph{train}}:\ \mathbb{R}^{|\theta|}\rightarrow\mathbb{R}^{|\theta|}\). To find such \(m\), pruning researchers employ IMP as follows: 1) Train the network for \(n\)-epochs, 2) remove 20% of the non-pruned weights prioritizing by lowest magnitude, 3) rewind the weights back to initialization or some early point in training, 4) Iterate Steps 1-3 until desired sparsity. Here, sparsity is defined as the percentage of parameters pruned. We employ a simple augmentation to the original IMP algorithm by replacing the training data, \(D_{\text{train}}\), needed to find the sparsity mask with distilled data, \(D_{\text{syn}}\), as demonstrated by the Algorithm 1. We also train for some \(t\)-many epochs on the distilled data, while preserving the \(n\)-long training with real data at the end. The source of distilled data is largely plug-and-play, and we encourage researchers and practitioners alike to use the most applicable distillation method that fits their performance needs and computational budget. In future work, we plan to benchmark across different data distillation methods. Specifically for our experiments, we utilize MTT as demonstrated by Cazenavette et al. (2022) due to ease of reproducibility. MTT leverages the concept of expert trajectories, which are snapshots of parameters from models trained on the real dataset. The goal is to induce a similar trajectory in the student model trained on synthetic data, leading to similar test performance. We refer the reader to the original paper for implementation level details.
3. **Experiments** For our experiments, we chose AlexNet (Krizhevsky et al., 2017) for CIFAR-10 (Krizhevsky, 2009) and a 128-width ConvNet for CIFAR-100 (Krizhevsky, 2009) to maintain consistency with experiments in previous literature by Cazenavette et al. (2022). We distilled each class down to 10 or 50 images,
denote as 10 ipc (images per class) or 50 ipc. The distilled CIFAR-10 has a size of 100 or 500 training images and 1,000 or 5,000 for CIFAR-100.
### Sparsity Analysis
In our setting, distilled data is appropriately finding lottery tickets at non-trivial sparsities, showing that at 50 ipc the approximated weights from distilled training are sufficient for IMP. Figure 1 shows we achieve relatively comparable performance to IMP at mid to high sparsities and even outperform at low sparsities for CIFAR-10. For CIFAR-100, we see a fall off earlier as Distilled Pruning finds lottery tickets up to only 50% sparsity. For both datasets, 10 ipc performs poorly as expected due to low performance even on data distillation objectives (Cazenavette et al., 2022; Zhou et al., 2022; Loo et al., 2023; Nguyen et al., 2021). We believe with the current state of data distillaton methods, Distilled Pruning may not scale to deeper networks or to datasets with high amounts of outliers yet. As a rapidly evolving field, we expect this to change soon as the field matures.
### Efficiency Analysis
In Figure 1, we present compelling evidence showcasing the significant speedup achieved with distilled pruning compared to standard IMP. Measured on an Nvidia RTX A4000 GPU, we achieve an
Figure 1: Sparsity Mask performance for AlexNet on CIFAR-10 and a 128-width ConvNet on CIFAR-100 across methods. Best seed of each method is bolded. We pruned 20% of weights at each iteration up to 30 iterations for CIFAR-10 and 20 for CIFAR-100. Random mask selects weights at random each iteration. lottery tickets exist if test accuracy of sparse model achieves or surpasses the dense model accuracy as shown in black. Time-to-mask measured by time to prune and retrain the sparsity mask with real data.
average of 55 seconds per distilled training session on CIFAR-10 distilled to 50 ipc, compared to 7.25 minutes per training on real data. Distilled Pruning found a lottery ticket of comparable accuracy at roughly 90% sparsity in CIFAR-10, resulting in a 5x speed up. While distilled pruning with ipc10 looks useful here, the performance drop off is too large for the minimal improvement in time-to-mask. It is worth noting that the major computational burden associated with distilled pruning lies in the final retraining phase using real data. Consequently, in scenarios where validation of a sparsity mask is unnecessary, distilled pruning enables us to generate masks 8 times faster than with IMP.
One of the key advantages of distilled pruning is the ability to rapidly prototype and experiment, particularly for researchers working within a limited set of datasets or compute resources. Synthetic data is generated once per dataset, providing a means for quick and convenient experimentation. Moreover, synthetic data is pre-computed and publicly available for popular datasets, further streaming the research process. By capitalizing on the plug-and-play nature of distillation methods, any advancements in data distillation techniques can directly translate into speed improvements for distilled pruning. Because of this, we exclude time-to-distill from our plot. For reference, MTT has one of the largest computational costs for distillation, but only takes an additional 133 minutes to distill CIFAR-10 to 50 images per class (Cazenavette et al., 2022). We emphasize that for popular datasets, these are often pre-computed and publicly available with state-of-the-art distillation methods.
### Instability Analysis
To gain deeper insights into the distinctions between winning tickets obtained through distilled pruning and those discovered using IMP, we employ an _instability analysis_ inspired by Frankle et al. (2020). As described in previous literature, lottery tickets exhibit linear mode connectivity, representing stability to noise from stochastic gradient descent (SGD). In Figure 2, the selected sparsity masks were trained using two distinct permutations of real training data. Then, we performed a linear interpolation between the trained weights of the two networks. This process allowed us to observe the linear mode connectivity and assess the stability of the models. A drop in test accuracy during this interpolation means the model is unstable.
We observed distinctions in the lottery tickets yielded through the two methods. In the case of IMP-generated subnetworks, we observed the need for rewinding to an early point in training
Figure 2: The test accuracy for interpolated weights between two models trained with different SGD noise with AlexNet and CIFAR-10. Each plot uses a fixed sparsity mask found by IMP or Distilled Pruning. A drop in accuracy implies no linear mode connectivity or instability to SGD noise. Distilled Pruning uses 50 images per class.
(specifically, after one epoch, \(k=1\)) as opposed to initialization, aligning with previous work (Frankle et al., 2020). In contrast, the lottery tickets identified through distilled pruning proved to be drastically more stable against SGD noise, not requiring any rewinding. We found our tickets maintained linear mode connectivity at extreme sparsities, only falling during model collapse.
These discoveries hint at the possibility of distilled pruning producing a different type of lottery ticket, where trained weights approximated by distilled data might provide unique and valuable insights into the Lottery Ticket Hypothesis. However, as Vlaar and Frankle (2022) discuss, data augmentation, initializations, and optimizers all play significant roles in linear interpolation. Even then, they show stability does not always predict test accuracy. Therefore, we believe further research is necessary to fully understand why distilled data-generated tickets exhibit such stability, even when rewound to initialization.
## 4 Conclusion
In this pilot study, we explore the effect of data distillation on neural network pruning. The implications of distilled pruning extend far beyond its direct applications. Fast prototyping becomes more accessible for researchers who leverage pruning techniques, as the distilled pruning framework enables swift iteration and experimentation with various pruning configurations. Additionally, distilled pruning serves as a valuable tool for Neural Architecture Search (NAS) validation, facilitating the assessment of architectures' performance and characteristics. One notable advantage within pruning is the flexibility it provides in terms of pruning granularity. Researchers can substantially increase in the number of pruning iterations, allowing for pruning of smaller amounts of weights per iteration. This hyper-iterative approach grants precise control over the levels of pruning, enabling fine-grained exploration of the sparsity spectrum. Distilled pruning effectively reduces the sample complexity of mask generation, thereby opening up new avenues for stochastic approaches to IMP or even larger-scale NAS methods.
While our research focuses on highlighting the speed-up achieved with distilled pruning, we acknowledge that there is a trade-off in performance compared to the standard IMP method. As data distillation as a field matures, we expect to close the performance gap and apply this method to larger models and datasets. In future work, we plan to test a wider range of novel distillation methods such as Zhou et al. (2022) and Loo et al. (2023) while exploring the scalability of distilled pruning with larger architectures.
## 5 Broader Impact Statement
Our proposed solution employs distilled data, which leads to significant computational savings during the pruning process. This reduction in computational requirements directly translates to diminished CO2 emissions, contributing to more sustainable AI research and development practices. Our approach would generalize well to models outside computer vision and could lead to more effective and efficient pruning solutions in areas such as natural language processing, and generative architectures. Moreover, this work can make advanced neural network design more accessible to a broader range of researchers and developers, reducing the expertise and compute infrastructure required to prune high-performing networks.
One possible risk of our approach is the loss of detail when using distilled data. While data distillation aims to retain as much useful information as possible, there's a risk that some important outlier data could be lost in the process, potentially leading to unexpected model performance or biased outcomes. To counter the potential risks associated with data distillation, it's crucial to validate distilled datasets thoroughly against real-world data to ensure they adequately represent the problem space. While data distillation holds considerable promise for the future enhancement of deep learning, the drawbacks and related mitigation strategies should continue to be carefully studied. |
2304.05245 | Semi-stability and local wall-crossing for hermitian Yang-Mills
connections | We consider a sufficiently smooth semi-stable holomorphic vector bundle over
a compact K\"ahler manifold. Assuming the automorphism group of its graded
object to be abelian, we provide a semialgebraic decomposition of a
neighbourhood of the polarisation in the K\"ahler cone into chambers
characterising (in)stability. For a path in a stable chamber converging to the
initial polarisation, we show that the associated HYM connections converge to
an HYM connection on the graded object. | Andrew Clarke, Carl Tipler | 2023-04-11T14:32:51Z | http://arxiv.org/abs/2304.05245v1 | # Semi-stability and local wall-crossing for Hermitian Yang-Mills connections
###### Abstract.
We consider a sufficiently smooth semi-stable holomorphic vector bundle over a compact Kahler manifold. Assuming the automorphism group of its graded object to be abelian, we provide a semialgebraic decomposition of a neighbourhood of the polarisation in the Kahler cone into chambers characterising (in)stability. For a path in a stable chamber converging to the initial polarisation, we show that the associated HYM connections converge to an HYM connection on the graded object.
2010 Mathematics Subject Classification: Primary: 53C07, Secondary: 53C55, 14J60
## 1. Introduction
Slope stability, as introduced by Mumford for curves and generalized by Take-moto in higher dimension ([17, 21]) is a stability notion that can be used to construct and study moduli spaces of vector bundles. It depends on a choice of a polarisation, the variations of which being responsible for wall-crossing phenomena (see [10] for a survey on constructions and variations of moduli spaces of stable bundles). By the Hitchin-Kobayashi correspondence ([13, 16, 22, 6]), a holomorphic vector bundle over a compact Kahler manifold is slope polystable if and only if it carries an Hermite-Einstein metric, or equivalently an hermitian Yang-Mills connection. While wall-crossing phenomena describe global variations of the moduli spaces from the algebraic point of view, our focus is on the local variations, from the analytic point of view, and inspired by [3]. While a stable bundle will carry an Hermite-Einstein metric with respect to any nearby polarisation, the situation for a semi-stable bundle is much more delicate, and this is the problem that we address in this paper.
To state our results, let us first introduce some notations. Denote the Kahler cone of a compact Kahler manifold \(X\) by \(\mathcal{K}_{X}\subset H^{1,1}(X,\mathbb{R})\). Recall that a semistable holomorphic vector bundle \(E\to(X,[\omega])\) is said to be _sufficiently smooth_ if its graded object \(\operatorname{Gr}(E)\) is locally free. In that case, we define
\[\mathfrak{C}_{[\omega]}=\{F\subset E\,|\,\mu_{[\omega]}(F)=\mu_{[\omega]}(E)\}\]
the set of subbundles of \(E\) with same slope. Such bundles are all built out of successive extensions of the stable pieces of the graded object, and thus \(\mathfrak{C}_{[\omega]}=\{F_{1},\ldots,F_{p}\}\) is finite. For each \(F_{i}\in\mathfrak{C}_{[\omega]}\), the map
\[\begin{array}{rcl}\nu_{i}:&\mathcal{K}_{X}&\to&\mathbb{R}\\ &\alpha&\mapsto&\frac{c_{1}(E)\cdot\alpha^{n-1}}{\operatorname{rank}(E)}- \frac{c_{1}(F_{i})\cdot\alpha^{n-1}}{\operatorname{rank}(F_{i})}\end{array}\]
## 1. Introduction
Let \(E\) be a field of characteristic \(\mathbb{C}\), and let \(\mathcal{S}_{s,R}\) be a field of characteristic \(\mathbb{C}\). We say that \(\mathcal{S}_{s,R}\) is _strongly smooth_ if \(\mathcal{S}_{s,R}\) is a _strongly smooth
are obtained for perturbations of Kahler classes and semi-stable bundles. However, in [19], an extra hypothesis on \(\operatorname{Gr}(E)\) was required (unicity of the Jordan-Holder filtration with locally-free stable quotients), the proof was more technical, and the results only hold for perturbations along lines (of the form \(t\mapsto[\omega]+t\alpha\)) in the Kahler cone.
**Remark 1.4**.: The hypothesis on \(E\) being simple can easily be dropped. In general, one might look for polystable perturbations of a semi-stable bundle. In that case, one should start from a direct sum of semi-stable bundles of the same slopes, and deal with each summands separatly. To ease the exposition, we will restrict to the simple case.
It is not difficult to see that Theorem 1.2 holds true for specific compact families of semi-stable holomorphic vector bundles that share \(\operatorname{Gr}(E)\) as graded object (as in the similar problem for cscK metrics, see the discussion in [18, Section 4.5]). The next step towards the understanding of local wall-crossing phenomena for HYM connections is to obtain a uniform version of Theorem 1.2 for all small deformations of \(\operatorname{Gr}(E)\) at once.
**Organisation of the paper:** First, in Section 2, we recall the basics on HYM connections and slope stability. Then, in Section 3, we produce a family of Kuranishi slices parametrising small deformations of \(\operatorname{Gr}(E)\), where each slice depends on a Kahler class near \([\omega]\). This step is inspired by [20, 2, 3, 18], and enables to reduce the problem to the study of a family of finite dimensional moment maps. Section 4 is then devoted to the proof of Theorem 1.2, relying on the recent technique from [18] to control the perturbed moment maps.
**Acknowledgments:** The authors benefited from visits to LMBA and Gotheborg University; they would like to thank these welcoming institutions for providing stimulating work environments. The idea of this project emerged from discussions with Lars Martin Sektnan, whom we thank for sharing his ideas and insight. We also thank Julius Ross for kindly answering our questions on the chamber structure, and pointing to us reference [11]. AC is partially supported by the grants BRIDGES ANR-FAPESP ANR-21-CE40-0017 and Projeto CAPES - PrInt UFRJ 88887.311615/2018-00. CT is partially supported by the grants MARGE ANR-21-CE40-0011 and BRIDGES ANR-FAPESP ANR-21-CE40-0017.
## 2. Preliminaries
In Sections 2.1 and 2.2 we introduce the notions of HYM connections and slope stability, together with some general results, and refer the reader to [14] and [12].
### The hermitian Yang-Mills equation
Let \(E\to X\) be a holomorphic vector bundle over a compact Kahler manifold \(X\). A hermitian metric on \(E\) is _Hermite-Einstein_ with respect to a Kahler metric with Kahler form \(\omega\) if the curvature \(F_{h}\in\Omega^{2}\left(X,\operatorname{End}E\right)\) of the corresponding Chern connection satisfies
\[\Lambda_{\omega}\left(iF_{h}\right)=c\operatorname{Id}_{E} \tag{2.1}\]
for some real constant \(c\). Equivalently, if \(h\) is some hermitian metric on the smooth complex vector bundle underlying \(E\), a hermitian connection \(A\) on \((E,h)\) is said to
be _hermitian Yang-Mills_ if it satisfies
\[\left\{\begin{array}{rcl}F_{A}^{0,2}&=&0,\\ \Lambda_{\omega}\left(iF_{A}\right)&=&c\,\mathrm{Id}_{E}\,.\end{array}\right.\]
The first equation of this system implies that the \((0,1)\)-part of \(A\) determines a holomorphic structure on \(E\), while the second that \(h\) is Hermite-Einstein for this holomorphic structure. We will try to find hermitian Yang-Mills connections within the complex gauge group orbit, which we now define. The _complex gauge group_ is
\[\mathscr{G}^{\mathbb{C}}(E)=\Gamma\left(\mathrm{GL}\left(E,\mathbb{C}\right) \right),\]
and its _hermitian_ version is
\[\mathscr{G}^{\mathbb{C}}(E,h)=\Gamma\left(\mathrm{GL}\left(E,\mathbb{C} \right)\right)\cap\Gamma\left(\mathrm{End}_{H}(E,h)\right),\]
where \(\mathrm{End}_{H}(E,h)\) stands for the hermitian endomorphisms of \((E,h)\). Note that if \(\bar{\partial}\) is the Dolbeault operator defining the holomorphic structure on \(E\), then \(f\circ\bar{\partial}\circ f^{-1}\) defines a biholomorphic complex structure on \(E\). Let \(d_{A}=\partial_{A}+\bar{\partial}_{A}\) be the Chern connection of \((E,h)\) with respect to the original complex structure (that is \(\bar{\partial}_{A}=\bar{\partial}\)). Then the Chern connection \(A^{f}\) of \(h\) with respect to \(f\circ\bar{\partial}\circ f^{-1}\) is
\[d_{A^{f}}=(f^{*})^{-1}\circ\partial_{A}\circ(f^{*})+f\circ\bar{\partial}\circ f ^{-1}.\]
Solving the hermitian Yang-Mills equation is equivalent to solving
\[\Psi(s)=c\,\mathrm{Id}_{E}\]
where
\[\begin{array}{rcl}\Psi:&\mathrm{Lie}(\mathscr{G}^{\mathbb{C}}(E,h))& \longrightarrow&\mathrm{Lie}(\mathscr{G}^{\mathbb{C}}(E,h))\\ s&\longmapsto&i\Lambda_{\omega}(F_{A^{\mathrm{exp}(s)}}),\end{array}\]
and where \(\mathrm{Lie}(\mathscr{G}^{\mathbb{C}}(E,h)):=i\Gamma(\mathrm{End}_{H}(E,h))\) is the tangent space to \(\mathscr{G}^{\mathbb{C}}(E,h)\) at the identity. For a connection \(A\) on \(E\), the Laplace operator \(\Delta_{A}\) is
\[\Delta_{A}=i\Lambda_{\omega}\left(\bar{\partial}_{A}\partial_{A}-\partial_{A} \bar{\partial}_{A}\right). \tag{2.2}\]
If \(A_{\mathrm{End}\,E}\) denote the connection induced by \(A\) on \(\mathrm{End}\,E\), then :
**Lemma 2.1**.: _If \(A\) is the Chern connection of \((E,\overline{\partial},h)\), the differential of \(\Psi\) at identity is_
\[d\Psi_{\mathrm{Id}_{E}}=\Delta_{A_{\mathrm{End}\,E}}.\]
_If moreover \(A\) is assumed to be hermitian Yang-Mills, then the kernel of \(\Delta_{A_{\mathrm{End}\,E}}\) acting on \(\Gamma(\mathrm{End}(E))\) is given by the Lie algebra \(\mathfrak{aut}(E)\) of the space of automorphisms \(\mathrm{Aut}(E)\) of \((E,\overline{\partial})\)._
The last statement about the kernel follows from the Kahler identities and the Akizuki-Nakano identity that imply \(\Delta_{A_{\mathrm{End}\,E}}=\partial_{A}^{*}\partial_{A}+\bar{\partial}_{A}^ {*}\bar{\partial}_{A}\), the two terms of which are equal if \(A\) is Hermitian Yang-Mills. The operator \(\Delta_{A_{\mathrm{End}\,E}}\) being elliptic and self-adjoint, \(\mathfrak{aut}(E)\) will then appear as a cokernel in the linear theory for perturbations of hermitian Yang-Mills connections.
### Slope stability
We recall some basic facts about slope stability, as introduced by [17, 21], and refer the interested reader to [12] for a detailed treatment. We denote here \(L:=[\omega]\) the polarisation of the \(n\)-dimensional Kahler manifold \(X\).
**Definition 2.2**.: For \(\mathcal{E}\) a torsion-free coherent sheaf on \(X\), the slope \(\mu_{L}(\mathcal{E})\in\mathbb{Q}\) (with respect to \(L\)) is given by the intersection formula
\[\mu_{L}(\mathcal{E})=\frac{\deg_{L}(\mathcal{E})}{\operatorname{rank}( \mathcal{E})}, \tag{2.3}\]
where \(\operatorname{rank}(\mathcal{E})\) denotes the rank of \(\mathcal{E}\) while \(\deg_{L}(\mathcal{E})=c_{1}(\mathcal{E})\cdot L^{n-1}\) stands for its degree. Then, \(\mathcal{E}\) is said to be _slope semi-stable_ (resp. _slope stable_) with respect to \(L\) if for any coherent and saturated subsheaf \(\mathcal{F}\) of \(\mathcal{E}\) with \(0<\operatorname{rank}(\mathcal{F})<\operatorname{rank}(\mathcal{E})\), one has
\[\mu_{L}(\mathcal{F})\leq\mu_{L}(\mathcal{E})\,\big{(}\text{ resp. }\mu_{L}( \mathcal{F})<\mu_{L}(\mathcal{E})\big{)}.\]
A direct sum of slope stable sheaves of the same slope is said to be _slope polystable_.
In this paper, we will often omit "slope" and simply refer to stability of a sheaf, the polarisation being implicit. We will make the standard identification of a holomorphic vector bundle \(E\) with its sheaf of sections, and thus talk about slope stability notions for vector bundles as well. In that case slope stability relates nicely to differential geometry via the Hitchin-Kobayashi correspondence :
**Theorem 2.3** ([13, 16, 22, 6]).: _There exists a Hermite-Einstein metric on \(E\) with respect to \(\omega\) if and only if \(E\) is polystable with respect to \(L\)_
We will be mostly interested in semi-stable vector bundles. A _Jordan-Holder filtration_ for a torsion-free sheaf \(\mathcal{E}\) is a filtration by coherent and saturated subsheaves:
\[0=\mathcal{F}_{0}\subset\mathcal{F}_{1}\subset\ldots\subset\mathcal{F}_{\ell} =\mathcal{E}, \tag{2.4}\]
such that the corresponding quotients,
\[\mathcal{G}_{i}=\frac{\mathcal{F}_{i}}{\mathcal{F}_{i-1}}, \tag{2.5}\]
for \(i=1,\ldots,\ell\), are stable with slope \(\mu_{L}(\mathcal{G}_{i})=\mu_{L}(\mathcal{E})\). In particular, the graded object of this filtration
\[\operatorname{Gr}(\mathcal{E}):=\bigoplus_{i=1}^{l}\mathcal{G}_{i} \tag{2.6}\]
is polystable.
## 3. An adapted family of Kuranishi slices
Let \((X,[\omega])\) be a \(n\)-dimensional compact Kahler manifold, and let \(([\alpha_{i}])_{1\leq i\leq p}\) be a basis of the real vector space \(H^{1,1}(X,\mathbb{C})\cap H^{2}(X,\mathbb{R})\). For \(\underline{\varepsilon}=(\varepsilon_{1},\ldots,\varepsilon_{p})\in\mathbb{ R}^{p}\), we define
\[\omega_{\underline{\varepsilon}}:=\omega+\sum_{i=1}^{p}\varepsilon_{i}\alpha_ {i}\in\Omega^{1,1}(X,\mathbb{R}).\]
We fix a small neighbourhood \(U\) of zero in \(\mathbb{R}^{p}\) such that for any \(\underline{\varepsilon}\in U\), \([\omega_{\underline{\varepsilon}}]\) defines a Kahler class on \(X\). When considering slopes, we will use the notation \(L_{\underline{\varepsilon}}:=[\omega_{\underline{\varepsilon}}]\)
Let \(\mathrm{Gr}(E)=\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\) be a polystable holomorphic vector bundle over \((X,[\omega])\) with stable components \(\mathcal{G}_{i}\) (this will be the graded object of some semi-stable bundle later on). Thanks to the Hitchin-Kobayashi correspondence, we can fix an Hermite-Einstein metric \(h_{0}\) on \(\mathrm{Gr}(E)\). We denote by \(\overline{\partial}_{0}\) the holomorphic connection on \((\mathrm{Gr}(E),h_{0})\) and by \(A_{0}\) the associated (HYM) Chern connection. We will be interested in HYM connections on small deformations of \(\mathrm{Gr}(E)\), for the perturbed polarisations \(([\omega_{\underline{\varepsilon}}])_{\underline{\varepsilon}\in U}\). The goal of this section is to reduce the problem of finding a zero for the operator \(s\mapsto i\Lambda_{\omega_{\underline{\varepsilon}}}(F_{A_{0}^{\mathrm{exp}(s )}})-c_{\varepsilon}\mathrm{Id}\) in a gauge group orbit to a finite dimensional problem. Note that we don't need the asumption on \(\mathrm{Aut}(\mathrm{Gr}(E))\) being abelian in this section.
### Perturbing Kuranishi's slice
The automorphism group \(G:=\mathrm{Aut}(\mathrm{Gr}(E))\) is a reductive Lie group with Lie algebra \(\mathfrak{g}:=\mathfrak{aut}(\mathrm{Gr}(E))\) and compact form \(K\subset G\), with \(\mathfrak{k}:=\mathrm{Lie}(K)\).
Our starting point will be the following proposition, whose proof follows as in [15] (see also [2, 4] for a detailed treatment). We introduce the notation
\[V:=H^{0,1}(X,\mathrm{End}(\mathrm{Gr}(E)))\]
for the space of harmonic \((0,1)\)-forms with values in \(\mathrm{Gr}(E)\), where the metrics used to compute adjoints are \(\omega\) on \(X\) and \(h_{0}\) on \(\mathrm{Gr}(E)\). Note that the \(G\)-action on \(\mathrm{Gr}(E)\) induces a linear representation \(G\to\mathrm{GL}(V)\).
**Proposition 3.1**.: _There exists a holomorphic \(K\)-equivariant map_
\[\Phi:B\to\Omega^{0,1}(X,\mathrm{End}(\mathrm{Gr}(E)))\]
_from a ball around the origin \(B\subset V\) such that :_
1. \(\Phi(0)=0\)_;_
2. \(Z:=\{b\in B\mid(\overline{\partial}_{0}+\Phi(b))^{2}=0\}\) _is a complex subspace of_ \(B\)_;_
3. _if_ \((b,b^{\prime})\in Z^{2}\) _lie in the same_ \(G\)_-orbit, then_ \(\overline{\partial}_{0}+\Phi(b)\) _and_ \(\overline{\partial}_{0}+\Phi(b^{\prime})\) _induce isomorphic holomorphic bundle structures;_
4. _The_ \(\mathscr{G}^{\mathbb{C}}(\mathrm{Gr}(E))\)_-orbit of any small complex deformation of_ \(\mathrm{Gr}(E)\) _intersects_ \(\Phi(Z)\)_._
The space \(Z\) corresponds to the space of integrable Dolbeault operators in the image of \(\Phi\), and \(\Phi(B)\) is a _slice_ for the gauge group action on the set of Dolbeault operators nearby \(\overline{\partial}_{0}\).
The next step will be to perturb \(\Phi\). The ideas here go back to [7, 20], and our framework will be that of [2]. The strategy to do this in family with respect to the parameter \(\underline{\varepsilon}\) was inspired by [3, 18].
Given the metrics \(\omega\) and \(h_{0}\), together with the covariant derivatives given by \(A_{0}\), we can introduce \(L^{2,l}\) Sobolev norms on spaces of sections. We will denote by \(\mathcal{E}^{l}\) the \(L^{2,l}\) Sobolev completion of any space of sections \(\mathcal{E}\). In what follows, \(l\in\mathbb{N}^{*}\) will be assumed large enough for elements in \(\mathcal{E}^{l}\) to admit as much regularity as required.
**Proposition 3.2**.: _Up to shrinking \(U\times B\), there is a continuously differentiable map_
\[\widetilde{\Phi}:U\times B\to\Omega^{0,1}(X,\mathrm{End}(\mathrm{Gr}(E)))^{l}\]
_such that if \(A_{\underline{\varepsilon},b}\) is the Chern connection of \((\overline{\partial}_{0}+\widetilde{\Phi}(\underline{\varepsilon},b),h_{0})\):_
1. _for all_ \((\underline{\varepsilon},b)\in U\times Z\)_,_ \(\overline{\partial}_{0}+\widetilde{\Phi}(\underline{\varepsilon},b)\) _and_ \(\overline{\partial}_{0}+\Phi(b)\) _are gauge equivalent,_
2. _for all_ \(\underline{\varepsilon}\in U\)_, the map_ \(b\mapsto\Lambda_{\omega_{\underline{\varepsilon}}}iF_{A_{\underline{\varepsilon},b}^{k}}\) _takes values in_ \(\mathfrak{k}\)
**Remark 3.3**.: By elliptic regularity, elements in the image of \(\widetilde{\Phi}\) will actually be smooth. However, regularity of the map \(\widetilde{\Phi}\) is with respect to the \(L^{2,l}\) Sobolev norm.
To ease notations, we will use \(\Lambda_{\underline{\varepsilon}}\) for the Lefschetz operator of \(\omega_{\underline{\varepsilon}}\). We also introduce the topological constants
\[c_{\underline{\varepsilon}}=\frac{2\pi n}{\operatorname{vol}_{\omega_{ \underline{\varepsilon}}}(X)}\frac{\left(c_{1}(\operatorname{Gr}(E))\cup[ \omega_{\underline{\varepsilon}}]^{n-1}\right)[X]}{\operatorname{rank}( \operatorname{Gr}(E))}.\]
Proof.: For \(b\in B\), we will denote by \(A_{b}\) the Chern connection of \((\overline{\partial}_{0}+\Phi(b),h_{0})\). Note that in particular \(A_{0}\) is a HYM connection on \(\operatorname{Gr}(E)\). The aim is to apply the implicit function theorem to perturb \(A_{b}\) along gauge orbits in order to satisfy point (2) of the statement. For \(s\in\Gamma(X,\operatorname{End}(\operatorname{Gr}(E)))\) we define \(A_{b}(s)=A_{b}^{\exp(s)}\). By the regularity of \(\Phi\), the assignment \((b,s)\mapsto A_{b}(s)-A_{0}\) (resp. \((b,s)\mapsto F_{A_{b}(s)}\)) is smooth from \(B\times\Gamma(X,\operatorname{End}(\operatorname{Gr}(E)))^{l}\) to \(\Omega^{1}(X,\operatorname{End}(\operatorname{Gr}(E)))^{l-1}\) (resp. \(\Omega^{2}(X,\operatorname{End}(\operatorname{Gr}(E)))^{l-2}\)). We deduce that the operator
\[\begin{array}{ccc}\Psi:&U\times B\times\Gamma(X,\operatorname{End}_{H}( \operatorname{Gr}(E),h_{0}))^{l}&\to&\Gamma(X,\operatorname{End}_{H}( \operatorname{Gr}(E),h_{0}))^{l-2}\\ &(\underline{\varepsilon},b,s)&\mapsto&\Lambda_{\underline{\varepsilon}}iF_{A _{b}(s)}-c_{\underline{\varepsilon}}\operatorname{Id}_{\operatorname{Gr}(E)} \end{array}\]
is a \(\mathcal{C}^{1}\) map. As \(A_{0}\) is HYM, we have \(\Psi(0)=0\). By Lemma 2.1, its differential in the \(s\) direction at zero is given by the Laplace operator \(\Delta_{A_{0}}\) of \(A_{0}\), whose co-kernel is \(i\mathfrak{k}\subset\Gamma(X,\operatorname{End}_{H}(\operatorname{Gr}(E),h_{ 0}))\). Then, by a standard projection argument onto some orthogonal complement of \(i\mathfrak{k}\), we can apply the implicit function theorem and obtain a \(\mathcal{C}^{1}\) map \((\underline{\varepsilon},b)\mapsto s(\underline{\varepsilon},b)\) such that \(\Psi(b,\underline{\varepsilon},s(\underline{\varepsilon},b))\) lies in \(\mathfrak{k}\), and conclude the proof by setting
\[\widetilde{\Phi}(\underline{\varepsilon},b)=(A_{b}(\underline{\varepsilon},s (\underline{\varepsilon},b)))^{0,1}-A_{0}^{0,1}.\]
### The finite dimensional moment maps
We will now explain that for each \(\underline{\varepsilon}\in U\), the map
\[\begin{array}{ccc}\mu_{\underline{\varepsilon}}:&B&\to&\mathfrak{k}\\ b&\mapsto&\Lambda_{\underline{\varepsilon}}iF_{A_{\underline{\varepsilon}^{ \cdot}b}}-c_{\underline{\varepsilon}}\operatorname{Id}_{\operatorname{Gr}(E)} \end{array} \tag{3.1}\]
is a moment map for the \(K\)-action on \(B\), for suitable symplectic forms \(\Omega_{\underline{\varepsilon}}\) on \(B\). Recall from [1, 5] that for \(\underline{\varepsilon}\in U\), the gauge action of \(\mathscr{G}^{\mathbb{C}}(\operatorname{Gr}(E),h_{0})\) on the affine space \(\overline{\partial}_{0}+\Omega^{0,1}(X,\operatorname{End}(\operatorname{Gr} (E)))\) is hamiltonian for the symplectic form given, for \((a,b)\in\Omega^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E)))^{2}\), by
\[\Omega^{D}_{\underline{\varepsilon}}(a,b)=\int_{X}\operatorname{trace}(a\wedge b ^{*})\wedge\frac{\omega_{\underline{\varepsilon}}^{n-1}}{(n-1)!}, \tag{3.2}\]
with equivariant moment map \(\overline{\partial}\mapsto\Lambda_{\underline{\varepsilon}}F_{A_{\overline {\partial}}}\) where \(A_{\overline{\partial}}\) stands for the Chern connection of \((\overline{\partial},h_{0})\). Here, we identified the Lie algebra of \(\mathscr{G}^{\mathbb{C}}(\operatorname{Gr}(E),h_{0})\) with its dual by mean of the invariant pairing
\[\left\langle s_{1},s_{2}\right\rangle_{\underline{\varepsilon}}:=\int_{X} \operatorname{trace}(s_{1}\cdot s_{2}^{*})\ \frac{\omega_{\underline{\varepsilon}}^{n}}{n!}. \tag{3.3}\]
**Remark 3.4**.: We used above the Chern correspondence, for \(h_{0}\) fixed, between Dolbeault operators and hermitian connections to express the infinite dimensional moment map picture on the space of Dolbeault operators.
**Proposition 3.5**.: _Up to shrinking \(U\times B\), for all \(\underline{\varepsilon}\in U\), the map \(\overline{\partial}_{0}+\widetilde{\Phi}(\varepsilon,\cdot)\) is a \(K\)-equivariant map from \(B\) to \(\overline{\partial}_{0}+\Omega^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E)))\) whose image is a symplectic submanifold for \(\Omega^{D}_{\underline{\varepsilon}}\)._
Proof.: The equivariance follows easily from Proposition 3.1 and from the construction of \(\widetilde{\Phi}\) in the proof of Proposition 3.2. The map \(\widetilde{\Phi}\) is obtained by perturbing \(\Phi\). But \(\Phi\) is complex analytic with, by construction, injective differential at the origin (see e.g. the orginal proof [15] or [4]). Thus \(\Phi(B)\) is a complex subspace of \(\Omega^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E)))\). We deduce that, up to shrinking \(B\), \(\Phi\) induces an embedding of \(B\) such that the restriction of \(\Omega^{D}_{0}\) to \(\Phi(B)\) is non-degenerate (recall that \(\Omega^{D}_{0}\) is a Kahler form on the space of Dolbeault operators on \(X\)). As \(\widetilde{\Phi}(\underline{\varepsilon},\cdot)\) is obtained by a small and continuous perturbation of \(\Phi\), and as being a symplectic embedding is and open condition, the result follows.
From this result, we deduce that the map \(\mu_{\underline{\varepsilon}}\) defined in (3.1) is a moment map for the \(K\)-action on \(B\) with respect to the pulled back symplectic form
\[\Omega_{\underline{\varepsilon}}:=\widetilde{\Phi}(\varepsilon,\cdot)^{*} \Omega^{D}_{\underline{\varepsilon}},\]
and where we use the pairing \(\langle\cdot,\cdot\rangle_{\underline{\varepsilon}}\) defined in (3.3) to identify \(\mathfrak{k}\) with its dual.
We now assume that \(\operatorname{Gr}(E)\) is the graded object of a simple, semi-stable and sufficiently smooth holomorphic vector bundle \(E\) on \((X,[\omega])\). As \(E\) is built out of successive extensions of the stable components \(\mathcal{G}_{i}\)'s of \(\operatorname{Gr}(E)\), the Dolbeault operator \(\overline{\partial}_{E}\) on \(E\) is given by
\[\overline{\partial}_{E}=\overline{\partial}_{0}+\gamma\]
where \(\gamma\in\Omega^{0,1}(X,\operatorname{Gr}(E)^{*}\otimes\operatorname{Gr}(E))\) can be written
\[\gamma=\sum_{i<j}\gamma_{ij}\]
for (possibly vanishing) \(\gamma_{ij}\in\Omega^{0,1}(X,\mathcal{G}_{j}^{*}\otimes\mathcal{G}_{i})\). Elements
\[g:=g_{1}\operatorname{Id}_{\mathcal{G}_{1}}+\ldots,+g_{\ell}\operatorname{Id }_{\mathcal{G}_{\ell}}\in G,\]
for \((g_{i})\in(\mathbb{C}^{*})^{\ell}\), act on \(\overline{\partial}_{E}\) and produce isomorphic holomorphic vector bundles in the following way :
\[g\cdot\overline{\partial}_{E}=\overline{\partial}_{0}+\sum_{i<j}g_{i}g_{j}^{-1 }\gamma_{ij}. \tag{3.4}\]
In particular, for \(g=(t^{\ell},t^{\ell-1},\ldots,t)\), letting \(t\mapsto 0\), we can see \(E\) as a small complex deformation of \(\operatorname{Gr}(E)\). By Proposition 3.1, the holomorphic connection \(\overline{\partial}_{E}\) is gauge equivalent to an element \(\overline{\partial}_{b}:=\overline{\partial}_{0}+\Phi(b)\) for some \(b\in B\). Then, from properties of the maps \(\Phi\) and \(\widetilde{\Phi}\), for all \(\underline{\varepsilon}\in U\) and for all \(g\in G\), \(\overline{\partial}_{E}\) will be gauge equivalent to \(\overline{\partial}_{0}+\widetilde{\Phi}(\underline{\varepsilon},g\cdot b)\), provided \(g\cdot b\in B\). As a zero of \(\mu_{\underline{\varepsilon}}\) corresponds to a HYM connection on \((X,\omega_{\underline{\varepsilon}})\), we are left with the problem of characterising the existence of a zero for \(\mu_{\underline{\varepsilon}}\) in the \(G\)-orbit of \(b\).
## 4. Proof of the main results
We carry on with notations from the last section, and our goal now is to prove Proposition 1.1 and Theorem 1.2. This is where we will need to assume that
\(G=\operatorname{Aut}(\operatorname{Gr}(E))\) is abelian. As \(\operatorname{Gr}(E)=\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\) this is equivalent to the fact that all stable components \(\mathcal{G}_{i}\) are non isomorphic, and this gives the explicit description
\[\mathfrak{g}=\mathfrak{aut}(\operatorname{Gr}(E))=\bigoplus_{i=1}^{\ell} \mathbb{C}\cdot\operatorname{Id}_{\mathcal{G}_{i}}.\]
Up to shrinking \(U\), we will assume that the set \(\{[\omega_{\underline{\varepsilon}}],\,\underline{\varepsilon}\in U\}\) is a ball \(B([\omega],R)\subset\mathcal{K}_{X}\). By definition of the set \(\mathcal{S}_{u}\) in Section 1, it is clear that for any \([\omega^{\prime}]\in\mathcal{S}_{u,R}\), \(E\) will be unstable with respect to \([\omega^{\prime}]\). In the sequel, we will first show that, up to shrinking \(R\), \(E\) admits HYM connections with respect to all classes in \(\mathcal{S}_{s,R}\). Together with simplicity of \(E\), this will imply that \(E\) is stable for any class in \(\mathcal{S}_{s,R}\). Then, we will explore the convergence properties of the constructed HYM connections to conclude proof of Theorem 1.1. Finally, we will settle the semi-stable case, for classes in \(\mathcal{S}_{ss,R}\), and the proof of Proposition 1.1 will be complete.
### The local convex cone associated to the \(K\)-action
Our first goal is to show that for any \(\underline{\varepsilon}\in U\) such that \([\omega_{\underline{\varepsilon}}]\in\mathcal{S}_{s,R}\), there is a zero of \(\mu_{\underline{\varepsilon}}\) in
\[\mathcal{Z}:=G\cdot b\cap B.\]
In order to do so, we start by describing, at least locally, the images of \(\mathcal{Z}\) by the maps \((\mu_{\underline{\varepsilon}})_{\underline{\varepsilon}\in U}\). In this section, relying on [18], we will see that those images all contain translations of (a neighbourhood of the apex of) the same convex cone.
By simplicity of \(E\), the stabiliser of \(b\) under the \(K\)-action is reduced to the \(S^{1}\)-action induced by gauge transformations of the form \(e^{i\theta}\operatorname{Id}_{E}\). As those elements fix all the points in \(B\), elements in \(S^{1}\cdot\operatorname{Id}_{E}\) will play no role in the arguments that follow. Hence, we will work instead with the quotient torus \(K_{0}:=K/S^{1}\cdot\operatorname{Id}_{E}\). Note that the constants \(c_{\underline{\varepsilon}}\) that appear in the maps \(\mu_{\varepsilon}\) in (3.1) are chosen so that \(\langle\mu_{\underline{\varepsilon}},\operatorname{Id}_{E}\rangle_{ \underline{\varepsilon}}=0\). As the \(\mu_{\underline{\varepsilon}}\) take values in \(\mathfrak{k}\), this is equivalent to say \(\operatorname{trace}(\mu_{\underline{\varepsilon}})=0\). Hence, setting \(\mathfrak{k}_{0}\subset\mathfrak{k}\) to be the set of trace free elements in \(\bigoplus_{i=1}^{\ell}i\mathbb{R}\cdot\operatorname{Id}_{\mathcal{G}_{i}}\), we will consider the family of moment maps \(\mu_{\underline{\varepsilon}}:B\to\mathfrak{k}_{0}\) for the \(K_{0}\)-action, and we may, and will, assume that the stabiliser of \(b\) is trivial. Using the inner product \(\langle\cdot,\cdot\rangle_{\underline{\varepsilon}}\) to identify \(\mathfrak{k}_{0}\simeq\mathfrak{k}_{0}^{*}\), we can see the maps \(\mu_{\underline{\varepsilon}}\) as taking values in \(\mathfrak{k}_{0}^{*}\) :
\[\mu_{\varepsilon}^{*}:B\to\mathfrak{k}_{0}^{*}.\]
There is a weight decomposition of \(V\) under the abelian \(K\)-action
\[V:=\bigoplus_{m\in M}V_{m} \tag{4.1}\]
for \(M\subset\mathfrak{k}_{0}^{*}\) the lattice of characters of \(K_{0}\). In the matrix blocks decomposition of \(V=H^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E)))\) induced by \(\operatorname{Gr}(E)=\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\), using the product hermitian metric \(h_{0}\), we have
\[V=\bigoplus_{1\leq i,j\leq\ell}H^{0,1}(X,\mathcal{G}_{i}^{*}\otimes\mathcal{G }_{j}).\]
The action of \(g=(g_{1},\dots,g_{\ell})\in K_{0}\simeq(\mathbb{C}^{*})^{\ell}\) on \(\gamma_{ij}\in V_{ij}:=H^{0,1}(X,\mathcal{G}_{i}^{*}\otimes\mathcal{G}_{j})\) is given by:
\[g\cdot\gamma_{ij}=g_{i}g_{j}^{-1}\gamma_{ij}. \tag{4.2}\]
Thus, in the weight space decomposition (4.1), \(V_{ij}\) is the eigenspace with weight
\[m_{ij}:=(0,\dots,0,1,0,\dots,0,-1,0,\dots,0) \tag{4.3}\]
where \(+1\) appears in \(i\)-th position and \(-1\) in the \(j\)-th position. If we decompose \(b\) accordingly as
\[b=\sum_{ij}b_{ij}, \tag{4.4}\]
where \(b_{ij}\in V_{ij}\) is non zero, as \(\overline{\partial}_{E}=\overline{\partial}_{0}+\gamma\) with \(\gamma\) upper triangular, or equivalently as \(E\) is obtained as successive extensions of the stable components \(\mathcal{G}_{i}\)'s, only indices \((i,j)\) with \(i<j\) will appear in (4.4). From now on, we will restrict our setting to
\[B\cap\bigoplus_{b_{ij}\neq 0}V_{ij},\]
which we still denote by \(B\). That is, we only consider weight spaces that appear in the decomposition of \(b\). Similarily, we use the notation \(V\) for \(\bigoplus_{b_{ij}\neq 0}V_{ij}\).
To sum up, we are in the following setting :
* The compact torus \(K_{0}\) acts effectively and holomorphically on the complex vector space \(V\);
* There is a continous family of symplectic forms \((\Omega_{\underline{\varepsilon}})_{\underline{\varepsilon}\in U}\) on \(B\subset V\) around the origin, with respect to which the \(K_{0}\)-action is hamiltonian;
* The point \(b\in B\) has trivial stabiliser, \(0\) in its \(K_{0}^{\mathbb{C}}\)-orbit closure, and for all weight \(m_{ij}\in M\) appearing in the weight space decomposition of \(V\), \(b_{ij}\neq 0\).
* The restriction of the symplectic form \(\Omega_{0}\) to the \(K_{0}^{\mathbb{C}}\)-orbit of \(b\) is non-degenerate.
This last point follows as in the proof of Proposition 3.5. We set
\[\overline{\mathcal{Z}}:=B\cap(\overline{K_{0}^{\mathbb{C}}\cdot b}).\]
We also introduce
\[\sigma:=\sum_{b_{ij}\neq 0}\mathbb{R}_{+}\cdot m_{ij}\subset\mathfrak{k}_{0}^{*}\]
with \(\{m_{ij},\,b_{ij}\neq 0\}\) the set of weights that appear in the decomposition of \(b\in V\), and for \(\eta>0\)
\[\sigma_{\eta}:=\sum_{b_{ij}\neq 0}[0,\eta)\cdot m_{ij}\subset\mathfrak{k}_{0}^{ *}.\]
Note that by the local version of Atiyah and Guillemin-Sternberg's convexity theorem, there exists \(\eta>0\) such that \(\mu_{\underline{\varepsilon}}^{*}(0)+\sigma_{\eta}\subset\mu_{\underline{ \varepsilon}}^{*}(B)\) for all \(\underline{\varepsilon}\) small enough (see the equivariant Darboux Theorem [8, Theorem 3.2] combined with the local description of linear hamiltonian torus actions [8, Section 7.1]). By [18, Proposition 4.6], the properties \((R_{1})-(R_{4})\) listed above actually imply :
**Proposition 4.1**.: _Up to shrinking \(U\times B\), there exists \(\eta>0\) such that for all \(\underline{\varepsilon}\in U\),_
\[\mu_{\underline{\varepsilon}}^{*}(0)+\operatorname{Int}(\sigma_{\eta})\subset \mu_{\underline{\varepsilon}}^{*}(\mathcal{Z})\]
_and_
\[\mu_{\underline{\varepsilon}}^{*}(0)+\sigma_{\eta}\subset\mu_{\underline{ \varepsilon}}^{*}(\overline{\mathcal{Z}}).\]
**Remark 4.2**.: The fact that the interior of \(\mu_{\underline{\varepsilon}}^{*}(0)+\sigma_{\eta}\) is included in the image of the \(K_{0}^{\mathbb{C}}\)-orbit of \(b\) by \(\mu_{\underline{\varepsilon}}^{*}\) is not stated explicitely in [18], but follows from the discussion at the beginning of the proof of [18, Proposition 4.6].
Finding zeros of \(\mu_{\underline{\varepsilon}}\) when \([\omega_{\underline{\varepsilon}}]\in\mathcal{S}_{s,R}\)
We now assume that \([\omega_{\underline{\varepsilon}}]\in\mathcal{S}_{s,R}\). Note that from the definition of \(\mathcal{S}_{s,R}\), this is equivalent to the fact that for any \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\),
\[\mu_{L_{\underline{a}}}(\mathcal{F})<\mu_{L_{\underline{a}}}(E),\]
where we recall \(L_{\underline{\varepsilon}}=[\omega_{\underline{\varepsilon}}]\). From Proposition 4.1, to find a zero of \(\mu_{\underline{\varepsilon}}\) in \(\mathcal{Z}\) it is enough to show \(-\mu_{\underline{\varepsilon}}^{*}(0)\in\operatorname{Int}(\sigma_{\eta})\), which reduces to \(-\mu_{\underline{\varepsilon}}^{*}(0)\in\operatorname{Int}(\sigma)\) for small enough \(\underline{\varepsilon}\). Arguing as in [18, Lemma 4.8], \(\sigma\) and its dual
\[\sigma^{\vee}:=\{v\in\mathfrak{k}_{0}\mid\langle m,v\rangle\geq 0\;\forall m \in\sigma\}\]
are strongly convex rational polyhedral cones of dimension \(\ell-1\). Note that here the pairing \(\langle\cdot,\cdot\rangle\) is the natural duality pairing. By duality, \(\sigma=(\sigma^{\vee})^{\vee}\), and we are left with the condition
\[-\mu_{\underline{\varepsilon}}^{*}(0)\in\operatorname{Int}((\sigma^{\vee})^ {\vee}).\]
The cone \(\sigma^{\vee}\) can be written
\[\sigma^{\vee}=\sum_{\underline{a}\in\mathcal{A}}\mathbb{R}_{+}\cdot v_{ \underline{a}}\]
for a finite set of generators \(\{v_{\underline{a}}\}_{\underline{a}\in\mathcal{A}}\subset\mathfrak{k}_{0}\). Hence, our goal now is to show that for all \(\underline{a}\in\mathcal{A}\), \(\langle\mu_{\underline{\varepsilon}}^{*}(0),v_{\underline{a}}\rangle<0\), which by construction is equivalent to
\[\langle\mu_{\underline{\varepsilon}}(0),v_{\underline{a}}\rangle_{ \underline{\varepsilon}}<0, \tag{4.5}\]
under the assumption that for any \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\),
\[\mu_{L_{\underline{a}}}(\mathcal{F})<\mu_{L_{\underline{a}}}(E). \tag{4.6}\]
We will then study in more detail Equations (4.5) and (4.6). In order to simplify notation, in what follows, we will assume that all the stable components of \(\operatorname{Gr}(E)\) have rank one, so that \(\operatorname{trace}(\operatorname{Id}_{\mathcal{G}_{i}})=1\) for \(1\leq i\leq\ell\). The general case can easily be adapted, and is left to the reader.
#### 4.2.1. Condition (4.5) : generators of the dual cone
We will give here a more precise form for the generators \(\{v_{\underline{a}}\}_{\underline{a}\in\mathcal{A}}\) of \(\sigma^{\vee}\). Recall from [9, Section 1.2] the method to find such generators : as \(\sigma\) is \(\ell-1\)-dimensional, each of its facets is generated by \(\ell-2\) elements amongst its generators \((m_{ij})\). Then, a generator \(v_{\underline{a}}\) for \(\sigma^{\vee}\) will be an "inward pointing normal" to such a facet. Hence, if
\[v_{\underline{a}}=\sum_{i=1}^{\ell}a_{i}\operatorname{Id}_{\mathcal{G}_{i}}\]
is a generator of \(\sigma^{\vee}\), there exists a set \(\mathcal{S}:=\{m_{ij}\}\) of \(\ell-2\) generators of \(\sigma\) such that
\[\forall\;m_{ij}\in\mathcal{S},\;\langle m_{ij},v_{\underline{a}}\rangle=0.\]
Moreover, \(v_{\underline{a}}\in\mathfrak{k}_{0}\) should be trace free, and as we assume here \(\operatorname{rank}(\mathcal{G}_{i})=1\) for all stable components, it gives
\[\sum_{i=1}^{\ell}a_{i}=0.\]
**Lemma 4.3**.: _Up to scaling \(v_{\underline{a}}\), there exists a partition \(\{1,\ldots,\ell\}=I^{-}\cup I^{+}\) such that for all \(i\in I^{-}\), \(a_{i}=-\frac{1}{\sharp I^{-}}\) and for all \(i\in I^{+}\), \(a_{i}=\frac{1}{\sharp I^{+}}\), where \(\sharp\) stands for the cardinal of a set._
Proof.: The key is to observe that if \(m_{ij},m_{jk}\in\mathcal{S}^{2}\), then \(m_{ik}\notin\mathcal{S}\). Indeed, by (4.3), \(m_{ij}+m_{jk}=m_{ik}\), and those are generators of the cone. Equivalently, if \(m_{ij},m_{ik}\in\mathcal{S}^{2}\), then \(m_{jk}\notin\mathcal{S}\). We then assign an oriented graph \(G_{\underline{a}}\) to \(v_{\underline{a}}\). The vertices are labelled \(a_{1}\) to \(a_{\ell}\), and we draw an oriented edge from \(a_{i}\) to \(a_{j}\) if \(a_{i}=a_{j}\) and \(i<j\). For each \(m_{ij}\in\mathcal{S}\), \(\langle m_{ij},v_{\underline{a}}\rangle=0\) gives \(a_{i}=a_{j}\). Hence, \(G_{\underline{a}}\) has at least \(\ell-2\) edges. To prove the result, it is enough to show that \(G_{\underline{a}}\) has \(2\) connected components. Indeed, we can then set \(I^{-}=\{i\,|a_{i}<0\}\) and \(I^{+}=\{i\,|a_{i}>0\}\). All elements \(a_{i}\) for \(i\in I^{-}\) will correspond to the same connected component and be equal, and similarily with \(i\in I^{+}\). As \(\sum_{i=1}^{\ell}a_{i}=0\), we obtain the result by rescaling.
Proving that \(G_{\underline{a}}\) has two connected components is then routine. It has \(\ell\) vertices and \(\ell-2\) oriented edges, with the rule that if there is an edge from \(a_{i}\) to \(a_{j}\) and an edge from \(a_{i}\) to \(a_{k}\), then there is no edge from \(a_{j}\) to \(a_{k}\). We consider the number of edges that start from \(a_{1}\). If there are \(\ell-2\) of those, then the connected component of \(a_{1}\) has at least \(\ell-1\) vertices, and we are left with at most \(1\) singleton for the other component. The fact that \(v_{\underline{a}}\) is trace free imposes that there are at least \(2\) connected components, and we are done in that case. Then, if there are \(\ell-2-k\) edges from \(a_{1}\), its connected component has at least \(\ell-1-k\) elements, and we are left with at most \(k+1\) vertices and \(k\) edges for the other components. But its easy to show, by induction on \(k\), that the rule stated above implies that there will be at most \(1\) connected component for such a graph with \(k+1\) vertices and \(k\) edges, and we are done.
We can now translate condition (4.5). By Lemma 4.3, it is equivalent to
\[\frac{\sum_{i\in I^{+}}\langle\mu_{\underline{\varepsilon}}(0),\operatorname{Id }_{\mathcal{G}_{i}}\rangle_{\underline{\varepsilon}}}{\sharp I^{+}}<\frac{ \sum_{i\in I^{-}}\langle\mu_{\underline{\varepsilon}}(0),\operatorname{Id}_{ \mathcal{G}_{i}}\rangle_{\underline{\varepsilon}}}{\sharp I^{-}}. \tag{4.7}\]
#### 4.2.2. Condition (4.6) : one parameter degenerations
We will associate to each generator \(v_{\underline{a}}\) of \(\sigma^{\vee}\) a subsheaf \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\). Geometrically, the idea is that \(v_{\underline{a}}\in\mathfrak{k}_{0}\) generates a one-parameter subgroup of \(K_{0}\) and a degeneration of \(E\) to \(\mathcal{F}\oplus E/\mathcal{F}\), to which is assigned the Hilbert-Mumford weight \(\mu_{L_{\underline{\varepsilon}}}(\mathcal{F})-\mu_{L_{\underline{ \varepsilon}}}(E)<0\). We let \(v_{\underline{a}}=\sum_{i=1}^{\ell}a_{i}\operatorname{Id}_{\mathcal{G}_{i}} \in\sigma^{\vee}\) a generator as above, and define
\[\mathcal{F}_{\underline{a}}=\bigoplus_{i\in I^{+}}\mathcal{G}_{i},\]
as a _smooth_ complex vector bundle, and will show that \(\overline{\partial}_{E}(\mathcal{F}_{\underline{a}})\subset\Omega^{0,1}(X, \mathcal{F}_{\underline{a}})\). This implies that \(\mathcal{F}_{\underline{a}}\in\mathfrak{E}_{[\omega]}\) as a _holomorphic_ vector bundle, with Dolbeault operator the restriction of \(\overline{\partial}_{E}\). Recall that \(\overline{\partial}_{E}=\overline{\partial}_{0}+\gamma=\overline{\partial}_{0 }+\sum_{b_{ij}\neq 0}\gamma_{ij}\), that is, by choice of \(b\), the weights that appear in the weight decomposition of \(\gamma\) are the same as those that appear in the decomposition of \(b\). In the matrix blocks decomposition given by \(\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\), the operator \(\overline{\partial}_{0}\) is diagonal, and thus sends \(\mathcal{F}_{\underline{a}}\) to \(\Omega^{0,1}(X,\mathcal{F}_{\underline{a}})\). We need to show that for each \(j\in I^{+}\), \(\gamma(\mathcal{G}_{j})\subset\Omega^{0,1}(X,\mathcal{F}_{\underline{a}})\). As \(v_{\underline{a}}\in\sigma^{\vee}\), it satisfies, for any generator \(m_{ij}\) of \(\sigma\) :
\[\langle m_{ij},v_{\underline{a}}\rangle\geq 0,\]
that is, for all \((i,j)\) with \(i<j\) and \(b_{ij}\neq 0\),
\[a_{i}-a_{j}\geq 0.\]
As \(j\in I^{+}\), this implies \(a_{i}\geq a_{j}>0\). Hence, if \(i<j\) is such that \(b_{ij}\neq 0\), then \(i\in I^{+}\). Equivalently, for \(i<j\), \(i\in I^{-}\) implies \(\gamma_{ij}=0\), and thus we see that \(\gamma(\mathcal{G}_{j})\subset\Omega^{0,1}(X,\mathcal{F}_{\underline{a}})\), and hence \(\overline{\partial}_{E}(\mathcal{F}_{\underline{a}})\subset\Omega^{0,1}(X, \mathcal{F}_{\underline{a}})\).
Then we have \(\mathcal{F}_{\underline{a}}\in\mathfrak{E}_{[\omega]}\) and Condition (4.6) gives
\[\mu_{L_{\underline{z}}}(\mathcal{F}_{\underline{a}})<\mu_{L_{\underline{z}}}(E),\]
which, by the see-saw property of slopes (see e.g. [19, Corollary 3.5] ), is equivalent to
\[\mu_{L_{\underline{z}}}(\mathcal{F}_{\underline{a}})<\mu_{L_{\underline{z}}}( E/\mathcal{F}_{\underline{a}})\]
and thus (recall we assume \(\operatorname{rank}(\mathcal{G}_{i})=1\)):
\[\frac{\sum_{i\in I^{+}}\mu_{L_{\underline{z}}}(\mathcal{G}_{i})}{\sharp I^{+} }<\frac{\sum_{i\in I^{-}}\mu_{L_{\underline{z}}}(\mathcal{G}_{i})}{\sharp I^{- }}. \tag{4.8}\]
### Conclusion
By Chern-Weil theory, using the fact that \(A_{0}\) and \(A_{\varepsilon,0}\) are gauge-equivalent by point (2) of Proposition 3.2, we have
\[\mu_{L_{\underline{z}}}(\mathcal{G}_{i}) = c_{1}(\mathcal{G}_{i})\cdot[\omega_{\underline{z}}]^{n-1}\] \[= \frac{1}{2\pi}\langle\mu_{\underline{z}}(0),\mathrm{Id}_{ \mathcal{G}_{i}}\rangle_{\underline{z}}+\frac{\underline{c}_{\underline{z}}}{ 2\pi}\langle\mathrm{Id}_{E},\mathrm{Id}_{\mathcal{G}_{i}}\rangle_{\underline{ z}}.\]
We also have for all \(i\)'s,
\[\langle\mathrm{Id}_{E},\mathrm{Id}_{\mathcal{G}_{i}}\rangle_{\underline{z}}= \frac{[\omega_{\underline{z}}]^{n}}{n!}.\]
Inequality (4.8) implies Inequality (4.7), which concludes the existence of \(b_{\underline{z}}\in\mathcal{Z}\) such that \(\mu_{\underline{z}}(b_{\underline{z}})=0\). Then, by construction, the associated connections \(A_{\underline{z},b_{\underline{z}}}\) provide HYM connections with respect to \(\omega_{\underline{z}}\) on bundles gauge equivalent to \(E\), where the gauge equivalences are given by elements in the finite dimensional Lie group \(\operatorname{Aut}(\operatorname{Gr}(E))\). To show the convergence property of the connections as stated in Theorem 1.2, consider a path \(t\mapsto[\omega_{t}]\) in a connected component of \(\mathcal{S}_{s,R}\), converging to \([\omega]\) when \(t\mapsto 0\). This corresponds to a path \(t\mapsto\underline{\varepsilon}(t)\in U\). It is then enough to show that the connections \(A_{\underline{z}(t),b_{\underline{z}(t)}}\) converge to \(A_{0}=A_{0,0}\) in any \(L^{2,l}\) Sobolev norm. By construction of \(A_{\underline{z},b}\) in Proposition 3.2, it is enough to prove that \(b_{\underline{z}(t)}\) converges to \(0\) when \(\underline{\varepsilon}(t)\to 0\). Recall from [8, Theorem 3.2 and Section 7.1] that \(B\) can be chosen so that \(\mu_{\underline{z}}^{*}\) is given by
\[\mu_{\underline{z}}^{*}(b^{\prime})=\mu_{\underline{z}}^{*}(0)+\sum_{ij}||b^{ \prime}_{ij}||_{\underline{z}}^{2}\cdot m_{ij}, \tag{4.9}\]
for some norm \(||\cdot||_{\underline{z}}\) that depends continously on \(\underline{\varepsilon}\). As \(\mu_{\underline{z}}(0)\underset{\underline{\varepsilon}\to 0}{\longrightarrow}\mu_{0}(0)=0\), the equation \(\mu_{\underline{z}}^{*}(b_{\underline{z}})=0\) implies that for all \((i,j)\), \(||(b_{\underline{z}})_{ij}||_{\underline{z}}\underset{\underline{z}\to 0}{ \longrightarrow}0\). As the norms \(||\cdot||_{\underline{z}}\) vary continuously, they are mutually bounded, and thus \(b_{\underline{z}(t)}\underset{t\to 0}{\longrightarrow}0\), which concludes the proof of the convergence property in Theorem 1.2. What remains is the semi-stable case. We need to show that if \([\omega_{\underline{z}}]\in\mathcal{S}_{ss,R}\), \(E\) is semi-stable for \([\omega_{\underline{z}}]\). The only remaining case to study is when for all \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\), \(\mu_{L_{\underline{z}}}(\mathcal{F})\leq\mu_{L_{\underline{z}}}(E)\), with at least one equality. In that situation, the discussion in the last two sections shows that \(-\mu_{\underline{z}}(0)\in\sigma\) will lie in the boundary of \(\sigma\). Hence, by Proposition 4.1, there is a boundary point \(b^{\prime}\in\overline{\mathcal{Z}}\) in the orbit closure of \(b\) with \(\mu_{\underline{z}}(b^{\prime})=0\). This point corresponds to a HYM connection on a vector bundle that is then polystable for the holomorphic structure given by \(A_{\underline{z},b^{\prime}}^{0,1}\), with respect to \(L_{\underline{z}}\). As this bundle corresponds to a boundary point in the complex orbit of \(b\), it admits a small complex deformation to \(E\). As semi-stability is an open condition, we deduce that \(E\) is itself semi-stable for \(L_{\underline{z}}\).
**Remark 4.4**.: A little adaptation of the previous arguments leads to the following result. Assume that \(t\mapsto[\omega_{\underline{\varepsilon}(t)}]\) is a path in a connected component of \(\mathcal{S}_{s,R}\) that converges to \([\omega_{\underline{\varepsilon}(0)}]\in\mathcal{S}_{ss,R}\). Then, we can find a filtration
\[0\subset\mathcal{F}_{1}\subset\ldots\subset\mathcal{F}_{l}=E\]
of \(E\) by subbundles \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\) such that for \(i\in\{1,\ldots,l\}\),
\[\mu_{L_{\underline{\varepsilon}(0)}}(\mathcal{F}_{i})=\mu_{L_{\underline{ \varepsilon}(0)}}(E) \tag{4.10}\]
and \(\operatorname{rank}(\mathcal{F}_{i-1})\) is maximal amongst the ranks of elements \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\) satisfying (4.10) and \(\mathcal{F}\subset\mathcal{F}_{i}\). We then set
\[\mathcal{G}:=\bigoplus_{i=1}^{l}\mathcal{F}_{i}/\mathcal{F}_{i-1}.\]
Then, there is a path \((A_{t})_{t\in(0,1]}\) of HYM connections on \(E\) with respect to the Kahler metrics \((\omega_{\underline{\varepsilon}(t)})_{t\in(0,1]}\) such that \(\lim_{t\to 0}A_{t}=A_{0}\) is a HYM connection on \(\mathcal{G}\to(X,[\omega_{\underline{\varepsilon}(0)}])\), that is, the holomorphic connection of \(A_{0}\) is equivalent to the one on \(\mathcal{G}\).
|
2304.12739 | Adaptive Representations of Sound for Automatic Insect Recognition | Insect population numbers and biodiversity have been rapidly declining with
time, and monitoring these trends has become increasingly important for
conservation measures to be effectively implemented. But monitoring methods are
often invasive, time and resource intense, and prone to various biases. Many
insect species produce characteristic sounds that can easily be detected and
recorded without large cost or effort. Using deep learning methods, insect
sounds from field recordings could be automatically detected and classified to
monitor biodiversity and species distribution ranges. We implement this using
recently published datasets of insect sounds (Orthoptera and Cicadidae) and
machine learning methods and evaluate their potential for acoustic insect
monitoring. We compare the performance of the conventional spectrogram-based
audio representation against LEAF, a new adaptive and waveform-based frontend.
LEAF achieved better classification performance than the mel-spectrogram
frontend by adapting its feature extraction parameters during training. This
result is encouraging for future implementations of deep learning technology
for automatic insect sound recognition, especially as larger datasets become
available. | Marius FaiÃ, Dan Stowell | 2023-04-25T11:28:13Z | http://arxiv.org/abs/2304.12739v1 | # Adaptive Representations of Sound
###### Abstract
Insect population numbers and biodiversity have been rapidly declining with time, and monitoring these trends has become increasingly important for conservation measures to be effectively implemented. But monitoring methods are often invasive, time and resource intense, and prone to various biases. Many insect species produce characteristic sounds that can easily be detected and recorded without large cost or effort. Using deep learning methods, insect sounds from field recordings could be automatically detected and classified to monitor biodiversity and species distribution ranges. We implement this using recently published datasets of insect sounds (Orthoptera and Cicadidae) and machine learning methods and evaluate their potential for acoustic insect monitoring. We compare the performance of the conventional spectrogram-based audio representation against LEAF, a new adaptive and waveform-based frontend. LEAF achieved better classification performance than the mel-spectrogram frontend by adapting its feature extraction parameters during training. This result is encouraging for future implementations of deep learning technology for automatic insect sound recognition, especially as larger datasets become available.
### Author summary
Insects are crucial members of our ecosystems. These often small and evasive animals have a big impact on their surroundings, and there is widespread concern about possible population declines. However, it can be difficult to monitor them in sufficient detail. We investigated an under-used evidence stream for insect monitoring: their sounds. Combining recent advances in deep learning, with newly curated open datasets of insect sound, we were able to train machine learning systems to identify insect species with encouraging strong performance. Since insect sounds are very different from human sounds, a key part of our investigation was to compare a standard (spectrographic) representation of sound against an automatically-optimized representation called LEAF. Across three different datasets we found LEAF led to more reliable species recognition. Our work demonstrates that sound recognition can be effective as a new evidence stream for insect monitoring.
### Introduction
The insect order Orthoptera forms the animal clade with the most species capable of acoustic communication, with about 16,000 species using acoustic signals for sexual communication, and even more species displaying acoustic defensive signaling [1]. The sounds are produced by stridulation, where body parts are rubbed against each other to create audible vibrations, with one body part having a row of fine teeth and the other being equipped with a spectrum that sets the teeth into vibration. Most of the 3200 species in the family Cicadidae produce sound by rapidly deforming tymbal membranes, producing series of loud clicking sounds that set the tymbals into resonance [2, 3, 4]. Many of these sounds are species-specific, and in some cases are key criteria for species identification [5].
Declines in insect population numbers have been receiving wide attention in the scientific community as well as the public, but many of these reports only sample a small number of representative species or focus on limited geographic locations [6, 7]. To implement effective conservation efforts, populations need to be monitored more closely and widely across species and geographic locations [6]. Insects are an especially difficult group to detect with conventional monitoring methods, mainly due to their small size, camouflage and cryptic lifestyles in often inaccessible and difficult environments such as tropical rainforests [8]. Such species might be detected much more easily by the sounds they produce. Acoustic monitoring methods focused on Orthoptera have been successfully used for detection of
presence and absence of species, determining distribution ranges, evaluating quality and deterioration of habitats and detection of otherwise cryptic species [9], since they can function as indicator species [10]. Additionally, this method is mostly non-invasive, less elaborate than other common monitoring approaches and could be automated to a high degree [8]. Video monitoring in comparison, is highly dependent on lighting conditions and direct visual contact with the subjects, and consumes more energy as well as data storage [11].
In the present work, we develop a robust method for acoustic classification of orthopteran and cicada species, using a deep learning method that can adapt to acoustic characteristics of the targeted insects. Previous attempts of identifying Orthoptera by their sounds have focused on using manual extraction of sound features such as carrier frequency or pulse rates [9]. These features must be manually selected and their parameters defined before use for automatic classification. These features might not perform well in all situations however, such as when background noise disturbs waveform feature measurements, when non-target species produce very similar sounds, or when target species show strong variation of certain parameters. For example, ambient temperature during the recording can influence the frequency of Orthoptera song as a result of being poikilothermic organisms [12]. Orthoptera regulate their speed of muscular contraction with the ambient temperature during song production. This results in higher frequency sounds and especially increased pulse rates with higher temperatures in most Orthoptera [12, 13].
Deep learning methods are a more recent promising approach for acoustic monitoring tasks, as they can classify complex acoustic signals with high accuracy and little to no manual preprocessing of the input data [14]. Combined with sound event detection (SED), long-form field recordings can be classified without any manual extraction of features or relevant clips to be identified. There are however a number of challenges to overcome, some practical and some related to the specific species traits. For applying machine learning methods, large, diverse and balanced annotated datasets are needed to train and test the algorithms.
Before an audio recording can be fed into a neural network to be analyzed, the high-resolution waveform has to be reduced to a feature space that can be processed and interpreted by a neural network [15, 16]. The common approach for audio classification tasks has historically been inspired by the human perception of frequency and loudness. This is in part due to the focus of many of the early audio classification tasks that were heavily
researched: speech or language recognition, or music-based analysis tasks [16]. All the relevant acoustic information for these tasks is contained in and optimized for human auditory perception, or vice versa. Humans experience frequency and loudness on non-linear scales [17]. Linear changes in frequency towards the lower frequency spectrum generally sound more obvious, while the same difference in frequency applied to a higher register can be undetectable to the human ear. In compressing the spectral energy of a signal for analysis in a neural network, these characteristics of human perception are applied with the use of the so-called mel-filter banks.
First, the input audio waveform is transformed into a spectrogram using the short-time Fourier transform (STFT), dissecting the signal into pure sine-wave frequencies and their respective energies [15, 17]. Then, the mel-filter banks are applied, consisting of triangular bandpass filters, spaced along a logarithmic scale over the sampled frequency spectrum. These filters pool the energy of all frequencies that lie within their range, using a windowing function. This reduces the resolution from a high sample rate down to a number of frequency bins that can be easily analyzed. Following this, loudness compression is applied, also based on the non-linearity of human hearing [15], resulting in a mel-spectrogram, that can essentially be treated like an image by a neural network. These processing methods, especially the filter banks, rely on hand-crafted parameters, that may not relate in any way to the sounds to be analyzed in a specific task. The logarithmic frequency scaling for example results in high spectral resolution in lower frequency ranges, but groups together larger and larger frequency ranges in higher registers, thereby potentially obscuring relevant high-frequency information and focusing on lower frequency bands when they do not necessarily contain relevant information (Fig. 1).
Insect sounds are not generated using a source-filter mechanism as in mammals or birds, but with stridulatory or tymbal mechanisms that create a different structure of frequencies and overtones [2, 3, 4, 13, 18, 19]. Generally, insect sounds are much higher in frequency than most mammal or bird sounds, with many species producing ultrasonic sounds [2, 20, 21]. This emphasis on high-frequency sounds, sometimes entirely and far outside of the human hearing range (\(\sim\)20 Hz - 20 kHz) could have an impact on the performance of audio classification networks, depending on their approach. It is likely that the Mel-filter bank approach based on human perception is not optimal to recognize and discriminate between subtle differences in high frequencies for many insect sounds, even if it works well enough for other sounds such as birdsong.
Recent work in deep learning has introduced adaptive, waveform-based methods such as LEAF [15], replacing the predefined spectrogram calculation with a parametric transform whose parameters are optimized at the same time as the rest of the network. These could potentially optimize their extraction of audio features to better fit insect sounds. The LEAF frontend allows the adjustment of filter frequency and bandwidth as well as normalization and time-pooling parameters during training to adapt to the data [15]. This frontend has been evaluated on a diverse set of audio classification tasks involving human-centric sound such as
Figure 1: Two spectrograms of the same recording of _Gryllus campestris_. Spectrogram A displays the frequency axis linearly in Hz. Spectrogram B uses the mel frequency scale, which compresses the frequency axis to show higher resolution in lower frequency bands than in higher bands, mimicking the human perception of frequency. Both spectrograms display the same spectrum of frequencies. Due to the mostly high-frequency information and empty low frequencies in this recording, the Mel spectrogram B obscures a large amount of information compared to the linear spectrogram A.
language, music, emotion, speaker recognition and more, and has shown improved performance over the standard Mel spectrogram approach in many cases [15]. But so far, it has not been evaluated on classification tasks involving sound sources that are less fit to the human perception of sound. For uses like insect species recognition that are much higher pitched and structured differently than human sounds, this frontend could be especially advantageous. It could adapt to the characteristics of insect sounds by learning increasing spectral resolution in higher frequency ranges, selecting and focusing on meaningful frequency bands that are otherwise pooled together, and learning how to ideally pool and compress these bands individually. Accordingly, the high resolution in lower frequency ranges that is present in Mel-filter bank approaches could be reduced or completely omitted, since it is rarely present in insect sounds [20].
The potential of deep learning methods for insect sound classification has not been studied extensively yet, especially their performance with adaptive frontends and extended sample rates/frequency ranges. In the present work, the performance of two different machine learning approaches will be tested in species classification of insect sound recordings, with only one species present at once. Complicating environmental conditions like distance from the recorder or background noise will be introduced by data augmentation methods to increase the diversity of the data set and improve the generalizability of the networks. The goal is to explore the potential for using deep learning methods to classify Orthoptera and Cicadidae with sounds recorded by entomologists and citizen scientists, and to evaluate the potential advantage of adaptive frontends for feature extraction of non-human, high-frequency sounds.
### Methods
We tested the performance of two audio feature extraction methods acting as frontends to a convolutional neural network. We compared the classic mel-spectrogram frontend to the adaptive and waveform-based frontend LEAF. It is initialized to function similarly to the mel frontend before training, but its parameters can be adjusted during training [15]. As a backend classifier, a convolutional neural network optimized for audio classification was implemented and adapted [22]. The frontends were tested on three increasingly large datasets of insect recordings.
#### InsectSet32
Since larger collections of insect recordings have only recently become publicly available, the dataset used for initial tests ("InsectSet32") was compiled from private collections of Orthoptera and Cicadidae recordings (Orthoptera dataset by Baudewijn Ode and Cicadidae dataset by Ed Baker, both unpublished). Only files in WAV audio format with sample rates of 44.1 kHz or higher were included. All files were converted to mono and the sample rates were standardized to 44.1 kHz by down sampling higher resolution recordings. The files were manually auditioned to exclude files that contained strong noise interference, sounds of multiple species or other audio distortions and artifacts. Many recordings included voice over commentary at the beginning of the recordings. Only the last ten seconds of audio from these recordings were used, to automatically exclude the commentary. Only species with at least four usable audio recordings were included in the final dataset. Overall, 32 species were selected, with 335 files and a total recording length of 57 minutes and four seconds (Table 1). Between species, the number of files ranges from four to 22 files and the length from 40 seconds to almost nine minutes of audio material for a single species. The files range in length from less than one second to several minutes.
For training and evaluating the two frontends, InsectSet32 was split into the training, validation and test sets [11]. Due to the low number of files in some classes, the split into the three subsets was done for all classes individually to ensure that each class is represented in
\begin{table}
\begin{tabular}{l c c|l c c|l c c} \hline \hline
**Baudewijn Ode - Orthoptera** & & & \multicolumn{2}{l}{**Ed Baker - Cicadidae**} & & & \\ \hline
**Species** & **n** & **min:s** & **Species** & **n** & **min:s** & **Species** & **n** & **min:s** \\ \hline Chorthippus bijuttulus & 20 & 3:43 & Azanicola zuluensis & 4 & 0:40 & Platyleura divisa & 6 & 1:00 \\ Chorthippus brunneus & 13 & 2:15 & Brevisiana brevis & 5 & 0:50 & Platyleura haglundi & 5 & 0:50 \\ Gryllus camexpstris & 22 & 3:38 & Kikibia muta & 6 & 1:00 & Platyleura hiripennis & 6 & 0:54 \\ Nembbius sylvestris & 18 & 8:54 & Myopsalta leona & 7 & 1:10 & Platyleura intercapedinis & 5 & 0:50 \\ Oecanthus pellucens & 14 & 4:27 & Myopsalta longicauda & 4 & 0:40 & Platyleura plumosa & 19 & 3:09 \\ Pholioptera griseoaptera & 15 & 1:54 & Myopsalta mackinlayi & 7 & 1:08 & Platyleura sp04 & 8 & 1:20 \\ Pseudochromippus parallelus & 17 & 2:01 & Myopsalta melanobasis & 5 & 0:43 & Platyleura sp10 & 16 & 2:24 \\ Roeseliana roeselli & 12 & 1:03 & Myopsalta xerograsidia & 6 & 1:00 & Platyleura sp11 c Ghirtipennis & 4 & 0:40 \\ Tettigonia viridissima & 16 & 1:34 & Platyleura capensis & 6 & 1:00 & Platyleura sp12 c Ghirtipennis & 10 & 1:40 \\ & & & Platyleura cfeatenata & 22 & 3:34 & Platyleura sp13 & 12 & 2:00 \\ & & & Platyleura chalybeaa & 7 & 1:10 & Pycna semiclara & 9 & 1:30 \\ & & & Platyleura deusta & 9 & 1:23 & & & \\ \hline \end{tabular}
\end{table}
Table 1: InsectSet32: 335 files from 32 species with a total recording length of 57 minutes and four seconds were selected from two different source datasets (Orthoptera dataset by Baudewijn Ode and Cicadidae dataset by Ed Baker). Number of files (n) and total length of recordings (min:s) per species.
all subsets. The resulting split amounts to 62.7% of the files being used for training, 15.2% for validation and 22.1% for testing. The dataset is publicly available on zenodo.org [23].
#### InsectSet47
After initial tests on InsectSet32 were conducted, a large collection of high-quality Orthoptera recordings by experts and citizen scientists was published on xeno-canto.org. From this collection, WAV files with sample rates of at least 44.1 kHz were downloaded and manually auditioned to compile a more diverse dataset together with the recordings from InsectSet32. Many recordings had been filtered or upsampled to 44.1 kHz by the uploaders, which was evident by a lack of audio information in certain frequency areas (most commonly above 16 kHz due to initially lower sample rates). Only full spectrum recordings were selected.
The files include sound snippets of single insect calls only seconds in length as well as long-term recordings of insect songs reaching up to 20 minutes. Many of the longer files included periods of silences without insect sounds. To exclude these silent periods, files that contained periods without insect sound of more than five seconds were edited into one or more files that contained only the insect sounds. The resulting edited snippets from one original recording were treated as one audio example to prevent them from ending up in multiple data sub-sets (train, test, validation) during the model training and evaluation process. Only species with at least ten usable recordings were included in the dataset. The recordings from the source datasets used for InsectSet32 (by Baudewijn Ode and Ed Baker) were also included in this selection process. Due to the more detailed editing process used for Dataset47, more audio material was gathered this time, but less species were included due to the higher minimum number of files per species. Therefore, InsectSet32 is only partially included in InsectSet47. Overall, 47 species were selected for InsectSet47, with overall 1006 files and a total recording length of 22 hours (Table 2).
### InsectSet66
InsectSet47 was expanded to include even more species and audio examples with citizen scientist recordings from iNaturalist.org. More frequently than in the previous source collections, many recordings had been filtered, data-compressed or heavily edited, including time-stretching and pitch shifting. These files were not selected. Additionally, a substantial number of recordings were submitted multiple times as separate observations. These recordings were only included once in the final dataset, unless they were logged as multiple different species, in which case they were completely excluded. Otherwise, the same selection process as before was used and the dataset was expanded to include 66 species ("InsectSet66"), 1554 recordings and a total length of over 24 hours (Table 3). Between species, the number of files ranges from ten files and a minimum length of 80 seconds to 152 files and almost 98 minutes of audio material for a single species.
\begin{table}
\begin{tabular}{l l l l l|l l l} \hline
**Species** & **n** & **min:s** & **Species** & **n** & **min:s** & **Species** & **n** & **min:s** \\ \hline Orthippus bightutulus & 52 & 29:49 & Acheta domesticus & 23 & 55:38 & Gomphoecurs sibiricus & 14 & 26:04 \\ Stenobothrus stigmaticus & 39 & 5:31 & Occanthus spellucens & 22 & 28:38 & Barbitsites yersini & 14 & 19:59 \\ Chorthippus mollis & 38 & 27:35 & Platylpelura cf catenata & 22 & 17:46 & Pholidoptera aptera & 13 & 10:31 \\ Gryllus campestris & 38 & 94:21 & Omoecstus ruffipes & 21 & 16:28 & Pholidoptera littoralis & 13 & 4:00 \\ Conocophalus fuscus & 34 & 53:06 & Pholidoptera griscooptera & 21 & 11:46 & Mertioptera brachyptera & 13 & 20:29 \\ Roeseliana roeselii & 33 & 33:39 & Chorthippus apricarius & 20 & 28:27 & Leptopbys punctatissima & 13 & 26:47 \\ Pseudochlorippus parallelus & 33 & 24:36 & Phancoptera falcata & 20 & 28:29 & Pseudochlorippus montanus & 12 & 11:29 \\ Chorthippus brunneus & 32 & 20:58 & Myrmleootertis maculatus & 20 & 55:06 & Platylpelura sp13 & 12 & 7:01 \\ Tettigonia cantans & 32 & 57:15 & Platylpelura plumosa & 19 & 14:41 & Chorthippus albomarginatus & 11 & 40:29 \\ Decticus vermucivorus & 31 & 71:30 & Stenobothrus lineatus & 18 & 32:41 & Eubholdoptera schmidti & 11 & 9:39 \\ Ephippiger diurmus & 29 & 39:33 & Conocophalus dorsalis & 18 & 23:07 & Melanogrylus deserts & 11 & 25:24 \\ Gomphocerippus rufus & 28 & 29:38 & Chrysocrhraon dispar & 17 & 15:35 & Tylopsis illifolia & 11 & 3:30 \\ Nembbius sylvestris & 28 & 38:11 & Gryllus bimaculatus & 17 & 27:32 & Omoecstus petraeux & 10 & 9:21 \\ Gampscelis glabra & 26 & 55:01 & Platylpelura sp10 & 17 & 17:55 & Chorthippus vaggans & 10 & 11:43 \\ Omoecstus viridulans & 25 & 45:25 & Phancoptera nana & 16 & 29:53 & Platylpelura sp12 cf hirtipennis & 10 & 7:41 \\ Tettigonia viridissima & 24 & 25:30 & Platyleelis albopunctata & 15 & 24:44 & & \\ \hline \end{tabular}
\end{table}
Table 2: InsectSet47: 1006 files from 47 species with a total recording length of 22 hours were selected mainly from xeno-canto.org, as well as two private collections (Orthoptera dataset by Baudewijn Ode and Cicadidae dataset by Ed Baker). Number of files (n) and total length of recordings (min:s) per species.
InsectSet47 and InsectSet66 were split into the training, validation and test sets while ensuring a roughly equal distribution of audio files and audio material for every species in all three datasets. To achieve this, files were sorted by file length for each species separately. They were then distributed into the three datasets by following a repeating pattern. The two longest files are moved into the training set, the third largest into the validation set, the fourth largest into the test set. The files at positions five and six go into the training set again, the seventh largest into the validation set, the eighth into the test set. The ninth and tenth files are moved into the training set and the pattern is repeated for the remaining files if there are more than ten (1: train, 2: train, 3: val, 4: test, 5: train, 6: train, 7: val, 8: test, 9: train, 10: train, 11: repeat from 1). This resulted in a 60/20/20 split (train/validation/test) by file number and
\begin{table}
\begin{tabular}{l l l l l|l l l} \hline
**Species** & **n** & **h:min:s** & **Species** & **n** & **h:min:s** & **Species** & **n** & **h:min:s** \\ \hline Yoyetta celis & 152 & 0:11:16 & Alecta curivcosta & 23 & 0:04:04 & Gomboreus sibiricus & 14 & 0:26:05 \\ Gryllus camepstris & 57 & 1:37:39 & Platypelura efcatenata & 22 & 0:17:47 & Barbitistes yersini & 14 & 0:19:59 \\ Chorthippus biguttulus & 53 & 0:30:25 & Omoecstus rufipes & 22 & 0:16:34 & Psaldo plaga & 14 & 0:04:21 \\ Galanga labeculata & 43 & 0:06:16 & Chorthippus apricarius & 21 & 0:28:35 & Popplessita notialis & 14 & 0:02:58 \\ Yoyetta repeters & 40 & 0:05:23 & Myrmleotetotix maculatus & 21 & 1:05:37 & Pholidoptera litterorials & 13 & 0:04:00 \\ Chorthippus mollis & 39 & 0:27:50 & Cicada orni & 21 & 0:06:50 & Pasucheodhorhippus montanus & 13 & 0:11:36 \\ Stenobothrus stigmatisus & 39 & 0:05:31 & Phaneroptera falcata & 20 & 0:28:30 & Leptopbyes puncatis-sima & 13 & 0:26:48 \\ Pseudochorhippus parallleus & 37 & 0:25:08 & Gryllus bimaculatus & 20 & 0:28:44 & Cyclochilia austrasiaae & 13 & 0:01:53 \\ Roeseliana roeselii & 37 & 0:34:34 & Platypelura plumosa & 19 & 0:14:42 & Platypelura sp13 & 12 & 0:07:01 \\ Tettigonia cantans & 37 & 0:58:10 & Stenobothrus lineatus & 19 & 0:34:27 & Chorthippus albomar-giants & 11 & 0:40:29 \\ Conoccephalus fiscus & 36 & 0:53:34 & Clinopsalta autamma & 19 & 0:04:16 & Eupholidoptera schmidti & 11 & 0:09:40 \\ Chorthippus brunneus & 35 & 0:21:57 & Phaneroptera nana & 18 & 0:30:50 & Melanogyllus desertus & 11 & 0:25:24 \\ Decticus vernucivorus & 34 & 1:15:04 & Conoccephalus dorsalis & 18 & 0:23:07 & Tylopsis lilifolia & 11 & 0:03:30 \\ Tettigonia viridissima & 33 & 0:27:26 & Platypelura sp10 & 17 & 0:17:55 & Ruspolia nitidula & 11 & 0:12:35 \\ Ephippiger diurnus & 31 & 0:39:51 & Chrysochron dispar & 17 & 0:15:36 & Dicoroprocta eugraphica & 11 & 0:05:07 \\ Nembbius sylvestris & 30 & 0:38:44 & Pholidoptera aptera & 16 & 0:10:55 & Platypelura & 10 & 0:07:42 \\ Oceanthus pellucens & 29 & 0:30:32 & Eumolioogrylus borglasensis & 16 & 0:10:56 & Omoecstus petraeus & 10 & 0:09:22 \\ Gomboceripus rufus & 28 & 0:29:38 & Platypelaxis albumoctata & 15 & 0:24:45 & Stauroderus scalars & 10 & 0:20:43 \\ Pholidoptera griscoaptera & 27 & 0:14:07 & Atrapsalta corticina & 15 & 0:02:15 & Chorthippus vaggans & 10 & 0:11:43 \\ Omoecstus viridulus & 27 & 0:45:48 & Neotibicen pruninosus & 15 & 0:04:41 & Bicolorana bicolor & 10 & 0:09:19 \\ Gampsolecis glabra & 27 & 0:55:18 & Atrapsalta encaustica & 15 & 0:04:33 & Popplessita aeroides & 10 & 0:01:46 \\ Acheta domesticus & 24 & 0:56:48 & Metrioptera brachyptera & 14 & 0:20:56 & Atrapsalta collina & 10 & 0:01:20 \\ \hline \end{tabular}
\end{table}
Table 3: InsectSet66: 1554 files from 66 species with a total recording length of 24 hours and 32 minutes were selected from five different source datasets (Orthoptera and Cicadidae datasets from iNaturalist, Orthoptera dataset from xeno-canto, Orthoptera dataset by Baudewijn Ode and Cicadidae dataset by Ed Baker). Number of files (n) and total length of recordings (h:min:s) per species.
a 64/19.5/16.5 split by file length. InsectSet47 and InsectSet66 are publicly available on
zenodo.org [24].
Since the recordings varied in duration, they had to be divided into segments of a fixed length that can be fed into the network. A length of five seconds was chosen, as most calls were either short and rhythmical or long and static. Repeating sequences of longer than five seconds were not commonly observed in the dataset, therefore it was assumed that a length of five seconds would not eliminate species-specific rhythmic characteristics in the calls. Short files were looped until they reached five seconds in length. Longer files were sequentially spliced into chunks of five seconds, with an overlap of 3.75 seconds. When the splitting window reached the end of a file, the beginning of the recording was wrapped around to extend the chunk to five seconds, as long as the minimum remaining time of a chunk was at least 1.25 seconds.
For deep learning, it is standard practice to expand modest-sized training data through a synthetic process of audio augmentation, and we applied this to all three datasets. The training set of InsectSet32 was expanded with ten generations of audio augmentations using the python package "audiomentations" (github.com/iver56/audiomentations). The processing steps included "FrequencyMask", which erases a band of frequencies around a random center frequency, with bandwidth as a parameter that can be randomized within a defined range (0.06 - 0.22). This augmentation step was applied with a chance of 50%. After frequency masking, the signal was mixed with Gaussian noise, using the "AddGaussianSNR" function. The ratio of signal to noise was randomized between 25 and 80 dB. This ratio was tuned to range from barely noticeable addition of noise to heavy noise disturbance without obscuring the relevant audio information in noisy source recordings. This was applied to every file. After mixing with noise, the files were augmented with impulse responses (IRs) recorded in natural outside settings. The IRs were selected from a dataset of recordings made in various locations at high sample rates [25]. Eleven IRs from three different outside locations (two forest locations, one campus location) were selected from this dataset and randomly applied during augmentation with a chance of 70%. The IR-processed files were mixed with their original version at random mix ratios to achieve additional variation in the severity of the effect.
For InsectSet47 and InsectSet66, online data augmentation was used, due to the vastly increased amount of audio material. From the package "torch_audiomentations" (github.com/asteroid-team/torch-audiomentations), the functions "AddColoredNoise" and "ApplyImpulseResponse" were used. Their parameters were tweaked to mimic the augmentations used in the smaller dataset. Unfortunately, a functionally similar function to the frequency masking used on the smaller dataset was not available in the package. As an alternative, the opportunity to vary the frequency distribution of the noise augmentation was used as an alternative to frequency masking. The frequency power decay was randomized between -2 and 1.5. The signal to noise ratio was randomized between 25 and 40 dB, with an overall probability of augmentation of 90%. Impulse responses were applied with a probability of 70%, with delay compensation enabled. The same IR files as in the smaller dataset were used [25] and mixed at a randomized mix ratio. Both augmentations were applied and randomized per example in a batch (Fig. 2).
The frontends that were compared are the conventional mel spectrogram included in the python package torchaudio (MelSpectrogram) and the adaptive, waveform-based frontend LEAF [15]. The mel spectrograms were generated based on the audio waveforms before the
Fig. 2: Example of the data augmentation workflow used on the training set (InsectSet47 and InsectSet66). Noise is added at a randomized signal-to-noise ratio and frequency distribution. Then an impulse response from an outdoor location is applied at a randomized mix ratio.
files were input into the convolutional network. When using the LEAF frontend, the full waveforms were directly input to the network and then processed by the frontend, since many of its parameters like filter frequency and bandwidth, per-channel compression and normalization, and lowpass pooling can be learned and therefore need to be part of the network to benefit from gradient descent learning. The initialization parameters of the two frontends were defined as similarly as possible to create a fair comparison. The files were imported at a sample rate of 44.1 kHz. They were transformed from an input shape of [1; 220500] (one channel mono audio; 44.1 kHz for five seconds) to a representation shape of [1; 64; 1500] by the frontends, with 64 filter bands on the frequency axis and 1500 steps dividing the time axis. The window length was set at twice the length of the stride for both frontends (stride: 3.335 ms, window size: 6.67 ms). The filter bank used in the LEAF frontend was initialized on the same scale as the mel frontend, between 0 and 22.05 kHz. The inputs were combined into batches of 14 and fed into the network.
Additional tests were conducted to test the impact of the filterbank and PCEN components that make up the LEAF frontend. The models were trained on InsectSet47 and InsectSet66 using the same model architecture and LEAF frontend configuration as before, but the adjustment of either the filterbank or PCEN parameters during the training process were deactivated. This means that in the test case "leafFB" the filterbank parameters were adjusted during training, but the compression parameters of the PCEN component remained in the initialized state. In the test case "leafPCEN", the filterbank and temporal pooling parameters remained frozen in their initialized state, while only the PCEN compression parameters of the frontend were trained.
The network backend was adapted from a convolutional neural network created using pyTorch that was optimized for audio classification [22]. It consists of four convolutional layers (Conv2d) with rectified linear units (ReLU) and batch normalization (BatchNorm2d). After the convolutional layers, the feature maps were pooled (AdaptiveAvgPool2d) and flattened, and finally input into a linear layer (Linear) that returns a prediction value for each of the classes contained in the dataset. The highest prediction value was picked as the final predicted class for each training example. To avoid overfitting of the network on the small training dataset, dropout was implemented on the final linear layer (dropout rate of 0.4), as well as L2 regularization of the weights (weight decay of 0.001). The dropout rate was decreased to 0.23 for InsectSet47 and InsectSet66 since the models were underfitting as a
result of the increased complexity of data. A fifth convolutional layer was added to the model for additional tests. Overall, the main model with four layers contains 28,319 trainable parameters that are adjusted during the training phase, with the inclusion of the LEAF frontend.
During the training process early stopping was employed, which evaluates the network performance after each epoch by running an inference step on the validation set. The loss value of the validation set is used to estimate how well the network will perform on the test set during final evaluation. Each time the validation loss decreases, the current network state is saved. If the validation loss does not decrease any further in eight consecutive epochs, the training is stopped and the final test evaluation is performed on the last saved network state from eight epochs earlier. The accuracy of the two approaches was determined by the percentage of correctly classified items in the test set, as well as the f1-score, precision and recall [11]. Due to the randomness included in the training process due to dataset shuffling and network initialization, the training and evaluation outcomes can vary substantially between runs using the exact same parameters and datasets. To achieve a stable and comparable result on the small dataset, both models were computed five times each on InsectSet32 and three times each on InsectSet47 and InsectSet66. The best performing runs trained on InsectSet47 and InsectSet66 were trained again with an added fifth convolutional layer to test the effect of a larger model on the classification performance. All scripts used for preparing and classifying the data are publicly available on GitHub [26, 27].
## Results
### InsectSet32
The median classification accuracy score for five runs using the mel frontend model is 62%, with scores for the different runs ranging between 57% and 67% (Table 4). The median classification accuracy for the LEAF models is 76% with a range from 59% to 78% (Table 4). The mel frontend achieved a median validation loss of 1.49, while the LEAF frontend had a lower median validation loss of 1.24 (Table 4). When looking at the additional performance metrics F1-score, recall and precision, even the worst performing LEAF run outperforms all of the mel runs (Table 4).
The majority of misclassifications (Fig. 3) lie within the two biggest genera represented in InsectSet32, _Myopsalta_ and _Playpleura_ (5 and 14 species respectively, of 32 in total; Table 1). Species in these genera were most often misclassified as other members of their own genus. One particular species, _M. leona_, caused many misclassifications within its genus, despite being correctly classified itself. Similarly, within the genus _Playpleura_, the
\begin{table}
\begin{tabular}{|c|c|c c c c|c c|} \hline \hline & \multicolumn{3}{|c|}{**Test**} & \multicolumn{3}{|c|}{**Validation**} \\ \hline
**Dataset** & **Model** & **Accuracy** & **F1-score** & **Recall** & **Precision** & **Accuracy** & **Loss** \\ \hline InsectSet32 & mel-4 & 0.62 & 0.52 & 0.53 & 0.61 & 0.60 & 1.49 \\ & & 0.57 - 0.67 & 0.47 - 0.56 & 0.49 - 0.58 & 0.52 - 0.64 & 0.57 - 0.65 & 1.37 - 1.68 \\ \cline{2-8} & LEAF-4 & 0.76 & 0.66 & 0.68 & 0.70 & 0.71 & 1.24 \\ & & 0.59 - 0.78 & 0.61 - 0.69 & 0.60 - 0.71 & 0.67 - 0.73 & 0.61 - 0.76 & 1.00 - 1.40 \\ \hline InsectSet47 & mel-4 & 0.77 & 0.66 & 0.66 & 0.69 & 0.75 & 0.98 \\ & & 0.70 - 0.77 & 0.56 - 0.67 & 0.57 - 0.67 & 0.63 - 0.74 & 0.71 - 0.77 & 0.92 - 1.14 \\ & LEAF-4 & 0.81 & 0.71 & 0.72 & 0.77 & 0.84 & 0.72 \\ & & 0.79 - 0.83 & 0.71 - 0.77 & 0.71 - 0.76 & 0.74 - 0.83 & 0.83 - 0.86 & 0.72 - 0.74 \\ \cline{2-8} & mel-5 & 0.85 & 0.78 & 0.79 & 0.81 & 0.83 & 0.69 \\ & LEAF-5 & 0.86 & 0.81 & 0.81 & 0.85 & 0.88 & 0.58 \\ \hline InsectSet66 & mel-4 & 0.78 & 0.66 & 0.66 & 0.73 & 0.76 & 0.98 \\ & & 0.75 - 0.78 & 0.65 - 0.69 & 0.64 - 0.69 & 0.73 - 0.74 & 0.76 - 0.76 & 0.97 - 0.98 \\ & LEAF-4 & 0.80 & 0.68 & 0.68 & 0.77 & 0.83 & 0.81 \\ & & 0.79 - 0.81 & 0.67 - 0.71 & 0.67 - 0.70 & 0.74 - 0.77 & 0.80 - 0.84 & 0.79 - 0.86 \\ & mel-5 & 0.82 & 0.74 & 0.74 & 0.80 & 0.81 & 0.82 \\ \cline{2-8} & LEAF-5 & 0.83 & 0.76 & 0.77 & 0.81 & 0.85 & 0.73 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test and validation scores for all trained models with mel and LEAF frontends on insect sound datasets of three different sizes. The median as well as the lower and upper limits are reported from training multiple runs of the same model with different randomization seeds and four convolutional layers (five runs each for InsectSet32, three runs each for InsectSet47 and InsectSet66). The best performing models were also trained with an additional convolutional layer, indicated by the number in the model name.
species _P. plumosa_ and _P. sp12cfhirtipennis_ were frequently labeled incorrectly as other members of the same genus.
The confusion matrix showing the performance of the LEAF frontend reflects the overall better performance since it displays a clearer diagonal line of accurate classifications, with less incorrect classifications around it (Fig. 4). All test files of the species _Brevisiana brevis_
Fig. 3: Classification outcome for all 32 species in the test set using the best run of the mel frontend performing at 67% classification accuracy. The vertical axis displays the true labels of the files, the horizontal axis shows the predicted labels, sorted alphabetically. Classifications within the two biggest genera _Platypeleura_ and _Myopsalta_ are highlighted for comparison to the LEAF confusion matrix.
were incorrectly classified as _Platypleura haglundi_. The species _P. intercapedinis_ and _P. sp11 cfhirtipennis_ were never correctly classified either but confused with different species of the same genus. The concentration of misclassifications in the two largest genera _Myopsalta_ and _Platypleura_ is much less pronounced compared to the mel frontend run, especially the performance within _Myopsalta_ is significantly better (Figs. 3&4).
Fig. 4: Classification outcome for all 32 species in the test set using the best run of the LEAF frontend performing at 78% classification accuracy. The vertical axis displays the true labels of the files, the horizontal axis shows the predicted labels, sorted alphabetically. Classifications within the two biggest genera _Platypleura_ and _Myopsalta_ are highlighted for comparison to the mel confusion matrix.
The filters employed by the LEAF frontend were initialized on a scale closely matched to the mel scale but were adjusted in center frequency and bandwidth during training on InsectSet32 (Fig. 5). After sorting the filters by their center frequencies, they continue to largely adhere to the initialization curve (Fig. 5 C&F). Without sorting however, it is clear that many filters were adjusted from their original position (Fig. 5 B&E). Substantial changes in the frequencies of several filters occurred around 2 kHz and above 15 kHz, where some filters were adjusted by up to several kilohertz, especially with the highest filter at initialization being shifted from 22.05 kHz down to approximately 13 kHz (Fig. 5 B). The ordering along the frequency axis is heavily disturbed, since the center frequencies do not steadily increase with increasing filter number, as was the case on the initialized scale (Fig. 5 B&E). This means that in the LEAF output matrices, adjacent values on the axis containing frequency information do not necessarily represent adjacent frequency bins, which is usually the case when using hand-crafted representations such as mel filter banks. Filter density increased around 0.85 kHz (see Fig. 5 D, \(\approx 900\) mel) and between roughly 14-15 kHz (Fig. 5 B), but slightly decreased between 18 and 20 kHz (Fig. 5 B) and around 2.4 kHz (see Fig. 5 D, \(\approx 1700\) mel). Four filters are located close to zero mel/kHz after training, leaving a gap up to approximately 500 mel (\(\approx 0.4\) kHz), where the very lowest insect sound frequencies occur in this dataset (Fig. 5 D).
#### InsectSet47
On the expanded InsectSet47, the median classification performance that was achieved with the mel frontend was 77% and a median loss of 0.98 on the validation set. This is a significant improvement in performance compared to InsectSet32, despite the increased number of species (Table 2). The LEAF frontend gained a less substantial increase in classification performance, but still outperforms the mel frontend in all three runs with a median 81% classification accuracy and substantially lower loss of 0.72 (Table 4). The difference between the frontends decreased overall however, compared to InsectSet32. The models trained with an additional convolutional layers improved even further in performance. The mel frontend gained a larger increase in classification performance from this, reaching 85%, while LEAF performed only slightly better at 86% (Table 4).
Figure 5: Center frequencies of all 64 filters used in the best performing LEAF run on InsectSet32. Plots A and D show the initialization curve before training, which is based on the mel scale. Plots B and E show the deviation of each filter from their initialized position after training. Plots C and F show the filters sorted by center frequency, and demonstrate the overall coverage of the frequency range, but do not represent the real ordering in the LEAF representations. Violin plots show the density of filters over the frequency spectrum, the orange line shows the initialization curve for comparison.
Using both frontends, misclassifications between the groups of Orthoptera and Cicadidae are negligible (Suppl. Figs. 1&2). In general, classification errors appear more frequently with closely related species. The LEAF frontend was able to improve performance over the mel frontend by reducing the large number of misclassifications in the genus Acrididae (Suppl. Figs. 1&2). In the genus Playtpleura, nearly all audio examples of two species (_P. sp12cfhirtipennis_ and _P. sp13_) were classified as _P. plumosa_ by the mel frontend (Suppl. Fig. 1). The LEAF frontend managed to reduce the incorrect classifications to _P. plumosa_ roughly by half, by compromising half of the correct classifications of that species (Suppl. Fig. 2).
#### InsectSet66
The models trained on InsectSet66 showed similar results to InsectSet47, again despite the increase in the number of classes. The mel frontend slightly improved its median classification performance from 77% to 78% on this larger dataset, while the LEAF performance decreased from 81% to 80% (Table 4). The median loss stayed on the same level as on InsectSet47 for the mel frontend with 0.98, but increased for the LEAF frontend from 0.72 on InsectSet47 to 0.81 on InsectSet66 (Table 4). The performance, when trained with five convolutional layers, improved again for both frontends, where the LEAF frontend only has a small advantage with 83% compared to the 82% reached with the mel frontend (Table 4). For both frontends, incorrect classifications of Orthoptera species as Hemiptera are almost non-existent. Classifications in the opposite direction do appear, but are rare (Suppl. Figs. 3&4). In general, misclassifications appear most often within the genera. The confusion matrices of LEAF and mel do not show obvious differences or trends, likely since the overall classification performance is similar.
#### leafPCEN
The training of the leafPCEN frontend, which retains the trainable PCEN part of LEAF, but freezes its filterbank and pooling parameters, did not succeed. The validation accuracy and loss values showed large spikes and did not converge effectively. Three runs were trained on InsectSet47, but a median classification accuracy on the test set of only 71% was reached, which is substantially worse than the standard LEAF or even mel frontends performances (Table 4). Because of this, the frontend was not trained on InsectSet66.
### leafFB
The leafFB frontend, which has a trainable filterbank, but uses the initialized PCEN component of the LEAF frontend, performed better than the leafPCEN frontend, and managed to converge despite occasional spikes of the accuracy and loss values during training. On InsectSet47, leafFB reached a median classification accuracy of 81% and a median loss value of 0.74 (Table 5), performing slightly better than the standard LEAF frontend (Table 4). On InsectSet66, the performance decreased to a median of 79% classification accuracy and a median loss of 0.79 (Table 5), which is slightly worse than the LEAF frontend (Table 4). On both datasets, more variation in performance between the runs was observed, meaning that some leafFB runs did perform substantially worse than LEAF (Tables 4&5).
### Discussion
The focus of this work was mostly to compare a traditional handcrafted feature extraction method (mel) against an adaptive and waveform-based method (LEAF), while also testing the viability of deep learning methods to classify insect sounds, specifically of Orthoptera and Cicadidae. Three datasets were used for this comparison, with increasing number of audio files, as well as numbers of species. In all settings, the adaptive frontend LEAF outperformed the mel frontend (Table 4), by adjusting its filter bank and compression parameters to fit the data (Fig. 5). This effect was most pronounced on the smallest dataset InsectSet32, where LEAF reached a classification accuracy of 78%, compared to 67% using mel (Table 4). On the expanded dataset InsectSet47, the performance of both frontends improved in comparison to InsectSet32, despite the increased number of species. This is likely due to the
\begin{table}
\begin{tabular}{|c|c|c c c c|c c|} \hline & \multicolumn{2}{c|}{**Test**} & \multicolumn{2}{c|}{**Validation**} \\ \hline
**Dataset** & **Model** & **Accuracy** & **F1-score** & **Recall** & **Precision** & **Accuracy** & **Loss** \\ \hline InsectSet47 & leafFB-4 & 0.81 & 0.73 & 0.73 & 0.79 & 0.84 & 0.74 \\ & 0.72 – 0.83 & 0.6 –0.75 & 0.6 –0.75 & 0.74 – 0.82 & 0.73 – 0.86 & 0.71 – 1.14 \\ \hline InsectSet66 & leafFB-4 & 0.79 & 0.67 & 0.67 & 0.72 & 0.82 & 0.79 \\ & 0.7 – 0.81 & 0.59 – 0.69 & 0.59 – 0.68 & 0.69 – 0.76 & 0.72 – 0.84 & 0.79 – 1.22 \\ \hline \end{tabular}
\end{table}
Table 5: Test and validation scores for the trained models using the leafFB frontend. The median as well as the lower and upper limits are reported from training three runs of the same model with different randomization seeds and four convolutional layers.
much higher number and length of audio examples, allowing the models to generalize better on unseen data. The difference in performance between the frontends decreased however.
The performance of the mel frontend on largest dataset InsectSet66 overall remained
roughly on the same level as on InsectSet47, even though a substantial number of species was added, but not a large amount of audio material (Table 3).
Since the performance seemed to plateau at this level, we hypothesized that the complexity of the backend classifier was reaching a limit and was not able to process the full amount of information contained in the larger datasets. This could have obscured an advantage in the feature extraction performance by the frontends. To rule this out, more tests were conducted on InsectSet47 and InsectSet66 by adding an additional convolutional layer to the models, with the expectation that this would allow the LEAF performance to increase more than the mel performance. This modification led to increased classification performance in all cases, but actually decreased the difference between the frontends (Table 4). On InsectSet47, the mel frontend improved substantially from 77% to 85%, while the LEAF frontend only
improved from 83% to 86% (Table 4). On InsectSet66, the mel frontend improved from 78% to 82% and LEAF from 81% to 83% (Table 4). This suggests that the ability of the LEAF frontend to adjust feature extraction parameters might be more relevant when there is only a limited number of audio examples.
In similar comparisons on more human-centric audio classification tasks (language, emotion, birdsong, music etc.), LEAF outperformed mel spectrograms on a diverse range of tasks, but not all, and in many cases by smaller margins than in this comparison [15]. Since the sounds in this application are very different in structure and frequency content from human-
associated sounds, the difference in performance between LEAF and mel was expected to be larger than in the previous comparisons. LEAF can learn a large number of parameters and adapt to the input data, while the mel frontends parameters are completely fixed and not necessarily ideal when not used with human sounds. The relevant information in insect sound is largely located in the higher frequency spectrum (above 5 kHz), where mel spectrograms are more imprecise due to increasingly wider pooling of frequencies. The LEAF frontend adjusted filter center frequencies and bandwidths, as well as compression and time-pooling parameters to better fit the data and reveal details that could be obscured by the mel frontend fixed parameters (Fig. 5).
The confusion matrices generated from InsectSet32 shed some light on where the differences in performance lie between the two approaches (Figs. 3&4). Using the mel frontend, the majority of incorrect classifications is found between species of the genus _Platypleura_, which represents almost half of the species included in the dataset with 14 out of 32, and in the second largest genus _Myopsalta_, with five species (Table 1). These two groups make up the majority of the species in InsectSet32 and it is therefore more likely for them to contain a majority of the misclassifications. However, the fact that many of their false classifications are within species of the same genus suggests that their sounds could be similar in structure and hard for the network to distinguish. Apparently, the trained parameters of the LEAF frontend led much better performance in these two genera than when using the mel frontend, since there are less false predictions within these genera and false predictions outside of these genera remain roughly the same (Figs. 3&4). The confusion matrices generated from InsectSet47 and InsectSet66 do not reveal clear differences between the frontends, since the overall performance is much more similar (Figs. 1-4). It is possible that due to the larger diversity of species and genera, the LEAF frontend was not able to tune its parameters to distinguish between specific sound characteristics to the same extent as in InsectSet32.
The overall coverage of filters over the frequency spectrum was not significantly changed during training of the LEAF frontends. When looking at the filter distribution after training, the filters still mostly lie close to the initialization curve that is based on the mel scale (Fig. 5 C&F). While changes in filter density occurred in some frequency bands, a dramatic shift of all filters shifting to higher frequencies or a change to a completely different curve was not observed. When considering the changes of every individual filter however, it is clear that many filters changed position quite significantly, sometimes by several thousand Hertz (Fig. 5 B&E). The ascending order of filter bands along the frequency axis is heavily disturbed after training, meaning that adjacent rows in the LEAF output matrices do not necessarily contain adjacent bands in the frequency domain. Interestingly, this was not observed in the original paper introducing the LEAF frontend [15] nor in a paper improving the performance of the frontend [28]. After training the frontend on the AudioSet dataset [29] and the SpeechCommands dataset [30] at sample rates of 16 kHz, the filters still followed the initialization curve much more closely and the ordering along the frequency axis was conserved in both papers [15, 28]. This was interpreted as a demonstration that the mel scale is a strong initialization curve for these tasks, with the learnable filter parameters in the
LEAF frontend mostly providing an opportunity for adapting to a slightly more appropriate frequency range [15, 28].
The AudioSet dataset contains many human-centric sounds such as speech and music, as well as a diverse set of environmental sounds, animal sounds and more, with 527 classes and multiple labels per recording [29]. The SpeechCommands dataset contains over 100,000 samples of spoken words [30]. Perhaps such a diversity of sounds and classes, as well as the use of a much lower sample rate of 16 kHz [15] constrained the adjustment of filter frequencies compared to the significantly smaller datasets used in our comparison which focus on a more fine-grained classification task. It is also possible that ordering along the frequency axis is more important for classifying sounds that contain defined harmonic structures such as human speech, music, instruments or birdsong. The often noisy and inharmonic sounds produced by Orthoptera and Cicadidae might not require this due to their more uniform and comparably undefined sonic structure over the spectrum.
Since the LEAF frontend is a combination of a learnable filter bank and learnable PCEN compression, we wanted to determine the influence of the individual components on the improved performance over the mel frontend. Especially since the overall filter bank curve was not adjusted as strongly as expected and because PCEN as a replacement for the conventional log-compression has been shown to be advantageous in some, but not all cases for classifying environmental sounds [31, 32, 33]. A modification of the LEAF frontend with disabled training of the filterbank and temporal pooling parameters, but trainable PCEN parameters was tested, called leafPCEN. This frontend should essentially function like a standard mel frontend with an added trainable PCEN component since the initialized LEAF filterbank functions like a mel filterbank. Surprisingly, leafPCEN did not train successfully and even performed worse than the normal mel frontend (Table 5). It has been observed in previous work that in some applications, depending on the signal and background noise characteristics, trainable PCEN parameters can fail to converge on ideal values and lead to suboptimal feature extraction [31, 33]. It appears that in the LEAF frontend, without the trainable filterbank, the PCEN component can be unstable and collapse into poor configurations. The leafFB frontend, which retains the trainable filterbank and pooling of LEAF, but disables training on the PCEN compression parameters, performed at roughly the same level as the standard LEAF frontend, although with more variation between the runs (Tables 4&5). This suggests that the adjustment of the filterbank parameters specifically lead
to a better configuration than the standard mel frontend and increased the classification performance.
The high occurrence of adjustments and shuffling of individual LEAF filters could justify testing different initialization curves than the mel scale. While this scale has been shown to be robust and advantageous for classifying human-centric sounds [15], it might not be the ideal initialization curve for insect sounds, especially because the theoretical justifications for the use of the mel-scale do not apply to the specific characteristics of insect sounds. Perhaps the filter distributions learned in this study are local optima that could be reached from the mel curve as a starting point, but expert-designed initialization curves could allow the frontend to reach a better and more generalizable filter distribution for insect sounds in a shorter amount of training time, which would be advantageous. One experiment testing a different initialization curve was conducted with randomized center frequency values that were sorted in ascending order [28]. During training, the filter values were adjusted to a more appropriate frequency range for the data, but the overall performance was lower than when using a mel initialization curve, when tested on the SpeechCommands dataset [28, 30]. This, again, shows that the mel scale is very robust and useful for human sounds, but also that LEAF can learn useful filter distributions even when not initialized on an ideal scale [28]. This further justifies the exploration of alternative initialization scales for usage of the LEAF frontend with non-human sounds.
To achieve further improvement of classification performance, especially if machine learning methods are going to be implemented in species conservation efforts, larger and more diverse datasets should be the focus. In this work, up to 66 species were represented, with a minimum of 10 recordings per class. This could be a realistic number of species for monitoring specific environments or even larger geographic areas. But for future implementations, existing datasets are not sufficient and have to represent all species that occur in the environments where automatic classification methods are going to be deployed, and the number of recordings per species must be increased. If datasets with higher sample rates are going to be used for classification, conventional mel spectrogram frontends may prove to be even less useful compared to adaptive frontends. Especially for species that produce sounds entirely within the ultrasonic range, which are common in Orthoptera and some Cicadidae [34], the lower resolution in high-frequency bands would be increasingly disadvantageous compared to adaptive frontends.
While compiling the datasets for this work, special attention was paid to exclude recordings with low audio quality and especially recordings that contain sounds from multiple insect species, even if other species were barely noticeable in the background. Since many of the recordings from the source databases are submissions from citizen-scientists that did not meet the quality standards for this work, a large amount of audio material was not included in these datasets. Lowering the quality standards would allow the inclusion of many more species and audio examples. Whether this would be beneficial remains to be tested, since the added amount of audio material could offset the negative effects of lower quality recordings.
Considering the relatively simple network architecture and small datasets, these results are encouraging for future applications with high potential for further improvements through optimizing model parameters and diversifying datasets. The advantage in performance by using LEAF, despite being small in some cases, identifies adaptive frontends as a potentially valuable replacement for approaches with hand-crafted parameters to extract features for insect audio classification. Before these methods can be applied in conservation efforts, datasets need to be increased in size and species diversity, and the networks that are used must be improved to reach higher overall accuracy. These methods also need to be integrated with sound-event detection methods to automatically identify relevant clips from longer automatic recordings. This work presents a first step for optimizing an important part of the classification network and shows encouraging results and methods for successful future implementations of this technology.
## Acknowledgments
MF was supported by a Martin &Temminck Fellowship (Naturalis Biodiversity Center).
We kindly thank Baudewijn Ode and Ed Baker for the use of their sound collections, as well as the contributors to Xeno Canto and iNaturalist. |
2307.13177 | A Splitting Approach to Dynamic Mode Decomposition of Nonlinear Systems | Reduced-order models have long been used to understand the behavior of
nonlinear partial differential equations (PDEs). Naturally, reduced-order
modeling techniques come at the price of computational accuracy for a decrease
in computation time. Optimization techniques are studied to improve either or
both of these objectives and decrease the total computational cost of the
problem. This paper focuses on the dynamic mode decomposition (DMD) applied to
nonlinear PDEs with periodic boundary conditions. It provides a study of a
newly proposed optimization framework for the DMD method called the Split DMD. | Jovan Žigić | 2023-07-24T23:55:24Z | http://arxiv.org/abs/2307.13177v1 | # A Splitting Approach to Dynamic Mode Decomposition of Nonlinear Systems
###### Abstract
Reduced-order models have long been used to understand the behavior of nonlinear partial differential equations (PDEs). Naturally, reduced-order modeling techniques come at the price of computational accuracy for a decrease in computation time. Optimization techniques are studied to improve either or both of these objectives and decrease the total computational cost of the problem. This paper focuses on the dynamic mode decomposition (DMD) applied to nonlinear PDEs with periodic boundary conditions. It provides a study of a newly proposed optimization framework for the DMD method called the Split DMD.
**AMS subject classifications**: Primary 65K10, Secondary 65M22.
## 1 Introduction
The Navier-Stokes (NS) equations are the primary mathematical model for understanding the behavior of fluids. The existence and smoothness of the NS equations [8] is considered to be one of the most important open problems in mathematics, and challenges in their numerical simulation is a barrier to understanding the physical phenomenon of turbulence. Due to the difficulty of studying this problem directly, problems in the form of nonlinear partial differential equations that exhibit similar properties to the NS equations are studied as preliminary steps towards building a wider understanding of the field. In fact, the use of the proper orthogonal decomposition method to build reduced-order models for fluids was accelerated with the publication of the first edition of the monograph "Turbulence, Coherent Structures, Dynamical Systems and Symmetry" [9].
Reduced-order models has since evolved as its own discipline with applications to control [5][4][6], optimization [16][1], and uncertainty quantification [7]. Naturally, reduced-order modeling techniques come at the price of computational accuracy for a decrease in computation time. Optimization techniques are studied to improve either or both of these objectives and decrease the total computational cost of the problem.
Split Dynamic Mode Decomposition
By definition of the term [17] and observation of simulated models governed by nonlinear PDEs, bifurcation present in system dynamics greatly alters system behavior over time. When approaching such a problem with DMD, this bifurcation effect leads to issues arising with selecting DMD modes that represent both transient dynamics and states of equilibrium that follow. This phenomena is a natural obstacle to both the DMD algorithm [11] and the Levenberg-Marquardt algorithm [12][13] crucial to the optimized DMD (OD) method [3][2].
In the DMD algorithm, the primary issue generated by effects of bifurcation is selecting DMD modes that accurately represent the dominant coherent structures, or pattern of behavior, of the dynamical system over the time interval. In a chaotic dynamical system, this pattern between the transient state and equilibrium state is not explicit. Thus, the linear problem that provides a solution for the low-rank Koopman operator is forced to compensate for the information provided throughout the entire interval, and consequently choose DMD modes that represent two distinct patterns. Thus, the fact that the resulting reduced-order system modeled by the DMD modes provides one pattern, or one set of coherent structures, to represent multiple distinct patterns is inherently flawed.
In the Levenberg-Marquardt algorithm (LMA), the primary issue generated by effects of transient states is the increased difficulty to solve the nonlinear least squares problem [15], which directly contributes to computational complexity at the expense of time. Since approximating nonlinear PDEs presents nonconvex optimization problems through the LMA, globalization strategies such as trust region or line search methods must be employed to find stationary points. Generally, less iterations are required to solve the DMD problems when they contain simpler dynamics. However, when solving subproblems within each iteration becomes an increasingly difficult task, the cost of computational time becomes significant to the point of computational infeasibility.
This section proposes the **split DMD** algorithm as a solution to the issue of selecting DMD or OD modes for nonlinear PDEs whose coherent structures vary over time. The split DMD algorithm uses a simple modification prior to the DMD routine to choose a domain for evaluation without changing the existing DMD method of computing system modes. The contribution to the existing DMD procedure comes from splitting the entire time interval into several subintervals before computing the low-rank Koopman modes. The **split lines**, determining the boundaries of each subinterval, are selected by the **n-split algorithm** described in the next section. The \(n\)-split algorithm produces \(n\) number of subintervals by a recursive method.
The goal of the algorithm is to separate the simulation interval into subintervals that have different ranges of data values, and are hypothesized to represent the time intervals with distinct system coherent structures. By heuristic reasoning, \(n\) number of split lines are chosen as an initial guess for range comparison. These split lines are defined by the set of points in time:
\[t_{split}=\{t_{0},t_{1},t_{2},\ldots,t_{n}\} \tag{1}\]
where a time interval \([0,T]\) implies \(t_{0}=0\) and \(t_{n}=T\), so that the data (or system values) \(Z(x,t)\) with respect to time \(t\) and space \(x\) is separated into subintervals:
\[Z_{split}=\{Z(x,[t_{0},t_{1}]),Z(x,[t_{1},t_{2}]),\ldots,Z(x,[t_{n-1},t_{n}])\} \tag{2}\]
By random selection, a line of space \(x\) denoted \(x_{test}\) is chosen to compare the system behavior across subintervals of time. Two tests are used to determine whether or not to merge two adjacent intervals. These involve either the difference in data value range or the length of the subinterval.
## 3 Implementation of the Split DMD Algorithm
For the first test, called the \(\varepsilon\) test, the necessary difference in range to retain a split line selection is data that exceeds a lower bound tolerance \(\varepsilon\), which is heuristically chosen as a fraction of the data range. Furthermore, the minimum change \(\varepsilon\) in data is inspected at both upper and lower bounds of the subinterval data ranges, as illustrated in the following figures:
In Figures 2 and 2, the two-dimensional shapes represent the change in range of system values in time at a spatial point \(x\). For an example of the \(\varepsilon\) test, if the data range is \([0,1]\), a chosen \(10\%\) tolerance or \(\varepsilon=0.1\) change in either bound between adjacent subintervals is the minimum amount required to retain the split line. Otherwise, if the adjacent subintervals for \(k\in\{1,2,\ldots,n-1\}\) satisfy both of the following relationships:
\[\left|\max_{t\in[t_{k-1},t_{k}]}Z(x_{test},t)-\max_{t\in[t_{k},t_{k+1}]}Z(x_{ test},t)\right|<\varepsilon \tag{3}\]
\[\left|\min_{t\in[t_{k-1},t_{k}]}Z(x_{test},t)-\min_{t\in[t_{k},t_{k+1}]}Z(x_{ test},t)\right|<\varepsilon \tag{4}\]
then the split line \(t_{k}\) is identified as unnecessary and discarded from the set of split lines \(t_{split}\).
For the second test, called the \(\delta\) test, the necessary length of a subinterval to retain a split line selection is a lower bound tolerance \(\delta\), which is heuristically chosen as a fraction of the data time interval. As in the first test, an illustration of whether or not a split line is retained or discarded is portrayed by the following figures:
For an example of the \(\delta\) test, if the data range is \([0,1]\), a chosen \(10\%\) tolerance or \(\delta=0.1\) subinterval length \([t_{k-1},t_{k}]\) for \(k\in\{1,2,\ldots,n-1\}\) is the minimum amount required to retain the split line. Otherwise, if the following relationship is satisfied:
\[|t_{k}-t_{k-1}|<\delta \tag{5}\]
then the split line \(t_{k}\) is identified as unnecessary and discarded from the set of split lines \(t_{split}\). The value of the second subinterval length \(|t_{k+1}-t_{k}|\) stemming from the \(\varepsilon\) test is only evaluated for \(k=n-1\), since each adjacent subinterval pair is evaluated across the entire data time interval proceeding from the initial time \(t=0\) to the final time \(t=T\).
Recursion is performed until a sufficient number of iterations is reached. In the case that all chosen splits \(t_{k}\) for \(k\in\{1,2,\ldots,n-1\}\) are discarded, the method restarts with a different selection of \(t_{split}\) values by adding in one more split than was used in the previous iteration. Consequently, for data displaying one distinct pattern across the time interval, it is expected that no split lines within the interval are necessary to be retained as long as the \(\varepsilon\) and \(\delta\) tests are satisfied to the given tolerances.
When a split line \(t_{k}\) is retained, or the split line \(t_{k}\) has "passed" the \(\varepsilon\) and \(\delta\) tests, the algorithm is run recursively within the subinterval \([t_{k-1},t_{k}]\) in order to find any further distinct patterns in the data subinterval.
After completing the maximum number of iterations, the algorithm outputs \(t_{split}\) with split lines that satisfy the \(\varepsilon\) and \(\delta\) tests for a chosen point in space \(x\). For added robustness, the algorithm is run for a number of different \(x_{test}\) values, and the resulting \(t_{split}\) with split lines that consistently satisfy the \(\varepsilon\) and \(\delta\) tests for these spatial points are chosen to define the subintervals for the DMD routine that follows.
A more explicit breakdown of the \(n\)-split algorithmic process is given by the following pseudo-code:
```
0: dataset \(Z\), split lines \(t_{split}\), maximum iterations \(M\) output: split lines \(t_{split}\), iteration count while iterations \(<M\)do Choose \(x_{k}\) randomly from the spatial nodes of \(Z\)
```
\begin{table}
\begin{tabular}{l l} \hline \hline
**input** & dataset \(Z\), split lines \(t_{split}\), maximum iterations \(M\) \\
**output** & split lines \(t_{split}\), iteration count \\
**while** & iterations \(<M\)do Choose \(x_{k}\) randomly from the spatial nodes of \(Z\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: \(n\)-split Algorithm
Set \(s\) as number of split lines
Set \(\varepsilon\) as fraction of \(\left\|Z(x_{k},[0,T])\right\|_{\infty}\) and \(\delta\) as fraction of \([0,T]\)
**for**\(k=1\) to \(s-1\)**do**
**if**
\[\max\left\{\begin{array}{c}\left|\max Z(x_{k},[t_{k-1},t_{k}])-\max Z(x_{k},[t _{k}t_{k+1}])\right|\\ \left|\min_{t\in[t_{k-1},t_{k}]}Z-\min_{t\in[t_{k},t_{k+1}]}Z\right|\end{array} \right\}>\epsilon\]
**then**
**if**: \(|t_{k}-t_{k-1}|>\delta\)**then**
Retain split line \(t_{k}\) and add a new split line in \([t_{k-1},t_{k}]\)
Run \(n\)-split algorithm on subinterval \(Z(x,[t_{k-1},t_{k}])\) and add new split lines as required
Increase iteration count by output in previous step
**else**
return
**end if**: \(k=s-1\) and \(|t_{k+1}-t_{k}|>\delta\)**then**
similar steps as previous if statement
**endif**
**end if**
**end for**
**if** no split lines are retained **then**
Increase iteration count by 1 and choose a new guess for \(t_{split}\)
Run \(n\)-split algorithm on \(Z\) and replace current split lines by output
Increase iteration count by output in previous step
**endif**
Discard redundant split lines
**end while**
Thus, stepping out of the \(n\)-split algorithm, the split DMD algorithmic process is given by the following pseudo-code:
**input**: rank of approximation \(r\), dataset \(Z\)
**output**: Order-\(r\) approximations \(Z_{DMD}\), Order-\(r\) DMD modes \(\Phi^{DMD}\)
Choose split lines \(t_{split}\) heuristically and set maximum number of iterations \(M\)
\begin{table}
\begin{tabular}{l l} \hline \hline
**input**: & rank of approximation \(r\), dataset \(Z\)
**output**: Order-\(r\) approximations \(Z_{DMD}\), Order-\(r\) DMD modes \(\Phi^{DMD}\)
Choose split lines \(t_{split}\) heuristically and set maximum number of iterations \(M\)
**for** enough iterations to reach steady-state solution **do**
Run \(n\)-split algorithm on \(Z\)
**end for**
Retain steady-state selection of split lines \(t_{split}\)
Set \(s\) as number of split lines
**for**\(k=1\) to \(s\)**do**
Run DMD algorithm to output DMD modes \(\Phi_{k}^{DMD}\)
and rank-\(r\)\(Z_{DMD}(x,[t_{k-1},t_{k}])\)
**end for**
Set \(\Phi^{DMD}=\left\{\Phi_{k}^{DMD}\right\}_{k=1}^{s}\)
return \(Z_{DMD},\Phi^{DMD}\)
## 4 Results
The main purpose of this study is to compare the effectiveness of the split DMD method, i.e. selecting DMD modes in subdomains of the entire solution, relative to DMD methods that select DMD modes across the entire domain on a nonlinear PDE problem. A simplified version of the proposed \(n\)-split algorithm in Section 2 was implemented for this study, by using an evenly-spaced initial guess for the split lines and a heuristic choice for the number of initial split lines.
### Comparison of Split OD and OD
The split DMD algorithm was applied using both OD and standard DMD to the following form of the Kuramoto-Sivashinsky (KS) equation:
\[\frac{\partial w}{\partial t}+\frac{\partial^{4}w}{\partial x^{4}}=-2w\frac{ \partial w}{\partial x}-\frac{\partial^{2}w}{\partial x^{2}} \tag{6}\]
The stabilizing fourth-order and destabilizing second-order terms in the KS equation mimic the Navier-Stokes' energy behavior [9]. An important feature of the KS equation is the **bifurcation parameter**\(L\), the length of the periodic domain where the model is studied, that determines the behavior of the dynamical system. To perform a non-dimensionalization of the KS equation to a unit periodic domain \(\bar{x}=\frac{\bar{x}}{L}\), define \(\varepsilon=\frac{1}{L^{2}}\) so that the non-dimensional model is
\[\frac{\partial w}{\partial t}+\varepsilon^{2}\frac{\partial^{4}w}{\partial \bar{x}^{4}}=-2\varepsilon w\frac{\partial w}{\partial\bar{x}}-\varepsilon \frac{\partial^{2}w}{\partial\bar{x}^{2}} \tag{7}\]
This exposes the relationships between length versus nonlinearity and stabilizing versus destabilizing terms. Solutions to the non-dimensional equations above are considered in the domain
\[(\bar{x},t)=\mathbb{T}\times(0,T)\]
with periodic domain \(\mathbb{T}=(0,1)\), different values of the length parameter \(L\) (and thus \(\varepsilon\)) and the final time \(T\). These equations are solved with periodic boundary conditions
\[w(0,t)=w(1,t),\ \ w_{\bar{x}}(0,t)=w_{\bar{x}}(1,t),\] \[w_{\bar{x}\bar{x}}(0,t)=w_{\bar{x}\bar{x}}(1,t),\ \ w_{\bar{x} \bar{x}\bar{x}}(0,t)=w_{\bar{x}\bar{x}\bar{x}}(1,t)\]
and initial condition
\[w(\bar{x},0)=\frac{\sin(4\pi\bar{x})}{\sqrt{\varepsilon}} \tag{8}\]
The dynamical behavior of a system modeled by the KS equation is summarized in the following table [9].
The periodic domain lengths \(L\) initially tested were 12.6, 13.2, and 402.3. The following figures portray a visual inspection of the improvement of reduced-order approximation to the full model due to the splitting procedure of the split DMD algorithm:
\begin{table}
\begin{tabular}{l c l} \hline Length \(L\) & \(\varepsilon\) & Solution Behavior \\ \hline
12.5664 \(\approx\) 4\(\pi\) & 0.00633257 & Bifurcation \\
12.8767 & 0.00603102 & Heteroclinic Bifurcation \\
13.1403 & 0.00579148 & Hopf Bifurcation \\
402.2590 \(\approx\) 128\(\pi\) & 0.00000618 & “Chaos” \\ \hline \end{tabular}
\end{table}
Table 3: Dynamical Behavior of Kuramoto-Sivashinsky
The relative error of the split OD models provided in the preceding figures shows a clear improvement in comparison to the standard OD models with respect to the finite element solution. Without splitting, it is evident that OD does not provide a representation of the data that captures the coherent structures throughout the time interval.
The following table provides a comparison of the split OD ROMs versus the OD ROMs for \(t=400\), with information on each reduced-order model concerning:
1. Number of splits used: \(n\)
2. Length of the periodic time interval (bifurcation parameter): \(L\)
3. Rank of the approximation: \(r\)
4. Computation error: \(\left\|r(t)\right\|_{2}\)
5. Computation time (in seconds)
The table suggests a substantial improvement in using the splitting approach.
### Sensitivity of Initial Condition
For the chaotic dynamics (\(L=402.3\)), the sensitivity of the results in Section 4.1 was tested by adding a random perturbation to the discretization of the previous initial condition. Letting \(\beta(x)=\text{unif}\left(0,\frac{1}{20}\right)\), a uniformly distributed random number at each spatial location (which is 161 in this case), the new initial conditions are:
\[w(x,0)=\frac{\sin(4\pi x)}{\sqrt{\varepsilon}}+\beta(x),\ \ w_{x}(x,0)=\frac{4\pi \cos(4\pi x)}{\sqrt{\varepsilon}} \tag{9}\]
The results are summarized in the following table, compared to the standard OD and DMD models :
### Sensitivity of Bifurcation Parameter
For the chaotic dynamics (\(L=402.3\)), the sensitivity of the results in Section 4.1 was tested by shifting the bifurcation parameter to \(L=402.35\).
The results are summarized in the following table:
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline ROM & \(n\)-Split & Length & Rank & Computation error & Time \\ \hline DMD & 0 & 402.3 & 13 & 28.927186508313124 & 0.1634884 \\ DMD & 10 & 402.3 & 13 & 5.798752827124941 & 0.17863 \\ \hline OD & 0 & 402.3 & 13 & 22.233228342958030 & \(2.616457798\times 10^{2}\) \\ OD & 10 & 402.3 & 13 & 2.739207339916917 & \(1.858652532\times 10^{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Split DMD ROM vs Split OD ROM, \(\beta\) shift
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(n\)-Split & Length & Rank & Computation error & Time \\ \hline
0 & 12.60 & 11 & 0.539273074481482 & 1.656649834\(\times\) 10\({}^{2}\) \\
4 & 12.60 & 11 & 0.086861903201658 & 1.424075578\(\times\) 10\({}^{2}\) \\ \hline
0 & 13.20 & 13 & 0.450356679589967 & 2.765350954\(\times\) 10\({}^{2}\) \\
4 & 13.20 & 13 & 0.187951801209479 & 1.874434272\(\times\) 10\({}^{2}\) \\ \hline
0 & 402.3 & 13 & 18.071816203629595 & 3.078671183\(\times\) 10\({}^{2}\) \\
10 & 402.3 & 13 & 2.266344147532708 & 1.270328812\(\times\) 10\({}^{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Split OD ROM vs OD ROM, \(t=400\)
### Sensitivity of Split Lines
For the chaotic case in Section 4.1, the sensitivity of the results was tested by shifting the split lines. The four sensitivity test cases for the 10-split OD model are uniform shifts by \(\pm 1,\pm 3\) seconds in each of the interior time splits.
The results are summarized in the following table, with the "Shift" column values measured in seconds:
## 5 Discussion
As mentioned in Section 2, the OD method without splitting has issues reconstructing systems with bifurcation present. Using the \(n\)-split algorithm, the split OD method reconstructs the system in question to a level of accuracy sufficient for recognizing the effects of bifurcation on the system dynamics.
The standard DMD algorithm does not provide any useful results in the test case in Section 4.1. By the \(\beta\) shift in Section 4.2, it is evident that the split DMD is able to provide results that sufficiently reconstruct the system dynamics in certain cases. The anticipation is that the split DMD, although computationally cheap, is limited in its potential solution accuracy. In any case, accuracy of a DMD solution which rivals an OD solution for the KS equation is not expected.
An increase in periodic length affects the solution to the standard OD ROM significantly, whereas the 10-split OD model maintains a high level of accuracy. Furthermore, the robustness to where the split occurs in the chaotic dynamics implies that a sufficient number of splits is enough to produce an accurate reconstruction of the finite element solution.
The main issue with the splitting approach is the inability to predict future time states with the resulting modes. In the \(L=13.2\) case, the solution for each split can simply be "copied" in anticipation of the evident recurring pattern. However, in the \(L=402.3\) case, the chaotic dynamics (implying differing coherent structures for each split) do not provide a gateway to predict the the chaotic pattern.
The conclusion of these results is that a decrease in computation time and increase in accuracy of solutions suggests that the \(n\)-split algorithm for DMD methods is superior to the standard DMD and optimized DMD
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(n\)-Split & \begin{tabular}{c} \end{tabular} & \begin{tabular}{c} Rank \\ \end{tabular} & \begin{tabular}{c} Computation error \\ \end{tabular} &
\begin{tabular}{c} Time \\ \end{tabular} \\ \hline
10 & +3 & 402.3 & 13 & 3.362777585745632 & \(1.464054566\times 10^{2}\) \\
10 & +1 & 402.3 & 13 & 4.197440908311128 & \(1.340162133\times 10^{2}\) \\
10 & -1 & 402.3 & 13 & 5.702330507412169 & \(1.3498605\times 10^{2}\) \\
10 & -3 & 402.3 & 13 & 4.645326326310565 & \(1.42945084\times 10^{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Split OD ROM with split line shifts
methods for reconstructing a solution to the KS equation.
The numerical study applying the split DMD method to the KS equation suggests that there is a benefit to using this splitting approach for modeling dynamical systems with certain structural features. However, the work described in this text did not factor in many possibilities that may come from studying dynamical systems theory. Papers such as [10] provides a method to choose modes over a longer time period that are not biased by the transient dynamics. Page 52 of [11] provides a numerical method for inspecting the attracting manifolds of the dynamical system. [14] and Chapter 3 of [11] provide direction towards future state prediction based on Koopman theory. Further study on these topics can lead to a more sophisticated approach towards creating a metric for attractors when chaotic dynamics are involved.
|
2303.01084 | Enabling Low-Overhead Over-the-Air Synchronization Using Online Learning | Accurate network synchronization is a key enabler for services such as
coherent transmission, cooperative decoding, and localization in distributed
and cell-free networks. Unlike centralized networks, where synchronization is
generally needed between a user and a base station, synchronization in
distributed networks needs to be maintained between several cooperative
devices, which is an inherently challenging task due to hardware imperfections
and environmental influences on the clock, such as temperature. As a result,
distributed networks have to be frequently synchronized, introducing a
significant synchronization overhead. In this paper, we propose an
online-LSTM-based model for clock skew and drift compensation, to elongate the
period at which synchronization signals are needed, decreasing the
synchronization overhead. We conducted comprehensive experimental results to
assess the performance of the proposed model. Our measurement-based results
show that the proposed model reduces the need for re-synchronization between
devices by an order of magnitude, keeping devices synchronized with a precision
of at least 10 microseconds with a probability 90%. | Dieter Verbruggen, Hazem Sallouha, Sofie Pollin | 2023-03-02T09:13:36Z | http://arxiv.org/abs/2303.01084v1 | # Enabling Low-Overhead Over-the-Air Synchronization Using Online Learning
###### Abstract
Accurate network synchronization is a key enabler for services such as coherent transmission, cooperative decoding, and localization in distributed and cell-free networks. Unlike centralized networks, where synchronization is generally needed between a user and a base station, synchronization in distributed networks needs to be maintained between several cooperative devices, which is an inherently challenging task due to hardware imperfections and environmental influences on the clock, such as temperature. As a result, distributed networks have to be frequently synchronized, introducing a significant synchronization overhead. In this paper, we propose an online-LSTM-based model for clock skew and drift compensation, to elongate the period at which synchronization signals are needed, decreasing the synchronization overhead. We conducted comprehensive experimental results to assess the performance of the proposed model. Our measurement-based results show that the proposed model reduces the need for re-synchronization between devices by an order of magnitude, keeping devices synchronized with a precision of at least 10 microseconds with a probability 90%.
Synchronization, distributed networks, drift compensation, LSTM, online learning
## I Introduction
Recent advances in wireless networks are showing a clear trend in moving away from the conventional centralized star-based architectures to distributed alternatives, addressing contention, interference, and coexistence challenges. This trend can be seen in most recent processing paradigms, such as edge-computing [1], and federated learning [2] as well as networking paradigms, such as cell-free [3], and crowdsourced networks [4]. In order to realize the full potential of these paradigms, accurate network synchronization is needed as a key enabler for essential services such as coherent transmission [5], cooperative decoding [6], and localization [7]. While centralized networks can achieve high-precision synchronization via wired-based infrastructure or over-the-air (OTA) with acceptable overhead, distributed networks may not have wired-based infrastructure due to cost and geographical constraints [7], and existing OTA synchronization solutions are unscalable for distributed networks due to the excessive overhead needed to synchronize significantly more devices when compared to centralized networks [4, 5, 6, 7, 8].
The essence of network synchronization is to keep a target group of devices running with the same clock reference. However, due to imperfect hardware, represented by the clock circuits, as well as environmental factors, such as temperature, a device's local clock may deviate from the common reference. One solution is to have extra hardware with every device in the network, such as a GPS disciplined oscillator (GPSDO), acting as an accurate time source for individual devices [9]. However, extra-hardware-based solutions are typically expensive in terms of cost and power consumption. Another widely adopted synchronization method in modern wireless systems, such as 3GPP LTE (3rd Generation Partnership Project Long Term Evolution) standardization, relies on periodic synchronization pilots exchange between target devices and a reference access point [10]. These pilot signals can be used in an oriented way, as in the case of LTE, or opportunistically in distributed networks, as in the case of crowdsourced networks [6]. The tradeoff with this pilot-based synchronization method is the pilot overhead. For instance, in a distributed network, where multiple reference access points work cooperatively to serve multiple devices, the pilot-based synchronization overhead becomes overwhelming, significantly reducing the network's spectral efficiency [8]. In fact, the rate at which synchronization pilots are needed in distributed networks is determined by both the quality of the device's crystal oscillator and the application's maximum tolerated clock offset.
The model of digital clocks in wireless devices can be represented by a time-series process, i.e., discrete stochastic process [11]. An accurate clock model is a key enabler for clock drift prediction, which promises a reduced synchronization overhead by relying on the clock model to compensate for clock drifts, minimizing the frequency at which synchronization pilots are needed [6, 7]. Recent state-of-the-art works addressed the digital clock modeling by using autoregressive models along with a Kalman filter [11] or by exploiting long short-term memory (LSTM)-based recurrent neural networks (RNNs) [7]. While both autoregressive and LSTM clock models showed promising performance in predicting the clock drifts, these models were trained in an offline manner, and hence frequent retraining would be needed to account for the time-varying nature of clock drifts as well as changes in the target environment [12]. Online learning offers attractive solutions to address the need for comprehensive model retraining by exploiting the sequentially available training data to update and adapt the trained model [13]. Online learning techniques have been employed in several areas of distributed wireless networks, such as inferring system conditions and
performing adaptive resource allocations [13]. The promising performance of online learning methods and the urgent need for synchronization methods with low pilot signals overhead and low training overhead drive our study in this paper.
### _Related Works_
Several works in the literature presented methods to address the clock drift problem in large-scale networks, passively by using reference signals, such as LTE pilots or automatic dependent surveillance-broadcast (ADS-B) signals [14], or actively by propagating synchronization signals over the network [15]. These state-of-the-art methods are mainly conducted for wireless sensor networks with limited cost and computational capabilities. While some works consider compensation methods based on linear programming and multi-cast [16], most research reports are centered around temperature-assisted methods [16, 17, 18]. This arises from the strong correlation between the working temperature and the oscillator frequency used in wireless devices.
Obtaining an accurate temperature-frequency model requires continuous measurements of the oscillator frequency, which is a challenging task for devices with limited computational power. In [17], the authors considered a static model by which the temperature-frequency model is approximated using a second-order or third-order polynomial. The weights for the second-order and third-order polynomials are estimated using priory measurements. Haapala _et al._[19] presented a dynamic model based on lookup tables and interpolation between known temperature-frequency pairs. However, these aforementioned works targeted low-end clock oscillators, which have a more outspoken temperature-frequency correlation compared to the high-end oscillators considered in this work.
### _Contribution and Paper Structure_
In this paper, we propose and experimentally evaluate a method to track the local clock of devices based on opportunistically existing signals, such as LTE signals, and environment-dependent features, such as temperature measurements. The proposed method exploits LTE signals opportunistically and combines them with an online LSTM-based prediction model, enabling accurate local clock drift prediction. This accurate prediction significantly reduces the need for re-synchronization between devices, and hence the overall network synchronization overhead. In particular, our measurement-based results show that the proposed model reduces the need for re-synchronization between devices from 2 min without any compensation to 55 minutes to keep devices synchronized with a precision of 10 microseconds in 90% of the time. The main contribution of this paper is twofold.
* First, we introduce a novel LSTM-based clock model that uses online learning, which unlike existing works in the literature, does not require comprehensive and frequent retraining. The proposed model predicts and compensates for devices' local clocks, minimizing the need for re-synchronization signals between devices. We rely on the fact that the majority of wireless transceivers use the same oscillator for both the local clock and the radio frequency (RF) front-end, which facilitates accurate measurements of the oscillator frequency by measuring the frequency offset of the RF front-end, enabling our proposed method to adapt the model based on the sequentially coming measurements in an online fashion.
* Second, we conduct extensive measurements using off-the-shelf software-defined radios (SDR) to collect their local clock measurements, LTE pilot signals they receive, as well as temperature measurements. We consider SDRs as they can resemble interesting use cases such as cell-free and crowdsourced networks, and they can be used in many applications, some of which require strict time synchronization.
The rest of the paper is organized as follows. Section II presents the system model. In Section III, we introduce our proposed synchronization method. The experiment design and data collection are detailed in Section IV. Subsequently, we present the performance evaluation results in Section V. Finally, the paper is concluded in Section VI.
## II System Model
This paper considers a set of stationary transceivers, represented by SDRs, distributed randomly in a given area. The area of interest can be a mix of an indoor and outdoor environment. We assume that this target group of nodes is within communication range of an LTE base station, enabling us to use LTE signals opportunistically as a synchronization reference to measure the oscillator frequency. Our aim is to keep our target group of transceivers synchronized with the precision requirements for an application, such as collaborative spectrum sensing and cooperative decoding. Each node in the network has its own single oscillator (TG2016SBN) to generate the local notion of time, represented by the clock ticks, for the RF front-end as well as for the baseband processing. In the literature, three different terms are commonly used when discussing the non-ideal behaviour of clocks:
* _Offset_ is the time difference between two clocks; when this value is zero, both clocks are perfectly synchronized.
* _Skew_ is the difference between clocks frequencies.
* _Drift_ refers to small variations on the skew, usually as a consequence of environmental changes such as temperature [20].
Throughout this paper, ppm (parts per million) is used as a way of generalizing the results as it is frequency-independent. Furthermore, using ppm enables the added benefit of quickly calculating the clock offset after a specific period. For example, an oscillator with a constant ppm of 0.1 will result in a clock offset of 0.1 \(\mu\)sec after 1 sec. If this system requires a maximum clock offset of 6 \(\mu\)sec, the clocks must be synchronized each minute.
## III LSTM-Based Online Learning Method
In this section, we introduce our proposed LSTM-based clock model with its online training. Given the nature of the clock modeling problem, in which the true reference of the
clock is measured or obtained via pilot signals sequentially, we adopted an online learning method to keep our LSTM-based model updated. This online method eliminates the need to retrain the underlying model with sizable training data, which translates into lower training overhead. We consider an RNN with one LSTM layer followed by a fully connected layer. The Adam optimizer [21] is used with a learning of 0.001 to train this network. Table I details the network architecture.The goal of this LSTM-based network is to estimate the ppm of the oscillator, exploiting the fact that varying temperature is one of the main causes for clock drifts [7].In the following, we detail the different design parameters of our proposed approach.
1. [leftmargin=*]
2. _Lag_: The lag defines the number of samples from the past on which the current prediction depends. We rely on the PAFC, a known metric that indicates the impact of different lags, to select the lag. From Fig. 1, it can be concluded that lags bigger than five do not have a significant impact as their correlations are below the 95% confidence interval. Accordingly, a lag of five is chosen for the proposed model.
3. _Input features_: The input features considered in the proposed method are the timestamp and temperature. In particular, the timestamp is the seconds-of-the-day time counted from midnight.
4. _State and gate activation functions_: For the state and gate activation functions, the default activation functions used in LSTM-based RNN are chosen, which are \(tanh\) for the state activation and \(sigmoid\) for the gate activation.
5. _Number of initial epochs (\(N_{initial}\)) and hidden states_: The number of initial epochs and the hidden states are obtained using a Monte Carlo simulation, from which we selected the best-performing parameters. The number of initial epochs controls how often the model trains on the data. Values of 25 and 24 have been chosen empirically for the number of initial epochs and the size of the hidden state, respectively, as they result in the best trade-off between learning the underlying model and learning the noise of the measurements.
6. _Time between online measurements_ (\(\Delta t_{\text{online}}\)): We define the time \(\Delta t_{\text{online}}\), as the period at which we update our LSTM-based model with new training data. For instance, a \(\Delta t_{\text{online}}\) = 20 min, means that we update our LSTM-based model with the new training data every 20 minutes.
7. _Number of online epochs (\(N_{online}\))_: The number of online epochs represents the number of times the model trains on the sequentially arriving data points. Intuitively, the number of online epochs depends on the chosen \(\Delta t_{\text{online}}\). A lower \(\Delta t_{\text{online}}\) requires a lower number of epochs as the RNN is retrained more frequently over shorter periods of time.
Considering the design parameters of the proposed LSTM-based online learning method, in the following section, we design an experiment to assess the performance of the proposed method with real-life measurements.
## IV Experiment Design and Data Collection
In this section, we introduce our experiment setup and the data collection process, in which we focus on the node level, aiming to compensate for any drift and skew resulting from the hardware imperfection.
### _Experiment Setup_
The experiment setup depicted in Fig. 2, consists of two Pluto-SDRs with a common oscillator, reference clock, and a single-tone generator.
* The Pluto-SDR1, produced by Analog Devices, is a low-cost SDR used in academia and for educational purposes. The Pluto-SDR is widely adopted in the wireless community for a broad range of applications, and it has an advanced RF front-end, making it an appealing candidate for our experiments. However, this off-the-shelf SDR has a low-end oscillator with a quality of 25 ppm, which is unsuitable for time-sensitive and synchronization purposes. This low-end oscillator can be bypassed by connecting an external oscillator, easily replacing the low-end oscillator with a high-end one on-demand. Footnote 1: [https://wiki.analog.com/university/tools/pluto](https://wiki.analog.com/university/tools/pluto)
* The common external oscillator for this experiment is the Mini Precision GPS Reference Clock designed by Leo Bodnar2. This clock is programmable for almost all frequencies between 400Hz and 810MHz; some frequencies are not achievable due to hardware limitations. With the GPS antenna attached, this oscillator is GPS disciplined, approaching 1e\(-\)6 ppm. Without the GPS antenna, the stability of the oscillator is dictated by the high-end internal oscillator (TG2016SBN), making it prone to the environment with a ppm of 0.5 between -30 and 85 \({}^{\circ}\)C. Footnote 2: [http://www.leebodnar.com](http://www.leebodnar.com)
* The single-tone generator is an extra Pluto-SDR with a GPSDO. This Pluto-SDR transmits a single sine wave of chosen frequency on a carrier frequency. Due to the stable oscillator, the generated signal and the carrier frequency are accurate and stable.
Fig. 1: The sample PACF of the measured clock offset with 95% confidence interval.
\begin{table}
\begin{tabular}{|c||c|c|} \hline Layer index & Type & Details \\ \hline \hline
1 & Input & Units: 5 \\ \hline \multirow{3}{*}{2} & \multirow{3}{*}{LSTM} & Hidden States: 24 \\ \cline{3-3} & & State activation: \(tanh\) \\ \cline{3-3} & & Gate activation: \(sigmoid\) \\ \hline
3 & Fully connected & Units: 24 \\ \hline \end{tabular}
\end{table} TABLE I: RNN Model architecture.
As shown in Fig. 2, the two Pluto-SDRs share the same external oscillator and measure the ppm of this external oscillator with different methods; using single-tone as a reference for the first one, and LTE Primary Synchronization Signals (PSS) as a reference for the second one. Both these methods exploit the RF front-end of the Pluto-SDR to measure the oscillator's ppm. An offset between the frequency of the oscillator \(f_{xo}\) and its nominal frequency \(f_{xo,nom}\) will result in an offset between the sampling frequency \(f_{s}\) and its nominal frequency \(f_{s,nom}\). This relation between frequencies and the corresponding ppm can be described as
\[\text{pmm}=\frac{f_{x}-f_{x,nom}}{f_{s,nom}}10^{6}\,,\quad\forall x\in\{xo,s, sine\}\,, \tag{1}\]
where subscripts \(xo\), \(s\), and \(sine\) are used to refer to oscillator, sampling, and sine wave, respectively. In the following, we detail the single-tone and the LTE as references for our measurements.
#### Iv-A1 Single-tone-based as a benchmark
The single-tone generator transmits a single sine wave with a frequency \(f_{sine,nom}\) of 160 kHz on a carrier frequency of 2.4 GHz over a coaxial cable. When the receiver's sampling frequency \(f_{s}\) deviates from the nominal sampling rate \(f_{s,nom}\) of 5 Msps, the received sine wave will be received with a deviated frequency \(f_{sine}\). Accordingly, the ppm can be calculated using (1) with subscript \(x\) being \(sine\). It is worth noting that this single-tone method, commonly used for SDR's calibration, is only used as a benchmark reference for our LSTM-based method with LTE signals.
#### Iv-A2 LTE-based
Our LTE-based method relies on counting the samples in between PSS, which are transmitted every \(5ms\) by Frequency Division Duplex (FDD)-LTE base stations [22]. This method captures one second of samples containing around 200 PSS. The PSS signals are detected using time correlation, up-sampling, and peak detection. By calculating the average number of samples between peaks, the effective sampling frequency \(f_{s}\) can be calculated, and thus the \(f_{xo}\) and ppm, using (1). This LTE-based method is a promising solution for the synchronization problem in networks with large-scale deployments, such as crowdsourced networks [4].
### _Data Collection and Model Training_
#### Iv-B1 Data collection
In order to collect the dataset for the performance evaluation of the proposed method, we used the setup shown in Fig. 2 with the GPS front-end disconnected, representing a node in the system model without GPSDO. This experiment has been conducted outdoors to capture a wide range of temperatures compared to the slow-varying temperature indoors. The collected data consists of 1) timestamp (second of the day), 2) temperature (measurements collected by the onboard temperature sensor of the Pluto-SDR), 3) LTE-based ppm measurement, and 4) single-tone-based ppm measurement (benchmark). The measurements were collected continuously and averaged per one-minute window to improve the precision and reduce the noise of the measurements (cf. V-A). In total, 4,200 data points were collected over a span of 70 hours.
#### Iv-B2 Training and prediction
In order to train and assess our proposed online LSTM-based method, we used 1440 data points corresponding to the measurements taken on the first day as the initial training data for the proposed LSTM model. Subsequently, the remaining data is divided into two groups for our online learning method. The first group is used for testing our online prediction using temperature and timestamp as input features, and the second group is used sequentially for our proposed LSTM online learning, i.e., updating and adapting the model when new labelled data arrives.
## V Performance Evaluation Results
In this section, we assess the accuracy of the single-tone-based and LTE-based ppm measurements and assess the performance of the proposed approach.
### _Ppm Measurement Accuracy_
To assess the accuracy of the single-tone-based and LTE-based ppm measurements, the experiment setup shown by Fig. 2 is used. We connected the GPS front-end, improving the external oscillator's stability. The external oscillator is programmed to sweep from \(-0.5\) ppm till 0.5 ppm with a step of 0.025 ppm, simulating an oscillator with a fixed skew. We measure the oscillator ppm using the single-tone-based and LTE-based methods and compare the measurements against the corresponding programmed ppm. The resulting measurements are filtered for outliers, and some steps in the sweep have been omitted as empirical comparison concluded that the external oscillator could not accurately produce this ppm. In total, we collected around 21,400 single-tone-based and 2,840 LTE-based measurements.
Table II shows the details of the ppm measurements. The accuracy, in Table II, indicates the bias in the measurements and is defined as the average difference between the measurement and the corresponding programmed ppm. Both methods show a bias in their measurements. However, in both cases, the bias is relatively small, e.g., a bias of 1e\(-3\) introduces a clock offset of 1 nanosec every second or 3.6 usec every hour. The precision of the measurements, shown in Table II, is defined as the standard deviation of the difference between the measurement and the corresponding programmed ppm. The measurements
\begin{table}
\begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & Single-tone-based & LTE-based \\ \hline Accuracy & 0.395e\(-3\) & 0.812e\(-3\) \\ \hline Precision & 0.396e\(-3\) & 16.602e\(-3\) \\ \hline \end{tabular}
\end{table} TABLE II: Details of the ppm measurements.
Fig. 2: Experiment setup for Data collection (with/without GPS front-end).
of the single-tone-based method are precise, introducing less than \(25ns\) of error per minute. Confirming that the single-tone-based measurements can be used as a benchmark for our model. The LTE-based method is less precise than the single-tone-based method, as it introduces an error 1 usec each minute. The precision can be further improved by averaging over multiple measurements. The averaging introduces a trade-off between measurement accuracy and measurement time. In our data collection, averaging over one minute is chosen.
Fig. (a)a and Fig. (b)b show, respectively, the residual error and the precision of the sweep with steps of 0.025 ppm. The residual error is defined as the difference between the programmed and measured ppm. The precision of the single-tone-based method is relatively constant compared to the almost LTE-based method. The precision of the LTE-based method increases in the regions around 0, -0.35, and 0.35 ppm as these values correspond to an integer amount of samples between PSS peaks. The decrease in the other regions is due to the limited accuracy of the peak detection.
### _Evaluation of the Proposed LSTM Model_
In the previous subsection, we defined the maximum measurement accuracy of both the single-tone-based and LTE-based methods. In this subsection, we introduce the performance evaluation of our proposed model. Fig. 4 shows a sample of the ppm predicted by the proposed LSTM model, the LTE-based ppm measurements and the corresponding single-tone-based ppm measurements. Here the \(\Delta_{\text{online}}\) is chosen to be 20 minutes. The data points used for online learning are shown using stars.
The ppm prediction of the proposed LTE-based LSTM model is compared against three compensation methods. The first compensation method, named _No compensation_, does not compensate for the oscillator's ppm; the predicted ppm of this method is zero. _Single-tone-based compensation_ compensates the oscillator's ppm using a predicted ppm measured with the single-tone-based method; the predicted ppm of this method stays constant until the subsequent measurement. Similar to single-tone-based compensation, _LTE-based without LSTM compensation_ compensates with a constant predicted ppm, which is measured based on LTE without relying on any clock modeling. The predicted ppm of the proposed LSTM model is obtained by training, and subsequently adapting the model constantly in an online manner, with the corresponding temperatures and timestamps.
The local notion of time is compensated using the predicted ppm. The smaller the difference between the predicted ppm and the actual ppm, the smaller the clock offset, which is defined as the difference between the compensated local notion of time and the global notion of time (i.e., the absolute time). We use this clock offset as the performance metric to compare the different compensation methods. Setting this clock offset to zero corresponds to perfect synchronization. We synchronize the local clock every \(\Delta_{\text{online}}\) minutes, combining the perfect synchronization with a ppm measurement of the oscillator and, in the case of the proposed online LSTM model, also with an update of the model.
The change in clock offset in microseconds after one minute is calculated by multiplying the difference between the actual ppm and the predicted ppm by 60. The clock offset at a specific moment can be calculated by adding all the previous changes in the clock offset starting from the previous perfect synchronization, as a perfect synchronization resets the clock offset. Fig. 5 presents the average clock offset for the proposed online LSTM model with different \(N_{\text{online}}\) considering \(\Delta_{\text{online}}\) of 25 minutes. Based on the figure, we opt for \(N_{\text{online}}\) of six, since increasing \(N_{\text{online}}\) further does not provide noticeable performance gains for the corresponding \(\Delta_{\text{online}}\) of 25 minutes. Fig. 6 shows the cumulative distribution function (CDF) of the clock offset for different compensation methods by which the system is synchronized every 25 minutes. A lower clock
Fig. 4: Samples of collected data using the single-tone-based method, LTE-based method and the predictions of the proposed online LSTM model. The stars show the measurements used for online learning.
Fig. 5: Average clock offset with \(\Delta_{\text{online}}\) of 25 minutes for the proposed online LSTM with different \(N_{\text{online}}\).
Fig. 3: (a) Residual error distribution. (b) Precision per ppm for both LTE and single-tone.
offset means a better synchronization performance. Among these four compensation methods, No compensation performs the worse. Compensating the ppm is necessary to enable low-overhead synchronization. Fig. 6 shows that our proposed online LSTM-based method, with \(N_{\text{online}}\) equals to six, outperforms the LTE-based compensation without LSTM, closing the gap compared to the single-tone-based compensation benchmark.
Table III presents the required \(\Delta t_{\text{online}}\) for the considered compensation methods. An optimal \(\Delta t_{\text{online}}\) can be chosen depending on the network's synchronization requirements and the maximum allowed synchronization overhead. For example, when a system requires a maximum clock offset of 10 \(\upmu\)sec for 90% of the time, the corresponding \(\Delta t_{\text{online}}\) can be measured for the different compensation methods considered as shown in Table III. Considering the proposed method, the network only has to re-synchronize every 55 minutes. Moreover, Table III illustrates that the proposed method reduces the synchronization overhead by around 95% compared to _No compensation_ and more than 50% compared to the _LTE-based without LSTM_ compensation. Finally, the computation time of our proposed method is assessed on a Raspberry Pi 4. The proposed method's computation times for initial training, prediction, and online learning phases are 14.9 seconds, 22.52 ms, and 242.1 ms, respectively.
## VI Conclusion
We proposed an online LSTM model for clock skew and drift compensation by exploiting temperature and LTE-based oscillator's ppm measurements, aiming to reduce over-the-air synchronization overhead in distributed wireless networks. The proposed model has been validated with real-life measurements and compared to different compensation methods, such as constant compensation with LTE-based and single-tone-based measurements. Our results showed that with LTE-based ppm measurements, the proposed online LSTM method could reduce the synchronization overhead needed to maintain a synchronization precision of 10 \(\upmu\)sec by several tens of minutes compared to methods with periodic synchronization signals without online learning. Fine-tuning the proposed method to be more application specific is an interesting future work.
|
2301.10928 | TIP: A Trust Inference and Propagation Model in Multi-Human Multi-Robot
Teams | Trust has been identified as a central factor for effective human-robot
teaming. Existing literature on trust modeling predominantly focuses on dyadic
human-autonomy teams where one human agent interacts with one robot. There is
little, if not no, research on trust modeling in teams consisting of multiple
human agents and multiple robotic agents.
To fill this research gap, we present the trust inference and propagation
(TIP) model for trust modeling in multi-human multi-robot teams. We assert that
in a multi-human multi-robot team, there exist two types of experiences that
any human agent has with any robot: direct and indirect experiences. The TIP
model presents a novel mathematical framework that explicitly accounts for both
types of experiences. To evaluate the model, we conducted a human-subject
experiment with 15 pairs of participants (N=30). Each pair performed a search
and detection task with two drones. Results show that our TIP model
successfully captured the underlying trust dynamics and significantly
outperformed a baseline model. To the best of our knowledge, the TIP model is
the first mathematical framework for computational trust modeling in
multi-human multi-robot teams. | Yaohui Guo, X. Jessie Yang, Cong Shi | 2023-01-26T04:25:53Z | http://arxiv.org/abs/2301.10928v1 | # TIP: A Trust Inference and Propagation Model in Multi-Human Multi-Robot Teams
###### Abstract.
Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if not no, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, there exist two types of experiences that any human agent has with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (\(N=30\)). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
team of teams, multi-operator multi-autonomy (MOMA) +
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer
and prove theoretically that trust converges after repeated (direct and indirect) interactions. To evaluate the proposed TIP model, we conducted a human-subject experiment with 15 pairs of participants (\(N=30\)). Each pair worked with two drones to perform a threat detection task for 15 sessions. We compared the TIP model (i.e., accounts for both the direct and indirect experiences) and a direct-experience-only model (i.e., only accounts for the direct experience a human agent has with a robot). Results show that the TIP model successfully captured people's trust dynamics with a significantly smaller root-mean-square error (RMSE) compared to the direct-experience-only model. To the best of our knowledge, the proposed TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
## 2. Related Work
Several computational trust models in dyadic human-robot teams exist (K
including \(x\)'s prior experiences \(\alpha_{0}^{x,A}\) and \(\beta_{0}^{x,A}\), the unit direct experience gains \(s_{A}^{x}\) and \(f_{A}^{x}\), and the unit indirect experience gains \(\bar{s}_{A}^{x}\) and \(\bar{f}_{A}^{x}\). We denote the indices of \(x\)'s direct and indirect trust updates with \(A\) up to time \(k\) as \(D_{k}\) and \(\bar{D}_{k}\), respectively. Then, we can compute \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\), according to Eqs. (1) and (2), as
\[\alpha_{k}^{x,A}= \alpha_{0}^{x,A}+s^{x,A}\sum_{j\in D_{k}}p_{j}^{A}+\hat{s}\sum_{j \in\bar{D}_{k}}t_{j}^{x,y}\left[t_{j}^{y,A}-t_{j-1}^{x,A}\right]^{+}\] \[\beta_{k}^{x,A}= \beta_{0}^{x,A}+f^{x,A}\sum_{j\in D_{k}}\bar{p}_{j}^{A}+\hat{f} \sum_{j\in\bar{D}_{k}}t_{j}^{x,y}\left[t_{j-1}^{x,A}-t_{j}^{y,A}\right]^{+}. \tag{3}\]
The optimal parameter \(\theta_{x}^{x,A}\) maximizes the log likelihood function
\[H\left(\theta^{x,A}\right):=\sum_{k=0}^{K}\log\text{Beta}\left(t_{k}^{x,A} \middle|\alpha_{k}^{x,A},\beta_{k}^{x,A}\right), \tag{4}\]
where \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\) are defined in Eq. (3).
We note that \(\log\text{Beta}(t_{k}^{x,A}\middle|\alpha_{k}^{x,A},\beta_{k}^{x,A})\) is concave in \(\theta^{x,A}\) by the composite rule that the function is concave in \((\alpha_{k}^{x,A},\beta_{k}^{x,A})\) and \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\) are non-decreasing linear functions of \(\theta^{x,A}\). Consequently, \(H(\theta^{x,A})\) is concave in \(\theta^{x,A}\) because it is a summation of several concave functions. Therefore, we can run the gradient descent method to compute the optimal parameters.
## 4. Human-Subject Study
We conducted a human-subject experiment to evaluate the proposed model. The experiment, inspired by (Han et al., 2017), simulated a threat detection task, where two human agents work with two smart drones to search for threats at multiple sites.
### Participants
A total of \(N=30\) participants (average age = 25.3 years, SD = 4.3 ages, 16 females, 14 males) with normal or corrected-to-normal vision formed 15 teams and participated in the experiment. Each participant received a base payment of $15 and a bonus of up to $10 depending on their team performance. To promote cooperation between a pair of players, team performance instead of individual performance was used to calculate the bonus.
### Experimental Task and Design
In the experiment, a pair of participants performed a simulated threat detection task with two assistant drones for \(K=15\) sessions on two separate desktop computers. At each session, each participant was assigned one drone and worked on the detection tasks. After the session, they were asked to report their trust in each drone and their trust in their human teammate. For clarity, we named the two drones \(A\) and \(B\) and colored them in red and blue, respectively; and we denoted the participants \(x\) and \(y\). A trust rating is denoted as \(t_{k}^{a,b}\), where the superscript \(a\in\{x,y\}\) stands for the trustor, the superscript \(b\in\{x,y,A,B\}\) stands for the trustee, and the subscript \(k\) is the session index. For example, \(t_{2}^{x,A}\) is person \(x\)'s trust in drone \(A\) after the 2nd session. The range of a trust rating is \([0,1]\), where 0 stands for "(do) not trust at all" and 1 stands for "trust completely". The flow of the experimental task is illustrated in Fig. 2(a).
**Initial trust rating**: At the start, each participant gave their initial trust in the two drones based on their prior experience with automation/robots. Additionally, they gave their initial trust in each other. These trust ratings were indexed by 0, e.g., \(x\)'s initial trust rating on \(A\) was denoted as \(t_{0}^{x,A}\).
**Robot assignment**: At each session, each participant was randomly assigned one drone as his or her assistant robot.
**Detection task**: Each session consisted of 10 locations to detect. As shown in Fig. 2(b), four views were present at each location. If a threat, which appeared like a combatant, was in any of the views, the participant should click the 'Danger' button; otherwise, they should click the 'Clear' button. Meanwhile, his or her drone would assist and highlight a view if the drone detected a threat there. In addition, a 3-second timer was set for each location. If a participant did not click either button before the timer counted down to zero, the testbed would move to the next location automatically. After all the 10 locations, an end-of-session screen was shown, displaying how many correct choices the participant and the drone had made in the current session. Correct choices mean correctly identifying threats or declaring 'Clear' within 3 seconds.
**Trust rating:** After each session, participants reported three trust values. First, each participant updated his or her trust in the drone s/he just worked with, i.e., through direct experience. Next, each participant submitted and communicated their trust score to their human teammate. After that, each participant updated his or her trust in the drone the human teammate just worked with (i.e., the other drone) and his or her trust in the human teammate. After participants completed all 15 sessions, the experiment ended.
### Experimental Procedure
Before the experiment, each participant signed a consent form and filled out a demographic survey. Two practice sessions were provided, wherein a practice drone was used to assist the participants.
Figure 3. Experimental task and design
The participants were told that the practice drone differed from the two drones used in the real experiment. After the experiment started, the assignment of drones was randomized in each group. Specifically, we assigned drone \(A\) with equal change to either participant and then assigned drone \(B\) to the other participant. The threat detection accuracy of the practice drone, drone \(A\), and drone \(B\) were set to 80%, 90%, and 60%, respectively.
## 5. Results and Discussion
We use the gradient descent method in Sec. 3.3 to compute the optimal parameters \(\theta_{s}^{p_{i},A}\) and \(\theta_{s}^{p_{i},B}\) for each participant \(p_{i}\). The fitting results are shown in Fig. 4. We calculate the performance measurements of drone \(A\) at session \(k\) as \(\rho_{k}^{A}=A_{k}/10\) and \(\overline{p}_{k}^{A}=1-p_{k}^{A}\), where \(A_{k}\) is the number of correct choices drone \(A\) made in the \(k\)th session; and we define \(\rho_{k}^{B}\) and \(\overline{p}_{k}^{B}\) similarly. To measure the performance of the model, we calculate the fitting error at each session for each participant as \(\epsilon_{k}^{p_{i},R}=|\mu_{k}^{p_{i},R}-\mu_{k}^{p_{i},R}|\), \(R\in\{A,B\}\), where \(\epsilon_{k}^{p_{i},R}\) is the participant's self-reported trust while \(\mu_{k}^{p_{i},R}\) is the expected trust defined in section 3.2 and computed based on \(\theta_{s}^{p_{i},R}\); and, we calculate the root-mean-square error (RMSE) between the ground truth and the expected trust value as
\[\text{RMSE}^{R}=\left[\frac{1}{N}\sum_{i=1}^{N}\frac{1}{k+1}\sum_{k=0}^{K} \left(\epsilon_{k}^{p_{i},R}\right)^{2}\right]^{1/2},\]
for \(R\in\{A,B\}\). The RMSE results for the TIP model are \(\text{RMSE}^{A}=0.057\) and \(\text{RMSE}^{B}=0.082\).
Fig. 4 shows the fitting results of the TIP model. The shaded regions indicate the 90% confidence interval of the Beta distribution at each session. We observe that for most participants, such as 7-2 and 10-2, the proposed TIP model can accurately fit the trust curve with a narrow confidence interval; but for some other participants, such as 5-2 and 8-1, the model cannot fit the trust curve due to trust oscillation. However, in the latter case, the fitted curve has a similar trend with the ground truth and can cover most data points with the 90% confidence interval.
For comparison, we consider a direct-update-only model that only accounts for the direct experience a human agent has with a robot. The direct-update-only model is equivalent to the TIP model with zero unit indirect experience gains, i.e., \(\tilde{s}^{x,A}=\tilde{t}^{x,A}=0\). We recompute the model parameters for the direct-update-only model, and the corresponding RMSE errors are \(\text{RMSE}^{A}=0.085\) and \(\text{RMSE}^{B}=0.107\). Furthermore, we compare each participant's fitting error \(\tilde{\pi}^{p_{i},R}:=1/(K+1)\sum_{k=0}^{K}\epsilon_{k}^{p_{i},R}\) of the TIP model (\(A\): \(0.044\pm 0.037\); \(B\): \(0.069\pm 0.045\)) and that of the direct-update-only model (\(A\): \(0.075\pm 0.041\); \(B\): \(0.095\pm 0.051\)) using a paired-sample t-test. Results show that the former is significantly smaller than the latter, with \(t(29)=-6.18,p<.001\) for drone \(A\), and \(t(29)=-7.31\), \(p<.001\) for drone \(B\).
## 6. Acknowledgement
This work is supported by the National Science Foundation under Grant No. 2045009 and the Air Force Office of Scientific Research under Grant No. FA9550-20-1-0406.
Figure 4. Fitting results. Red curves are for drone \(A\) while blue curves are for drone \(B\). The solid lines are the participants’ self-reported trust, while the dashed lines are the expected trust value predicted by the model. The shaded areas indicate the 90% probability interval of the Beta distribution at each session. The index \(i\)-\(j\) stands for the \(j\)th participant in the \(i\)th group. |
2303.06933 | Distributionally Robust Chance-Constrained Optimization for Hierarchical
UAV-based MEC | Multi-access edge computing (MEC) is regarded as a promising technology in
the sixth-generation communication. However, the antenna gain is always
affected by the environment when unmanned aerial vehicles (UAVs) are served as
MEC platforms, resulting in unexpected channel errors. In order to deal with
the problem and reduce the power consumption in the UAV-based MEC, we jointly
optimize the access scheme and power allocation in the hierarchical UAV-based
MEC. Specifically, UAVs are deployed in the lower layer to collect data from
ground users. Moreover, a UAV with powerful computation ability is deployed in
the upper layer to assist with computing. The goal is to guarantee the quality
of service and minimize the total power consumption. We consider the errors
caused by various perturbations in realistic circumstances and formulate a
distributionally robust chance-constrained optimization problem with an
uncertainty set. The problem with chance constraints is intractable. To tackle
this issue, we utilize the conditional value-at-risk method to reformulate the
problem into a semidefinite programming form. Then, a joint algorithm for
access scheme and power allocation is designed. Finally, we conduct simulations
to demonstrate the efficiency of the proposed algorithm. | Can Cui, Ziye Jia, Chao Dong, Zhuang Ling, Jiahao You, Qihui Wu | 2023-03-13T08:56:59Z | http://arxiv.org/abs/2303.06933v1 | # Distributionally Robust Chance-Constrained Optimization for Hierarchical UAV-based MEC
###### Abstract
Multi-access edge computing (MEC) is regarded as a promising technology in the sixth-generation communication. However, the antenna gain is always affected by the environment when unmanned aerial vehicles (UAVs) are served as MEC platforms, resulting in unexpected channel errors. In order to deal with the problem and reduce the power consumption in the UAV-based MEC, we jointly optimize the access scheme and power allocation in the hierarchical UAV-based MEC. Specifically, UAVs are deployed in the lower layer to collect data from ground users. Moreover, a UAV with powerful computation ability is deployed in the upper layer to assist with computing. The goal is to guarantee the quality of service and minimize the total power consumption. We consider the errors caused by various perturbations in realistic circumstances and formulate a distributionally robust chance-constrained optimization problem with an uncertainty set. The problem with chance constraints is intractable. To tackle this issue, we utilize the conditional value-at-risk method to reformulate the problem into a semidefinite programming form. Then, a joint algorithm for access scheme and power allocation is designed. Finally, we conduct simulations to demonstrate the efficiency of the proposed algorithm.
Unmanned aerial vehicles, multi-access edge computing, distributionally robust optimization, conditional value-at-risk.
## I Introduction
Due to the rapid development of smart devices and artificial intelligence, there exist a growing number of application data to be processed, which is an outstanding characteristic in the sixth generation of communication system [1]. Besides, an emergent architecture, multi-access edge computing (MEC), becomes prevalent in recent years, which contributes to reducing the system delay and improving the quality of service (QoS) [2]. Furthermore, unmanned aerial vehicles (UAVs) are well known due to the easy deployment and flexible movement, which attract tremendous attentions in both academics and industries. UAVs can serve as communication relay nodes, mobile base stations (BSs), as well as computing platforms for MEC [3, 4]. However, since there exist uncertain environmental factors such as wind, temperature, and airflow, the gains of UAVs are under uncertainty sets due to the environmental factors. For instance, the effect of wind results in the fluctuation of transmission gain within a certain range, and the extreme temperature leads to transmission gain errors. The effects of such unpredictable errors cannot be ignored in practical applications.
As illustrated that UAV-enabled MEC system can provide benefits for energy-efficient task offloading, there exist a couple of related works [5]. For example, [6] studies a UAV-based mobile cloud computing system to minimize total energy consumption. In [7], Zeng _et al._ propose an algorithm to maximize the throughput by jointly optimizing the trajectory of the UAV swarm along with the transmit power. [8] presents a UAV-assisted MEC for task offloading which is solved with a deep reinforcement learning method. In [9], a response delay optimization algorithm is suggested for the deployment of UAVs as well as MEC servers. However, the uncertain errors of transmission gains are ignored or described as a certain distribution in the aforementioned works, which are not realistic in the practical environment. As above, in this paper, we consider the errors without distribution information to minimize the total power consumption in the hierarchical UAV-based MEC. Since there exist unpredictable parameters such as gain errors, which are in an uncertainty set, the proposed optimization problem is still intractable. Hence, a distibutionally robust optimization (DRO) problem considering the uncertainty of gains is further proposed to deal with this issue.
Both the stochastic programming and robust optimization can be applied to uncertain optimization problems, and the distribution information of random parameters is necessary, which cannot be obtained in some scenarios [10, 11]. However, DRO can be implemented without such detailed statistical information and provide more conservative solutions [12]. Besides, without distribution information on the uncertainty, the DRO problems are usually computationally-prohibitive. One approach to solving the DRO problem utilizes the conditional value-at-risk (CVaR) with limited statistical information [13, 14]. Utilizing CVaR with historical data, we formulate the chance-constrained problem into a semidefinite programming (SDP) form, which can be handled with feasible solutions [15]. In this paper, we build the uncertainty set for the error parameters with historical data, enabling more practical applications. Accordingly, the system model can be formulated as a DRO problem and solved via the CVaR mechanism.
In detail, we propose a hierarchical UAV-assisted MEC system to complete data processing and transmit data back to the ground station. The system is applicable to many real scenarios, such as real-time monitoring and disaster rescue for remote areas short of BS coverage [16, 17]. Note that the UAVs are always constrained by energy, the total power consumption should be optimized, and the QoS of users also should be guaranteed. Taking into account the errors from antenna gains, we propose a chance-constrained problem with the uncertainty set and handle it by the CVaR mechanism. The main contributions of our work are summarized as follows.
* A hierarchical UAV-based MEC system is proposed, including the lower-layer UAVs to collect data and the upper-layer UAV for relay. Besides, both layers of UAVs are equipped with computation resources for MEC. Furthermore, the gain uncertainty from transceivers is considered for practical scenarios, and we present a corresponding system model to minimize the total power consumption under such uncertainty.
* The problem is formulated with an uncertainty set for transceiver gain errors, and to handle this issue, we propose a DRO-based mechanism, which is non-convex. Then, we approximate the chance constraints by the CVaR method and reformulate the issue into a tractable SDP form.
* To deal with the reformulated SDP problem, we present an algorithm to jointly optimize both the access scheme and power allocation. Finally, we conduct simulations to evaluate the performance of the proposed algorithm, and the results verify the effectiveness.
The rest of this paper is organized as follows. Section II presents the system model and corresponding problem formulation. Section III employs the CVaR-based mechanism to reformulate the original problem with chance constraints into a SDP form, and the corresponding algorithm is designed. Simulation results and the analyses are provided in Section IV. Finally, conclusions are drawn in Section V.
## II System Model and Problem Formulation
In this paper, we consider a hierarchical UAV-assisted MEC system, in which lower-layer UAVs are deployed to collect and compute the data produced by ground users and transmit them back to the BS. Due to the limited computation resources of UAVs, a UAV with strong computing ability is deployed in the upper layer to handle the compute-intensive tasks. Lower-layer UAVs can complete the computation tasks locally and transmit the results to the BS, or transmit the data to the upper-layer UAV for further computation to reduce power consumption. Then, the upper-layer UAV transmits the processed results to the ground BS. The hierarchical UAV-based MEC scenario is shown in Fig. 1.
### _System Model_
#### Ii-A1 Communication Model
In this subsection, the communication model of UAV-to-UAV and UAV-to-BS is investigated. We utilize \(\mathcal{N}=\{1,2,\ldots,i,\ldots,N\}\) to indicate the set of UAVs of the lower layer. Let \(p_{i}^{d}\) denote the transmit power of _i-th_ lower-layer UAV. \(g_{i}\) denotes the _i-th_ UAV transceiver antenna gain and \(B_{i}\) indicates the bandwidth. Consequently, the data transmission rate from the _i-th_ UAV in the lower layer to the upper-layer UAV or the BS is expressed as:
\[r_{i}=B_{i}\log_{2}(1+\frac{\gamma_{i}p_{i}^{d}{\left|{g_{i}}\right|}^{2}}{ \sigma_{s}^{2}}),\forall i\in\{1,2,\ldots,N\}, \tag{1}\]
where \(\sigma_{s}^{2}\) is the variance of white Gaussian noise with a mean value of \(0\). \(\gamma_{i}\) is the path loss related to the distance between _i-th_ lower-layer UAV and upper-layer UAV, or between _i-th_ lower-layer UAV and BS.
We assume the data transmission adopts the orthogonal multiple access technology in this system, so the interference between different data can be ignored. Moreover, only one powerful UAV is considered in the upper layer.
Let \(B_{h}\) denote the bandwidth of the upper-layer UAV. \(p_{h}^{d}\) is the transmit power, \(g_{h}\) denotes the transceiver antenna gain of the UAV hovering in the upper layer, and \(\gamma_{h}\) represents the path loss. According to the Shannon formula, the data transmission rate of the upper layer UAV towards the BS is calculated as:
\[r_{h}=B_{h}\log_{2}(1+\frac{\gamma_{h}p_{h}^{d}{\left|{g_{h}}\right|}^{2}}{ \sigma_{s}^{2}}). \tag{2}\]
We assume \(L_{i}\) represents the data length transmitted by the _i-th_ UAV. \(x_{i}\in\{0,1\}\) is a binary indicator for computing mode. \(x_{i}=1\) represents the _i-th_ UAV transmits the users' data to the upper-layer UAV for further computation, and \(x_{i}=0\) denotes the _i-th_ UAV computes collected data locally and transmit them back to the BS.
Based on the above formulas, we can obtain the transmission delay as follows:
\[t_{i}^{d}=\frac{L_{i}}{r_{i}},\forall i\in\{1,2,\ldots,N\}, \tag{3}\]
and
\[t_{h}^{d}=\frac{\sum_{i}^{N}{x_{i}L_{i}}}{r_{h}}, \tag{4}\]
where \(t_{i}^{d}\) denotes the transmission delay of the _i-th_ UAV in the lower layer, and \(t_{h}^{d}\) is the transmission delay of the upper-layer
Fig. 1: A hierarchical UAV-based MEC scenario.
UAV, respectively. Further, the total transmission power \(p_{r}^{d}\) of UAVs in the lower layer is
\[p_{r}^{d}=\sum_{i=1}^{N}p_{i}^{d}. \tag{5}\]
#### Ii-A2 Uncertainty Model
Take into account the transceiver antenna gain errors influenced by the environmental factors denoted as \(\Delta g_{i}\) and \(\Delta g_{h}\), i.e.,
\[g_{i}=\overline{g}_{i}+\Delta g_{i},\forall i\in\{1,2,\ldots,N\}, \tag{6}\]
and
\[g_{h}=\overline{g}_{h}+\Delta g_{h}, \tag{7}\]
where \(\overline{g}_{i}\) and \(\overline{g}_{h}\) indicate the theoretical antenna transmission gain of the _i-th_ lower-layer UAV and upper-layer UAV, respectively. Similarly, \(\Delta g_{i}\) and \(\Delta g_{h}\) represent the antenna gain errors produced by actual environmental impacts. The estimation error \(\Delta g_{i}\) and \(\Delta g_{h}\) are assumed to follow an unknown distribution \(\mathbb{P}\) with mean \(\mathbb{E}_{\mathbb{P}}(\Delta g_{i})=\mu_{i}\) and \(\mathbb{E}_{\mathbb{P}}(\Delta g_{h})=\mu_{h}\), as well as the variance \(\mathbb{D}_{\mathbb{P}}(\Delta g_{i})=\sigma_{i}^{2}\) and \(\mathbb{D}_{\mathbb{P}}(\Delta g_{h})=\sigma_{h}^{2}\), respectively. Uncertainty set \(\mathcal{P}\) represents for all possible probability distributions of \(\Delta g_{i}\) and \(\Delta g_{h}\), i.e., \(\mathbb{P}\in\mathcal{P}\).
#### Ii-A3 Computation Model
Let variable \(c_{i}\) represent the complexity of the computing tasks processed by the _i-th_ UAV. \(\eta_{i}\) and \(\eta_{h}\) denote the computational power consumption coefficient of the _i-th_ lower-layer UAV and the upper-layer UAV, respectively, which represent the power required to calculate the data of each cycle. Then we can formulate the computation power of _i-th_ lower-layer UAV and the upper-layer UAV as:
\[p_{i}^{c}=\eta_{i}c_{i},\forall i\in\{1,2,\ldots,N\}, \tag{8}\]
and
\[p_{h}^{c}=\eta_{h}\sum_{i=1}^{N}x_{i}c_{i}. \tag{9}\]
With the above formulations we can obtain the total computation power consumption of UAVs in the lower layers as:
\[p_{r}^{c}=\sum_{i=1}^{N}(1-x_{i})p_{i}^{c}. \tag{10}\]
#### Ii-A4 Power Consumption
According to the above discussion, ignoring the power required for UAV hovering and flying, which is a constant value, the total power is expressed as:
\[P_{total}=p_{r}^{d}+p_{h}^{d}+p_{r}^{c}+p_{h}^{c}. \tag{11}\]
### _Problem Formulation_
In this part, we focus on reducing the total power in the system for a better service. The target is to minimize \(P_{total}\) by jointly optimizing the UAV access scheme and power allocation when the QoS meets users' demands. Hence, the problem is formulated as
\[\textbf{P0}: \min_{x_{i},p_{i}^{d},p_{h}^{d}}P_{total},\] (12) s.t. \[C1:\textbf{Pr}\{t_{i}^{d}\leq t_{i,max}^{d}\}\geq\alpha_{1}, \forall i, \tag{13}\] \[C2:\textbf{Pr}\{t_{h}^{d}\leq t_{h,max}^{d}\}\geq\alpha_{2},\] (14) \[C3:0\leq\sum_{i=1}^{N}x_{i}\leq m, \tag{15}\]
\[C4:x_{i}\in\{0,1\},\forall i, \tag{16}\] \[C5:p_{i}^{d}\geq 0,\forall i,\] (17) \[C6:p_{h}^{d}\geq 0, \tag{18}\]
where \(C1\) and \(C2\) are the chance constraints, indicating that the transmission delays of UAVs are constrained in a probabilistic manner. In other words, the delays should not be larger than the tolerable delay \(t_{i,max}^{d}\) and \(t_{h,max}^{d}\) with probabilities of \(\alpha_{1}\) and \(\alpha_{2}\) at a minimum, respectively. Futhermore, \(\alpha_{1}\), \(\alpha_{2}\in(0,1)\). \(C3\) denotes that the upper-layer UAV with powerful computation ability can at most access \(m\) UAVs at the same time. \(C4\) denotes \(x_{i}\) is a binary variable. \(C5\) and \(C6\) guarantee that \(p_{i}^{d}\) and \(p_{h}^{d}\) are non-negative continuous variables.
Due to the chance contraints (13) and (14), the transmission delay \(t_{i}^{d}\) and \(t_{h}^{d}\) in **P0** are difficult to obtain. However, there exists one effective way to handle the problem with distributionally robust method. Specifically, let \(\underset{\mathbb{P}\in\mathcal{P}}{inf}\) represent the lower bound of the probability under the probability distribution \(\mathbb{P}\), and (13) and (14) can be rewritten by:
\[\underset{\mathbb{P}\in\mathcal{P}}{inf}\quad\textbf{Pr}_{\mathbb{P}}\{t_{i}^ {d}\leq t_{i,max}^{d}\}\geq\alpha_{1},\forall i\in\{1,2,\ldots,N\}, \tag{19}\]
and
\[\underset{\mathbb{P}\in\mathcal{P}}{inf}\quad\textbf{Pr}_{\mathbb{P}}\{t_{h}^ {d}\leq t_{h,max}^{d}\}\geq\alpha_{2}, \tag{20}\]
respectively, which are distributionally robust chance constraints (DRCCs).
## III Reformulation and Algorithm
Since the DRCC problems (19) and (20) are intractable with the uncertainty set, in this section, we employ the CVaR mechanism to handle the issue and then design an algorithm to obtain the final solution.
### _CVaR Based Mechanism_
Generally, the value-at-risk (VaR) of a variable \(u\) with the safety factor \(\alpha\) is defined as the minimal value of \(v\), and \(u\) is no more than \(v\) at a possibility of \(\alpha\)[18], i.e.,
\[VaR_{\alpha}(u)=\min\{v|P(u\leq v)\geq\alpha\}. \tag{21}\]
It is noted that VaR is non-convex and discontinuous. Based on VaR, we can consequently propose the definition of CVaR, which is defined as the conditional expectation of \(u\) when \(u\geq VaR_{\alpha}(u)\), i.e.,
\[CVaR_{\alpha}(u)=E[u|u\geq VaR_{\alpha}(u)]. \tag{22}\]
The relationship between VaR and CVaR is shown in Fig. 2. It is obvious that \(CVaR_{\alpha}(u)\geq VaR_{\alpha}(u)\)[12]. Furthermore, CVaR is a conservative approximate estimation of loss, which is robust. As is proposed in [12], for a given measurable loss function \(\varphi(\xi)\): \(\mathbb{R}^{k}\rightarrow\mathbb{R}\), CVaR under the safety factor \(\alpha\) concerning the probability distribution \(\mathbb{P}\) on \(\mathbb{R}^{k}\) is expressed as:
\[\mathbb{P}-CVaR_{\alpha}(\varphi(\xi))=\underset{\beta\in\mathbb{R}}{inf}\{ \beta+\frac{1}{1-\alpha}\mathbb{E}_{\mathbb{P}}\left[\max\left(0,\varphi(\xi)- \beta\right)\right]\}. \tag{23}\]
As is shown in Fig. 2, according to the definition, we can then obtain the following formula:
\[\mathbb{P}\{\varphi(\xi)\leq\mathbb{P}-CVaR_{\alpha}(\varphi(\xi))\}\geq\alpha. \tag{24}\]
As above, the CVaR constraint can be formed as:
\[\begin{array}{c}\underset{\mathbb{P}\in\mathcal{P}}{sup}\quad\mathbb{P}-CVaR_ {\alpha}(\varphi(\xi))\leq 0,\forall\mathbb{P}\in\mathcal{P}\Leftrightarrow\\ \underset{\mathbb{P}\in\mathcal{P}}{inf}\quad\mathbb{P}\{\varphi(\xi)\leq 0 \}\geq\alpha.\end{array} \tag{25}\]
Formula (25) represents that the conservative approximation of DRCC on the right side can be constituted by the CVaR constraint on the left side. In other words, CVaR represents a convex function of the random parameter. Then, we can reformulate the CVaR constraint into a tractable SDP form.
**Lemma 1**: _Let \(\varphi(\xi)=\xi^{\top}\mathbf{\Theta}\xi+\mathbf{\theta}^{\top}\xi+\mathbf{\theta}^{0}\) for \(\mathbf{\Theta}\in\mathbb{S}^{k}\), \(\mathbf{\theta}\in\mathbb{R}^{k}\), and \(\mathbf{\theta}^{0}\in\mathbf{R}\), and then, the worst-case CVaR \(\underset{\mathbb{P}\in\mathcal{P}}{sup}\quad\mathbb{P}-CVaR_{\alpha}(\varphi (\xi))\) can be rewritten into a tractable SDP as follows:_
\[\begin{array}{c}inf\quad\beta+\frac{1}{1-\alpha}\langle\mathbf{\Omega},\mathbf{H} \rangle,\\ \text{s.t.}\quad\mathbf{H}\in\mathbb{S}^{k+1},\quad\beta\in\mathbb{R},\\ \mathbf{H}\succeq\mathbf{0},\\ \mathbf{H}-\begin{bmatrix}\mathbf{\Theta}&\frac{1}{2}\mathbf{\theta}\\ \frac{1}{2}\mathbf{\theta}^{\top}&\mathbf{\theta}^{0}-\beta\end{bmatrix}\succeq\mathbf{0}, \end{array} \tag{26}\]
_where \(\mathbf{\Omega}=\begin{bmatrix}\Sigma+\mu\mu^{\top}&\mu\\ \mu^{\top}&1\end{bmatrix}\), \(\Sigma\in\mathbb{S}^{k}\) is the covariance of the random vector \(\xi\), and \(\mu\in\mathbb{R}^{k}\) is the mean matrix. Furthermore, \(\mathbf{H}\) is an auxiliary matrix and \(\beta\) is an auxiliary variable. \(\langle\mathbf{\Omega},\mathbf{H}\rangle=\operatorname{Tr}(\mathbf{\Omega}\mathbf{H})\) represents the trace scalar product. \(\mathbf{H}\succeq\mathbf{0}\) represents that the matrix \(\mathbf{H}\) is positive semidefinite [15]._
Therefore, according to Lemma 1, we transform the DRO into the form of SDP by the CVaR mechanism, which is a tractable and computationally efficient approximation technique. This form can be regarded as an extension of linear programming, and the effectiveness of this solution is proved in previous works [19].
### _Problem Reformulation_
As for (19), a DRCC problem, the first-order Taylor expansion of the loss function is
\[\varphi(\xi_{1})=L_{i}\sigma_{s}^{2}-B_{i}\gamma_{i}p_{i}^{d}t_{i,max}^{d}\xi _{1}^{2}, \tag{27}\]
where \(\xi_{1}\) denotes the random parameter \(|g_{i}|\).
CVaR can construct a convex approximation for the chance constraints. According to Lemma 1, we transform (27) into a SDP problem, i.e.,
\[\begin{array}{c}inf\quad\beta_{1}+\frac{1}{1-\alpha_{1}}\langle\mathbf{\Omega}_ {1},\mathbf{H}_{1}\rangle\leq 0,\\ \text{s.t.}\quad\mathbf{H}_{1}\in\mathbb{S}^{2},\quad\beta_{1}\in\mathbb{R},\\ \mathbf{H}_{1}\succeq\mathbf{0},\\ \mathbf{H}_{1}-\begin{bmatrix}-B_{i}\gamma_{i}p_{i}^{d}t_{i,max}^{d}&0\\ 0&L_{i}\sigma_{s}^{2}-\beta_{1}\end{bmatrix}\succeq\mathbf{0},\end{array} \tag{28}\]
where \(\mathbf{H}_{1}\) and \(\beta_{1}\) are both auxiliary variables. Let \(\mu_{g_{i}}=\overline{g}_{i}+\mu_{i}\) present the mean of \(g_{i}\), we have
\[\Omega_{1}=\begin{bmatrix}\sigma_{i}^{2}+\mu_{g_{i}}\mu_{g_{i}}^{\top}&\mu_{g _{i}}\\ \mu_{g_{i}}^{\top}&1\end{bmatrix}. \tag{29}\]
Likewise, let \(\xi_{2}\) denote the random parameter \(|g_{h}|\), and the loss function in (20) can be rewritten as:
\[\varphi(\xi_{2})=\sum_{i=1}^{N}x_{i}L_{i}\sigma_{s}^{2}-B_{h}\gamma_{h}p_{h}^{ d}t_{h,max}^{d}\xi_{2}^{2}, \tag{30}\]
which can be further presented in a SDP form, i.e.,
\[\begin{array}{c}inf\quad\beta_{2}+\frac{1}{1-\alpha_{2}}\langle\mathbf{\Omega}_ {2},\mathbf{H}_{2}\rangle\leq 0,\\ \text{s.t.}\quad\mathbf{H}_{2}\in\mathbb{S}^{2},\quad\beta_{2}\in\mathbb{R},\\ \mathbf{H}_{2}\succeq\mathbf{0},\\ \mathbf{H}_{2}-\begin{bmatrix}-B_{h}\gamma_{h}p_{h}^{d}t_{h,max}^{d}&0\\ 0&\sum_{i=1}^{N}x_{i}L_{i}\sigma_{s}^{2}-\beta_{2}\end{bmatrix}\succeq\mathbf{0}. \end{array} \tag{31}\]
Similarly, \(\mathbf{H}_{2}\) is the auxiliary matrix, and \(\beta_{2}\) is the auxiliary variable for the CVaR constraint. \(\mu_{g_{h}}=\overline{g}_{h}+\mu_{h}\) is the mean value of \(g_{h}\). Then we obtain
\[\Omega_{2}=\begin{bmatrix}\sigma_{h}^{2}+\mu_{g_{h}}\mu_{g_{h}}^{\top}&\mu_{g _{h}}\\ \mu_{g_{h}}^{\top}&1\end{bmatrix}. \tag{32}\]
Finally, based on above discussions, the chance-constrained problem (12) under the uncertainty set can be reformulated into a SDP form, i.e.,
\[\begin{array}{c}\textbf{P1}:\quad\underset{\mathbf{H}_{1},\mathbf{H}_{2},\beta_{1},\beta_{2}}{\min}\quad P_{total},\\ \text{s.t.}\quad(15)-(18),(28),(31),\\ \text{$\forall i\in\{1,2,\ldots,N\}$},\end{array}\]
which is still non-convex with the binary variable \(x_{i}\), and continuous variables \(p_{i}^{d}\) and \(p_{h}^{d}\). The problem is in the form of mixed integer non-linear programming. To tackle with this problem, we design a joint optimization algorithm on access scheme and power allocation, as shown in Algorithm 1.
Fig. 2: Relationship between VaR and CVaR.
```
0: Network parameters, \(\mathbf{c_{i}}\), \(\mathbf{L_{i}}\).
0:\(\mathbf{x_{i}}\), \(\mathbf{p_{i}^{d}}\), \(\mathbf{p_{h}^{d}}\) and minimum system power consumption \(\mathbf{optval}\).
1:repeat
2: Solve the subproblem of access-scheme on \(\mathbf{x_{i}}\) to obtain \(\mathbf{x_{i}^{*}}\) and the power consumption \(\mathbf{\tau_{1}}\). \(\mathbf{x_{i}}\to\mathbf{x_{i}^{*}}\).
3: Solve the subproblem of power \(\mathbf{p^{d}}\) to obtain \(\mathbf{p^{d*}}\) and the power consumption \(\mathbf{\tau_{2}}\). \(\mathbf{p^{d}}\to\mathbf{p^{d*}}\).
4:until Results converge.
5: Obtain the final total power \(\mathbf{optval}\).
```
**Algorithm 1** Joint Optimization Algorithm on Access Scheme and Power Allocation.
### _Algorithm Design_
It is observed that due to the two decision variables \(x\) and \(p^{d}\), problem **P1** is still non-convex. To solve this problem, we divide it into two subproblems. The first subproblem is concerning the access scheme \(x\), and the second subproblem is related to the transmission power allocation plan \(p^{d}\).
As shown in step 2 of Algorithm 1, in the first subproblem, an initial value \(p^{d}\) is assumed to minimize the computation power consumption and ensure the access scheme, i.e., \(x\). In other words, the lower-layer UAVs make a choice to complete the computation work locally or transform it to the upper-layer UAV based on the task complexity. The problem with integers is solved by MOSEK. Then, in the second subproblem, according to the access scheme \(x^{*}\) that is obtained in step 2, we can further substitute it to solve the transmission power allocation program. Specifically, the subproblem can be tackled by fixing the access scheme, i.e., \(x^{*}\). Then, in step 3 of Algorithm 1, the transmission power is allocated and variables \(p_{i}^{d}\) as well as \(p_{h}^{d}\) are obtained. Repeat the process until the solutions of two subproblems converge. The final result, i.e., the total power consumption \(P_{total}\) in the hierarchical UAV-assisted MEC system, can be figured out as \(\mathbf{optval}\).
is considered in a more realistic world, can obtain a better result than that in the ideal situation. Specifically, considering the errors between the real situation and the ideal situation, the designed algorithm only needs to obtain the mean and variance of the gain errors from the historical data, instead of obtaining the true distribution or probability function of the errors. The experimental results verify the effectiveness of the proposed joint optimization algorithm on access scheme and power allocation.
In Fig. 5, we investigate the performance of total power consumption, i,e., the corresponding system power _v.s._ lower-layer UAV numbers for different maximum transmission delays. It depicts the influence of the maximum tolerance on the total power. With the increment of maximum delay to be satisfied, the transmission power consumption reduces. It is accounted for the fact that transmission rates become lower for all UAVs when the tolerance transmission delay increases.
## V Conclusions
In this paper, we study a hierarchical UAV-assisted MEC scenario to optimize the power consumption. To tackle with the potential uncertainty impacted by environmental factors, which is not ignorable in practical circumstances, a DRO problem based on the uncertainty set is proposed. Then, with the CVaR mechanism, the original problem is reformulated into a SDP form. To jointly optimize the reformulation on the access scheme and power allocation, we further design an algorithm to handle it and obtain the final solution. By conducting simulation experiments, the robustness and feasibility of the proposed algorithm compared with the non-robust method are verified.
|
2302.09929 | Magnetic frame-dragging correction to the electromagnetic solution of a
compact neutron star | Neutron stars are usually modelled as spherical, rotating perfect conductors
with a predominant intrinsic dipolar magnetic field anchored to their stellar
crust. Due to their compactness, General Relativity corrections must be
accounted for in Maxwell's equations, leading to modified interior and exterior
electromagnetic solutions. We present analytical solutions for slowly-rotating
magnetised neutron stars taking into account the magnetic frame-dragging
correction. For typical compactness values, i.e. $R_s \sim 0.5 [R_*]$, we show
that the new terms lead to a percent order correction in the magnetic field
orientation and strength compared to the case with no magnetic frame-dragging
correction. Also, we obtain a self-consistent redistribution of the surface
azimuthal current. We verify the validity of the derived solution through
two-dimensional particle-in-cell simulations of an isolated neutron star.
Defining the azimuthal electric and magnetic field amplitudes during the
transient phase as observables, we prove that the magnetic frame-dragging
correction reduces the transient wave amplitude, as expected from the
analytical solution. We show that simulations are more accurate and stable when
we include all first-order terms. The increased accuracy at lower
spatiotemporal resolutions translates into a reduction in simulation runtimes. | R. Torres, T. Grismayer, F. Cruz, L. O. Silva | 2023-02-20T11:50:46Z | http://arxiv.org/abs/2302.09929v2 | # Magnetic frame-dragging correction to the electromagnetic solution of a compact neutron star
###### Abstract
Neutron stars are commonly modelled as a spherical, rotating perfect conductors with a predominant intrinsic dipolar magnetic field anchored to their stellar crust. If compact enough, General Relativity modifies Maxwell's equations, leading to changes in the interior and exterior electromagnetic solutions. We present analytic solutions for a slowly-rotating magnetized neutron star that include the frame-dragging correction to the magnetic field components. For typical compactness values, i.e. \(R_{s}\sim 0.5[R_{*}]\), we show that the new terms account for a \(0.43\%\) decrease in magnetic field strength at the equator and an average \(1\%\) vectorial angle correction, both compared to the case without the magnetic frame-dragging correction. This correction leads to a self-consistent redistribution of the surface azimuthal current. We tested the validity of the derived solution by prescribing it as an initial value problem to two-dimensional particle-in-cell simulations. We observe a lower early-stage transient amplitude which reflects the proximity between the derived and exact solutions. At later times, our solution reduces the azimuthal electric field amplitude by almost an order of magnitude, demonstrating that simulations are more accurate at the expense of a more involved initialization. We show that this can potentially lead to a reduction of simulation runtimes.
## I Introduction
Compact objects have long been theorized to power non-thermal emission. In their vicinity, general-relativistic (GR) effects can couple to strong electromagnetic (EM) fields and promote particle acceleration, pair creation and, potentially, pulsed emission. These plasma-mediated processes are dependent on the underlying magnetic field configuration. In neutron stars, these effects are critically important to take into account due to their very high intrinsic magnetic fields, of order \(10^{11}-10^{14}\) G.
The stationary electromagnetic solution to Maxwell's equations of an idealized magnetic star was first obtained by Deutsch [4]. This solution considered a perfect rotating spherical conductor with a misalignment between the dipolar magnetic moment and rotation axis, in flat spacetime background metric (Minkowski). The first works to include the effects of curved spacetimes were for non-rotating neutron stars, i.e. using a Schwarzschild background metric. These solutions have shown that the amplitude of the magnetic field increases in comparison to the flat spacetime case when considering the same dipolar moment [6; 12; 18] or multipolar moments [1]. As for slow-rotating neutron stars, the rigid-body rotation of the compact mass leads to a general-relativistic effect called the drag of the inertial frames or frame-dragging effect. This effect induces an electric field close to the central body that was shown to decrease the magnitude of the unipolar induction from the rotation of the stellar crust [11]. Solutions for the exterior electromagnetic fields were obtained for the aligned rotator [10] and the oblique rotator configuration [15; 16]. These works explored the implications for leptonic acceleration and subsequent radiation in pulsar vacuum gaps. Another approach allowed the exterior field lines to move faster than the crust, e.g. for a fast-rotating neutron star core [8; 11]. More recently, several numerical works, employing spectral methods to solve Maxwell's equations in the slow-rotation approximation of general relativity, have obtained approximate solutions up to third order in the spin parameter [13]. This work was generalized for the multipolar case in the strong gravity regime [14] to show that introducing small-scale magnetic field structures could enhance pair production in the vacuum gap and increase the plasma multiplicity, in agreement with observations.
The purpose of this paper is to extend the set of analytical solutions available in the literature to include the frame-dragging effect in both the electric and magnetic fields for the aligned rotator configuration. As we will show here, these corrections lead to more accurate simulations, with potential savings in runtimes. We use the 3+1 formalism of electrodynamics in curved spacetime as described in Section II. In Section III, we show how to obtain the stationary solution to an aligned rotator. In Section IV, we introduce the numerical setup that is used in Section V for a numerical realization of the solution derived, using a general-relativistic particle-in-cell code. Conclusions and future prospects are outlined in Section VI.
## II General relativity
For the sake of completeness, we revisit an intuitive way to split the spacetime into a universal time and an absolute space. This formalism is designated as the 3+1 split and allows for a user-friendly manner to treat vectorial and tensorial fields in general relativity [17]. Combining with the fact that even for very highly magnetized stars the electromagnetic contribution is negligibly small when compared with the total mass density, we can avoid the coupled Einstein-Maxwell's equations and solve the general-relativistic Maxwell's equations on a fixed curved background metric.
Throughout the paper, we use units in which \(c=G=1\) and the \((-,+,+,+)\) metric signature.
### Slowly-rotating spacetime
To describe the background metric of a compact neutron star, we adopt the slow-rotation approximation of the Kerr metric, which in the Boyer-Lindquist coordinate system \((t,r,\theta,\phi)\)
takes the form [17]:
\[\mathrm{d}s^{2}=-e^{2\Theta(r)}\mathrm{d}t^{2}+e^{2\Lambda(r)}\mathrm{d}r^{2}-2 \omega(r)r^{2}\sin^{2}\theta\mathrm{d}t\mathrm{d}\phi+r^{2}\mathrm{d}\Omega^{2}, \tag{1}\]
where \(\mathrm{d}\Omega^{2}=\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}\). This metric considers the Schwarzschild radius, \(R_{s}\), located beneath the stellar surface, at \(R_{*}\), and captures the differential rotation \(\omega(r)\) caused by the frame-dragging effect. This effect is associated with the angular velocity of a free-falling inertial frame. The slow-rotation approximation is constructed on the premise that the intrinsic angular velocity of the star, \(\Omega_{*}\), is much faster than \(\omega(r)\), given by
\[\omega(r)\equiv\frac{\mathrm{d}\phi}{\mathrm{d}t}=-\beta^{\phi}\approx 0.21 \Omega_{*}\frac{R_{s}}{R_{*}-R_{s}}\left(\frac{R_{*}}{r}\right)^{3}. \tag{2}\]
This system is divided into two domains: the interior and the exterior of the star. The exterior part is well known, and the metric functions are given by
\[e^{\Phi(r)}=e^{-\Lambda(r)}\equiv\alpha(r)=\sqrt{1-\frac{2M}{r}},\text{ for }r>R_{*}, \tag{3}\]
while the interior part is more complicated and requires knowledge of the constituents and structure of the star. Throughout this paper, we adopt a reduced model and consider a star with constant density, corresponding to the _stiff-matter_ equation of state [16].
### Maxwell's equations
In this fixed background metric, Maxwell's equations take the form [9]:
\[\mathbf{\nabla}\cdot\mathbf{E} =4\pi\rho, \tag{4}\] \[\mathbf{\nabla}\cdot\mathbf{B} =0,\] (5) \[-\partial_{t}\mathbf{B} =\mathbf{\nabla}\times(\alpha\mathbf{E}+\mathbf{\beta}\times\mathbf{B})\,,\] (6) \[\partial_{t}\mathbf{E} =\mathbf{\nabla}\times(\alpha\mathbf{B}-\mathbf{\beta}\times\mathbf{B})-\alpha \mathbf{j}+\rho\mathbf{\beta}, \tag{7}\]
where physical quantities \(\rho\), \(j\), \(\mathbf{B}\) and \(\mathbf{E}\) are measured by zero angular momentum observers (ZAMOs) in a frame that is corotating with absolute space. The shift vector, \(\mathbf{\beta}\), corresponds to the relative motion between the absolute space and the spherical coordinate grid, i.e. (\(r\), \(\theta\), \(\phi\)). The lapse function, \(\alpha\), is the ratio of the ticking rate of the local fiducial observer clock and the universal time \(t\). In a sense, one can think of the lapse function as a converter of quantities measured with local clocks to the universal time coordinate.
In this paper, we use the orthonormal basis vectors and vectorial components, i.e. \(e_{t}\equiv e_{i}/h_{i}\) and \(A^{i}\equiv h_{i}A^{i}\) such that \(\mathbf{A}=A^{i}e_{i}=A^{i}e_{i}\), where \(h_{i}^{2}\) are the diagonal terms of the spatial 3-metric. These basis vectors and vectorial components are often called the _orthogonal basis_ and _physical components_, respectively, and are very useful as they allow three-dimensional vectorial operations to be easily generalized to curved geometry. For example, the curl of a generic vector is given by:
\[\mathbf{\nabla}\times\mathbf{A}=\varepsilon^{ijk}\frac{\partial A_{j}}{\partial x^{i} }e_{k}=\frac{1}{h_{1}h_{2}h_{3}}\begin{vmatrix}h_{1}e_{1}&h_{2}e_{2}&h_{3}e_{ 3}\\ \dfrac{\partial}{\partial x^{1}}&\dfrac{\partial}{\partial x^{2}}&\dfrac{ \partial}{\partial x^{3}}\\ h_{1}A^{1}&h_{2}A^{2}&h_{3}A^{3}\end{vmatrix}. \tag{8}\]
Taking this into account, the general-relativistic Faraday (6) and Ampere (7) equations in the exterior domain, componentwise, translate to:
\[r\sin\theta\partial_{t}B^{\hat{r}} =-\alpha\partial_{\theta}\left(E^{\hat{\phi}}\sin\theta\right), \tag{9}\] \[r\alpha^{-1}\partial_{t}B^{\hat{\theta}} =\partial_{r}\left(\alpha rE^{\hat{\phi}}\right),\] (10) \[r\alpha^{-1}\partial_{t}B^{\hat{\phi}} =-\partial_{r}\left(r\alpha E^{\hat{\theta}}\right)+\partial_{ \theta}E^{\hat{r}}+\sin\theta\partial_{r}\left(\omega r^{2}B^{\hat{r}}\right)+\] \[+\omega r\alpha^{-1}\partial_{\theta}\left(\sin\theta B^{\hat{ \theta}}\right),\] (11) \[r\sin\theta\partial_{t}E^{\hat{r}} =\alpha\partial_{\theta}\left(B^{\hat{\phi}}\sin\theta\right)- \alpha r\sin\theta j^{\hat{r}},\] (12) \[r\alpha^{-1}\partial_{t}E^{\hat{\theta}} =-\partial_{r}\left(\alpha rB^{\hat{\phi}}\right)-rj^{\hat{\theta }},\] (13) \[r\alpha^{-1}\partial_{t}E^{\hat{\phi}} =\partial_{r}\left(r\alpha B^{\hat{\theta}}\right)-\partial_{ \theta}B^{\hat{r}}+\sin\theta\partial_{r}\left(\omega r^{2}E^{\hat{r}}\right)+\] \[+\omega\alpha^{-1}r\partial_{\theta}\left(\sin\theta E^{\hat{ \theta}}\right)-rj^{\hat{\phi}}-\omega\alpha^{-1}r^{2}\sin\theta\rho, \tag{14}\]
where we have neglected derivatives along the azimuthal direction, \(\partial_{\phi}\), as we will be tackling the aligned rotator case in an axisymmetric setup. In addition, we are interested in stationary solutions, meaning that \(\partial_{t}\) terms are set to zero from this point onwards.
## III Stationary solutions to an aligned rotator
In this section, we search for electromagnetic solutions of the general-relativistic Maxwell's equations presented in the previous section. We consider the aligned rotator case, where the magnetic and rotation axis are aligned (\(\chi=0^{\circ}\)). The aligned and the perpendicular (\(\chi=90^{\circ}\)) rotators constitute a basis to study the generic oblique rotator. However, in this paper, we solely focus on the first case.
As was addressed in the literature [e.g. 13], the rotating dipole in a fixed background metric is a complex problem for which only approximate solutions were found. We will follow closely the approach first presented by Rezzolla et al. [16], where each field component is expanded in powers of the frame-dragging frequency (or, Lense-Thirring frequency):
\[B^{\hat{\theta}} =B^{\hat{\theta}}_{(0)_{r}}(r)\cos\theta+B^{\hat{\theta}}_{(1)}(r, \theta)+B^{\hat{\theta}}_{(2)}(r,\theta)+\mathcal{O}(\omega^{3}), \tag{15}\] \[B^{\hat{\theta}} =B^{\hat{\theta}}_{(0)_{r}}(r)\sin\theta+B^{\hat{\theta}}_{(1)}(r, \theta)+B^{\hat{\theta}}_{(2)}(r,\theta)+\mathcal{O}(\omega^{3}),\] (16) \[E^{\hat{\rho}} =E^{\hat{\rho}}_{(0)}(r,\theta)+E^{\hat{\rho}}_{(1)}(r,\theta)+E^{ \hat{\rho}}_{(2)}(r,\theta)+\mathcal{O}(\omega^{3}),\] (17) \[E^{\hat{\theta}} =E^{\hat{\theta}}_{(0)}(r,\theta)+E^{\hat{\theta}}_{(1)}(r,\theta)+E^ {\hat{\theta}}_{(2)}(r,\theta)+\mathcal{O}(\omega^{3}),\] (18) \[E^{\hat{\phi}} =B^{\hat{\phi}}=0, \tag{19}\]
where the subscript in brackets gives the expansion order, e.g. \(B^{\hat{r}}_{(2)}\propto\omega^{2}\). Henceforth, we omit the explicit coordinate dependence and add it in subscript, e.g. \(B^{\hat{r}}_{(0)_{\sigma}}\left(r\right)\equiv B^{\hat{r}}_{(0)_{\sigma}}\). The azimuthal components are zero due to the axisymmetric constraint. The magnetic field at zeroth-order has the dipolar angular dependence, but we allow the remaining radial and angular eigenfunctions to be self-consistently generated from Maxwell's equations.
The approach we follow consists of inserting the ansatz (15)-(19) into equations (11) and (14), and closing the system with the divergencelessness constraint for the magnetic field (5) and electric field in vacuum (4). These equations are then expanded in powers of the frame-dragging frequency. In the following subsections, we will treat one order at a time, starting from the zeroth-order terms.
### Zeroth-order solutions
The zeroth-order terms correspond to the case where we neglect the frame-dragging effect, i.e. a rotating neutron star in a fixed Schwarzschild background metric.
Following the steps detailed above, and looking for separable solutions of the field components, i.e. \(E^{\hat{r}}_{(0)}\left(r,\theta\right)=E^{\hat{r}}_{(0)_{\sigma}}E^{\hat{r}}_{ (0)_{\sigma}}\), we find:
\[\partial_{r}\left(r\alpha E^{\hat{\theta}}_{(0)_{\sigma}}\right) E^{\hat{\theta}}_{(0)_{\sigma}}-E^{\hat{r}}_{(0)_{\sigma}}\partial_{\theta} \left(E^{\hat{r}}_{(0)_{\sigma}}\right)=0, \tag{20}\] \[\partial_{r}\left(r\alpha B^{\hat{\theta}}_{(0)_{\sigma}}\right) \sin\theta-B^{\hat{r}}_{(0)_{\sigma}}\partial_{\theta}\left(\cos\theta \right)=0,\] (21) \[\partial_{r}\left(r^{2}B^{\hat{r}}_{(0)_{\sigma}}\right)\sin \theta\cos\theta+\alpha^{-1}rB^{\hat{\theta}}_{(0)_{\sigma}}\partial_{\theta} \left(\sin^{2}\theta\right)=0,\] (22) \[\partial_{r}\left(r^{2}E^{\hat{r}}_{(0)_{\sigma}}\right)\sin \theta E^{\hat{r}}_{(0)_{\sigma}}+\alpha^{-1}rE^{\hat{\theta}}_{(0)_{\sigma}} \partial_{\theta}\left(\sin\theta E^{\hat{\theta}}_{(0)_{\sigma}}\right)=0, \tag{23}\]
where equations (20) and (21) correspond to the collection of zeroth-order terms of equations (11) and (14), respectively. Also, equations (22) and (23) are the collection of zeroth-order terms of equations (5) and (4), respectively.
The angular dependence of the electric field components can be determined by decoupling the angular and radial parts of equations (20) and (23). This condition can be achieved through:
\[E^{\hat{\theta}}_{(0)_{\sigma}} \propto\partial_{\theta}\left(E^{\hat{r}}_{(0)_{\sigma}}\right), \tag{24}\] \[\sin\theta E^{\hat{r}}_{(0)_{\sigma}} \propto\partial_{\theta}\left(\sin\theta E^{\hat{\theta}}_{(0)_{ \sigma}}\right), \tag{25}\]
which leads to the angular eigenfunction solution
\[E^{\hat{r}}_{(0)_{\sigma}} =3\cos^{2}\theta-1, \tag{26}\] \[E^{\hat{\theta}}_{(0)_{\sigma}} =\sin\theta\cos\theta, \tag{27}\]
and, consequently,
\[\sin\theta\cos\theta\left[\partial_{r}\left(r\alpha E^{\hat{\theta}}_{(0)_ {\sigma}}\right)+6E^{\hat{r}}_{(0)_{r}}\right] =0, \tag{28}\] \[\sin\theta\left[\partial_{r}\left(r\alpha B^{\hat{\theta}}_{(0)_ {r}}\right)+B^{\hat{r}}_{(0)_{r}}\right] =0,\] (29) \[\sin\theta\cos\theta\left[\partial_{r}\left(r^{2}B^{\hat{r}}_{(0) _{\sigma}}\right)+2\alpha^{-1}rB^{\hat{\theta}}_{(0)_{r}}\right] =0,\] (30) \[\sin\theta\left(3\cos^{2}\theta-1\right)\left[\partial_{r}\left(r ^{2}E^{\hat{r}}_{(0)_{\sigma}}\right)+\alpha^{-1}rE^{\hat{\theta}}_{(0)_{r}} \right] =0. \tag{31}\]
The radial eigenfunctions can be determined by solving the following differential equations, obtained by combining equation (28) with (31) and equation (29) with (30):
\[\partial_{r}\left(\alpha^{2}\partial_{r}\left(r^{2}E^{\hat{r}}_{ (0)_{r}}\right)\right)-6E^{\hat{r}}_{(0)_{r}} =0, \tag{32}\] \[\partial_{r}\left(\alpha^{2}\partial_{r}\left(r^{2}B^{\hat{r}}_{(0 )_{r}}\right)\right)-2B^{\hat{r}}_{(0)_{r}} =0,\] (33) \[B^{\hat{\theta}}_{(0)_{r}} =-\frac{\alpha}{2r}\partial_{r}\left(r^{2}B^{\hat{r}}_{(0)_{r}} \right),\] (34) \[E^{\hat{\theta}}_{(0)_{r}} =-\frac{\alpha}{r}\partial_{r}\left(r^{2}E^{\hat{r}}_{(0)_{r}} \right). \tag{35}\]
The solution to equations (32) and (33) can be obtained more easily if one recasts them into the Legendre form by introducing \(x=1-r/M\). The final solution here presented satisfies the physical condition of the vanishing field amplitude at infinity:
\[B^{\hat{r}}_{(0)_{r}} =\frac{C_{1}}{8}\left[\ln\left(1-\frac{2M}{r}\right)+\frac{2M}{r }\left(1+\frac{M}{r}\right)\right], \tag{36}\] \[B^{\hat{\theta}}_{(0)_{\sigma}} =-\frac{\alpha(r)C_{1}}{8}\left[\ln\left(1-\frac{2M}{r}\right)+ \frac{2M}{r}\left(1+\frac{M}{r-2M}\right)\right],\] (37) \[E^{\hat{r}}_{(0)_{r}} =\frac{C_{2}}{4}\left[\left(3-\frac{2r}{M}\right)\ln\left(1-\frac {2M}{r}\right)+\frac{2M}{3r}\left(3+\frac{M}{r}\right)-4\right],\] (38) \[E^{\hat{\theta}}_{(0)_{\sigma}} =-\frac{3\alpha(r)C_{2}}{2}\left[\left(1-\frac{r}{M}\right)\ln \left(1-\frac{2M}{r}\right)-\right.\] \[\left.-\frac{2M^{2}}{3r\left(r-2M\right)}-2\right], \tag{39}\]
where \(C_{1}\) and \(C_{2}\) are integration constants found by applying the Newtonian limit and demanding the continuity of the solution across the star surface (see section III.4).
### First-order solutions
In this section, we consider the inclusion of the first-order terms, i.e. the terms that are proportional to the frame-dragging frequency \(\omega\). Following the same procedure as in
the previous subsection, we obtain:
\[-\partial_{r}\left(r\alpha E_{(1)}^{\hat{\theta}}\right)+\partial_{ \theta}E_{(1)}^{\hat{\theta}}-3\omega rB_{(0)_{r}}^{\hat{\theta}},\sin\theta \cos\theta=0, \tag{40}\] \[\partial_{r}\left(r^{2}E_{(1)}^{\hat{\theta}}\right)\sin\theta+ \alpha^{-1}r\partial_{\theta}\left(\sin\theta\ E_{(1)}^{\hat{\theta}}\right)=0,\] (41) \[\partial_{r}\left(r\alpha B_{(1)}^{\hat{\theta}}\right)-\partial_ {\theta}B_{(1)}^{\hat{\theta}}-3\omega rE_{(0)_{r}}^{\hat{r}},\sin\theta \left(3\cos^{2}\theta-1\right)=0,\] (42) \[\partial_{r}\left(r^{2}B_{(1)}^{\hat{\theta}}\right)\sin\theta+ \alpha^{-1}r\partial_{\theta}\left(\sin\theta\ B_{(1)}^{\hat{\theta}}\right)=0. \tag{43}\]
From the equations above, we see that the first-order field components are more complicated as they depend explicitly on the radial eigenfunctions of the zeroth-order terms. Due to this increased complexity of including complicated source terms, we will address the electric and magnetic field components separately, starting with the electric field equations that are less involved.
#### iii.1.1 Electric field equations
Once again, we look for separable solutions to the field components of the type:
\[E_{(1)}^{\hat{\theta}}=E_{(1)_{r}}^{\hat{r}}E_{(1)_{\theta}}^{ \hat{r}}, \tag{44}\] \[E_{(1)}^{\hat{\theta}}=E_{(1)_{r}}^{\hat{\theta}}E_{(1)_{\theta }}^{\hat{\theta}}. \tag{45}\]
Inserting equations (44)-(45) into (40)-(41) yields the following decoupling conditions:
\[E_{(1)_{\theta}}^{\hat{\theta}}\propto\partial_{\theta}E_{(1)_{ \theta}}^{\hat{r}}\propto\sin\theta\cos\theta, \tag{46}\] \[E_{(1)_{\theta}}^{\hat{r}}\sin\theta\propto\partial_{\theta} \left(\sin\theta\ E_{(1)_{\theta}}^{\hat{\theta}}\right), \tag{47}\]
By imposing that \(E_{(1)_{\theta}}^{\hat{\theta}}\equiv s_{1}\sin\theta\cos\theta\), it follows:
\[\partial_{\theta}\left(\sin\theta\ E_{(1)_{\theta}}^{\hat{\theta}}\right)=s_ {1}\sin\theta\left(3\cos^{2}\theta-1\right), \tag{48}\]
\[E_{(1)_{\theta}}^{\hat{r}}\sin\theta=s_{1}s_{2}\sin\theta\left(3\cos^{2} \theta-1\right), \tag{49}\]
\[\partial_{\theta}E_{(1)_{\theta}}^{\hat{r}}=-6s_{1}s_{2}\sin\theta\cos\theta, \tag{50}\]
where \(s_{1}\) and \(s_{2}\) are constants of proportionality.
Selecting the proportionality constants such that the angular profiles depend only on explicit trigonometric expressions (\(s_{1}=s_{2}=1\)), yields:
\[\sin\theta\cos\theta\left[-\partial_{r}\left(r\alpha E_{(1)_{r}} ^{\hat{\theta}}\right)-6E_{(1)_{r}}^{\hat{r}}-3\omega rB_{(0)_{r}}^{\hat{ \theta}}\right]=0, \tag{51}\] \[\sin\theta\left(3\cos^{2}\theta-1\right)\left[\partial_{r}\left( r^{2}E_{(1)_{r}}^{\hat{r}}\right)+\alpha^{-1}rE_{(1)_{r}}^{\hat{\theta}}\right]=0, \tag{52}\]
and, hence,
\[E_{(1)_{\theta}}^{\hat{r}}=\left(3\cos^{2}\theta-1\right), \tag{53}\] \[E_{(1)_{\theta}}^{\hat{\theta}}=\sin\theta\cos\theta,\] (54) \[\partial_{r}\left(\alpha^{2}\partial_{r}\left(r^{2}E_{(1)_{r}}^{ \hat{r}}\right)\right)-6E_{(1)_{r}}^{\hat{\theta}}-3\omega rB_{(0)_{r}}^{ \hat{\theta}}=0,\] (55) \[E_{(1)_{r}}^{\hat{\theta}}=-\frac{\alpha}{r}\partial\left(r^{2} E_{(1)_{r}}^{\hat{r}}\right). \tag{56}\]
The solutions to (55)-(56), satisfying the physical condition of the vanishing field amplitude at infinity, are:
\[E_{(1)_{r}}^{\hat{r}}=\mathcal{C}_{3}\left[\left(3-\frac{2r}{M }\right)\ln\left(1-\frac{2r}{M}\right)+\frac{2M}{r}\left(1+\frac{M}{3r} \right)-4\right]-\] \[-\frac{2\mathcal{C}_{4}}{3}\frac{M^{2}}{4r^{2}}\left[\ln\left(1- \frac{2M}{r}\right)+\frac{2M}{r}\right], \tag{57}\] \[E_{(1)_{r}}^{\hat{\theta}}=-6\alpha(r)\mathcal{C}_{3}\left[\left( 1-\frac{r}{M}\right)\ln\left(1-\frac{2M}{r}\right)-\right.\] \[\left.-\frac{2M^{2}}{3r\left(r-2M\right)}-2\right]+\frac{2\alpha(r) \mathcal{C}_{4}}{3}\left[\frac{M^{4}}{r^{3}\left(r-2M\right)}\right], \tag{58}\]
where \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}=3\omega_{0}C_{1}/(8M^{2})\) are integration constants determined later in this paper (see section III.4), and \(\omega_{0}\) is the frame-dragging frequency stripped of its radial dependence (i.e. \(\omega(r)=\omega_{0}/r^{3}\)).
#### iii.1.2 Magnetic field equations
The solution that decouples the system of differential equations for the magnetic field components is:
\[B_{(1)}^{\hat{\theta}}=B_{(1)_{1}}^{\hat{\theta}}B_{(1)_{\theta 1}}^{\hat{\theta}}+B_{(1)_{r}2}^{\hat{\theta}}B_{(1)_{\theta 2}}^{\hat{\theta}}, \tag{59}\] \[B_{(1)}^{\hat{\theta}}=B_{(1)_{1}}^{\hat{\theta}}B_{(1)_{\theta 1}}^{\hat{\theta}}+B_{(1)_{r}2}^{\hat{\theta}}B_{(1)_{\theta 2}}^{\hat{\theta}}. \tag{60}\]
Note that these solutions require an extra term compared to (44)-(45), motivating the separate analysis from the electric field equations.
Inserting equations (59)-(60) into (42)-(43) yields the following decoupling conditions for the first component equations:
\[B_{(1)_{\theta 1}}^{\hat{\theta}}\propto\partial_{\theta}B_{(1)_{ \theta 1}}^{\hat{\theta}}\propto\sin\theta\left(3\cos^{2}\theta-1\right), \tag{61}\] \[B_{(1)_{\theta 1}}^{\hat{\theta}}\sin\theta\propto\partial_{\theta} \left(\sin\theta\ B_{(1)_{\theta 1}}^{\hat{\theta}}\right). \tag{62}\]
Proceeding in the same manner, we can now impose that \(B_{(1)_{\theta 1}}^{\hat{\theta}}\equiv s_{1}\sin\theta\left(3\cos^{2}\theta-1\right)\) and it follows:
\[\partial_{\theta}\left(\sin\theta\ B_{(1)_{\theta 1}}^{\hat{\theta}}\right)=4s_{1}\sin\theta\cos\theta\left(3\cos^{2}\theta-2 \right), \tag{63}\] \[B_{(1)_{\theta 1}}^{\hat{\theta}}\sin\theta=4s_{1}s_{2}\sin\theta\cos \theta\left(3\cos^{2}\theta-2\right),\] (64) \[\partial_{\theta}B_{(1)_{\theta 1}}^{\hat{\theta}}=-12s_{1}s_{2}\sin\theta \left(3\cos^{2}\theta-1\right)-4s_{1}s_{2}\sin\theta. \tag{65}\]
We select the constants such that the angular eigenfunctions are composed of explicit trigonometric functions
1):
\[B^{\hat{\rho}}_{(1)_{\sigma 1}}=\cos\theta\left(3\cos^{2}\theta-2 \right), \tag{66}\] \[B^{\hat{\theta}}_{(1)_{\sigma 1}}=\sin\theta\left(3\cos^{2}\theta-1 \right),\] (67) \[\sin\theta\left(3\cos^{2}-1\right)\left[\partial_{r}\left(r\alpha B ^{\hat{\theta}}_{(1)_{\tau 1}}\right)+3B^{\hat{\rho}}_{(1)_{\tau 1}}-r\omega E^{\hat{\rho}}_{(0)_{ \tau}}\right]+\] \[+\partial_{r}\left(r\alpha B^{\hat{\theta}}_{(1)_{\tau 2}}\right)B^{ \hat{\theta}}_{(1)_{\sigma 2}}+B^{\hat{\rho}}_{(1)_{\tau 1}}\sin\theta-B^{ \hat{\rho}}_{(1)_{\tau 2}}\partial_{\theta}B^{\hat{\rho}}_{(1)_{\sigma 2}}=0,\] (68) \[\sin\theta\cos\theta\left(3\cos^{2}-2\right)\left[\partial_{r} \left(r^{2}B^{\hat{\rho}}_{(1)_{\tau 1}}\right)+4\alpha^{-1}rB^{\hat{\theta}}_{(1)_{\tau 1}} \right]+\] \[\partial_{r}\left(r^{2}B^{\hat{\rho}}_{(1)_{\tau 2}}\right)B^{ \hat{\rho}}_{(1)_{\theta 2}}\sin\theta+\alpha^{-1}rB^{\hat{\theta}}_{(1)_{ \tau 2}}\partial_{\theta}\left(\sin\theta B^{\hat{\theta}}_{(1)_{\sigma 2}} \right)=0. \tag{69}\]
To decouple the second component equations it requires that:
\[B^{\hat{\theta}}_{(1)_{\sigma 2}}\propto\partial_{\theta}B^{ \hat{\rho}}_{(1)_{\sigma 2}}\propto\sin\theta, \tag{70}\] \[B^{\hat{\theta}}_{(1)_{\sigma 2}}\sin\theta\propto\partial_{ \theta}\left(\sin\theta\ B^{\hat{\theta}}_{(1)_{\sigma 2}}\right). \tag{71}\]
Thus, the natural choice is \(B^{\hat{\theta}}_{(1)_{\sigma 2}}=s_{1}\sin\theta\), which yields:
\[\partial_{\theta}\left(\sin\theta\ B^{\hat{\theta}}_{(1)_{\sigma 2}} \right)=2s_{1}\sin\theta\cos\theta, \tag{72}\] \[B^{\hat{\rho}}_{(1)_{\sigma 2}}\sin\theta=2s_{1}s_{2}\sin\theta \cos\theta,\] (73) \[\partial_{\theta}B^{\hat{\rho}}_{(1)_{\sigma 2}}=-2s_{1}s_{2}\sin\theta. \tag{74}\]
For this case, the constants are \(s_{1}=2s_{2}=1\), yielding:
\[B^{\hat{\tau}}_{(1)_{\sigma 2}}=\cos\theta, \tag{75}\] \[B^{\hat{\theta}}_{(1)_{\sigma 2}}=\sin\theta,\] (76) \[\sin\theta\left(3\cos^{2}-1\right)\left[\partial_{r}\left(r\alpha B ^{\hat{\theta}}_{(1)_{\tau 1}}\right)+3B^{\hat{\rho}}_{(1)_{\tau 1}}-r\omega E^{ \hat{\rho}}_{(0)_{\tau}}\right]+\] \[+\sin\theta\left[\partial_{r}\left(r\alpha B^{\hat{\theta}}_{(1) _{\tau 2}}\right)+B^{\hat{\rho}}_{(1)_{\tau 1}}-B^{\hat{\rho}}_{(1)_{\tau 2}} \right]=0,\] (77) \[\sin\theta\cos\theta\left(3\cos^{2}-2\right)\left[\partial_{r} \left(r^{2}B^{\hat{\rho}}_{(1)_{\tau 1}}\right)+4\alpha^{-1}rB^{\hat{\theta}}_{(1)_{ \tau 1}}\right]+\] \[+\sin\theta\cos\theta\left[\partial_{r}\left(r^{2}B^{\hat{\rho}}_ {(1)_{\tau 2}}\right)+2\alpha^{-1}rB^{\hat{\theta}}_{(1)_{\tau 2}} \right]=0, \tag{78}\]
which gives rise to the radial differential equations for the magnetic field components:
\[\partial_{r}\left(\alpha^{2}\partial_{r}\left(r^{2}B^{\hat{\rho} }_{(1)_{\tau 1}}\right)\right)-12B^{\hat{\rho}}_{(1)_{\tau 1}}+12\omega rE^{\hat{\rho}}_{(0)_{\tau}}=0, \tag{79}\] \[\partial_{r}\left(\alpha^{2}\partial_{r}\left(r^{2}B^{\hat{\rho}} _{(1)_{\tau 2}}\right)\right)-2B^{\hat{\rho}}_{(1)_{\tau 2}}-2B^{\hat{\rho}}_{(1)_{\tau 1}}=0,\] (80) \[B^{\hat{\theta}}_{(1)_{\tau 1}}=-\frac{\alpha}{4r}\partial_{r}\left(r^{2}B^{ \hat{\rho}}_{(1)_{\tau 1}}\right),\] (81) \[B^{\hat{\theta}}_{(1)_{\tau 2}}=-\frac{\alpha}{2r}\partial_{r}\left(r^{2}B^{ \hat{\rho}}_{(1)_{\tau 2}}\right). \tag{82}\]
Each magnetic field component is comprised of two terms. Consequently, we have four decoupled differential equations. We highlight the fact that the second term, \(B^{\hat{\rho}}_{(1)_{\tau 2}}\), depends explicitly on the first one, \(B^{\hat{\rho}}_{(1)_{\tau 1}}\).
The final solution to the system of differential equations presented above, satisfying the vanishing field amplitude at infinity, is given by:
\[B^{\hat{\rho}}_{(1)_{\tau 1}}=C_{5}\left[\frac{2M}{r}\left(6+\frac{M}{r} \right)+\frac{45r}{M}-75+\right.\] \[+\left.\left(36+\frac{15r}{2M}\left(\frac{3r}{M}-8\right)\right) \ln\left(1-\frac{2M}{r}\right)\right]+\] \[+\frac{C_{6}}{6}\frac{M}{r}\left[\frac{2M}{r}\left(\frac{M}{r}-3 \right)+\left(\frac{4M}{r}-3\right)\ln\left(1-\frac{2M}{r}\right)\right], \tag{83}\] \[B^{\hat{\rho}}_{(1)_{\tau 2}}=\frac{C_{7}}{8}\left[\frac{2M}{r}\left(\frac{M}{r}+1 \right)+\ln\left(1-\frac{2M}{r}\right)\right]+\] \[+C_{5}\left[\frac{2M}{r}\left(\frac{M}{r}+2\right)-15+\frac{9r}{M}+\right.\] \[+\frac{1}{2}\left(4-\frac{3r}{M}\right)^{2}\ln\left(1-\frac{2M}{r }\right)\right]-\] \[-\frac{C_{6}}{6}\left(\frac{M}{r}-2\right)\left[\left(\frac{M}{r} -1\right)\ln\left(1-\frac{2M}{r}\right)-\frac{2M}{r}\right],\] (84) \[B^{\hat{\theta}}_{(1)_{\tau 1}}=-\alpha(r)\frac{M}{4r}\left\{6C_{5}\left[4-\frac{30r}{M} \left(1-\frac{r}{M}\right)+\frac{4M}{r-2M}+\right.\right.\] \[+\left.\left.\frac{3r}{M}\left(4-\frac{5r}{M}\left(2-\frac{r}{M} \right)\right)\ln\left(1-\frac{2M}{r}\right)\right]-\] \[-\frac{C_{6}}{6}\left[\frac{2M}{r-2M}+\frac{2M}{r}\left(\frac{M}{r }+2\right)+3\ln\left(1-\frac{2M}{r}\right)\right]\right\},\] (85) \[B^{\hat{\theta}}_{(1)_{\tau 2}}=-\alpha(r)\frac{M}{2r}\left\{\frac{C_{7}}{4}\left[2+ \frac{2M}{r-2M}+\frac{r}{M}\ln\left(1-\frac{2M}{r}\right)\right]+\] \[+2C_{5}\left[2\left(2-\frac{9r}{M}\left(1-\frac{r}{M}\right)+ \frac{2M}{r-2M}\right)+\right.\] \[+\left.\frac{r}{M}\left(\frac{3r}{M}-4\right)\left(\frac{3r}{M}-2 \right)\ln\left(1-\frac{2M}{r}\right)\right]+\] \[+\left.\frac{C_{6}}{6}\left[\frac{M}{r}-8-\frac{3M}{r-2M}+\left(3 -\frac{4r}{M}\right)\ln\left(1-\frac{2M}{r}\right)\right]\right\}, \tag{86}\]
where \(C_{5}\), \(C_{6}=\omega_{0}C_{2}/M^{2}\) and \(C_{7}\) are constants of integration to be determined later on this paper (see section III.4).
These terms constitute the frame-dragging correction to the magnetic field components and, to the extent of our knowledge, have not been described in previous works in analytical and closed form. An important aspect, here highlighted, is the fact that general relativity leads to even multipoles of the initial seeding field. In the current paper, we are seeding the dipolar magnetic field, and the outcome is the generation of a quadrupolar magnetic field correction. Despite not being shown here explicitly, the procedure presented can be used to
determine the second-order correction terms:
\[E^{\hat{r}}_{(2)\sigma_{1}} =\left(3\cos^{2}\theta-1\right), \tag{87}\] \[E^{\hat{r}}_{(2)\sigma_{2}} =\left(15\cos^{4}\theta-15\cos^{2}\theta+2\right),\] (88) \[E^{\hat{\theta}}_{(2)\sigma_{1}} =\sin\theta\cos\theta,\] (89) \[E^{\hat{\theta}}_{(2)\sigma_{2}} =\sin\theta\cos\theta\left(3\cos^{2}\theta-2\right),\] (90) \[B^{\hat{r}}_{(2)\sigma_{1}} =\cos\theta\left(3\cos^{2}\theta-2\right),\] (91) \[B^{\hat{r}}_{(2)\sigma_{2}} =\cos\theta,\] (92) \[B^{\hat{\theta}}_{(2)\sigma_{1}} =\sin\theta\left(3\cos^{2}\theta-1\right),\] (93) \[B^{\hat{\theta}}_{(2)\sigma_{2}} =\sin\theta, \tag{94}\]
showing the appearance of an octopolar electric field component, and new corrections to previously existing multipolar amplitudes. However, these corrections are neglected in this paper as we are considering the slow-rotating approximation of the Kerr metric.
### Interior solution
To obtain the complete solution one needs to know the interior solution inside the neutron star to get the integration coefficients via the interface matching conditions. In this subsection, we follow the same methodology as Rezzolla et al. [16], assuming the neutron star as a perfect conductor (\(\sigma\rightarrow\infty\)) and looking for a radially uniform interior magnetic field solution (i.e., \(B^{\hat{k}}(r,\theta)=B^{\hat{k}}(\theta)\)), corresponding to the _stiff-matter_ equation of state case.
#### iv.3.1 Perfect conductor compact neutron star
The perfect conductor constraint provides a way to solve Maxwell's equations using the general-relativistic Ohm's law [in the ZAMO frame, e.g. 16]:
\[j^{\hat{t}} =\rho+\sigma\frac{\bar{\omega}r\sin\theta}{e^{\Phi}}E^{\hat{ \phi}}_{\text{in}}, \tag{95}\] \[j^{\hat{r}} =\sigma\left(E^{\hat{r}}_{\text{in}}-\frac{\bar{\omega}r\sin \theta}{e^{\Phi}}B^{\hat{\theta}}_{\text{in}}\right),\] (96) \[j^{\hat{\theta}} =\sigma\left(E^{\hat{\theta}}_{\text{in}}+\frac{\bar{\omega}r \sin\theta}{e^{\Phi}}B^{\hat{\theta}}_{\text{in}}\right),\] (97) \[j^{\hat{\phi}} =\sigma E^{\hat{\phi}}_{\text{in}}+\frac{\bar{\omega}r\sin \theta}{e^{\Phi}}\rho,\] (98) \[\bar{\omega} =\Omega_{*}-\omega, \tag{99}\]
as it allows writing the electric field components as a function of the magnetic field:
\[E^{\hat{r}}_{\text{in}} =\frac{\bar{\omega}r\sin\theta}{e^{\Phi}}B^{\hat{\theta}}_{\text {in}}, \tag{100}\] \[E^{\hat{\theta}}_{\text{in}} =-\frac{\bar{\omega}r\sin\theta}{e^{\Phi}}B^{\hat{\theta}}_{\text {in}}. \tag{101}\]
The interior solution is then fully determined by solving the interior magnetic field equations that can be obtained via equations (5) and the azimuthal component of (6):
\[\sin\theta\partial_{r}\left(r^{2}B^{\hat{r}}_{\text{in}}\right) +e^{\Lambda_{r}}\partial_{\theta}\left(\sin\theta B^{\hat{\theta} }_{\text{in}}\right)=0, \tag{102}\] \[\partial_{r}\left(re^{\Phi}E^{\hat{\theta}}_{\text{in}}\right) -e^{\Phi+\Lambda}\partial_{\theta}E^{\hat{r}}_{\text{in}}-\] \[-\sin\theta\partial_{r}\left(\omega r^{2}B^{\hat{r}}_{\text{in}} \right)-\omega e^{\Lambda_{r}}r\partial_{\theta}\left(\sin\theta B^{\hat{ \theta}}_{\text{in}}\right)=0, \tag{103}\]
where we have already applied the stationary condition, i.e. \(\partial_{t}B^{\hat{\phi}}=0\), and the axisymmetry condition, i.e. \(\partial_{\theta}B^{\hat{\phi}}=0\). In fact, equation (103) reduces to (102) by the inclusion of equations (99)-(101). With the assumption of the radially uniform interior magnetic field solution, we obtain:
\[2\sin\theta B^{\hat{r}}_{\text{in}}+e^{\Lambda}\partial_{\theta}\left(\sin \theta B^{\hat{\theta}}_{\text{in}}\right)=0, \tag{104}\]
that is valid for all expansion orders.
Also, for the perfect conductor / vacuum interface, the matching conditions are the continuity of the normal magnetic field component (\(B^{\hat{r}}\)) and transverse electric field components (\(E^{\hat{\theta}}\) and \(E^{\hat{\phi}}\)):
\[B^{\hat{r}}_{\text{in}}(r=R_{*}) =B^{\hat{r}}_{\text{out}}(r=R_{*}), \tag{105}\] \[E^{\hat{\theta}}_{\text{in}}(r=R_{*}) =E^{\hat{\theta}}_{\text{out}}(r=R_{*}),\] (106) \[E^{\hat{\phi}}_{\text{in}}(r=R_{*}) =E^{\hat{\phi}}_{\text{out}}(r=R_{*})=0. \tag{107}\]
From equation (105), it makes sense to look for an interior solution that has the same angular dependence as the exterior solution, yielding:
\[B^{\hat{r}}_{\text{in}}(\theta)=\bar{B}^{\hat{r}}_{(0)r_{*}}\cos\theta+\bar{B}^ {\hat{r}}_{(1)\nu_{1}}\cos\theta\left(3\cos^{2}\theta-2\right)+\bar{B}^{\hat{r }}_{(1)\nu_{2}}\cos\theta, \tag{108}\]
with \(\bar{B}^{\hat{r}}_{(0)\nu_{*}}\), \(\bar{B}^{\hat{r}}_{(1)\nu_{1}}\) and \(\bar{B}^{\hat{r}}_{(1)\nu_{2}}\) being constants. This rational allows the determination of the \(B^{\hat{\theta}}\) components via equation (104):
\[B^{\hat{\theta}}_{\text{in}}(\theta)=\bar{B}^{\hat{\theta}}_{(0)r_{*}}\sin \theta+\bar{B}^{\hat{\theta}}_{(1)\nu_{1}}\sin\theta\left(3\cos^{2}\theta-1 \right)+\bar{B}^{\hat{\theta}}_{(1)\nu_{2}}\sin\theta, \tag{109}\]
with
\[\bar{B}^{\hat{\theta}}_{(0)r_{*}} =-\bar{B}^{\hat{r}}_{(0)r_{*}}e^{-\Lambda}, \tag{110}\] \[\bar{B}^{\hat{\theta}}_{(1)\nu_{1}} =-\bar{B}^{\hat{r}}_{(1)\nu_{1}}e^{-\Lambda}/2,\] (111) \[\bar{B}^{\hat{\theta}}_{(1)\nu_{2}} =-\bar{B}^{\hat{r}}_{(1)\nu_{2}}e^{-\Lambda}. \tag{112}\]
Equations (100)-(101) yield then:
\[E^{\hat{r}}_{\text{in}} =\frac{\bar{\omega}r\sin^{2}\theta}{e^{\Phi}}\left(\bar{B}^{ \hat{\theta}}_{(0)\nu_{*}}+\bar{B}^{\hat{\theta}}_{(1)\nu_{1}}\left(3\cos^{2} \theta-1\right)+\bar{B}^{\hat{\theta}}_{(1)\nu_{2}}\right), \tag{113}\] \[E^{\hat{\theta}}_{\text{in}} =-\frac{\bar{\omega}r\sin\theta\cos\theta}{e^{\Phi}}\left(\bar{B}^{ \hat{r}}_{(0)\nu_{*}}+\right.\] \[\left.+\bar{B}^{\hat{r}}_{(1)\nu_{1}}\left(3\cos^{2}\theta-2 \right)+\bar{B}^{\hat{r}}_{(1)\nu_{2}}\right), \tag{114}\]
which generalizes the internal electric field solution found by [16]. Also, equations (108)-(109) and (113)-(114) constitute the internal solution for the non-zero electromagnetic components that will now be matched to the exterior one.
### Determination of the constants of integration
To obtain the complete electromagnetic exterior solution, one just needs to combine the matching conditions given by (105)-(106) with the appropriate Newtonian limits:
\[B^{\hat{r}}_{\text{flat}}(r,\theta)=\lim_{M/r\to 0}B^{\hat{r}}(r,\theta)=\frac{2\mu}{r^{3}} \cos\theta, \tag{115}\] \[B^{\hat{\theta}}_{\text{flat}}(r,\theta)=\lim_{M/r\to 0}B^{\hat{\theta}}(r,\theta)=\frac{\mu}{r^{3 }}\sin\theta, \tag{116}\]
where \(\mu\) is the dipolar moment of the neutron star. As before, we will determine the integration coefficients one expansion order at a time.
#### iv.4.1 Matched zeroth-order solution
When applying the Newtonian limit, only the zeroth-order terms in equations (36)-(37) survive and are capable to construct the radial dependence of the limits in (115)-(116):
\[\lim_{M/r\to 0}B^{\hat{\theta}}_{(0)_{r}}(r)=\frac{2\mu}{r^{3}}, \tag{117}\] \[\lim_{M/r\to 0}B^{\hat{\theta}}_{(0)_{r}}(r)=\frac{\mu}{r^{3}}, \tag{118}\]
which are satisfied when
\[C_{1}=-\frac{6\mu}{M^{3}}, \tag{119}\]
and, consequently,
\[B^{\hat{r}}_{(0)_{r}}(r)=-\frac{3\mu}{4M^{3}}\left[\ln\left(1- \frac{2M}{r}\right)+\frac{2M}{r}\left(1+\frac{M}{r}\right)\right], \tag{120}\] \[B^{\hat{\theta}}_{(0)_{r}}(r)=\frac{3\mu}{4M^{3}}\sqrt{1-\frac{2 M}{r}}\times\] \[\times\left[\ln\left(1-\frac{2M}{r}\right)+\frac{2M}{r}\left(1+ \frac{M}{r-2M}\right)\right]. \tag{121}\]
In the absence of rotation, expressions (120)-(121) correspond to the only non-zero electromagnetic components of a static dipolar field in a Schwarzschild background metric and in the aligned rotator configuration. These expressions coincide with the ones originally found in the work by Anderson & Cohen [1], Ginzburg & Ozernoi [6] (see also expressions (90)-(91) in 16).
As for the interior magnetic field components, equation (105) forces:
\[\tilde{B}^{\hat{r}}_{(0)_{r}}=B^{\hat{r}}_{(0)_{r}}(r=R_{*})=- \frac{3\mu}{4M^{3}}\times\] \[\times\left[\ln\left(1-\frac{2M}{R_{*}}\right)+\frac{2M}{R_{*}} \left(1+\frac{M}{R_{*}}\right)\right], \tag{122}\]
which determines \(\tilde{B}^{\hat{\theta}}_{(0)_{r}}\) via equation (110). This leads to the determination of the zeroth-order electric field components given by equations (100)-(101) (or, (113)-(114)), and magnetic field components via equations (108)-(109):
\[E^{\hat{r}\ \theta^{\prime}n}_{\text{in}}=\frac{\Omega_{*}r\sin^{2 }\theta}{e^{\Phi}}\tilde{B}^{\hat{\theta}}_{(0)_{r}}, \tag{123}\] \[E^{\hat{\theta}\ \theta^{\prime}n}_{\text{in}}=-\frac{\Omega_{*}r \sin\theta\cos\theta}{e^{\Phi}}\tilde{B}^{\hat{\theta}}_{(0)_{r}},\] (124) \[B^{\hat{\theta}\ \theta^{\prime}n}_{\text{in}}=\tilde{B}^{\hat{ \theta}}_{(0)_{r}}\cos\theta,\] (125) \[B^{\hat{\theta}\ \theta^{\prime}n}_{\text{in}}=\tilde{B}^{\hat{ \theta}}_{(0)_{r}}\sin\theta. \tag{126}\]
Imposing the continuity condition on the \(\theta\) component of the electric field (i.e., equation (106)), gives:
\[\mathcal{C}_{2}= \frac{2\Omega_{*}R_{*}\tilde{B}^{\hat{r}}_{(0)_{r}}}{3\alpha^{2}( R_{*})}\times\] \[\times\left[\left(1-\frac{R_{*}}{M}\right)\ln\left(1-\frac{2M}{R_ {*}}\right)-\frac{2M^{2}}{3R_{*}\left(R_{*}-2M\right)}-2\right]^{-1}, \tag{127}\]
which, in par with equation (119), completes the determination of the integration constants of the zeroth-order exterior solution, presented in equations (36)-(39).
As the zeroth-order terms do not account for the frame-dragging effect, then the solution found corresponds to the case of a rotating neutron star with a static Schwarzschild background metric. It is important to notice that this solution coincides with expressions (97)-(99) and (134)-(136) in Rezzolla et al. [16] for the case of an intrinsic aligned dipolar field. Also, the radial eigenfunctions for the electric field satisfy the Newtonian limit solution:
\[\lim_{M/r\to 0}E^{\hat{r}}_{(0)_{r}}(r)=-\frac{\mu\Omega_{*}R_{*}^{2}}{r^{4}}, \tag{128}\] \[\lim_{M/r\to 0}E^{\hat{\theta}}_{(0)_{r}}(r)=-\frac{2\mu\Omega_{*}R_{*}^{2}} {r^{4}}, \tag{129}\]
reducing to the aligned rotating magnetized neutron star solution in Minkowski background metric, found by Deutsch [4].
#### iv.4.2 Matched first-order solution
The approach to determine the first-order integration constants is very similar to the zeroth-order ones. We start by noticing that the integration constants from the source terms of the differential equations, i.e. \(\mathcal{C}_{4}\) and \(\mathcal{C}_{6}\), are already determined via equations (119) and (127).
Imposing the matching condition (106) for the first-order terms of equations (114) and (45), leads to:
\[-\frac{R_{*}}{\alpha(R_{*})}\left(-\omega_{0}\tilde{B}^{\hat{r}}_{ (0)_{r}}+\Omega_{*}\tilde{B}^{\hat{r}}_{(1)_{r1}}\left(3\cos^{2}\theta-2 \right)+\Omega_{*}\tilde{B}^{\hat{r}}_{(1)_{r2}}\right)=\] \[=E^{\hat{\theta}}_{(1)_{r}}(R_{*}), \tag{130}\]
where we have already simplified the angular dependence on both sides of the equation. However, as the term proportional to \(\tilde{B}^{\hat{\ell}}_{(1)_{r_{1}}}\) still has an angular dependence, then this constant must be zero. This is, of course, a consequence of not allowing other multipoles for the interior solution. In addition, we will not allow self-generated magnetic field components within the star. This means that \(\tilde{B}^{\hat{\ell}}_{(1)_{r_{2}}}\) must also vanish such that the seeded interior magnetic field given by equations (108)-(109) is solely the zeroth-order dipolar field. These conditions can be further relaxed in future works. From equation (130), it follows:
\[\tilde{B}^{\hat{\ell}}_{(1)_{r_{1}}} =\tilde{B}^{\hat{\ell}}_{(1)_{r_{1}}}=0, \tag{131}\] \[\tilde{B}^{\hat{\ell}}_{(1)_{r_{2}}} =\tilde{B}^{\hat{\phi}}_{(1)_{r_{2}}}=0,\] (132) \[E^{\hat{\theta}}_{(1)_{r}}(R_{*}) =\frac{R_{*}\omega_{0}}{\alpha(R_{*})}\tilde{B}^{\hat{\ell}}_{(0 )_{r}}. \tag{133}\]
Hence, imposing the matching condition (105) for the first-order terms of equations (108) and (59), leads to:
\[\tilde{B}^{\hat{\ell}}_{(1)_{r_{1}}} =\tilde{B}^{\hat{\ell}}_{(1)_{r_{1}}}(R_{*}), \tag{134}\] \[\tilde{B}^{\hat{\ell}}_{(1)_{r_{2}}} =\tilde{B}^{\hat{\ell}}_{(1)_{r_{2}}}(R_{*}), \tag{135}\]
which were obtained by direct analysis of their angular dependencies. With equation (131), equation (134) can be used to write \(\mathcal{C}_{5}\) as a function of \(\mathcal{C}_{6}\), that was already determined, as:
\[\mathcal{C}_{5}= -\frac{C_{6}}{6}\frac{M}{R_{*}}\left[\frac{2M}{R_{*}}\left(\frac {M}{R_{*}}-3\right)+\left(\frac{4M}{R_{*}}-3\right)\ln\left(1-\frac{2M}{R_{*}} \right)\right]\times\] \[\times\left[\frac{2M}{R_{*}}\left(6+\frac{M}{R_{*}}\right)+\frac{4 5R_{*}}{M}-75+\right.\] \[\left.\quad\quad+\left.\left(36+\frac{15R_{*}}{2M}\left(\frac{3R _{*}}{M}-8\right)\right)\ln\left(1-\frac{2M}{R_{*}}\right)\right]^{-1}. \tag{136}\]
Also, equation (135) can be used to determine \(\mathcal{C}_{7}\) with the help of equation (132):
\[\mathcal{C}_{7}=8\left[\frac{2M}{R_{*}}\left(\frac{M}{R_{*}}+1 \right)+\ln\left(1-\frac{2M}{R_{*}}\right)\right]^{-1}\times\] \[\times\left\{\frac{C_{6}}{6}\left(\frac{M}{R_{*}}-2\right)\left[ \left(\frac{M}{R_{*}}-1\right)\ln\left(1-\frac{2M}{R_{*}}\right)-\frac{2M}{R_ {*}}\right]-\right.\] \[\left.-\mathcal{C}_{5}\left[\frac{2M}{R_{*}}\left(\frac{M}{R_{*}} +2\right)-15+\frac{9R_{*}}{M}+\right.\right.\] \[\left.\quad\quad\quad\quad\quad\quad\left.\left.+\frac{1}{2} \left(4-\frac{3R_{*}}{M}\right)^{2}\ln\left(1-\frac{2M}{R_{*}}\right)\right] \right\}, \tag{137}\]
and equation (133) can be used to determine \(\mathcal{C}_{3}\):
\[\mathcal{C}_{3}= -\frac{1}{6\alpha(R_{*})}\times\] \[\times\left[\left(1-\frac{R_{*}}{M}\right)\ln\left(1-\frac{2M}{R_ {*}}\right)-\frac{2M^{2}}{3R_{*}\left(R_{*}-2M\right)}-2\right]^{-1}\times\] \[\times\left[\frac{2\alpha(R_{*})}{3}\mathcal{C}_{4}\left(\frac{M ^{4}}{R_{*}^{3}\left(R_{*}-2M\right)}\right)+\frac{R_{*}\omega_{0}}{\alpha(R_{* })}\tilde{B}^{\hat{\ell}}_{(0)_{r}}\right]. \tag{138}\]
The integration coefficients in equations (136)-(138) fully determine the exterior electromagnetic solution of a neutron star up to first-order, given by:
\[B^{\hat{\ell}}=\left(B^{\hat{\ell}}_{(0)_{r}}(r)+B^{\hat{\ell}}_ {(1)_{r1}}(r)\left(3\cos^{2}\theta-2\right)+B^{\hat{\ell}}_{(1)_{r2}}(r)\right) \cos\theta, \tag{139}\] \[B^{\hat{\theta}}=\left(B^{\hat{\theta}}_{(0)_{r}}(r)+B^{\hat{ \theta}}_{(1)_{r1}}(r)\left(3\cos^{2}\theta-1\right)+B^{\hat{\theta}}_{(1)_{r2} }(r)\right)\sin\theta,\] (140) \[E^{\hat{\ell}}=\left(E^{\hat{\ell}}_{(0)_{r}}(r)+E^{\hat{\ell}}_ {(1)_{r}}(r)\right)\left(3\cos^{2}\theta-1\right),\] (141) \[E^{\hat{\theta}}=\left(E^{\hat{\theta}}_{(0)_{r}}(r)+E^{\hat{ \theta}}_{(1)_{r}}(r)\right)\sin\theta\cos\theta,\] (142) \[E^{\hat{\phi}}=B^{\hat{\phi}}=0, \tag{143}\]
and the corresponding electromagnetic interior solution:
\[B^{\hat{\ell}}_{\text{in}} =\tilde{B}^{\hat{\ell}}_{(0)_{r}}\cos\theta, \tag{144}\] \[B^{\hat{\theta}}_{\text{in}} =B^{\hat{\theta}}_{(0)_{r}}\sin\theta,\] (145) \[E^{\hat{\ell}}_{\text{in}} =\frac{\tilde{\omega}\sigma\sin^{2}\theta}{e^{\Phi}}\tilde{B}^{ \hat{\theta}}_{(0)_{r}},\] (146) \[E^{\hat{\theta}}_{\text{in}} =-\frac{\tilde{\omega}\sigma\sin\theta\cos\theta}{e^{\Phi}}\tilde{B}^ {\hat{\ell}}_{(0)_{r}}. \tag{147}\]
The electric field and the interior electromagnetic fields match exactly the solution found by Rezzolla et al. [16] for the aligned rotator. The same solution was later derived in expressions (56)-(57d) in Petri [13]. It is important to highlight that the frame-dragging correction to the exterior magnetic field in expressions (139)-(140) constitute a novel set of analytical electromagnetic solutions. Numerical solutions for the dipolar and multipolar fields were obtained in Petri [13] and Petri [14], respectively.
### Solution analysis
It is important to notice that \(\mathcal{C}_{1}\) is of order \(\mu\), while \(\mathcal{C}_{2}\) is of order \(\mu\Omega_{*}\). This comes from the fact that the zeroth-order electric field is induced purely from the rotation of the star. In the same manner, \(\mathcal{C}_{4}\) is of order \(\mu\omega_{0}\), while \(\mathcal{C}_{6}\) is of order \(\mu\omega_{0}\Omega_{*}\). Just from this analysis, it is possible to establish the order of magnitude of the electromagnetic solution:
\[\mu>\mu\Omega_{*}>\mu\omega_{0}>\mu\omega_{0}\Omega_{*}>\ldots, \tag{148}\] \[B_{(0)}>E_{(0)}>E_{(1)}>\ B_{(1)}>\ldots, \tag{149}\]
if one recalls that \(\omega_{0}\ll\Omega_{*}\). This also states that the new terms, proportional to the frame-dragging frequency, are induced from the rotation of the background metric (electric field) or a nonlinear interplay between the stellar rotation and the metric rotation (magnetic field). Figure 1 shows the electric and magnetic field amplitudes for a normalized dipolar moment \(\mu=500\) [\(m_{e}c^{2}R_{*}^{2}/e\)] and a normalized stellar angular velocity \(\Omega_{*}=0.2\) [\(c\) rad \(/R_{*}\)], realized for different Schwarzschild radii values. This figure highlights the first conclusions by Anderson & Cohen [1], Ginzburg & Ozernoi
[6] that the magnetic field amplitude is greatly enhanced when considering a curved spacetime geometry, at fixed dipolar moment. At the poles, the amplitude of the magnetic field could increase by a factor of \(\sim 1.64\) for a typical compactness parameter \(R_{s}\sim 0.5\)\([R_{*}]\). Figure 1 also exhibits the reduction of the induced electric field due to the frame-dragging effect as originally identified by Muslimov & Tsygan [11]. As for the frame-dragging correction to the magnetic field components, it shows that the new terms presented in this paper account for a \(\sim 0.43\%\) amplitude decrease in the equator for a typical compactness value. Figure 2 shows that these corrections not only modify the magnetic amplitude but also influence their local vectorial properties. The angle \(\xi\) between the flat spacetime magnetic field vector and the curved counterpart is shown to be rotated up to \(\sim 2\) degrees for typical compactness values. In addition, Figure 2 also demonstrates that the novel magnetic frame-dragging terms account for percent level corrections when compared to the zeroth-order curved magnetic field vectorial angle, i.e. without the frame-dragging correction.
Another important detail is the existence of surface charges and currents. The surface charge distribution, \(\sigma_{s}\), is supported by the discontinuity on the radial component of the electric field across the stellar surface [16]:
\[\sigma_{s}=\frac{1}{4\pi}\left(E^{\hat{r}}(R_{*})-E_{\rm in}^{\hat{r}}(R_{*}) \right). \tag{150}\]
Similarly, the surface currents, \(i^{\hat{\theta}}\) and \(i^{\hat{\phi}}\), are supported by the discontinuity on the transverse magnetic field components across the stellar surface [16]:
\[i^{\hat{\theta}} =\frac{c}{4\pi}\left(B^{\hat{\phi}}(R_{*})-B_{\rm in}^{\hat{ \phi}}(R_{*})\right)=0, \tag{151}\] \[i^{\hat{\phi}} =\frac{c}{4\pi}\left(B^{\hat{\theta}}(R_{*})-B_{\rm in}^{\hat{ \theta}}(R_{*})\right). \tag{152}\]
Figure 3 shows the charge and azimuthal current amplitudes for the same numerical parameters. This figure shows that the frame-dragging correction does influence the surface charges by a factor of \(\sim 1.14\) and the azimuthal surface current has a \(\sim 0.31\%\) amplitude decrease, both for a typical compactness value, and at the polar cap.
It should be stressed that these analytical results were obtained assuming that the interior magnetic field has a pure dipolar configuration. If multipoles were allowed inside the star, the interface conditions would allow non-trivial frame-dragging corrections at the stellar surface, which could lead to higher amplitude corrections.
## IV General-relativistic particle-in-cell code
To simulate the exterior vacuum solution of a compact neutron star, we implemented a new module capable of capturing the general-relativistic effects within the OSIRIS particle-in-cell (PIC) code framework Fonseca et al. [5]. This new module makes use of the 3+1 formalism described in section II, performing all the calculations in Boyer-Lindquist coordinates in the slow-rotation limit of the Kerr metric. A complete description of the particle-in-cell code OSIRIS-GR will be given in a future paper; here we present a brief overview of the numerical methods used. The field solver consists of a generalized version of the Yee algorithm [19], where the electric and magnetic field components are discretized on a spherical \((r,\theta)\) grid, as shown in Figure 4. The grid is body-fitted to the shape of the neutron star. In this way, the interior radial boundary of the numerical domain corresponds to the stellar surface, placed at \(r=R_{*}=1\)\([R_{*}]\). The outer boundary is placed at \(r=r_{\rm max}=32\)\([R_{*}]\). As we are studying an axisymmetric
Figure 1: Electric and magnetic field amplitudes at the stellar surface, \(r=R_{*}\), as a function of the polar angle \(\theta\). The right side plots correspond to the zeroth-order fields, while the left side corresponds to the complete solution given in this paper. The bottom frame exhibits the correction percentage profile for the magnetic frame-dragging terms. Different colors correspond to different Schwarzschild radii values. The Newtonian limit solution is presented in black.
setup, the domain corresponds to a poloidal cut with \(\theta\) from 0 to \(\pi\), i.e. from north to south pole. In the radial direction, we chose a logarithmically spaced grid to have a higher resolution close to the stellar surface. In the meridian direction, we use a uniformly spaced grid. As for the electromagnetic boundaries, we implemented a Mur outer radial boundary that mimics an open boundary. For the polar axis, which corresponds to the meridional domain boundaries, we set the azimuthal field components to zero on the axis, i.e. \(E^{\phi}=B^{\phi}=0\), and mirror the remaining angular components. For the inner radial boundary, we adopt the rotating conducting conditions by providing the interior solutions found in equations (144)-(147). This means that we start our simulations with the star already in full rotation, i.e. angular velocity equal to \(\Omega_{*}=0.125\) [\(c\) rad \(/R_{*}\)], and with dipolar moment \(\mu=4000\) [\(m_{e}c^{2}R_{*}^{2}/e\)].
The simulations presented in the next section are done with 2048 cells in both directions and with a compactness parameter \(R_{s}=0.5\) [\(R_{*}\)] unless specified otherwise.
## V Results
In order to test the validity of both the code and the newly found solution, the exterior solution given by equations (139)-(143) will be initialized and the amplitude of the formed transient fields in the azimuthal field components will be examined. Ideally, if the exact solution to the system of equations was initialized, no transient would exist and these components would be null as in equation (143). We start by initializing the solution found by Ginzburg & Ozernoi [6] consisting of the
Figure 3: Surface charge and azimuthal current amplitudes as a function of the polar angle \(\theta\) at the stellar surface. The right side plots correspond to the zeroth-order fields, while the left side corresponds to the complete solution given in this paper. The bottom frame exhibits the correction percentage profile for the surface current due to the magnetic frame-dragging terms. Different colors correspond to different Schwarzschild radii values. The Newtonian limit solution is presented in black.
Figure 2: Vectorial angle, \(\xi\), between the magnetic field in curved and flat spacetimes. In the top frame, it is shown how \(\xi\) varies along \(\theta\) with (left side) or without (right side) the magnetic frame-dragging correction. The bottom frame exhibits the correction percentage profile. Different colors correspond to different Schwarzschild radii values. The Newtonian limit solution is presented in black.
zeroth-order terms of the magnetic field. This solution does not account for the frame-dragging effect but still captures the effect of the Schwarzschild fixed background metric. As we are interested in rotating neutron star solutions, we provide the zeroth-order electric field as well, despite, historically, not being the solution presented by Ginzburg & Ozernoi [6]. Figure 5 shows how the transient propagates through the entire domain, being launched from the stellar surface and bouncing back and forth between the stellar surface and the outer radial boundary. Although the outer radial boundary is open-like, it is not a perfect absorbing layer and reflects the incoming wave with a much lower amplitude. It is important to note that after two complete bounces (i.e. after four light crossing times of the domain), the amplitude of these waves is negligible as seen for the last time shown in Figure 5. We believe our boundary condition is still a better option than the standardized outer damping layer [e.g. 2, 3] as this layer also introduces waves into the system due to the damping of the background magnetic field, and modifies the amplitude of all the electromagnetic components close to it.
As mentioned before, the closer the initialized solution is to the exact solution the smaller the amplitude of the excited transient. In this sense, we can use the transient state as a probe for the three solutions present in this paper: (a) the solution found by Ginzburg & Ozernoi [6] extended to account for the neutron star rotation as described above; (b) the solution found by Rezzolla et al. [16] that considers the frame-dragging correction to the electric field; (c) our solution that considers the frame-dragging correction in both the electric and magnetic fields. Recalling equation (149), one sees that we are drawing closer to the exact solution as we are capturing higher order corrections, hence, we expect the transient amplitude to decrease going from the first case to the last. Figure 6 demonstrates this by showing that, indeed, the transient field decreases with the inclusion of high-order terms. In addition, it shows that the inclusion of the frame-dragging correction to the electric
Figure 4: Spherical (r,\(\theta\)) grid with domain boundary conditions and electromagnetic field components layout.
Figure 5: Temporal evolution of the azimuthal magnetic and electric field components for the Ginzburg & Ozernoi [6] solution extended to rotation. The dashed black line represents the light cylinder distance and the poloidal field lines are represented in grey.
(magnetic) field significantly reduces the azimuthal magnetic (electrical) field transient amplitude. To better visualize this feature, we have taken a radial cut at \(\theta=1.0\) (\(\pi/2\)) [rad] for the azimuthal magnetic (electric) field component, as seen in the red dashed line in Figure 6. The resulting lineouts are shown in Figure 7 for two distinct simulation times: \(t=32.00\) (early-stage) and \(128.00\) [\(R_{*}/c\)] (late-stage). It is clear that our solution, case (c), has much smaller early and late-stage field amplitudes. It reduces the late-stage azimuthal electric field amplitude by approximately one order of magnitude, which means that the simulations are more accurate and stable at the expense of a more complicated initialization. Also, we demonstrate that for the stellar parameters chosen, solution (a) does not describe the late-stage very accurately, as expected.
If we zoom in closer to the stellar surface for the late-stage lineouts, shown in Figure 8, we can now analyze the grid resolution effect. Restricting ourselves to solutions (b) and (c), we can conclude that the azimuthal magnetic field component does not change significantly. However, if we look into the azimuthal electric field, we see that the amplitude of the late-stage component is reduced by half when the resolution is increased twofold. Interestingly, when using our solution at the lowest resolution, the late-stage amplitude of the transient is still smaller than the higher resolution run presented for solution (b), showing a higher stabilization of the obtained numerical solution and the potential to reduce the computational cost associated with this kind of studies. For this specific case, it would correspond to a speed-up of a factor of four.
Figure 8: Close-up zoom of the stellar surface transient amplitude at \(t=128\) [\(R_{*}/c\)]. The black and red lines correspond to the solution (b) and (c), respectively. Different radial resolutions correspond to different line styles: dashed, full and dotted correspond to 2048, 4096 and 8192 grid cells, respectively. The azimuthal magnetic field amplitude was multiplied by 100 for better visualization.
Figure 6: Comparison of the azimuthal transient amplitudes for the three solutions at \(t=32\) [\(R_{*}/c\)]. The red dashed line represents the lineout locations for Figures 7 and 8.
Figure 7: Transient amplitude at two times: \(t=32\) and \(128\) [\(R_{*}/c\)]. The upper panels show the azimuthal magnetic field for a cut at \(\theta=1\) rad and the lower panels show the azimuthal electric field for a cut at the equator. The three solutions are given by black, red and blue colors, following the alphabetic order respectively.
Conclusions
Neutron stars comprise a set of compact objects where general-relativistic effects are relevant. This paper presents the solution of the magnetospheric electromagnetic fields to a massive neutron star in a vacuum background with an intrinsic dipolar magnetic moment. We summarize the analytic solutions obtained for an aligned rotator with infinite conductivity and extend them to consider the magnetic frame-dragging correction. Several equations of state models [e.g. 7] predict that neutron stars can achieve compactness values up to \(R_{s}\sim 0.6[R_{s}]\). Therefore, we considered a typical value of \(R_{s}\sim 0.5[R_{s}]\) for the analysis of the derived solution. We show that the new terms account for a 0.43% decrease in magnetic field strength at the equator and an average 1% vectorial angle correction compared to previous solutions available in the literature. This solution modifies the external magnetic field configuration, leading to a self-consistent redistribution of the superficial azimuthal current.
We developed a new module for the OSIRIS particle-in-cell code capable of simulating the exterior magnetospheric problem of neutron stars with general relativity effects. This module performs all the calculations in Boyer-Lindquist coordinates in the slow-rotation limit of the Kerr metric. By prescribing the derived analytic solution to the exterior domain as an initial value problem, it is possible to compare its numerical stability with other solutions in the literature. Theoretically, both azimuthal components of the electric and magnetic fields should be zero in the exterior domain. In the early-stages of the simulation, the transient field launched can be used to probe the proximity between the prescribed solution and the exact one. We showed that the inclusion of the magnetic frame-dragging correction led to a significant reduction of the transient amplitude in both field components. In the late-stages of the simulation, the numerical solution converges to an oscillating reminiscent field whose amplitude is most affected by the initial prescribed solution. This was verified by noticing that the lowest resolution simulation with the electromagnetic frame-dragging correction has a lower reminiscent field amplitude than the highest resolution simulation considering only the electric field correction. Thus, demonstrating that simulations are more accurate and stable at the expense of a more complicated initialization. In particular, this corresponds to a reduction of the simulation runtime by a factor of four. The solution presented in this paper may be a handy tool to benchmark other particle-in-cell codes that rely on analytic solutions for the electromagnetic field initialization.
Future works could extend the present solution to include multipolar field contributions and a misalignment between the magnetic moment and stellar spin axis to approximate it from more realistic profiles; this would be a generalization of the approach presented in this work.
**Acknowledgements:** This work is partially supported by the European Research Council (ERC-2015-AdG Grant 695088). RT is supported by FCT (Portugal) (Grant PD/BD/142971/2018) in the framework of the Advanced Program in Plasma Science and Engineering (APPLAuSE, FCT Grant PD/00505/2018). The authors acknowledge useful discussions with Pablo J. Bilbao. All simulations presented were performed at LUMI within the EuroHPC-JU project EHPC-REG-2021R0038.
**Data Availability:** Any data generated for or included in this article can be made available upon a reasonable request. All analytical solutions were obtained with a Wolfram Mathematica 12 notebook.
|
2305.04272 | Mapping class groups for 2-orbifolds | We define orbifold mapping class groups (with marked points) and study them
using their action on certain orbifold analogs of arcs and simple closed
curves. Moreover, we establish a Birman exact sequence for suitable subgroups
of orbifold mapping class groups. The short exact sequence allows us to deduce
finite presentations of these groups. This is the basis for a similar
discussion of orbifold braid groups in [6]. | Jonas Flechsig | 2023-05-07T13:24:09Z | http://arxiv.org/abs/2305.04272v1 | # Mapping class groups for 2-orbifolds
###### Abstract
We define orbifold mapping class groups (with marked points) and study them using their action on certain orbifold analogs of arcs and simple closed curves. Moreover, we establish a Birman exact sequence for suitable subgroups of orbifold mapping class groups. The short exact sequence allows us to deduce finite presentations of these groups. This is the basis for a similar discussion of orbifold braid groups in [6].
**Mapping class groups for 2-orbifolds**
Jonas Flechsig
November 3, 2021
## 1. Introduction
This article is motivated by the study of orbifold braid groups in [6]. Orbifold braid groups are analogs of Artin braid groups or, more generally, surface braid groups. Instead of considering braids moving inside a disk or a surface, orbifold braids move inside a 2-dimensional orbifold. Orbifold braid groups attracted interest since some of them contain spherical and affine Artin groups of type \(D_{n},\bar{B}_{n}\) and \(\tilde{D}_{n}\) as finite index subgroups by work of Allcock [1]. For these Artin groups, the orbifold braid groups provide us with braid pictures. Roushon published several articles on the structure of orbifold braid groups [12, 13, 14, 15] and the contained Artin groups [11]. Further, Crisp-Paris [3] studied the outer automorphism group of the orbifold braid group.
The studied orbifold braid groups are related to orbifold mapping class groups that are associated to the following orbifolds. Let \(\Sigma_{\Gamma}\) be the orbifold that is defined using the following data: The group \(\Gamma\) is a free product of finitely many finite cyclic groups. As such, \(\Gamma\) acts on a planar, contractible surface \(\Sigma\) with boundary, obtained by thickening the Bass-Serre tree (see Example 2.3 for details). If we add \(L\) punctures, we obtain a similar orbifold as studied by Roushon in [14], which we denote by \(\Sigma_{\Gamma}(L)\). In contrast to his paper, we consider orbifolds with non-empty boundary (which does not affect the structure of the orbifold braid groups). The only singular points in the orbifold \(\Sigma_{\Gamma}(L)\) are cone points that correspond to the finite cyclic factors of the free product \(\Gamma\).
The associated orbifold mapping class group with respect to \(n\) marked points is denoted by \(\operatorname{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L)\right)\) for the punctured orbifold \(\Sigma_{\Gamma}(L)\). A mapping class of \(\Sigma_{\Gamma}(L)\) is represented by a \(\Gamma\)-equivariant homeomorphism of \(\Sigma(L)\) that fixes cone points and the boundary \(\partial\Sigma(L)\). Such a homeomorphism respects the \(n\) marked points if it preserves the \(\Gamma\)-orbit of the \(n\) marked points as a set. The equivalence relation is induced by \(\Gamma\)-equivariant ambient isotopies fixing cone points, marked points and the boundary.
Mapping class groups of surfaces are studied by their action on arcs and simple closed curves. A basic tool for that is the bigon criterion, see [5, Proposition 1.7]. In particular, the bigon criterion implies that homotopic arcs and homotopic simple closed curves are ambient isotopic. For the study of \(\operatorname{Map}^{orb}\left(\Sigma_{\Gamma}\right)\), we introduce orbifold analogs of arcs and simple closed curves, called \(\Gamma\)_-arcs_ and _simple closed \(\Gamma\)-curves_ (see Definitions 3.11 and 3.13). Moreover, we establish a bigon criterion for these analogs (see Propositions 3.25 and 3.21). As in the classical case, this
allows us to deduce that homotopic \(\Gamma\)-arcs and homotopic simple closed \(\Gamma\)-curves are ambient isotopic (see Proposition 3.28).
Moreover, orbifold mapping class groups admit a homomorphism
\[\mathrm{Forget}_{n}^{orb}:\mathrm{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L)\right) \rightarrow\mathrm{Map}^{orb}\left(\Sigma_{\Gamma}(L)\right)\]
by forgetting the marked points. Let \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) be the kernel of \(\mathrm{Forget}_{n}^{orb}\).
Since the cone points are fixed, we may restrict mapping classes to the subsurface \(\Sigma(L,N)\) that is also punctured at the cone points. Using that the quotient \(\Sigma(L,N)/\Gamma\) is a disk \(D(L,N)\) with \(L+N\) punctures, this allows us to construct an isomorphism from \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) to a similar subgroup \(\mathrm{Map}_{n}^{\mathrm{id}}\left(D(L,N)\right)\) of the mapping class group \(\mathrm{Map}_{n}\left(D(L,N)\right)\) (see Proposition 4.3 for details). Moreover, recall that the Birman exact sequence yields the following for \(\mathrm{Map}_{n}\left(D(L,N)\right)\):
\[1\rightarrow\pi_{1}\left(\mathrm{Conf}_{n}\left(D(L,N)\right)\right)\xrightarrow{ \mathrm{Push}_{n}}\mathrm{Map}_{n}\left(D(L,N)\right)\xrightarrow{\mathrm{ Forget}_{n}}\mathrm{Map}\left(D(L,N)\right)\to 1,\]
see, for instance, [5, Theorem 9.1]. Based on Proposition 4.3, this allows us to deduce a similar short exact sequence for the groups \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\):
**Theorem A** (Birman exact sequence for orbifold mapping class groups).: _The following diagram is a short exact sequence:_
\[1\rightarrow\pi_{1}\left(\mathrm{Conf}_{n}\left(D(L,N)\right)\right) \xrightarrow{\mathrm{Push}_{n}^{orb}}\mathrm{Map}_{n}^{\mathrm{id},orb} \left(\Sigma_{\Gamma}(L)\right)\xrightarrow{\mathrm{Forget}_{n}^{orb}} \underbrace{\mathrm{Map}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)}_{= 1}\to 1,\]
_see Theorem 4.10 for details._
Let \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) be the subgroup of mapping classes in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) that induce the trivial permutation on the orbits of marked points. For these subgroups, we obtain:
**Theorem B**.: _The following diagram is a short exact sequence that splits:_
\[1\to F_{n-1+L+N}\xrightarrow{\mathrm{Push}_{\mathrm{PMap}_{n}} ^{orb}}\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right) \xrightarrow{\mathrm{Forget}_{\mathrm{PMap}_{n}}^{\mathrm{id},orb}\left( \Sigma_{\Gamma}(L)\right)}\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{ \Gamma}(L)\right)\to 1,\]
_see Corollary 4.13 for details._
In particular, we can deduce finite presentations for \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) and \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) from Theorem B (see Corollary 4.19 and Proposition 4.22).
As proven in [6, Theorem A], certain orbifold braid groups are quotients of the groups \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Together with the presentations of \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) and its pure subgroup \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\), we obtain presentations of the related orbifold braid groups \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and \(\mathrm{P}\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\), see Theorem B and Corollary 5.6 in [6]. These are essential to deduce structural results about orbifold braid groups. In particular, we obtain a similar Birman exact sequence for pure orbifold braid groups, see [6, Theorem C]. In this case, we surprisingly observe that the analog of the point-pushing map \(\mathrm{Push}_{\mathrm{PMap}_{n}}^{orb}\) is not injective.
### Acknowledgments
I would like to thank my adviser Kai-Uwe Bux for his support and many helpful discussions. Many thanks are also due to Jose Pedro Quintanilha and Xiaolei Wu for their helpful advice at different points of this project. Moreover, I am grateful to Elisa Hartmann and Jose Pedro Quintanilha for their comments on an earlier version of this text.
The author was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) - 426561549. Further, the author was partially supported by Bielefelder Nachwuchsfonds.
## 2. Orbifolds and paths
In this article we only consider orbifolds that are given as the quotient of a manifold (typically a surface) by a proper group action. Recall that an action
\[\phi:G\to\operatorname{Homeo}(M),g\mapsto\phi_{g}\]
on a manifold \(M\) is _proper_ if for each compact set \(K\subseteq M\) the set
\[\{g\in G\mid\phi_{g}(K)\cap K\neq\emptyset\}\]
is compact. Since we endow \(G\) with the discrete topology, i.e. the above set is finite. Orbifolds that appear as proper quotients of manifolds are called _developable_ in the terminology of Bridson-Haefliger [2] and _good_ in the terminology of Thurston [16].
Above and in the following, all manifolds are orientable and all homeomorphisms are orientation preserving.
**Definition 2.1** (Orbifolds, [2, Chapter III.G, 1.3]).: Let \(M\) be a manifold, possibly with boundary, and \(G\) a group with a monomorphism
\[\phi:G\to\operatorname{Homeo}(M)\]
such that \(G\) acts properly on \(M\). Under these conditions the 3-tuple \((M,G,\varphi)\) is called an _orbifold_, which we denote by \(M_{G}\). If \(\operatorname{Stab}_{G}(c)\neq\{1\}\) for a point \(c\in M\), the point \(c\) is called a _singular point_ of \(M_{G}\). If \(\operatorname{Stab}_{G}(c)\) for a point \(c\in M\) is cyclic of finite order \(m\), the point \(c\) is called a _cone point_ of \(M_{G}\) of order \(m\).
A first example of an orbifold is the following:
**Example 2.2**.: Let \(\mathbb{Z}_{m}\) be a cyclic group of order \(m\). The group \(\mathbb{Z}_{m}\) acts on a disk \(D\) by rotations around its center. The action is via isometries and the acting group is finite, i.e. the action is proper. Consequently, \(D_{\mathbb{Z}_{m}}\) is an orbifold with exactly one singular point in the center of \(D\), which is a cone point.
Example 2.2 motivates a more general construction for a free product of finitely many finite cyclic groups which we describe briefly in the following. For further details, we refer to the author's PhD thesis [7, Section 2.1]. We will consider this generalization of the orbifold \(D_{\mathbb{Z}_{m}}\) throughout the article.
**Example 2.3**.: Let \(\Gamma\) be a free product of finite cyclic groups \(\mathbb{Z}_{m_{1}},...,\mathbb{Z}_{m_{N}}\). The group \(\Gamma\) is the fundamental group of the graph of groups with trivial edge groups
As such, \(\Gamma\) acts on its Bass-Serre tree \(T\). The fundamental domain of this action is a path with \(N-1\) edges. The action is free on edges and the vertex stabilizers are conjugates \(\gamma\mathbb{Z}_{m_{\nu}}\gamma^{-1}\) with \(\gamma\in\Gamma\) and \(1\leqslant\nu\leqslant N\). By the choice of a generator \(\gamma_{\nu}\) for each \(\mathbb{Z}_{m_{\nu}}\) with \(1\leqslant\nu\leqslant N\), the link of each vertex carries a cyclic ordering.
Let us consider a proper embedding of the Bass-Serre tree \(T\) into \(\mathbb{C}\) that respects the local cyclic order on each link. If we choose a regular neighborhood of \(T\) inside \(\mathbb{C}\), we obtain a planar, contractible surface \(\Sigma\) (with boundary), see Figure 2.1 for an example.
This surface \(\Sigma\) inherits a proper \(\Gamma\)-action from the Bass-Serre tree such that vertex stabilizers act with respect to the cyclic order on the link of the stabilized vertex. Moreover, the action admits a fundamental domain corresponding to the fundamental domain in \(T\). In particular, we obtain an orbifold structure \(\Sigma_{\Gamma}\).
A point in \(\Sigma_{\Gamma}\) is a singular point if and only if it corresponds to a vertex of \(T\). Hence, the singular points in \(\Sigma_{\Gamma}\) are all cone points and decompose into \(N\) orbits. The quotient \(\Sigma/\Gamma\) is a disk with \(N\) distinguished points that correspond to the orbits of the cone points.
In general, we may choose a fundamental domain \(F\) that is a disk as pictured in Figure 2.2 and contains exactly \(N\) cone points \(c_{1},...,c_{N}\) that lie on the boundary such that each has exactly two adjacent boundary arcs that lie in the same \(\Gamma\)-orbit.
In this particular case, we can also find a metric on \(\Sigma\) such that the \(\Gamma\)-action on \(\Sigma\) is isometric. For each \(\gamma\in\Gamma\), let
\[\lambda_{\gamma^{-1}}:\gamma(F)\to F,\gamma(x)\mapsto x.\]
Figure 2.2. The fundamental domain \(F\).
This allows us to define
\[d:\Sigma\times\Sigma \to\mathbb{R}_{>0},\] \[(x,y) \mapsto\inf_{(x_{0},x_{1},...,x_{k})\in P(x,y)}\sum_{i=1}^{k}d_{ \mathcal{C}}(\lambda_{\gamma_{i}^{-1}}(x_{i-1}),(\lambda_{\gamma_{i}^{-1}}(x_{i })). \tag{1}\]
Here above, \(P(x,y)\) denotes the set of polygonal chains \((x_{0},x_{1},...,x_{k})\) of arbitrary length \(k\in\mathbb{N}_{0}\) inside \(\Sigma\) connecting \(x\) and \(y\) and such that consecutive points \(x_{i-1}\) and \(x_{i}\) are contained in \(\gamma_{i}(F)\) with \(\gamma_{i}\in\Gamma\) and \(1\leq i\leq k\), see [7, Section 2.1.7] for details on the well-definedness of \(d\).
If we remove the boundary of \(\Sigma\), the quotient \(\Sigma^{\circ}/\Gamma\) is homeomorphic to the complex plane with \(N\) distinguished points and associated cyclic groups \(\mathbb{Z}_{m_{\nu}}\) for \(1\leq\nu\leq N\). Adding \(\Gamma\)-orbits of punctures \(\Gamma(r_{\lambda})\) for \(1\leq\lambda\leq L\) to \(\Sigma\) such that \(\Gamma(r_{\theta})\neq\Gamma(r_{\lambda})\) for \(1\leq\theta,\lambda\leq L,\theta\neq\lambda\), we obtain the orbifold called
\[\mathbb{C}(L,N,\mathbf{m})\text{ with }\mathbf{m}=(m_{1},...,m_{N})\]
in [12]. In [1], Allcock studied braids on these orbifolds for
\[(L,N,\mathbf{m})=(0,2,(2,2)),(0,1,(2))\text{ and }(1,1,(2)).\]
Since we want to study mapping class groups, which requires to fix the boundary, we will consider the orbifold \(\Sigma_{\Gamma}\) with boundary. Moreover, we use the notation \(\Sigma_{\Gamma}(L)\) for the orbifold with underlying surface (with boundary)
\[\Sigma(L):=\Sigma\backslash\Gamma(\{r_{1},...,r_{L}\}). \tag{2}\]
We will consider a concept of orbifold paths and their homotopy classes introduced in [2, Chapter III.G, 3]. There an orbifold is considered more generally as an _etale groupoid_\((\mathcal{G},X)\), see [2, Chapter III.G, 2]. If \(M_{G}\) is an orbifold in the sense of Definition 2.1, the associated etale groupoid is given by
\[(\mathcal{G},X)=(G\times M,M).\]
In the following, we will simplify the notation using \(G\) instead of \(\mathcal{G}=G\times M\). In particular, we introduce \(G\)-paths. These are the \(\mathcal{G}\)-paths in [2].
**Definition 2.4** (\(G\)-path, [2, Chapter III.G, 3.1]).: A _\(G\)-path_\(\xi=(g_{0},c_{1},g_{1},...,c_{p},g_{p})\) in \(M_{G}\) with initial point \(x\in M\) and terminal point \(y\in M\) over a subdivision \(a=t_{0}\leq...\leq t_{p}=b\) of the interval \([a,b]\) consists of
1. continuous maps \(c_{i}:[t_{i-1},t_{i}]\to M\) for \(1\leq i\leq p\) and
2. group elements \(g_{i}\in G\) such that \(g_{0}(c_{1}(t_{0}))=x\), \(g_{i}(c_{i+1}(t_{i}))=c_{i}(t_{i})\) for \(1\leq i<p\) and \(g_{p}(y)=c_{p}(t_{p})\) (see Figure 2.3).
For brevity, we write \((g_{0},c_{1},g_{1},...,c_{p})\) for \((g_{0},c_{1},g_{1},...,c_{p},g_{p})\) if \(g_{p}=1\). We say a \(G\)-path is _continuous_ if it is of the form \((g,c)\).
The following equivalence relation identifies certain \(G\)-paths whose continuous pieces have the same \(G\)-orbits.
**Definition 2.5** (Equivalence of \(G\)-paths, [2, Chapter III.G, 3.2]).: Let
\[\xi=(g_{0},c_{1},g_{1},...,c_{p},g_{p})\]
be a \(G\)-path over \(a=t_{0}\leq...\leq t_{p}=b\).
Figure 2.3. A \(G\)-path.
1. A _subdivision_ of \(\xi\) is a \(G\)-path obtained from \(\xi\) by choosing \(t^{\prime}\in[t_{i-1},t_{i}]\) for some \(1\leq i\leq p\) and replacing the entry \(c_{i}\) with the sequence \[(c_{i}|_{[t_{i-1},t^{\prime}]},1,c_{i}|_{[t^{\prime},t_{i}]}).\]
2. A _shift_ of \(\xi\) is a \(G\)-path obtained from \(\xi\) by choosing \(h\in G\) and replacing a subsequence \((g_{i-1},c_{i},g_{i})\) for some \(1\leq i\leq p\) with \[(g_{i-1}h^{-1},h\cdot c_{i},hg_{i}).\]
We say that two \(G\)-paths are _equivalent_ if one can be obtained from the other by a sequence of subdivisions, inverses of subdivisions and shifts.
Using this equivalence relation, we mimic the homotopy relation for paths in topological spaces for \(G\)-paths.
**Definition 2.6** (Homotopy of \(G\)-paths, [2, Chapter III.G, 3.5]).: An _elementary homotopy_ between two \(G\)-paths \(\xi\) and \(\tilde{\xi}\) is a family of \(G\)-paths \(\xi_{s}=(g_{0},c_{1}^{s},...,g_{p})\) over the subdivisions \(0=t_{0}\leq t_{1}\leq...\leq t_{p}=1\). The family \(\xi_{s}\) is parametrized by \(s\in[s_{0},s_{1}]\) such that \(c_{i}^{s}\) depends continuously on the parameter and \(\xi^{s_{0}}=\xi\), \(\xi^{s_{1}}=\tilde{\xi}\).
Two \(G\)-paths are _homotopic (relative to their endpoints)_ if one can pass from the first to the second by a sequence of the following operations:
1. equivalence of \(G\)-paths,
2. elementary homotopies.
Homotopy classes of \(G\)_-loops based at a fixed point \(x\in M\)_ lead to the concept of _orbifold fundamental groups_, see [2, Chapter III.G, 3] for details.
We close this section with a lemma that allows us to restrict to continuous \(G\)-paths in many situations. For a fixed index \(1\leq j\leq p\), the shift defined in Definition 2.5 allows for the following choice: The element \(h\) shifts \(c_{j}\) to \(c_{j}^{\prime}:=h\cdot c_{j}\), \(g_{j-1}^{\prime}=g_{j-1}h^{-1}\) and \(g_{j}^{\prime}=hg_{j}\). Thus, the path \(\xi\) is equivalent to
\[\xi^{\prime}=(g_{0},c_{1},g_{1},...c_{j-1},g_{j-1}^{\prime},c_{j}^{\prime},g_{ j}^{\prime},c_{j+1},...,c_{p},g_{p}). \tag{3}\]
If we choose \(h=g_{j}^{-1}\), the element \(g_{j}^{\prime}\) is trivial. Replacing \(c_{j}^{\prime},1,c_{j+1}\) by \(c_{j}^{\prime}\cup c_{j+1}\), we obtain the path
\[\tilde{\xi}^{\prime}=(g_{0},c_{1},g_{1},...c_{j-1},g_{j-1}^{\prime},c_{j}^{ \prime}\cup c_{j+1},g_{j+1},c_{j+1},g_{j+2},...,c_{p},g_{p}) \tag{4}\]
which is equivalent to \(\xi\) and has shorter subdivision length.
Figure 2.5. An elementary homotopy of \(G\)-paths.
Figure 2.4. Two \(G\)-paths equivalent by a shift.
**Lemma 2.7** ([2, Chapter III.G, 3.9(1)]).: _Every \(G\)-path connecting \(x\) to \(y\) in \(M_{G}\) is equivalent to a unique continuous \(G\)-path \((g,c)\) with \(c:I\to M\) connecting \(g^{-1}(x)\) to \(y\)._
For a proof of Lemma 2.7, we refer to [7, Lemma 2.12].
## 3. Orbifold mapping class groups
An important approach to the Artin braid groups is the identification with mapping class groups of punctured disks, see, for instance, [5, Section 9.1.3]. For disks and other surfaces, mapping class groups can be studied using their action on arcs and simple closed curves. This is based on the fact that homotopic arcs and homotopic simple closed curves are ambient isotopic. For instance, this follows from the bigon criterion which is stated in Proposition 3.9.
This section sets the base for a similar discussion in the orbifold case: It introduces a notion of _orbifold mapping class groups_ (see Definition 3.3) and establishes orbifold analogs of arcs and simple closed curves, called _\(\Gamma\)-arcs_ and _simple closed \(\Gamma\)-curves_ (see Definitions 3.11 and 3.13). In analogy to the surface case, we prove a _bigon criterion_ for both \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves (see Propositions 3.25 and 3.21).
As in the surface case, the bigon criterion implies that homotopic \(\Gamma\)-arcs and homotopic simple closed \(\Gamma\)-curves are ambient isotopic (see Lemma 3.28). This allows us to understand orbifold mapping class groups via their action on \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves. In particular, we will prove that the orbifold mapping class group of \(\Sigma_{\Gamma}\), in the absence of marked points and punctures, is trivial (see Lemma 3.32). This will serve as the base case for Section 4, where we study subgroups of orbifold mapping class groups with marked points.
### Definition and first examples
Given an orbifold \(M_{G}\), we want to define its mapping class group as a group of certain homeomorphisms of \(M\) modulo an equivalence relation. This generalizes the concept of mapping class groups of manifolds. If the acting group \(G\) is trivial, then the orbifold mapping class group of \(M_{\{1\}}\) defined below coincides with the mapping class group of \(M\).
As it is usual in the case of manifolds, we consider homeomorphisms of \(M\) that fix the boundary pointwise. Further, the orbifold mapping class group should reflect the structure of the \(G\)-action on \(M\). For this reason, we restrict to _\(G\)-equivariant_ homeomorphisms. Subject to an isotopy relation on this group, we define the orbifold mapping class group.
**Definition 3.1** (\(G\)-equivariance).: Let \(X,Y\) be topological \(G\)-spaces. A map \(\psi:X\to Y\) is \(G\)-equivariant if
\[\psi(g(x))=g(\psi(x))\text{ for all }g\in G,x\in X.\]
If \(M_{G}\) is an orbifold, the group of \(G\)-equivariant self-homeomorphisms of \(M\) that fix the boundary pointwise is denoted by \(\operatorname{Homeo}^{orb}(M_{G},\partial M)\). As a subgroup of the homeomorphism group of \(M\), it carries the structure of a topological group if it is endowed with the compact-open topology.
**Definition 3.2** (Ambient isotopy).: An _ambient isotopy_ is a continuous map
\[I\to\operatorname{Homeo}^{orb}(M_{G},\partial M).\]
Two \(G\)-equivariant homeomorphisms \(H,H^{\prime}\) are _ambient isotopic_, denoted by \(H\sim\ H^{\prime}\), if there exists an ambient isotopy \(H_{t}\) with \(H_{0}=H\) and \(H_{1}=H^{\prime}\).
**Definition 3.3** (Orbifold mapping class group).: The group of \(G\)-equivariant homeomorphisms that fix the boundary pointwise modulo ambient isotopy
\[\operatorname{Map}^{orb}\left(M_{G}\right):=\operatorname{Homeo}^{orb}(M_{G}, \partial M)/\sim\]
is called the _mapping class group_ of \(M_{G}\).
Based on the fact that \(\operatorname{Homeo}^{orb}(M_{G},\partial M)\) is a topological group, the mapping class group also carries the structure of a topological group. A particular basic example is the following:
**Example 3.4** (\(\operatorname{Map}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) and Alexander trick).: Let \(H\) be a self-homeomorphism of \(D\) that represents an element in \(\operatorname{Map}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) and
\[H_{t}:D\to D,x\mapsto\begin{cases}(1-t)H\left(\frac{x}{1-t}\right)&0\leq|x| \leq 1-t,\\ x&1-t\leq|x|\leq 1.\end{cases}\]
For each \(t\in I\), the map \(H_{t}\) is \(\mathbb{Z}_{m}\)-equivariant and fixes the boundary. Moreover, \(H_{0}=H\) and \(H_{1}=\operatorname{id}_{D}\). Since the map
\[I\to\operatorname{Homeo}^{orb}(D_{\mathbb{Z}_{m}},\partial D),t\mapsto H_{t}\]
is continuous, it yields an ambient isotopy that is known as the _Alexander trick_, see [5, Lemma 2.1]. It shows that \(\operatorname{Map}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) is trivial.
We can encode additional information in the orbifold mapping class group if we endow \(M_{G}\) with a finite set of marked points.
**Definition 3.5** (Orbifold mapping class group with marked points).: Let \(M_{G}\) be an orbifold and let us fix a set of non-singular marked points \(P=\{p_{1},...,p_{n}\}\) in \(M\) such that \(G(p_{i})\neq G(p_{j})\) for \(1\leq i,j\leq n,i\neq j\). By \(\operatorname{Homeo}^{orb}_{n}(M_{G},\partial M)\), we denote the subgroup of homeomorphisms that preserve the orbit of the marked points \(G(P)\) as a set:
\[\{H\in\operatorname{Homeo}^{orb}(M_{G},\partial M)\mid H(G(P))=G(P)\}.\]
We consider these homeomorphisms up to ambient isotopies \(I\to\operatorname{Homeo}^{orb}_{n}(M_{G},\partial M)\). The corresponding equivalence relation is denoted by \(\sim_{n}\). By
\[\operatorname{Map}^{orb}_{n}\left(M_{G}\right):=\operatorname{Homeo}^{orb}_{ n}(M_{G},\partial M)/\sim_{n},\]
we denote the _orbifold mapping class group of \(M_{G}\) with respect to the \(n\) marked points_.
We stress that the orbit of marked points \(G(P)\) is a discrete set. Hence, a homotopy \(H_{t}\) through \(\operatorname{Homeo}^{orb}_{n}(M_{G},\partial M)\) is constant on marked points, i.e. \(H_{t}(p_{j})=H_{0}(p_{j})=H_{1}(p_{j})\) for each \(1\leq j\leq n\) and \(t\in I\).
While the group \(\operatorname{Map}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) is trivial, we obtain for the group \(\operatorname{Map}^{orb}_{1}\left(D_{\mathbb{Z}_{m}}\right)\) with one orbit of marked points:
**Example 3.6**.: \(\operatorname{Map}^{orb}_{1}\left(D_{\mathbb{Z}_{m}}\right)\) is infinite cyclic. The generator is represented by a \(\frac{2\pi}{m}\)-twist \(U\) around the center \(c\) (see Figure 3.1).
Figure 3.1. The \(\frac{2\pi}{m}\)-twist \(U\) for \(m=3\).
We will give a proof of Example 3.6 in the next section. It will follow as a corollary from Proposition 4.3 using the Alexander trick.
Homeomorphisms that map each marked point inside its \(G\)-orbit yield the so called _pure orbifold mapping class group_:
**Definition 3.7** (Pure orbifold mapping class group).: Let \(\mathrm{PHomeo}_{n}^{orb}(M_{G},\partial M)\) be the group of _pure homeomorphisms_
\[\{H\in\mathrm{Homeo}_{n}^{orb}(M_{G},\partial M)\mid H(p_{j})=g_{j}(p_{j})\text { with }g_{j}\in G\text{ for all }1\leq j\leq n\}.\]
The subgroup of \(\mathrm{Map}_{n}^{orb}(M_{G})\) induced by pure homeomorphisms is called the _pure orbifold mapping class group_
\[\mathrm{PMap}_{n}^{orb}(M_{G}):=\mathrm{PHomeo}_{n}^{orb}(M_{G},\partial M)/ \sim_{n}.\]
At this point, we recall that a homeomorphism in the pure mapping class group of a manifold fixes each of the marked points. In contrast, we only require the homeomorphisms in \(\mathrm{PMap}_{n}^{orb}(M_{G})\) to preserve the orbit of each marked point but not to fix the points themselves. Further, we emphasize that we allow different group actions on different orbits of marked points, i.e. \(H(p_{i})=g_{i}(p_{i})\) and \(H(p_{j})=g_{j}(p_{j})\) with \(g_{i}\neq g_{j}\) for \(i\neq j\).
### Arcs and simple closed curves
The study of arcs and simple closed curves is a useful tool to deduce information about mapping class groups. This is based on the fact that each homotopy of arcs or simple closed curves can be realized by an ambient isotopy. For instance, this follows from the so called _bigon criterion_ for arcs and simple closed curves which we recall in the following. For further information, we refer to [5, Sections 1.2.2 and 1.2.7].
Let \(\widetilde{S}\) be a connected compact surface with non-empty boundary. Removing finitely many points from the interior, we obtain a connected surface \(S\). Further, we endow \(S\) with two finite sets of marked points \(P\) and \(Q\). The set \(P\) is contained in the interior while the points from \(Q\) lie on the boundary \(\partial S\).
A continuous map \(b:I\to S\) is called a _path_. We say that \(b\) connects \(b(0)\) to \(b(1)\). In the following, we restrict to paths with \(b(0)\in P\) and \(b(1)\in Q\). A path \(b:I\to S\) is called _proper_ if its image intersects \(P,Q\) and \(\partial S\) precisely in its endpoints. For every path, we identify \(b\) with its image in \(S\). If a path \(b\) embeds \(I\) into \(S\), we call it an _arc_. All paths discussed in this article are proper. Mainly, we will deal with arcs. We consider proper paths up to homotopies relative to the endpoints. By \([b]_{S}\) we denote the homotopy class of a path \(b\) in \(S\).
A continuous map \(c:S^{1}\to S\) is called a _closed curve_. We call a closed curve _proper_ if the image of \(c\) does not intersect with any marked points or the boundary \(\partial S\). For every closed curve \(c\), we identify \(c\) with its image in \(S\). A _simple closed curve_ is a closed curve \(c:S^{1}\to S\) that embeds \(S^{1}\) into \(S\). As for paths, we only consider proper closed curves up to homotopies in the class of proper closed curves and denote the homotopy class of a closed curve \(c\) by \([c]_{S}\).
For a set \(T\), let \(|T|\) denote the cardinality. If \(b,b^{\prime}\) are arcs in \(S\), let
\[i_{S}([b]_{S},[b^{\prime}]_{S})=\min\{|\tilde{b}\cap\tilde{b}^{\prime}|\mid \tilde{b}\in[b]_{S},\tilde{b}^{\prime}\in[b^{\prime}]_{S}\text{ arcs}\}\]
be the _intersection number_ of \([b]_{S}\) and \([b^{\prime}]_{S}\). Representing arcs \(b\) and \(b^{\prime}\) are in _minimal position_ if \(|b\cap b^{\prime}|=i_{S}([b]_{S},[b^{\prime}]_{S})\).
Likewise, the intersection number and minimal position are defined for homotopy classes of simple closed curves.
**Definition 3.8** (Bigon, [5, Section 1.2.4]).: Two paths \(b\) and \(b^{\prime}\) form a _bigon_ if there exists a disk \(D\subseteq S\) such that
1. no marked point is contained in the interior of \(D\),
2. at most one marked point lies in \(\partial D\),
3. \(D\cap(b\cup b^{\prime})=\partial D\),
4. \(\partial D\cap b\) and \(\partial D\cap b^{\prime}\) are non-empty and
5. \(\partial D\cap b\) and \(\partial D\cap b^{\prime}\) are connected.
Likewise, we define bigons for simple closed curves. Due to properness, the condition from (2) is automatically satisfied for closed curves. See Figure 3.2 for a list of examples and counterexamples of bigons.
If two arcs or two simple closed curves form a bigon, they intersect exactly twice in \(\partial D\). We call these intersection points the _endpoints_ of the bigon.
Clearly, for this definition, it is not necessary that \(S\) stems from a compact surface. In particular, we may also consider bigons in the tree-shaped surface \(\Sigma(L)\).
For arcs and simple closed curves, the following is well known:
**Proposition 3.9** (Bigon criterion, [5, Proposition 1.7]).: _Two transverse arcs or two transverse simple closed curves in a surface \(S\) are in minimal position if and only if they do not form a bigon._
A similar criterion also holds for arcs and simple closed curves in the tree-shaped surface \(\Sigma(L)\) from (2).
**Corollary 3.10** (Bigon criterion in \(\Sigma(L)\)).: _Two transverse arcs or two transverse simple closed curves in the tree-shaped surface \(\Sigma(L)\) are in minimal position if and only if they do not form a bigon._
Since we want to consider compact subsurfaces of \(\Sigma(L)\), for the proof and in the following, it is often more convenient to consider the surface \(\Sigma\) with marked points instead of punctures in \(\Gamma(\{r_{1},...,r_{L}\})\). We will also use the notation \(\Sigma(L)\) for the surface \(\Sigma\) with these marked points.
Proof of Corollary 3.10.: On the one hand, it is obvious that two arcs in \(\Sigma(L)\) that form a bigon are not in minimal position.
On the other hand, if two arcs \(b\) and \(b^{\prime}\) do not form a bigon, they are in minimal position in each compact subsurface that contains both arcs. If we assume that these arcs are not in minimal position in \(\Sigma(L)\), there exist homotopic arcs
Figure 3.2. Examples and counterexamples of bigons.
that intersect less than \(b\) and \(b^{\prime}\). Since the connecting homotopies are compactly supported, their images only intersect finitely many \(\Gamma\)-translates of \(F(L)\). The union of these \(\Gamma\)-translates yields a compact subsurface \(\Sigma^{\prime}\) such that \(\tilde{b}\) and \(\tilde{b}^{\prime}\) are contained in \(\Sigma^{\prime}\). Furthermore, in this surface \(\Sigma^{\prime}\) the arcs \(\tilde{b}\) and \(\tilde{b}^{\prime}\) are homotopic to \(b\) and \(b^{\prime}\), respectively. This contradicts the fact that \(b\) and \(b^{\prime}\) are in minimal position in \(\Sigma^{\prime}\). Thus, \(b\) and \(b^{\prime}\) are in minimal position in \(\Sigma(L)\).
The proof of the criterion for simple closed curves follows verbatim.
### \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves
Similarly, we want to discuss the orbifold analogs of arcs and simple closed curves, called \(\Gamma\)_-arcs_ and _simple closed \(\Gamma\)-curves_. The main goal of this section is to establish a bigon criterion for \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves.
For this purpose, we consider the tree shaped surface \(\Sigma(L)\) with marked points in \(\Gamma(\{r_{1},...,r_{L}\})\) and the induced orbifold \(\Sigma_{\Gamma}(L)\).
Now we endow the surface \(\Sigma(L)\) with two sets of marked points: \(P=\{p_{1},...,p_{n}\}\) in the interior of the fundamental domain \(F(L)\) with \(p_{i}\neq p_{j}\) for \(i\neq j\) and \(Q=\{q_{1},...,q_{n^{\prime}}\}\) in \(\partial\Sigma(L)\cap\ F(L)\) with \(q_{k}\neq q_{l}\) for \(k\neq l\) such that all points \(q_{j}\) lie in the same connected component of \(\partial\Sigma(L)\cap F(L)\).
In the following, we consider \(\Gamma\)-paths that connect points in \(P\) to points in \(Q\) (see Definition 2.4). Recall that by Lemma 2.7 each \(\Gamma\)-path is equivalent to a unique continuous \(\Gamma\)-path. Considering \(\Gamma\)-paths _without self-intersections_ leads to an orbifold analog of arcs:
**Definition 3.11** (\(\Gamma\)-arc).: Let \((\gamma,b)\) be a continuous \(\Gamma\)-path. By \(\Gamma(b)\), we denote the union of the \(\Gamma\)-translates of \(b\). We call the \(\Gamma\)-path \((\gamma,b)\)_proper_ if \(b\) intersects \(\Gamma(P)\) and \(\partial\Sigma(L)\) precisely in its endpoints. If \(b\) embeds \(I\) into the tree-shaped surface and the \(\Gamma\)-translates of \(b\) are disjoint, we call \((\gamma,b)\) a \(\Gamma\)_-arc_. Given two \(\Gamma\)-arcs \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\), we call them _disjoint_ if the orbits \(\Gamma(b)\) and \(\Gamma(b^{\prime})\) are disjoint.
A \(\Gamma\)-path \(\beta=(\gamma_{0},b_{1},\gamma_{1},...,b_{p},\gamma_{p})\) is a \(\Gamma\)-arc if the unique equivalent continuous \(\Gamma\)-path \((\gamma,b)\) is a \(\Gamma\)-arc. We call two \(\Gamma\)-arcs \(\beta\) and \(\beta^{\prime}\)_disjoint_ if the unique equivalent continuous \(\Gamma\)-paths are disjoint.
An example and a counterexample for a \(\Gamma\)-arc are pictured in Figure 3.3. In the upper row from left to right, the figure depicts a \(\mathbb{Z}_{3}\)-arc, its unique equivalent continuous \(\mathbb{Z}_{3}\)-arc and its \(\mathbb{Z}_{3}\)-orbit. In the bottom row, the analogs are given for a \(\mathbb{Z}_{3}\)-path that is not a \(\mathbb{Z}_{3}\)-arc.
Figure 3.3. An example (upper row) and a counterexample (bottom row) of a \(\Gamma\)-arc.
As in the case of arcs, we homotope \(\Gamma\)-arcs through proper \(\Gamma\)-paths relative to the endpoints:
**Definition 3.12** (Homotopy and isotopy of \(\Gamma\)-arcs, [2, Chapter III.G, 3.5]).: Two \(\Gamma\)-arcs \(\beta\) and \(\beta^{\prime}\) are _homotopic_ if one can be obtained from the other by a sequence of the following operations:
1. equivalence of \(\Gamma\)-paths (see Definition 2.5),
2. elementary homotopies \(\beta_{t}\) (see Definition 2.6) such that \(\beta_{t}\) is proper for all \(t\).
If in each elementary homotopy that appears in the finite sequence the \(\Gamma\)-path \(\beta_{t}\) is a \(\Gamma\)-arc for every \(t\), we call \(\beta\) and \(\beta^{\prime}\)_isotopic_. By \([\beta]\) we denote the homotopy class of \(\beta\). For a continuous \(\Gamma\)-arc \((\gamma,b)\), we denote the homotopy class by \([\gamma,b]\).
Further, we introduce an orbifold analog of simple closed curves.
**Definition 3.13** (Simple closed \(\Gamma\)-curve).: A simple closed curve \(c:S^{1}\to\Sigma(L)\) is a _simple closed \(\Gamma\)-curve_ if the \(\Gamma\)-translates are either disjoint or coincide. A simple closed \(\Gamma\)-curve \(c\) is _proper_ if it does not intersect \(\Gamma(P)\) and \(\partial\Sigma(L)\). Given two simple closed \(\Gamma\)-arcs \(c\) and \(c^{\prime}\), we call them _disjoint_ if the orbits \(\Gamma(c)\) and \(\Gamma(c^{\prime})\) are disjoint.
An example and a counterexample for a simple closed \(\Gamma\)-curve are pictured in Figure 3.4. In the upper row from left to right, the figure depicts a simple closed \(\mathbb{Z}_{3}\)-curve and its \(\mathbb{Z}_{3}\)-orbit. In the bottom row, the analogs are given for a simple closed curve that is not a simple closed \(\mathbb{Z}_{3}\)-curve.
**Definition 3.14** (Homotopy and isotopy of simple closed \(\Gamma\)-curves).: Two simple closed \(\Gamma\)-curves are _homotopic_ if they are homotopic as simple closed curves in \(\Sigma(L)\). Two simple closed curves are _isotopic_ if they are isotopic as simple closed curves via an isotopy \(h_{t}\) such that \(h_{t}\) is a simple closed \(\Gamma\)-curve at each time \(t\in I\).
Motivated by the intersection number of homotopy classes of arcs and simple closed curves, we define _intersection number_ and _minimal position_ for homotopy classes of \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves.
Figure 3.4. An example (upper row) and a counterexample (bottom row) of a simple closed \(\Gamma\)-curve.
**Definition 3.15** (Minimal position for \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves).: Let \(\beta\) and \(\beta^{\prime}\) be \(\Gamma\)-arcs in \(\Sigma_{\Gamma}(L)\). The homotopy classes \([\beta]\) and \([\beta^{\prime}]\) have _intersection number_
\[i([\beta],[\beta^{\prime}])=\min\{|\tilde{\beta}\cap\Gamma(\tilde{\beta}^{ \prime})|\mid\tilde{\beta}\in[\beta]\text{ and }\tilde{\beta}^{\prime}\in[\beta^{\prime}]\; \Gamma\text{-arcs}\}.\]
Here above, we identify a \(\Gamma\)-arc \((\gamma_{0},b_{1},\gamma_{1},...,b_{p},\gamma_{p})\) with the image of the induced continuous function \([\bigcup_{i=1}^{p}[t_{i-1},t_{i}]\rightarrow\Sigma(L),s\mapsto b_{i}(s)\) for \(s\in[t_{i-1},t_{i}]\).
If \(i([\beta],[\beta^{\prime}])=0\), for \([\beta]\) and \([\beta^{\prime}]\) there exist representatives \(\tilde{\beta}\) and \(\tilde{\beta}^{\prime}\) that are \(\Gamma\)-arcs and \(\tilde{\beta}\) is disjoint from \(\tilde{\beta}^{\prime}\). Representing \(\Gamma\)-arcs \(\beta\) and \(\beta^{\prime}\) are in _minimal position_ if \(|\beta\cap\Gamma(\beta^{\prime})|=i([\beta],[\beta^{\prime}])\).
Likewise, we define the _intersection number_ for homotopy classes of simple closed \(\Gamma\)-curves and _minimal position_ for simple closed \(\Gamma\)-curves.
For \(\Gamma\)-arcs, we further observe:
_Remark 3.16_.: By Lemma 2.7, each \(\Gamma\)-arc is equivalent to a unique continuous \(\Gamma\)-arc. Since the equivalence relation preserves the \(\Gamma\)-orbit of a \(\Gamma\)-arc, the intersection number satisfies:
\[i([\beta],[\beta^{\prime}])=\min\{|b\cap\Gamma(b^{\prime})|\mid(\gamma,b)\in[ \beta],(\gamma^{\prime},b^{\prime})\in[\beta^{\prime}]\text{ continuous }\Gamma\text{-arcs}\}\]
The following yields a sufficient condition for \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves in minimal position:
**Lemma 3.17**.: _Let \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) be continuous \(\Gamma\)-arcs in \(\Sigma_{\Gamma}(L)\). If \(b\) is in minimal position with each \(\Gamma\)-translate of \(b^{\prime}\), the \(\Gamma\)-arcs \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) are in minimal position._
Proof.: If \(b\) is in minimal position with each \(\Gamma\)-translate of \(b^{\prime}\), we have
\[i_{\Sigma(L)}([b]_{\Sigma(L)},[\tilde{\gamma}^{\prime}(b^{\prime})]_{\Sigma( L)})=|b\cap\tilde{\gamma}^{\prime}(b^{\prime})| \tag{5}\]
for each \(\tilde{\gamma}^{\prime}\in\Gamma\). Further, recall that the intersection number of \([\gamma,b]\) and \([\gamma^{\prime},b^{\prime}]\) is given by
\[i([\gamma,b],[\gamma^{\prime},b^{\prime}])=\min\{|\tilde{b}\cap\Gamma(\tilde {b}^{\prime})|\mid(\gamma,\tilde{b})\in[\gamma,b],(\gamma^{\prime},\tilde{b} ^{\prime})\in[\gamma^{\prime},b^{\prime}]\;\Gamma\text{-arcs}\}.\]
For \((\gamma,\tilde{b})\in[\gamma,b]\) and \((\gamma^{\prime},\tilde{b}^{\prime})\in[\gamma^{\prime},b^{\prime}]\), we observe:
\[|\tilde{b}\cap\Gamma(\tilde{b}^{\prime})| = \sum_{\tilde{\gamma}^{\prime}\in\Gamma}|\tilde{b}\cap\tilde{\gamma }^{\prime}(\tilde{b}^{\prime})| \geqslant \sum_{\tilde{\gamma}^{\prime}\in\Gamma}i_{\Sigma(L)}([b]_{ \Sigma(L)},[\tilde{\gamma}^{\prime}(b^{\prime})]_{\Sigma(L)})\] \[\stackrel{{(\ref{eq:def})}}{{=}} \sum_{\tilde{\gamma}^{\prime}\in\Gamma}|b\cap\tilde{\gamma}^{ \prime}(b^{\prime})| = |b\cap\Gamma(b^{\prime})|. \tag{6}\]
Here above, the first equation follows from the fact that \((\gamma^{\prime},\tilde{b}^{\prime})\) is a \(\Gamma\)-arc. The definition of \(i_{\Sigma(L)}\) implies the following \(\geqslant\)-estimate, then we apply (5) and the last equation follows as the first one. Finally, the above estimate implies that
\[i([\gamma,b],[\gamma^{\prime},b^{\prime}])\geqslant|b\cap\Gamma(b^{\prime})|.\]
On the other hand, the definition of the intersection number yields
\[i([\gamma,b],[\gamma^{\prime},b^{\prime}])\leqslant|b\cap\Gamma(b^{\prime})|.\]
Hence, \(i([\gamma,b],[\gamma^{\prime},b^{\prime}])=|b\cap\Gamma(b^{\prime})|\), i.e. \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) are in minimal position.
For the characterization of \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves in minimal position, we introduce orbifold analogs of bigons:
**Definition 3.18** (Pseudo-bigons and bigons for \(\Gamma\)-paths and simple closed \(\Gamma\)-curves).: Two continuous \(\Gamma\)-paths \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) form a _pseudo-bigon_ if there exist \(\Gamma\)-translates \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) such that the paths \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) form a bigon (in the sense of Definition 3.8). If no other \(\Gamma\)-translate of \(b\) or \(b^{\prime}\) intersects the bigon spanned by \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\), we say \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) form a _bigon_. See Figure 3.5 for examples of a bigon and a pseudo-bigon.
Likewise, we define pseudo-bigons and bigons of simple closed \(\Gamma\)-curves.
Two \(\Gamma\)-paths \(\beta\) and \(\beta^{\prime}\) form a _pseudo-bigon_ or a _bigon_ if the unique equivalent continuous \(\Gamma\)-paths \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) form a pseudo-bigon or a bigon, respectively.
For \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves, bigons and pseudo-bigons are related as follows:
**Lemma 3.19**.: _Let \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) be \(\Gamma\)-arcs that bound a pseudo-bigon such that the bigon bounded by the \(\Gamma\)-translates does not contain a cone point. Then \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) bound a bigon._
_The same holds for simple closed \(\Gamma\)-curves._
Proof.: Let us consider two \(\Gamma\)-translates, let us say \(b\) and \(b^{\prime}\), that bound a bigon that does not contain a cone point. Then we may consider all the \(\Gamma\)-translates of \(b\) and \(b^{\prime}\) that intersect the pseudo-bigon. Since \((\gamma,b)\) is a \(\Gamma\)-arc, its \(\Gamma\)-translates only intersect the boundary of the bigon in \(b^{\prime}\). This implies that each intersecting \(\Gamma\)-translate \(\tilde{\gamma}(b)\) bounds a bigon with \(b^{\prime}\). Since no cone point lies inside the bigon, the bounded disk has the structure of a surface. This allows us, to pick a bigon that is not intersected by any other \(\Gamma\)-translate of \(b\). Applying the same idea for \((\gamma^{\prime},b^{\prime})\), we obtain an innermost bigon that is not intersected by any other \(\Gamma\)-translates (see Figure 3.6). This shows that \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) bound a bigon.
Figure 3.6. Pseudo-bigon without a cone point with innermost bigons shadowed in gray.
Figure 3.5. Examples of a bigon and a pseudo-bigon.
By Example 2.3, the tree shaped surface \(\Sigma(L)\) embeds into the complex plane. Formally, this map yields an atlas that endows \(\Sigma(L)\) with a differentiable structure. In terms of this differentiable structure we define:
**Definition 3.20** (Transversality of \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves).: Let \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) be two continuous \(\Gamma\)-arcs that intersect in a point \(x\), i.e. \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) intersect in \(x\). The intersection is _transverse_ if the arcs \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) intersect transversely as arcs in \(\Sigma(L)\). If each intersection of \(\Gamma\)-translates \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) is transverse, we call \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\)_transverse_.
Likewise, we define transversality of simple closed \(\Gamma\)-curves.
Two \(\Gamma\)-arcs \(\beta\) and \(\beta^{\prime}\) are called _transverse_ if the unique equivalent continuous \(\Gamma\)-arcs are transverse.
Now we generalize the bigon criterion from Proposition 3.9 to a criterion for \(\Gamma\)-arcs and simple closed \(\Gamma\)-curves. Since the proof for simple closed \(\Gamma\)-curves is the easier one, we begin with this. We want to prove:
**Proposition 3.21** (Bigon criterion for simple closed \(\Gamma\)-curves).: _Let \(c,c^{\prime}\) be two transverse simple closed \(\Gamma\)-curves in \(\Sigma_{\Gamma}(L)\). The simple closed \(\Gamma\)-curves are in minimal position if and only if they form no bigons._
We start with an observation that has strong implications on pseudo-bigons.
**Lemma 3.22**.: _If two \(\Gamma\)-translates of a simple closed \(\Gamma\)-curve in \(\Sigma_{\Gamma}(L)\) bound intersecting disks, the \(\Gamma\)-translates coincide._
Proof.: Let \(c\) be a simple closed \(\Gamma\)-curve and \(\gamma(c)\) with \(\gamma\neq 1\) a \(\Gamma\)-translate such that \(c\) and \(\gamma(c)\) bound intersecting disks. Due to the Jordan curve theorem [4, Chapter XVII, Theorem 5.4], each of the curves \(c\) and \(\gamma(c)\) divides the surface \(\Sigma\) into two components, the inner and outer region. While the inner region is a bounded disk, the outer region is not bounded. In particular, \(\gamma\) maps the disk \(D_{c}\), bounded by \(c\), to the disk \(D_{\gamma(c)}\), bounded by \(\gamma(c)\).
By Definition 3.13, \(c\) and \(\gamma(c)\) either coincide or their intersection is empty. In the first case we are done. The second case implies that \(\gamma(c)\) lies inside the disk bounded by \(c\) or likewise \(c\) lies inside the disk bounded by \(\gamma(c)\). We show that this cannot happen since \(\Gamma\) acts isometrically with respect to the metric defined in (1).
Let us assume that \(\gamma(c)\) lies in the disk bounded by \(c\). This implies that the diameter of the bounded disks \(D_{c}\) and \(D_{\gamma(c)}\) satisfy
\[\sup_{x,y\in D_{c}}d(x,y)>\sup_{x,y\in D_{\gamma(c)}}d(x,y).\]
Since \(D_{c}\) and \(D_{\gamma(c)}\) are compact and the metric is continuous, both suprema are maxima. Hence, there exist \(x_{0},y_{0}\in D_{c}\) with \(d(x_{0},y_{0})=\sup_{x,y\in D_{c}}d(x,y)\). Using the relation of the maxima and \(\gamma(D_{c})=D_{\gamma(c)}\), this in particular implies \(d(x_{0},y_{0})\neq d(\gamma(x_{0}),\gamma(y_{0}))\). This contradicts the fact that \(\Gamma\) acts isometrically on \(\Sigma(L)\). Consequently, \(\gamma(c)\) and \(c\) coincide and the Lemma holds.
**Corollary 3.23**.: _If two simple closed \(\Gamma\)-curves \(c\) and \(c^{\prime}\) in \(\Sigma_{\Gamma}(L)\) bound a pseudo-bigon, it is a bigon._
Proof.: Let us assume that \(c\) and \(c^{\prime}\) bound a pseudo-bigon such that there exists a \(\Gamma\)-translate, let us say \(\gamma(c)\), that intersects the pseudo-bigon. Then \(c\) and \(\gamma(c)\) bound disks that intersect. By Lemma 3.22, this implies that \(c=\gamma(c)\). Hence, no \(\Gamma\)-translate intersects the pseudo-bigon, i.e. it is a bigon.
Further, Lemma 3.22 allows us to deduce the analog of Lemma 3.17 for simple closed \(\Gamma\)-curves.
**Lemma 3.24**.: _Let \(c\) and \(c^{\prime}\) be simple closed \(\Gamma\)-curves in \(\Sigma_{\Gamma}(L)\). If \(c\) is in minimal position with each \(\Gamma\)-translate of \(c^{\prime}\), the simple closed \(\Gamma\)-curves \(c\) and \(c^{\prime}\) are in minimal position._
Proof.: The proof of Lemma 3.17 was based on a counting argument. To apply the same argument for simple closed \(\Gamma\)-curves, we have to take into account that simple closed curves may have a non-trivial stabilizer. We address that distinguishing two cases.
Firstly, let \(c\) bound no marked points or punctures. Since each \(\Gamma\)-translate \(\gamma^{\prime}(c^{\prime})\) is in minimal position with \(c\), Corollary 3.10 implies that \(\gamma^{\prime}(c^{\prime})\) and \(c\) form no bigon. Thus, no \(\Gamma\)-translate \(\gamma^{\prime}(c^{\prime})\) intersects \(c\). Hence,
\[0\leq i([c],[c^{\prime}])\leq|c\cap\Gamma(c^{\prime})|=0\]
which implies \(i([c],[c^{\prime}])=|c\cap\Gamma(c^{\prime})|=0\).
If \(c\) bounds a set of marked points and punctures \(S\subseteq\Gamma(\{p_{1},...,p_{n},r_{1},...,r_{L}\})\), we observe that each curve \(\tilde{c}\in[c]\) bounds the same set of marked points \(S\). We prove:
_Claim._ An element \(\gamma\in\Gamma\) preserves a simple closed \(\Gamma\)-curve \(\tilde{c}\in[c]\) if and only if \(\gamma\) preserves the set \(S\).
If \(\gamma(\tilde{c})=\tilde{c}\), the element \(\gamma\) preserves the bounded disk \(D_{\tilde{c}}\). Due to the \(\Gamma\)-invariance of marked points and punctures, this implies that \(\gamma(S)=S\).
For the opposite implication, let \(\gamma\in\Gamma\) such that \(\gamma(S)=S\). We observe:
\[S=\gamma(S)\subseteq\gamma(D_{\tilde{c}})=D_{\gamma(\tilde{c})},\]
i.e. \(\gamma(\tilde{c})\) bounds \(S\). By Lemma 3.22, this implies \(\gamma(\tilde{c})=\tilde{c}\). This proves the claim.
Now let \(\Gamma_{c}\subseteq\Gamma\) be a minimal subset such that \(\Gamma_{c}(c)=\Gamma(c)\). The above claim implies that \(\Gamma_{c}\subseteq\Gamma\) is also a minimal set that satisfies \(\Gamma_{c}(\tilde{c})=\Gamma(\tilde{c})\) for each simple closed \(\Gamma\)-curve \(\tilde{c}\in[c]\). Now the proof of the lemma follows verbatim as in Lemma 3.17 if we replace the index set \(\Gamma\) by \(\Gamma_{c}\) in equation (6).
Proof of Proposition 3.21.: We use contraposition, i.e. we prove: \(c\) and \(c^{\prime}\) are not in minimal position if and only if they form a bigon.
If \(c,c^{\prime}\) form a bigon (in the sense of Definition 3.18), we can homotope through the bigon to reduce the number of intersections. Consequently, \(c\) and \(c^{\prime}\) are not in minimal position.
On the other hand, let us assume that the simple closed \(\Gamma\)-curves \(c\) and \(c^{\prime}\) are not in minimal position. This assumption by Lemma 3.24 implies that there exists a \(\Gamma\)-translate \(\tilde{\gamma}^{\prime}(c^{\prime})\) such that \(c\) and \(\tilde{\gamma}^{\prime}(c^{\prime})\) are not in minimal position. By Corollary 3.10, we obtain that \(c\) and \(\tilde{\gamma}^{\prime}(c^{\prime})\) form a bigon, i.e. the simple closed \(\Gamma\)-curves \(c\) and \(c^{\prime}\) bound a pseudo-bigon. Thus, by Corollary 3.23, \(c\) and \(c^{\prime}\) form a bigon. This proves: If \(c\) and \(c^{\prime}\) are not in minimal position, then they form a bigon. The bigon criterion for simple closed \(\Gamma\)-curves follows.
For \(\Gamma\)-arcs, in analogy to Proposition 3.21 we want to prove:
**Proposition 3.25** (Bigon criterion for \(\Gamma\)-arcs).: _Let \(\beta\) and \(\beta^{\prime}\) be two transverse \(\Gamma\)-arcs in \(\Sigma_{\Gamma}(L)\). The \(\Gamma\)-arcs are in minimal position if and only if they form no bigons._
The proof of the bigon criterion for \(\Gamma\)-arcs in \(\Sigma_{\Gamma}(L)\) is also based on Corollary 3.10. However, the proof is more complicated than for simple closed \(\Gamma\)-curves since we do not have a direct analog of Corollary 3.23. Instead, we will prove a necessary condition for \(\Gamma\)-paths that form a pseudo-bigon which contains a cone point:
**Lemma 3.26**.: _Let \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) be continuous \(\Gamma\)-paths. If \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) bound no bigons and bound a pseudo-bigon such that the bounded disk \(D\subseteq\Sigma(L)\) contains a cone point, then at least one of the \(\Gamma\)-paths is not a \(\Gamma\)-arc._
For the proof, we consider the parts of the boundary \(\partial F(L)\) that are not contained in \(\partial\Sigma(L)\), examples for these parts are the red arcs in Figure 3.5. These pieces are \(\Gamma\)-translates of arcs which we denote by \(s_{\nu}\) for \(1\leqslant\nu\leqslant N\). The arc \(s_{\nu}\) connects the cone point \(c_{\nu}\) to the boundary \(\partial\Sigma(L)\). Let \(s_{\nu}\) be parametrized by \(I\) starting at the cone point \(c_{\nu}\) and ending in \(\partial\Sigma(L)\).
Proof of Lemma 3.26.: Let us assume that \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) are \(\Gamma\)_-arcs_ that bound no bigons but a pseudo-bigon that contains the cone point \(\rho(c_{\mu})\) for some \(\rho\in\Gamma\). Moreover, for \(1\leqslant i\leqslant m_{\mu}\), let \(\gamma_{i}(s_{\mu})\) be the \(\Gamma\)-translates that connect the cone point \(\rho(c_{\mu})\) to the boundary.
For brevity of notation, let us assume that the \(\Gamma\)-translates \(b\) and \(b^{\prime}\) bound the pseudo-bigon. Furthermore, we may assume, for instance by piecewise linear approximation, that \(b\) and \(b^{\prime}\) intersect finitely many times with the \(\Gamma\)-translates of \(s_{\nu}\) for \(1\leqslant\nu\leqslant N\). Using that \((\gamma,b)\) is a \(\Gamma\)-arc, we may push off the arcs \(\Gamma(s_{\mu})\) from \(b\) by a \(\Gamma\)-equivariant Hatcher flow. See [9] for further details on the classical Hatcher flow and Figure 3.7 for an example of its \(\Gamma\)-equivariant version. This allows us to assume that \(b\) is disjoint from \(\Gamma(s_{\mu})\), i.e. \(b\) is contained in \(F(L)\). In particular, \((\gamma,b)\) is a \(\Gamma\)-arc.
Using that \(b\) and \(b^{\prime}\) bound a disk \(D\) that contains \(\rho(c_{\mu})\), this implies that the arc \(b^{\prime}\) intersects each of the \(\Gamma\)-translates \(\gamma_{i}(s_{\mu})\) such that the algebraic intersection number \(i_{\pm}(b^{\prime},\gamma_{i}(s_{\mu}))\) has the same sign for each \(1\leqslant i\leqslant m_{\mu}\) (see Figure 3.8). Without loss of generality, we assume that the algebraic intersection number is negative, i.e. \(b^{\prime}\) encircles \(\gamma_{i}(s_{\mu})\) counterclockwise.
Figure 3.7. \(\Gamma\)-equivariant Hatcher flow.
Let \(T_{0}\) be the maximal time such that \(b^{\prime}(T_{0})\in b\cap\partial D\), i.e. \(b^{\prime}(T_{0})\) is an endpoint of the pseudo-bigon, and consider the strictly decreasing sequence \((t_{j})_{j\in N}\) of all times \(t_{j}\in[0,T_{0})\) such that \(b^{\prime}(t_{j})\in\bigcup_{i=1}^{m_{\mu}}\gamma_{i}(s_{\mu})\), where \(N=\{0,...,J\}\) for some \(J\in\mathbb{N}\). The definition of \((t_{j})_{j\in N}\) further induces a time \(t_{j}^{(\mu)}\) and a \(\Gamma\)-translate \(\gamma_{i_{j}}(s_{\mu})\) such that \(b^{\prime}(t_{j})=\gamma_{i_{j}}(s_{\mu})(t_{j}^{(\mu)})\) for every \(j\in N\).
Now we may choose the following subsequence of elements \(t_{j_{k}}\): Let \(t_{j_{1}}\) be the maximal time such that \(b^{\prime}\) intersects \(\gamma_{j_{k}}(s_{\mu})\) in counterclockwise direction. Moreover, let \(t_{j_{k}}\) be the maximal time \(t_{j_{k}}<t_{j_{k-1}}\) such that
* \(b^{\prime}\) intersects \(\gamma_{j_{k}}(s_{\mu})=\rho_{\mu}\gamma_{j_{k-1}}(s_{\mu})\) counterclockwise at time \(t_{j_{k}}\) with \(\rho_{\mu}\in\Gamma\) inducing a counterclockwise rotation of angle \(\frac{2\pi}{m_{\mu}}\) around \(\rho(c_{\mu})\).
* there exists an \(\varepsilon>0\) with \([t_{j_{k-1}}+\varepsilon,t_{j_{k}}-\varepsilon]\) containing all \(t_{j}\) from \([t_{j_{k-1}},t_{j_{k}}]\) and \(i_{\pm}(b^{\prime}|_{[t_{j_{k-1}}+\varepsilon,t_{j_{k}}-\varepsilon]},\gamma_ {i}(s_{\mu}))=0\) for all \(1\leqslant i\leqslant m_{\mu}\).
This defines a subsequence \((t_{j_{k}})_{k\in N^{\prime}}\) of \((t_{j})_{j\in N}\). Since \(b\) and \(b^{\prime}\) bound a bigon that contains \(\rho(c_{\mu})\), this subsequence has at least \(m_{\mu}\) entries.
Claim: The sequence \((t_{j_{k}}^{(\mu)})_{k\in N^{\prime}}\) is strictly decreasing.
We prove the claim by induction on \(k\). The idea is that the disjointness of the \(\Gamma\)-translates of \(b^{\prime}\) forces these \(\Gamma\)-translates to wind around \(\rho(c_{\mu})\) as a snail shell (see Figure 3.8).
If \(t_{j_{k}}^{(\mu)}=t_{j_{k^{\prime}}}^{(\mu)}\) for some \(k,k^{\prime}\in N^{\prime},k\neq k^{\prime}\), the \(\Gamma\)-translates of \(b^{\prime}\) intersect. This contradicts the assumption that \((\gamma^{\prime},b^{\prime})\) is a \(\Gamma\)-arc.
For \(k=1\), the \(\Gamma\)-translate \(\rho_{\mu}(b^{\prime})|_{[t_{j_{1}},1]}\) (i.e. the blue arc in Figure 3.8) connects \(\rho_{\mu}(b^{\prime})(t_{j_{1}})=\gamma_{i_{j_{2}}}(s_{\mu})(t_{j_{1}}^{(\mu)})\) to the boundary. Hence, \(\rho_{\mu}(b^{\prime})\) forces the arc \(b^{\prime}\) (the relevant part of \(b^{\prime}\) is drawn in orange in Figure 3.8) to intersect \(\gamma_{i_{j_{2}}}(s_{\mu})|_{[t_{j_{1}}^{(\mu)},1]}\) at time \(t_{j_{2}}\) such that \(t_{j_{1}}^{(\mu)}>t_{j_{2}}^{(\mu)}\). In Figure 3.8 this is reflected by the fact that the orange intersection points lie closer to the cone point than the blue points.
Now we may assume that \((t_{j_{k}}^{(\mu)})_{k\in N^{\prime}}\) is strictly decreasing up to a fixed \(l\). Recall that \(\gamma_{i_{j_{l+1}}}(s_{\mu})(t_{j_{l}})\) connects to the boundary via the \(\Gamma\)-translate \(\rho_{\mu}(b^{\prime})|_{[t_{j_{l}},1]}\). Thus, \(\rho_{\mu}(b^{\prime})|_{[t_{j_{l}},1]}\) enforces \(b^{\prime}|_{[0,t_{j_{l}}]}\) to intersect the next \(\Gamma\)-translate \(\gamma_{i_{j_{l+1}}}(s_{\mu})\) at a point \(\gamma_{i_{j_{l+1}}}(s_{\mu})(t_{j_{k+1}}^{(\mu)})\) with \(t_{j_{l}}^{(\mu)}>t_{j_{l+1}}^{(\mu)}\). For \(l=2\), this is also visible in Figure 3.8: The gray piece forces the yellow arc to intersect the next \(\Gamma\)-translate of \(s_{\mu}\) closer to the cone point than the gray piece. Thus, the yellow intersection points are closer to the cone point than the orange ones. By induction on \(k\), this implies the claim.
Now let us consider the pseudo-bigon bounded by \(b\) and \(b^{\prime}\) that contains the cone point \(\rho(c_{\mu})\). An example is shaded in gray in Figure 3.9. Moreover, let us consider the \(\Gamma\)-translate \(\rho_{\mu}^{-1}(b^{\prime})\). For this \(\Gamma\)-translate, there exists a time \(t_{j_{k}}\) such that \(\rho_{\mu}^{-1}(b^{\prime})(t_{j_{k}})=\rho_{\mu}^{-1}\gamma_{i_{j_{k}}}(s_{\mu })(t_{j_{k}}^{(\mu)})\), i.e. the blue point in Figure 3.9. Since the sequence \((t_{j_{k}}^{(\mu)})_{k\in N^{\prime}}\) is strictly decreasing, this intersection on \(\rho_{\mu}^{-1}\gamma_{i_{j_{k}}}(s_{\mu})=\gamma_{i_{j_{k-1}}}(s_{\mu})\) lies closer to the cone point than the point \(b^{\prime}(t_{j_{k-1}})=\gamma_{i_{j_{k-1}}}(s_{\mu})(t_{j_{k-1}}^{(\mu)})\), i.e. the yellow point in Figure 3.9. This implies that the blue point \(\rho_{\mu}^{-1}\gamma_{i_{j_{k}}}(s_{\mu})(t_{j_{k}}^{(\mu)})\) lies inside the pseudo-bigon.
Now we consider the piece \(\rho_{\mu}^{-1}(b^{\prime})([0,t_{j_{k}}])\).
If the piece \(\rho_{\mu}^{-1}(b^{\prime})([0,t_{j_{k}}])\) is entirely contained in the disk bounded by the pseudo-bigon, this disk in particular contains the initial point of \(\rho_{\mu}^{-1}(b^{\prime})\). Since this point is a marked point, this contradicts the definition of a pseudo-bigon.
If the piece \(\rho_{\mu}^{-1}(b^{\prime})([0,t_{j_{k}}])\) intersects the arc \(b^{\prime}\), the \(\Gamma\)-translates of \(b^{\prime}\) are not disjoint. This contradicts the assumption that \((\gamma^{\prime},b^{\prime})\) is a \(\Gamma\)-arc.
If the piece \(\rho_{\mu}^{-1}(b^{\prime})([0,t_{j_{k}}])\) intersects the arc \(b\), we obtain that the \(\Gamma\)-translate \(\rho_{\mu}^{-1}(b^{\prime})\) intersects the arc \(b\) twice - when it enters and when it leaves the pseudo-bigon. Since the pseudo-bigon bounds a disk, this implies that \(\rho_{\mu}^{-1}(b^{\prime})\) and \(b\) bound a bigon.
If this bigon does not contain a cone point, Lemma 3.19 implies that \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) form a bigon. This contradicts our assumption.
If the bigon formed by \(\rho_{\mu}^{-1}(b^{\prime})\) and \(b\) contains a cone point (see Figure 3.10 shaded in dark gray), we obtain another pseudo-bigon bounded by \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) that contains a cone point. The existence of this pseudo-bigon that bounds a cone point requires an additional intersection of \(\rho_{\mu}^{-1}(b^{\prime})\) with \(\langle\rho_{\mu}\rangle(s_{\mu})\) that contributes to the sequence \((t_{j_{k}})_{k\in N^{\prime}}\). In Figure 3.10 the additional intersection is marked by the yellow point.
Given the new pseudo-bigon, we may iteratively either apply one of the previous cases or construct new pseudo-bigons. As described above, each new pseudo-bigon requires an additional entry in the sequence \((t_{j_{k}})_{k\in N^{\prime}}\). Since the index set \(N^{\prime}\) of the sequence \((t_{j_{k}})_{k\in N^{\prime}}\) is finite, after finitely many steps no new pseudo-bigon can be obtained. Thus, we end in one of the previous cases and obtain a contradiction. Consequently, our assumption that both \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) are \(\Gamma\)-arcs was wrong and the lemma follows.
Figure 3.10. Another pseudo-bigon that contains a cone point.
Figure 3.9. A pseudo-bigon that contains a cone-point.
Proof of Proposition 3.25.: Using Remark 3.16, the \(\Gamma\)-arcs \(\beta\) and \(\beta^{\prime}\) are in minimal position if and only if the equivalent continuous arcs \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) are in minimal position. Therefore, it is enough to prove the claim for continuous \(\Gamma\)-arcs. We use contraposition, i.e. we prove: \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) are not in minimal position if and only if they form a bigon.
If the \(\Gamma\)-arcs \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) form a bigon, homotoping through the bigon reduces the number of intersections.
For the opposite implication, firstly, let \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) be \(\Gamma\)-arcs that are not in minimal position. Secondly, let us assume that \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) do not form any bigons. Using the first assumption, Lemma 3.17 implies that there exists a \(\Gamma\)-translate \(\tilde{\gamma}^{\prime}(b^{\prime})\) such that \(b\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) are not in minimal position. By Corollary 3.10, this implies that \(b\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) form a bigon as arcs in \(\Sigma(L)\). On the basis of Definition 3.18, this implies that \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) form a pseudo-bigon. Due to Lemma 3.26 and the second assumption, this pseudo-bigon does not contain a cone point. Hence, Lemma 3.19 implies that \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) also bound a bigon which contradicts our assumption. Consequently, each two \(\Gamma\)-arcs that are not in minimal position form a bigon.
As for arcs and simple closed curves, we want to use the bigon criterion to observe that each homotopy between two \(\Gamma\)-arcs or two simple closed \(\Gamma\)-curves can be realized by an ambient isotopy. Therefore, we prove:
**Lemma 3.27**.: _Let \(C\) be a compact set and let the group \(\Gamma\) act on \(\Gamma\times C\) by multiplication in the first factor. Moreover, let \(\lambda:\Gamma\times C\rightarrow\Sigma(L)\) be a continuous \(\Gamma\)-equivariant map. If \(\lambda(\{\gamma\}\times C)\) and \(\lambda(\{\gamma^{\prime}\}\times C)\) are disjoint or equal for all \(\gamma,\gamma^{\prime}\in\Gamma,\gamma\neq\gamma^{\prime}\), then there exists \(\varepsilon>0\) such that the \(\varepsilon\)-neighborhoods of \(\lambda(\{\gamma\}\times C)\) for \(\gamma\in\Gamma\) are pairwise either disjoint or equal._
In particular, given a bigon bounded by the \(\Gamma\)-arcs \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\), this lemma will allow us to construct disjoint \(\varepsilon\)-neighborhoods of the disks bounded by the \(\Gamma\)-translates of \(b\) and \(b^{\prime}\). For this purpose, we choose the compact set \(C\) from Lemma 3.27 to be a disk and endow \(\Gamma\) with the discrete topology. Additionally, we consider a map \(\lambda\) that for each \(\gamma\in\Gamma\) embeds the disks \(\{\gamma\}\times C\) into \(\Sigma(L)\) such that each disk in the image of \(\lambda\) is bounded by certain \(\Gamma\)-translates of \(b\) and \(b^{\prime}\).
To obtain disjoint \(\varepsilon\)-neighborhoods, we measure the distance of the \(\Gamma\)-translates of \(\lambda(\{\gamma\}\times C)\). Using the tiling of \(\Sigma\) into \(\Gamma\)-translates of \(F\), we determine this distance considering finitely many \(\Gamma\)-translates and deduce that the distance is positive.
Proof of Lemma 3.27.: The compactness of \(C\) implies that \(\lambda(\{\gamma\}\times C)\) is compact for every \(\gamma\in\Gamma\). Hence, each \(\lambda(\{\gamma\}\times C)\) intersects only finitely many \(\Gamma\)-translates of the fundamental domain \(F(L)\). Let \(\{\gamma_{1},...,\gamma_{p}\}\) be the subset of \(\Gamma\) such that \(\lambda(\{1\}\times C)\) intersects \(\gamma_{i}(F(L))\) or an adjacent \(\Gamma\)-translate. Further, the compactness of \(F(L)\) implies that \(\lambda(\{\gamma\}\times C)\) intersects \(\gamma_{i}(F(L))\) only for finitely many \(\gamma\in\Gamma\). For each \(1\leqslant i\leqslant p\), let \(\gamma_{1}^{(i)},...,\gamma_{q}^{(i)}\) be the elements from \(\Gamma\), such that \(\lambda(\{\gamma_{j}^{(i)}\}\times C)\) intersects \(\gamma_{i}(F(L))\). Thus, using the \(\Gamma\)-equivariance of \(\lambda\), we obtain:
\[\inf\{d(\lambda(\{\gamma\}\times C),\lambda(\{\gamma^{\prime}\} \times C))\mid\gamma,\gamma^{\prime}\in\Gamma,\gamma\neq\gamma^{\prime}\}\] \[= \inf\{d(\lambda(\{\gamma\}\times C),\lambda(\{1\}\times C))\mid \gamma\in\Gamma,\gamma\neq 1\}\] \[= \min\{d(\lambda(\{\gamma_{j}^{(i)}\}\times C),\lambda(\{1\} \times C))\mid 1\leqslant i\leqslant p,1\leqslant j\leqslant q\text{ with }\gamma_{j}^{(i)}\neq 1\}.\]
The minimum is the distance of two disjoint compact sets. Thus, it has a positive value and we may define
\[\varepsilon:=\frac{1}{3}\min\{d(\lambda(\{\gamma_{j}^{(i)}\}\times C), \lambda(\{1\}\times C))\mid 1\leqslant i\leqslant p,1\leqslant j\leqslant q \text{ with }\gamma_{j}^{(i)}\neq 1\}.\]
**Proposition 3.28** (Homotopic implies ambient isotopic).: _Let \(\beta\) and \(\beta^{\prime}\) be homotopic \(\Gamma\)-arcs in \(\Sigma_{\Gamma}(L)\) with equivalent continuous \(\Gamma\)-arcs \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\). There exists an ambient isotopy_
\[I\to\mathrm{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\]
_with \(H_{0}=\mathrm{id}_{\Sigma(L)}\), \(H_{1}(b(s))=b^{\prime}(s)\) for all \(s\in I\). The same holds if \(c\) and \(c^{\prime}\) are homotopic simple closed \(\Gamma\)-curves._
Proof.: Let \(\beta\) and \(\beta^{\prime}\) be the \(\Gamma\)-arcs from above. By Lemma 3.25, \(\beta\) and \(\beta^{\prime}\) are either in minimal position or form a bigon.
If \(\beta\) and \(\beta^{\prime}\) form a bigon, we find a disk bounded by \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) such that their \(\Gamma\)-translates do not intersect. Applying Lemma 3.27, we obtain that the bounded disk has a compact \(\varepsilon\)-neighborhood such that its \(\Gamma\)-orbit is a disjoint union of its \(\Gamma\)-translates. Hence, the Alexander trick (see Example 3.4) applies to each \(\Gamma\)-translate of the \(\varepsilon\)-neighborhood. We obtain an ambient isotopy connecting \(\mathrm{id}_{\Sigma(L)}\) to a homeomorphism that fixes every point outside the \(\Gamma\)-translates of the \(\varepsilon\)-neighborhood and removes the chosen bigon of \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) and its \(\Gamma\)-translates. This reduces the number of intersections. Iterating this procedure, we may reduce the number of intersections of \(\beta\) and \(\beta^{\prime}\) until it is minimal.
If \(\beta\) and \(\beta^{\prime}\) are in minimal position, their continuous representatives \((\gamma,b)\) and \((\gamma^{\prime},b^{\prime})\) are also in minimal position. Since \(\beta\) and \(\beta^{\prime}\) are homotopic, we have \(\gamma=\gamma^{\prime}\) and \(b\) and \(b^{\prime}\) are homotopic. Consequently, \(b\) and \(b^{\prime}\) share precisely their endpoints and bound a disk such that no \(\Gamma\)-translate of \(b\) or \(b^{\prime}\) intersects. By Lemma 3.27, we once more find a compact \(\varepsilon\)-neighborhood with disjoint \(\Gamma\)-translates. Applying the Alexander trick to each \(\Gamma\)-translate, this allows us to construct an ambient isotopy which connects \(\mathrm{id}_{\Sigma(L)}\) to a homeomorphism that maps \(b\) to \(b^{\prime}\).
Whenever a disk bounded by \(\tilde{\gamma}(b)\) and \(\tilde{\gamma}^{\prime}(b^{\prime})\) contains one or both of the endpoints, we may perturb the \(\varepsilon\)-neighborhood chosen above such that its boundary contains the endpoint. This allows us to assume that the ambient isotopy constructed above fixes the endpoints of \(b\) and \(b^{\prime}\).
Furthermore, the same proof applies for homotopic simple closed \(\Gamma\)-curves.
**Corollary 3.29**.: _If two \(\Gamma\)-arcs or two simple closed \(\Gamma\)-curves in \(\Sigma_{\Gamma}(L)\) are homotopic, they are isotopic._
Figure 3.11. Removing the gray shaded bigon.
### The mapping class group \(\operatorname{Map}^{orb}\left(\Sigma_{\Gamma}\right)\)
So far we have determined the mapping class group of \(D_{\mathbb{Z}_{m}}\) in the case without marked points (see Example 3.4) and with one orbit of marked points (see Example 3.6). We want to close the section with a generalization to the orbifold \(\Sigma_{\Gamma}\) for the case without marked points. This will serve as the base case for an inductive description of the subgroups \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}\right)\) and \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}\right)\) for all \(n\in\mathbb{N}\) in the next section.
Therefore, we will determine how \(\Gamma\)-equivariant homeomorphisms manipulate the boundary of the fundamental domain \(F\). A first step in this direction is to understand the action on cone points:
**Lemma 3.30**.: _Every \(\Gamma\)-equivariant homeomorphism \(H\in\operatorname{Homeo}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) fixes each cone point._
Proof.: Let \(H\in\operatorname{Homeo}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) be an arbitrary \(\Gamma\)-equivariant homeomorphism and let \(1\neq\gamma\in\operatorname{Stab}_{\Gamma}(c_{\nu})=\mathbb{Z}_{m_{\nu}}\). Then the \(\Gamma\)-equivariance implies
\[\gamma(H(c_{\nu}))=H(\gamma(c_{\nu}))=H(c_{\nu}).\]
Consequently, \(\gamma\neq 1\) is contained in both \(\operatorname{Stab}_{\Gamma}(c_{\nu})\) and \(\operatorname{Stab}_{\Gamma}(H(c_{\nu}))\). Since the stabilizer of \(H(c_{\nu})\) is non-trivial, the point \(H(c_{\nu})\) is a cone point \(\gamma^{\prime}(c_{\mu})\). In this case
\[\operatorname{Stab}_{\Gamma}(H(c_{\nu}))=\operatorname{Stab}_{\Gamma}(\gamma ^{\prime}(c_{\mu}))=\gamma^{\prime}\mathbb{Z}_{m_{\mu}}\gamma^{\prime-1}.\]
This group intersects non-trivially with \(\mathbb{Z}_{m_{\nu}}\) if and only if \(\nu=\mu\) and \(\gamma^{\prime}\in\mathbb{Z}_{m_{\nu}}\), i.e. \(H(c_{\nu})=c_{\nu}\). Due to the \(\Gamma\)-equivariance of \(H\), this also implies that \(H\) fixes each \(\Gamma\)-translate of \(c_{\nu}\).
**Corollary 3.31**.: _The mapping class groups \(\operatorname{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L)\right)\) and \(\operatorname{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L,N)\right)\) are isomorphic._
Proof.: Using Lemma 3.30, the map \([H]\mapsto[H|_{\Sigma(L,N)}]\) induces an isomorphism.
Further, Lemma 3.28 puts us in position to apply the Alexander trick to show:
**Lemma 3.32**.: _The mapping class group \(\operatorname{Map}^{orb}\left(\Sigma_{\Gamma}\right)=\operatorname{Map}_{n}^ {orb}\left(\Sigma_{\Gamma}(L)\right)\) for \(L=n=0\) is trivial._
Proof.: Let \(H\) be a homeomorphism that represents an element in \(\operatorname{Map}^{orb}\left(\Sigma_{\Gamma}\right)\). Due to Lemma 3.30, \(H\) fixes every cone point. For every \(1\leqslant\nu\leqslant N\), let \(S_{\nu}\) be a circle centered in \(c_{\nu}\) of radius \(\varepsilon_{\nu}>0\) such that the \(\Gamma\)-translates of these \(N\) circles are either disjoint or coincide. Consequently, the same holds for the \(\Gamma\)-translates of \(H(S_{\nu})\) for \(1\leqslant\nu\leqslant N\). For every \(\nu\), let \(D_{\nu}\) be the disk bounded by \(S_{\nu}\). Using that \(H\) fixes every cone point, this implies that the disk \(H(D_{\nu})\) bounded by \(H(S_{\nu})\) contains the cone point \(c_{\nu}\). Further, the disjointness condition for \(H(S_{\nu})\) implies that \(H(D_{\nu})\) contains no further cone points. Hence, \(S_{\nu}\) and \(H(S_{\nu})\) are simple closed \(\Gamma\)-curves that both bound exactly one cone point. In both cases, the bounded cone point is \(c_{\nu}\). Since the tree-shaped surface \(\Sigma\) contains no punctures or marked points, this implies that \(S_{\nu}\) and \(H(S_{\nu})\) are homotopic. By Lemma 3.28, this allows us to assume that \(H\) fixes the simple closed \(\Gamma\)-curve \(S_{\nu}\) and all its \(\Gamma\)-translates pointwise.
Now we may consider the arcs \(s_{\nu}\) for \(1\leqslant\nu\leqslant N\) whose \(\Gamma\)-translates span the tessellation of \(\Sigma\) into the \(\Gamma\)-translates of \(F\). For every \(\nu\), cutting along \(S_{\nu}\) splits the arc \(s_{\nu}\) into an inner part whose image is contained in \(D_{\nu}\) and an outer part whose image is contained in \(\Sigma\backslash D_{\nu}^{\flat}\). Let \(s_{\nu}^{\prime}\) be the outer part. If we consider the endpoints of the arcs \(s_{\nu}^{\prime}\) for \(1\leqslant\nu\leqslant N\) as marked points, we may consider these arcs as \(\Gamma\)-arcs. Since \(H\) is \(\Gamma\)-equivariant, the arcs \(H(s_{\nu}^{\prime})\) can also be considered as \(\Gamma\)-arcs. Using that \(H\) fixes the curves \(S_{\nu}\), the arcs \(s_{\nu}^{\prime}\) and \(H(s_{\nu}^{\prime})\) share their endpoints.
By Proposition 3.25, these \(\Gamma\)-arcs are either in minimal position or they form a bigon. If \(s^{\prime}_{\nu}\) and \(H(s^{\prime}_{\nu})\) form a bigon, Lemma 3.28 yields an ambient isotopy on \(H\) that reduces the number of intersections of \(H(s^{\prime}_{\nu})\) and \(s^{\prime}_{\nu}\). Iterating this argument, we may adjust \(H\) such that \(H(s^{\prime}_{\nu})\) and \(s^{\prime}_{\nu}\) are in minimal position, i.e. as \(\Gamma\)-arcs \(H(s^{\prime}_{\nu})\) and \(s^{\prime}_{\nu}\) do not bound any bigons.
Lemma 3.26 implies that the \(\Gamma\)-arcs \(H(s^{\prime}_{\nu})\) and \(s^{\prime}_{\nu}\) also do not bound a pseudo-bigon that contains a cone point. This implies that the arcs \(H(s^{\prime}_{\nu})\) and \(s^{\prime}_{\nu}\) bound a disk without cone points that does not intersect with any of its \(\Gamma\)-translates. In particular, the \(\Gamma\)-arcs represented by \(H(s^{\prime}_{\nu})\) and \(s^{\prime}_{\nu}\) are homotopic. Thus, Lemma 3.28 yields an ambient isotopy \(H_{t}\) such that \(H_{0}=H\) and \(H_{1}=H^{\prime}\) with \(H^{\prime}\) fixing \(s^{\prime}_{\nu}\) and \(S_{\nu}\) for \(1\leq\nu\leq N\) pointwise.
The fact that \(H^{\prime}\) preserves the simple closed curves \(S_{\nu}\) allows us to apply the Alexander trick on the disks \(D_{\nu}\) inside \(S_{\nu}\). Further, \(H^{\prime}\) preserves the disk \(F\backslash\bigcup_{\nu=1}^{N}D_{\nu}^{\circ}\), highlighted gray in Figure 3.12, fixing its boundary pointwise. This allows us another application of the Alexander trick on every \(\Gamma\)-translate of this disk. Hence, we obtain that \(H^{\prime}\) is homotopic to the identity.
## 4. Orbifold mapping class groups with marked points
For surfaces with marked points, the mapping class group is determined by the Birman exact sequence. If we consider a disk \(D\), it yields [5, Theorem 9.1]:
\[1\to\pi_{1}(\operatorname{Conf}_{n}(D))\to\operatorname{Map}_{n} \left(D\right)\to\underbrace{\operatorname{Map}\left(D\right)}_{=1}\to 1.\]
Based on the observation that \(\Sigma(L,N)/\Gamma\) is homeomorphic to \(D(L,N)\), we identify a subgroup \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) of \(\operatorname{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L)\right)\) that is isomorphic to a similar subgroup \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\) of \(\operatorname{Map}_{n}\left(D(L,N)\right)\) (see Proposition 4.3). This allows us to deduce a Birman exact sequence for \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) (see Theorem 4.10). Further, we deduce a short exact sequence of pure mapping class groups (see Corollary 4.13). In particular, we obtain presentations for \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}^{n}(L)\right)\) and \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}^{n}(L)\right)\) (see Corollary 4.19 and Proposition 4.22).
Identification of subgroups of the orbifold mapping class group and the mapping class group of a disk
Let \(\Sigma_{\Gamma}(L)\) be the orbifold with underlying surface \(\Sigma\) punctured in \(\Gamma(\{r_{1},...,r_{L}\})\). Technically, as in Section 3, it is often more convenient to consider the surface \(\Sigma\) with marked points at \(\Gamma(\{r_{1},...,r_{L}\})\) instead of punctures. The orbifold mapping class group \(\operatorname{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L)\right)\) has a homomorphism
\[\operatorname{Forget}_{n}^{orb}:\operatorname{Map}_{n}^{orb}\left(\Sigma_{ \Gamma}(L)\right)\to\operatorname{Map}^{orb}\left(\Sigma_{\Gamma}(L)\right)\]
by forgetting the marked points. In the following, we consider the kernel of \(\operatorname{Forget}_{n}^{orb}\).
**Definition 4.1**.: Let \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) denote the kernel of \(\operatorname{Forget}_{n}^{orb}\). This subgroup is induced by the subgroup
\[\operatorname{Homeo}_{n}^{\operatorname{id},orb}(\Sigma_{\Gamma}(L),\partial \Sigma(L)):=\{H\in\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial \Sigma(L))\mid H\sim\operatorname{id}_{\Sigma(L)}\}\]
of \(\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L))\). Moreover, let \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L) \right):=\operatorname{Forget}_{n}^{orb}|_{\operatorname{PMap}_{n}^{orb}( \Sigma_{\Gamma}(L))}\). This subgroup is induced by the subgroup \(\operatorname{PHomeo}_{n}^{\operatorname{id},orb}(\Sigma_{\Gamma}(L), \partial\Sigma(L))\) that contains the pure homeomorphisms of \(\operatorname{Homeo}_{n}^{\operatorname{id},orb}(\Sigma_{\Gamma}(L),\partial \Sigma(L))\).
Additionally, we consider the disk \(D(L,N)\) with \(L+N\) distinct punctures at positions \(\bar{r}_{1},...,\bar{r}_{L}\) and \(\bar{c}_{1},...,\bar{c}_{N}\). If we endow \(D(L,N)\) with \(n\) distinct marked points at \(\bar{p}_{1},...,\bar{p}_{n}\), there exists an analogous forgetful map
\[\operatorname{Forget}_{n}:\operatorname{Map}_{n}\left(D(L,N)\right)\to \operatorname{Map}\left(D(L,N)\right).\]
**Definition 4.2**.: Let \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\) denote the kernel of \(\operatorname{Forget}_{n}\). This subgroup is induced by the subgroup
\[\operatorname{Homeo}_{n}^{\operatorname{id}}(D(L,N),\partial D(L,N)):=\{H\in \operatorname{Homeo}_{n}(D(L,N),\partial D(L,N))\mid H\sim\operatorname{id}_ {D(L,N)}\}\]
of \(\operatorname{Homeo}_{n}(D(L,N))\). Moreover, let \(\operatorname{PMap}_{n}^{\operatorname{id}}\left(D(L,N)\right):=\operatorname {Forget}_{n}|_{\operatorname{PMap}_{n}(D(L,N))}\). This subgroup is induced by the subgroup \(\operatorname{PHomeo}_{n}^{\operatorname{id}}(D(L,N),\partial D(L,N))\) that contains the pure homeomorphisms of \(\operatorname{Homeo}_{n}^{\operatorname{id}}(D(L,N),\partial D(L,N))\).
In contrast to \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\), the subgroup \(\operatorname{Map}_{n}\left(D(L,N)\right)\) satisfies the additional condition that homeomorphisms and ambient isotopies fix points that correspond to cone points. Based on Lemma 3.30 the cone points are automatically fixed by \(\Gamma\)-equivariant homeomorphisms of \(\Sigma(L)\). This will allow us to prove:
**Proposition 4.3**.: _The group \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\) is isomorphic to \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) and the isomorphism restricts to an isomorphism between \(\operatorname{PMap}_{n}^{\operatorname{id}}\left(D(L,N)\right)\) and \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\)._
The proof is divided into the following four steps:
* In Lemma 4.5, we construct a continuous homomorphism \[\pi:\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\to \operatorname{Homeo}_{n}(D(L,N),\partial D(L,N))\] using the projection from \(\Sigma(L,N)\) to \(\Sigma(L,N)/\Gamma\cong D(L,N)\).
* Due to the continuity, the homomorphism \(\pi\) induces a homomorphism \[\pi_{\operatorname{Map}}:\operatorname{Map}_{n}^{\operatorname{id},orb}\left( \Sigma_{\Gamma}(L)\right)\to\operatorname{Map}_{n}^{\operatorname{id}}\left(D (L,N)\right),\] see Lemma 4.6.
* In the opposite direction, we construct a homomorphism \[\varphi:\operatorname{Homeo}_{n}^{\operatorname{id}}(D(L,N),\partial D(L,N))\to \operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\] in Lemma 4.8. This requires to lift a self-homeomorphism of \(D(L,N)\cong\Sigma(L,N)/\Gamma\) that satisfies the conditions from Definition 4.2 to a self-homeomorphism of \(\Sigma(L,N)\). In contrast to Lemma 4.5, we will not prove the continuity of \(\varphi\) in this case.
* However, the homomorphism \(\varphi\) induces a homomorphism \[\varphi_{\operatorname{Map}}:\operatorname{Map}_{n}^{\operatorname{id}}\left(D (L,N)\right)\to\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{ \Gamma}(L)\right),\] see Lemma 4.9. Since \(\varphi\) is not necessarily continuous, in comparison to Lemma 4.6 the well-definedness of \(\varphi_{\operatorname{Map}}\) requires an additional argument.
To deduce that \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\) and \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) are isomorphic, we will finally check that the homomorphisms \(\pi_{\operatorname{Map}}\) and \(\varphi_{\operatorname{Map}}\) are inverse to each other.
Given \(H\in\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L))\), we begin with the definition of an induced map
\[\bar{H}:D_{n}(L,N)\to D_{n}(L,N).\]
By Lemma 3.30, \(H\) fixes all cone points. This allows us to consider \(H\) as a homeomorphism of the surface \(\Sigma(L,N)\). Using the \(\Gamma\)-equivariance of \(H\), we obtain a well-defined map
\[\bar{H}:\Sigma(L,N)/\Gamma\to\Sigma(L,N)/\Gamma,\Gamma(x)\mapsto\Gamma(H(x)).\]
Since \(\Sigma(L,N)/\Gamma\) is homeomorphic to \(D(L,N)\), this defines a self-map of marked disks. Further, we observe that the disk \(D(L,N)\) is homeomorphic to \(F(L,N)/\sim_{\Gamma}\) with \(\sim_{\Gamma}\) identifying boundary points from the same \(\Gamma\)-orbit. Hence, we may consider \(\bar{H}\) as a map
\[F(L,N)\to F(L,N),\bar{x}\mapsto\bar{H}(\bar{x})\]
that coincides on the identified boundary points, fixes the set of marked points \(\{p_{1},...,p_{n}\}\) and restricts to the identity on \(\partial\Sigma(L,N)\cap F(L,N)\).
For the proof of Lemma 4.5, we start with an observation about the fundamental domain \(F\).
**Lemma 4.4**.: _If \(H\in\operatorname{Homeo}^{orb}(\Sigma_{\Gamma},\partial\Sigma)\), then \(H(F)\) is also a fundamental domain of the \(\Gamma\)-action on \(\Sigma\)._
Proof.: Let \(y\) be an arbitrary point in \(\Sigma\). The surjectivity of \(H\) implies that there exists a \(y^{\prime}\in\Sigma\) such that \(y=H(y^{\prime})\). Since \(F\) is a fundamental domain, there exists a point \(x^{\prime}\in F\) and \(\gamma\in\Gamma\) such that \(y^{\prime}=\gamma(x^{\prime})\). Using that \(H\) is \(\Gamma\)-equivariant, we obtain \(y=\gamma(H(x^{\prime}))\), i.e. \(\Sigma=\bigcup_{\gamma\in\Gamma}\gamma(H(F))\).
Moreover, if \(y\) is contained in \(\gamma(H(F))\cap\gamma^{\prime}(H(F))\) for \(\gamma\neq\gamma^{\prime}\), this implies \(y=\gamma(H(x^{\prime}))=\gamma^{\prime}(H(x^{\prime\prime}))\) for \(x^{\prime},x^{\prime\prime}\in F\). This is equivalent to \(H(\gamma(x^{\prime}))=H(\gamma^{\prime}(x^{\prime\prime}))\). Since \(H\) is injective, that is \(\gamma(x^{\prime})=\gamma^{\prime}(x^{\prime\prime})\). Using that \(F\) is a fundamental domain, this implies that \(x^{\prime}\) and \(x^{\prime\prime}\) are contained in \(\partial F\). The homeomorphism \(H\) maps \(\partial F\) to \(\partial H(F)\), i.e. \(H(x^{\prime}),H(x^{\prime\prime})\) are contained in \(\partial H(F)\). This proves that \(H(F)\) is a fundamental domain.
**Lemma 4.5**.: _The map_
\[\pi:\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L,N),\partial\Sigma(L,N)) \to\operatorname{Homeo}_{n}(D(L,N),\partial D(L,N)),\quad H\mapsto\bar{H}\]
_is a continuous homomorphism._
Proof.: We divide the proof into three steps.
_Step \(1\). The map \(\pi\) is a homomorphism._ Let \(H,H^{\prime}\in\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L,N),\partial \Sigma(L,N))\). Then the definition of \(\bar{H}\) and \(\bar{H}^{\prime}\) implies
\[\bar{H}^{\prime}\circ\bar{H}(\Gamma(x))=\bar{H}^{\prime}(\Gamma(H(x)))=\Gamma( H^{\prime}\circ H(x))=\overline{H^{\prime}\circ\bar{H}}(\Gamma(x)).\]
Hence, \(\pi\) is a homomorphism.
_Step \(2\). The map \(\bar{H}\) is a homeomorphism._ Since the map \(\varphi\) is a homomorphism, the inverse of \(\bar{H}\) is given by the image of \(H^{-1}\) under \(\pi\). As a consequence, it suffices to show that \(\bar{H}\) is continuous for every \(H\) in \(\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) to obtain that \(\bar{H}\) is a homeomorphism.
Since \(F\) is a fundamental domain, \(H|_{F}\) determines the induced map \(\bar{H}\). By Lemma 4.4, \(H^{-1}(F)\) is also a fundamental domain. This allows us to cover \(F\) with the \(\Gamma\)-translates of \(H^{-1}(F)\). In this way, we obtain pieces
\[H:F\cap\gamma(H^{-1}(F))\to\gamma(F).\]
Since both, domain and codomain, are contained in a single \(\Gamma\)-translate of \(F\), we may identify the above map with
\[F\cap\gamma(H^{-1}(F))\to F,x\mapsto\gamma^{-1}(H(x)).\]
This map coincides with \(\bar{H}|_{F\cap\gamma(H^{-1}(F))}\). Using that \(H\) is continuous, this implies that \(\bar{H}\) is continuous on every piece \(F\cap\gamma(H^{-1}(F))\).
Next, we check that the continuous pieces match together to a continuous function \(\bar{H}\). Therefore, we recall that \(F\) and \(H^{-1}(F)\) are compact disks. This implies that \(F\) intersects only finitely many \(\Gamma\)-translates of \(H^{-1}(F)\). Hence, we have decomposed \(\bar{H}\) into finitely many continuous pieces. Let \(F\cap\gamma(H^{-1}(F))\) and \(F\cap\gamma^{\prime}(H^{-1}(F))\) be pieces such that they or suitable \(\Gamma\)-translates intersect. If we consider intersecting translates, they share a set contained in the boundary of both (see Figure 4.1 for an example).
On the left of Figure 4.1, the fundamental domain \(F\) is shaded in different colors which describe the above decomposition into pieces. An exemplary intersection of two pieces is colored in yellow. For the blue segments, there exist \(\Gamma\)-translates of the adjacent pieces that intersect. On the right of Figure 4.1, it is shown how the corresponding pieces cover \(D_{n}(L,N)\).
Let us consider an open neighborhood in \(\Sigma(L,N)\) of the intersecting set of the two pieces. Using that \(H\) is continuous on this neighborhood, the \(\Gamma\)-equivariance of \(H\) implies that the maps \(H|_{F\cap\gamma(H^{-1}(F))}\) and \(H|_{F\cap\gamma^{\prime}(H^{-1}(F))}\) coincide on the set that corresponds to the intersecting set. Since \(H\) identifies with \(\bar{H}\) on these pieces, \(\bar{H}|_{F\cap\gamma(H^{-1}(F))}\) and \(\bar{H}|_{F\cap\gamma^{\prime}(H^{-1}(F))}\) coincide on their intersecting set. Thus, by [4, Chapter III, Theorem 9.4], the finitely many continuous pieces of \(\bar{H}\) induce a continuous map on \(D_{n}(L,N)\).
_Step \(3\). Continuity of \(\pi\)._ Finally, we need to check that \(\pi\) is continuous, i.e. for every homeomorphism \(H\in\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) and every neighborhood \(V\) of \(\pi(H)=\bar{H}\), there exists a neighborhood \(V^{\prime}\) of \(H\) such that \(\pi(V^{\prime})\subseteq V\). Given a compact set \(K\) and an open set \(U\) in \(D(L,N)\), let \(V(K,U)\) denote the set
\[\{\phi\in\operatorname{Homeo}_{n}(D(L,N),\partial D(L,N))\mid\phi(K)\subseteq U\}.\]
By definition of the compact-open topology on \(\operatorname{Homeo}_{n}(D(L,N),\partial D(L,N))\), the open set \(V\) contains a subset \(V(K,U)\ni\pi(H)=\bar{H}\). Now we have to find an appropriate open subset \(V^{\prime}\subseteq\operatorname{Homeo}_{n}^{\operatorname{id},orb}(\Sigma_{ \Gamma}(L),\partial\Sigma(L))\) such that \(V^{\prime}\) maps to \(V(K,U)\). We will choose \(V^{\prime}=V^{\prime}(K_{\Sigma},U_{\Sigma})\) for some \(K_{\Sigma},U_{\Sigma}\subseteq\Sigma(L)\) such that \(K_{\Sigma}\) is compact and \(U_{\Sigma}\) is open.
Figure 4.1. Decompositions of \(H\) and \(\bar{H}\).
Since \(D(L,N)\) is homeomorphic to \(F(L,N)/\sim_{\Gamma}\) and \(F(L,N)\subseteq\Sigma(L,N)\), we may consider \(K\) and \(U\) as subsets in \(\Sigma(L)\). We choose \(K_{\Sigma}=K\) and \(U_{\Sigma}=\Gamma(U)\).
\(K_{\Sigma}\) is compact, since it is a compact subset of the compact subset \(F(L,N)\).
For the openness of \(U_{\Sigma}\), we observe: If \(x\in U_{\Sigma}\) is contained in the interior of a \(\Gamma\)-translate of \(F(L,N)\), we may use that \(U\) is open in \(F(L,N)\). This yields an \(\varepsilon\)-ball around \(x\) that is contained in the interior of the \(\Gamma\)-translate of \(F(L,N)\) and \(U\). In particular, this \(\varepsilon\)-ball is contained in \(U_{\Sigma}\). If \(x\in U_{\Sigma}\) is contained in a \(\Gamma\)-translate of \(\partial F(L,N)\), we may use the openness of \(U\) in \(D(L,N)\) to find a surrounding \(\varepsilon\)-ball contained in \(U\subseteq D(L,N)\) for the point \(\bar{x}\) which corresponds to \(x\). If we identify this \(\varepsilon\)-ball with a subset in \(F(L,N)\), it divides into two components. Shifting both of these halves by suitable group elements, yields the \(\varepsilon\)-ball centered at \(x\). Since \(U_{\Sigma}\) covers the whole \(\Gamma\)-orbit of \(U\subseteq F(L,N)\), this \(\varepsilon\)-ball is contained in \(U_{\Sigma}\). Thus, \(U_{\Sigma}\) is open.
For these sets \(K_{\Sigma}\) and \(U_{\Sigma}\), we observe
\[H(K_{\Sigma})=H(K)=H\circ\bar{H}^{-1}\circ\bar{H}(K)\subseteq H\circ\bar{H}^{- 1}(U)\subseteq U_{\Sigma}.\]
In the last inclusion, we used that \(H\) and \(\bar{H}\) differ at most by a \(\Gamma\)-translation. Now \(H(K_{\Sigma})\subseteq U_{\Sigma}\) implies that \(H\in V^{\prime}(K_{\Sigma},U_{\Sigma})\) and further, for every \(H^{\prime}\) in \(\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\), the condition \(H^{\prime}(K_{\Sigma})\subseteq U_{\Sigma}\) implies \(\bar{H}^{\prime}(K)\subseteq U\), i.e. \(\pi(V^{\prime}(K_{\Sigma},U_{\Sigma}))\subseteq V(K,U)\).
If we restrict \(\pi\) to the subgroup \(\operatorname{Homeo}_{n}^{\operatorname{id},orb}(\Sigma_{\Gamma}(L),\partial \Sigma(L))\), we obtain a continuous map \(\operatorname{Homeo}_{n}^{\operatorname{id},orb}(\Sigma_{\Gamma}(L),\partial \Sigma(L))\to\operatorname{Homeo}_{n}(D(L,N),\partial D(L,N))\).
**Lemma 4.6**.: _The restricted map induces a homomorphism_
\[\pi_{\operatorname{Map}}:\operatorname{Map}_{n}^{\operatorname{id},orb}\left( \Sigma_{\Gamma}(L)\right)\to\operatorname{Map}_{n}^{\operatorname{id}}\left( D(L,N)\right).\]
Proof.: It remains to check that the induced map
\[\pi_{\operatorname{Map}}:\operatorname{Map}_{n}^{\operatorname{id},orb}\left( \Sigma_{\Gamma}(L)\right)\to\operatorname{Map}_{n}\left(D(L,N)\right),[H] \mapsto[\bar{H}]\]
is well-defined and that its image is contained in \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\).
For \(H,H^{\prime}\in\operatorname{Homeo}_{n}^{\operatorname{id},orb}(\Sigma_{ \Gamma}(L),\partial\Sigma(L))\) with \(H\sim_{n}H^{\prime}\), there exists an ambient isotopy \(\phi:I\to\operatorname{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) connecting \(H\) and \(H^{\prime}\). Since \(\pi\) is continuous, by Lemma 4.5, we obtain an ambient isotopy \(\bar{\phi}:=\pi\circ\phi\) connecting \(\bar{H}\) and \(\bar{H}^{\prime}\). Hence, the induced map \(\pi_{\operatorname{Map}}\) is well-defined.
Moreover, let \(H\in\operatorname{Homeo}_{n}^{\operatorname{id},orb}(\Sigma_{\Gamma}(L), \partial\Sigma(L))\). If we forget the marked points \(\Gamma(\{p_{1},...,p_{n}\})\), by Definition 4.1, we obtain an ambient isotopy
\[\psi:I\to\operatorname{Homeo}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\]
from \(H\) to \(\operatorname{id}_{\Sigma(L)}\). By Lemma 3.30, \(\psi_{t}\) fixes the cone points for every \(t\in I\). Consequently, the induced ambient isotopy \(\bar{\psi}\) connects \(\bar{H}\) and \(\operatorname{id}_{D(L,N)}\) relative \(\bar{c}_{1},...,\bar{c}_{N}\) and \(\bar{r}_{1},...,\bar{r}_{L}\). Hence, \(\bar{H}\) is contained in the subgroup \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\).
Next, we describe an inverse homomorphism for \(\pi_{\operatorname{Map}}\). The first step is to construct a self-homeomorphism of \(\Sigma_{\Gamma}(L)\) for each \(\bar{H}\in\operatorname{Homeo}_{n}(D(L,N),\partial D(L,N))\):
**Construction 4.7** (A self-homeomorphism of \(\Sigma(L)\)).: By Definition 4.2, there exists an ambient isotopy
\[\bar{\phi}:I\to\operatorname{Homeo}(D(L,N),\partial D(L,N))\]
that connects \(\operatorname{id}_{D(L,N)}\) to \(\bar{H}\). For each point \(\bar{x}\in D\) that is not a marked point or cone point (these points are fixed anyway), we may follow its trace \(\bar{\phi}_{t}(\bar{x})\) and detect its algebraic intersections with the segments \(\bar{s}_{\nu}\) for \(1\leqslant\nu\leqslant N\). This yields a finite
sequence \(((\nu_{1},\varepsilon_{1}),...,(\nu_{q},\varepsilon_{q}))\) with \(1\leq\nu_{j}\leq N\) and \(\varepsilon_{j}=\pm 1\) which induces a group element \(\gamma_{\bar{H},\bar{x}}:=\gamma_{\varepsilon_{1}}^{\varepsilon_{1}}...\gamma_ {\varepsilon_{q}}^{\varepsilon_{q}}\). Now we identify \(\bar{x}\) with a point in \(F(L,N)\) and define
\[\bar{H}_{\Sigma}:\gamma(\bar{x})\mapsto\tilde{\gamma}\gamma_{\bar{H},\bar{x}}( \bar{H}(\bar{x})).\]
The element \(\tilde{\gamma}\) equals \(\gamma\) if \(\gamma(\bar{x})\) is an interior element in \(\gamma(F(L,N))\) or \(\gamma(\bar{x})\) is contained in \(\gamma(s_{\nu})\) for some \(1\leq\nu\leq N\). If \(\gamma(\bar{x})\) is contained in \(\gamma^{\prime}(s_{\nu})\), we set \(\tilde{\gamma}=\gamma^{\prime}\).
To verify that \(\bar{H}_{\Sigma}\) is well-defined, we have to check that \(\gamma_{\bar{H},\bar{x}}\) does not depend on the choice of the ambient isotopy \(\bar{\phi}\).
Let \(\bar{\psi}:I\to\operatorname{Homeo}(D(L,N),\partial D(L,N))\) be another ambient isotopy that connects \(\operatorname{id}_{D(L,N)}\) and \(\bar{H}\). Then \(\bar{\psi}^{-1}\circ\bar{\phi}\) yields an ambient isotopy connecting \(\operatorname{id}_{D(L,N)}\) through \(\bar{H}\) to \(\operatorname{id}_{D(L,N)}\). In particular, \((\bar{\psi}^{-1}\circ\bar{\phi})_{t}(\bar{x})\) represents an element in \(\pi_{1}\left(D(L,N),\bar{x}\right)\). Via point-pushing, \(\pi_{1}\left(D(L,N),\bar{x}\right)\) embeds into \(\operatorname{Map}_{1}\left(D(L,N)\right)\) and the element \([(\bar{\psi}^{-1}\circ\bar{\phi})_{t}(\bar{x})]\) maps onto \([\operatorname{id}_{D(L,N)}]\). Hence, \((\bar{\psi}^{-1}\circ\bar{\phi})_{t}(\bar{x})\) represents the trivial element in \(\pi_{1}\left(D(L,N),\bar{x}\right)\). Since \(\pi_{1}\left(D(L,N),\bar{x}\right)=F_{L+N}\), this implies that the intersection patterns of \(\tilde{\phi}_{t}(\bar{x})\) and \(\tilde{\psi}_{t}(\bar{x})\) with the segments \(\bar{s}_{\nu}\) coincide up to insertion of subsequences \((\mu,\varepsilon),(\mu,-\varepsilon)\) with \(1\leq\mu\leq N\) and \(\varepsilon=\pm 1\). Hence, \(\gamma_{\bar{H},\bar{x}}\) does not depend on the choice of the ambient isotopy.
Moreover, we need to show: If \(\Gamma\)-translates of \(F(L,N)\) intersect in their boundaries, the definition of \(\bar{H}_{\Sigma}\) coincides on both \(\Gamma\)-translates. Let \(\gamma(\bar{x})=\gamma^{\prime}(\bar{y})\) for \(\bar{x},\bar{y}\in F(L,N),\bar{x}\neq\bar{y}\), \(\gamma,\gamma^{\prime}\in\Gamma,\gamma\neq\gamma^{\prime}\). This implies that \(\bar{x}\) and \(\bar{y}\) are contained in the same \(\Gamma\)-orbit. In particular, \(\bar{x},\bar{y}\in\partial F(L,N)\) and \(\bar{x},\bar{y}\) get identified via \(\sim_{\Gamma}\). Thus, \(\bar{H}(\bar{x})=\bar{H}(\bar{y})\). Without loss of generality, we may assume that \(\gamma(\bar{x})=\gamma^{\prime}(\bar{y})\) is contained in \(\gamma(s_{\nu})\), i.e. \(\tilde{\gamma}=\gamma\) for \(\gamma(\bar{x})\) and \(\gamma^{\prime}(\bar{y})\). Since the points \(\bar{x}\) and \(\bar{y}\) get identified by \(\sim_{\Gamma}\), we further obtain \(\gamma_{\bar{H},\bar{x}}=\gamma_{\bar{H},\bar{y}}\) and consequently
\[\bar{H}_{\Sigma}(\gamma(\bar{x}))=\gamma\gamma_{\bar{H},\bar{x}}(\bar{H}(\bar {x}))=\gamma\gamma_{\bar{H},\bar{y}}(\bar{H}(\bar{y}))=\bar{H}_{\Sigma}(\gamma ^{\prime}(\bar{y})).\]
The following Lemma shows that \(\bar{H}_{\Sigma}\) satisfies the expected properties.
**Lemma 4.8**.: _The map \(\varphi:\operatorname{Homeo}^{\operatorname{id}}_{n}(D(L,N),\partial D(L,N)) \to\operatorname{Homeo}^{orb}_{n}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) that maps \(\bar{H}\) to \(\bar{H}_{\Sigma}\) is a homomorphism._
Proof.: It remains to show that \(\varphi\) satisfies the homomorphism property and that \(\bar{H}_{\Sigma}\) is contained in \(\operatorname{Homeo}^{\operatorname{id},orb}_{n}(\Sigma_{\Gamma}(L),\partial \Sigma(L))\).
_Step \(1\). The map \(\varphi\) is a homomorphism._ For each \(\bar{x}\) that is contained in the interior of \(F(L,N)\) or in \(\bigcup_{\nu=1}^{N}s_{\nu}\), we have
\[\bar{x}\overset{\varphi(\bar{H})}{\mapsto}\gamma_{\bar{H},\bar{x}}(\bar{H}( \bar{x}))\overset{\varphi(\bar{K})}{\mapsto}\gamma_{\bar{K},\bar{H}(\bar{x})} \gamma_{\bar{H},\bar{x}}(\bar{K}\circ\bar{H})(\bar{x})=\gamma_{\bar{K},\bar{H} (\bar{x})}\gamma_{\bar{H},\bar{x}}(\overline{K\circ H})(\bar{x}).\]
Let \(\phi_{\bar{H}}\) (resp. \(\phi_{\bar{K}}\)) be a homotopy connecting \(\bar{H}\) (resp. \(\bar{K}\)) to the identity. Then the concatenation \(\phi_{\bar{K}}\circ\phi_{\bar{H}}\) connects \(\overline{K\circ H}\) to \(\operatorname{id}_{D(L,N)}\). This implies
\[\gamma_{\bar{K},\bar{H}(\bar{x})}\gamma_{\bar{H},\bar{x}}=\gamma_{\overline{K \circ H},\bar{x}}.\]
Consequently, \(\varphi(\bar{K})\circ\varphi(\bar{H})(\bar{x})=\varphi(\overline{K\circ H})( \bar{x})\), i.e. \(\varphi\) is a homomorphism.
_Step \(2\). The map \(\bar{H}_{\Sigma}\) is a homeomorphism._ Since \(\varphi\) is a homomorphism, the inverse of \(\bar{H}_{\Sigma}\) is given by \(\bar{H}_{\Sigma}^{-1}\). Thus, it suffices to show that \(\bar{H}_{\Sigma}\) is continuous for any \(\bar{H}\) to obtain that \(\bar{H}_{\Sigma}\) is a homeomorphism.
As in Step \(2\) in Lemma 4.5, we consider pieces \(\gamma(F(L,N))\cap\gamma^{\prime}(\bar{H}_{\Sigma}^{-1}(F(L,N)))\) for \(\gamma,\gamma^{\prime}\in\Gamma\) to prove the continuity of \(\bar{H}_{\Sigma}\). Using that \(F(L,N)\) is a compact disk, each \(\Gamma\)-translate \(\gamma(F(L,N))\) intersects with finitely many other \(\Gamma\)-translates \(\gamma^{\prime}(\bar{H}_{\Sigma}^{-1}(F(L,N)))\). Since \(F(L,N)\) is a fundamental domain, \(\Gamma\)-translates only intersect in the boundary. Further, the domain and codomain of the restriction
\(\bar{H}_{\Sigma}|_{\gamma(F(L,N))\cap\gamma^{\prime}(\bar{H}_{\Sigma}^{-1}(F(L,N)))}\) are both contained in a single \(\Gamma\)-translate of \(F(L,N)\) and the restriction identifies with the corresponding piece
\[\bar{H}:F(L,N)\cap\gamma^{\prime}(\bar{H}_{\Sigma}^{-1}(F(L,N)))\to F(L,N).\]
Since \(\bar{H}\) is continuous, this implies that \(\bar{H}_{\Sigma}\) is continuous on each piece. If two pieces intersect, the well-definedness of \(\bar{H}_{\Sigma}\) implies that both definitions coincide on the intersection. Consequently, by [4, Chapter III, Theorem 9.4], the continuous pieces match together to a continuous function \(\bar{H}_{\Sigma}\).
It remains to check that the above construction of \(\bar{H}_{\Sigma}\) is appropriate in the sense that it induces an inverse homomorphism for \(\pi_{\mathrm{Map}}\).
**Lemma 4.9**.: _The map \(\varphi:\mathrm{Homeo}_{n}^{\mathrm{id}}(D(L,N),\partial D(L,N))\to\mathrm{ Homeo}_{n}^{orb}(\Sigma(L),\partial\Sigma(L))\) induces a homomorphism \(\varphi_{\mathrm{Map}}:\mathrm{Map}_{n}^{\mathrm{id}}\left(D(L,N)\right) \to\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\)._
Proof.: We prove that the induced map
\[\varphi_{\mathrm{Map}}:\mathrm{Map}_{n}^{\mathrm{id}}\left(D(L,N)\right)\to \mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right),[\bar{H}] \mapsto[\bar{H}_{\Sigma}]\]
is well-defined and that its image is contained in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\).
_Step \(1\). The map \(\varphi_{\mathrm{Map}}\) is well-defined._ For \(\bar{H},\bar{H}^{\prime}\in\mathrm{Homeo}_{n}^{\mathrm{id}}(D(L,N),\partial D( L,N))\) with \(\bar{H}\sim_{n}\bar{H}^{\prime}\), we have an ambient isotopy \(\bar{\phi}:I\to\mathrm{Homeo}_{n}(D(L,N),\partial D(L,N))\) connecting \(\bar{H}\) and \(\bar{H}^{\prime}\). We want to prove that \(\bar{\phi}_{\Sigma}:=\varphi\circ\bar{\phi}\) is an ambient isotopy connecting \(\bar{H}_{\Sigma}\) and \(\bar{H}_{\Sigma}^{\prime}\), i.e. we have to check that \(\bar{\phi}_{\Sigma}\) is continuous. We begin with the following:
_Claim_.: Let \(t_{0}\in I\). For each \(\varepsilon>0\), there exists a \(\delta>0\) such that
\[\sup_{x\in D}\sup_{t\in(t_{0}-\delta,t_{0}+\delta)}d(\bar{\phi}_{t}(x),\bar{ \phi}_{t_{0}}(x))<\varepsilon.\]
Let us consider the auxiliary function
\[\tilde{d}:D\times I\to\mathbb{R},\quad(x,t)\mapsto d(\bar{\phi}_{t}(x),\bar{ \phi}_{t_{0}}(x)).\]
Firstly, we prove that \(\tilde{d}\) is continuous. Observe that \(\tilde{d}(x,t_{0})=0\) and recall that \(\bar{\phi}:I\to\mathrm{Homeo}_{n}(D(L,N),\partial D(L,N)),t\mapsto\bar{\phi}_{t}\) is continuous. By a classical result about function spaces, see, for instance, [4, Chapter XII, Theorem 3.1], this implies that the map \(I\times D(L,N)\to D(L,N),(t,x)\mapsto\bar{\phi}_{t}(x)\) is continuous. Moreover, \(d\) is continuous in both arguments. Hence, \(\tilde{d}\) is continuous.
To deduce the claim, let us fix an \(\varepsilon>0\). We want to show that there exists a \(\delta>0\) such that \(\tilde{d}|_{D\times I_{\delta}}\) is bounded above by \(\varepsilon\) with \(I_{\delta}=(t_{0}-\delta,t_{0}+\delta)\). If no such \(\delta\) exists, for each \(\delta>0\), the intersection \(D\times I_{\delta}\cap\tilde{d}^{-1}([\varepsilon,\infty))\neq\emptyset\). Choosing elements from these subsets, we find a sequence \((y_{i})_{i\in\mathbb{N}}\) with \(y_{i}=(x_{i},t_{i})\) and \(|t_{i}-t_{0}|<\frac{1}{4}\) for each \(i\in\mathbb{N}\). Since \(D\times I\) is compact, there exists a convergent subsequence \((y_{i_{j}})_{j\in\mathbb{N}}\). The condition on the \(t_{i}\) implies that \(y=\lim_{j\to\infty}y_{i_{j}}\) is of the form \(y=(x,t_{0})\). In particular, \(\tilde{d}(y)=0\). Since \(\tilde{d}(y_{i_{j}})\geq\varepsilon\) for all \(j\in\mathbb{N}\), this contradicts the continuity of \(\tilde{d}\). Hence, there exists \(\delta>0\) such that \(\tilde{d}|_{D\times I_{\delta}}\) is bounded above by \(\varepsilon\) and the claim follows.
Now let \(K\subseteq\Sigma\) be a compact set and \(U_{\varepsilon}=(\bar{\phi}_{\Sigma,t_{0}}(K))_{\varepsilon}\) an open \(\varepsilon\)-neighborhood of \(\bar{\phi}_{\Sigma,t_{0}}(K)\). Clearly, \(\bar{\phi}_{\Sigma,t_{0}}\in V(K,U_{\varepsilon})\). Further, the claim implies
\[\bar{\phi}_{t}(\bar{x})\in B_{\varepsilon}(\bar{\phi}_{t_{0}}(\bar{x}))\text{ for }t\in(t_{0}-\delta,t_{0}+\delta)\text{ and }\bar{x}\in D. \tag{7}\]
Recall that \(\bar{\phi}_{\Sigma,t}\) maps \(\gamma(\bar{x})\) to \(\tilde{\gamma}\gamma_{\bar{\phi}_{t},\bar{x}}(\bar{\phi}_{t}(\bar{x}))\). The group element \(\gamma_{\bar{\phi}_{t},\bar{x}}\) detects if \(\bar{\phi}_{\Sigma,t}(\bar{x})\) is contained in the same \(\Gamma\)-translate as \(\bar{\phi}_{\Sigma,t_{0}}(\bar{x})\): It coincides with \(\gamma_{\bar{\phi}_{t_{0}},\bar{x}}\) if the trace \(\bar{\phi}_{s}(\bar{x})\) of the ambient isotopy does not intersect any \(\Gamma\)-transales of \(\partial F(L,N)\) for \(s\) between \(t\) and \(t_{0}\). Otherwise \(\gamma_{\bar{\phi}_{t},\bar{x}}\neq\gamma_{\bar{\phi}_{t_{0}},\bar{x}}\) describes an adjacent
translate of \(F(L,N)\) that contains \(\bar{\phi}_{\Sigma,t}(\bar{x})\). Using (7), this implies that \(\bar{\phi}_{\Sigma,t}(\gamma(\bar{x}))=\tilde{\gamma}\gamma_{\bar{\phi}_{t},\bar {x}}(\bar{\phi}_{t}(\bar{x}))\) is contained in \(B_{\varepsilon}(\bar{\phi}_{\Sigma,t_{0}}(\bar{x}))\) for \(t\in(t_{0}-\delta,t_{0}+\delta)\). This is equivalent to \(\bar{\phi}_{\Sigma,t}\in V(K,U_{\varepsilon})\) for \(t\in(t_{0}-\delta,t_{0}+\delta)\), i.e. \(\bar{\phi}_{\Sigma}\) is continuous. Hence, the induced map \(\varphi_{\mathrm{Map}}\) is well-defined.
_Step \(2\)._\([\bar{H}_{\Sigma}]\) _is contained in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\)._ This requires to prove that \(\bar{H}_{\Sigma}\) is \(\Gamma\)-equivariantly homotopic to \(\mathrm{id}_{\Sigma(L)}\) relative \(r_{1},...,r_{L}\). By definition, \(\bar{H}\) is homotopic to \(\mathrm{id}_{D(L,N)}\) relative \(\bar{r}_{1},...,\bar{r}_{L}\) and \(\bar{c}_{1},...,\bar{c}_{N}\) via a homotopy \(\bar{\psi}\). Following the same argument as above, the induced map \(\bar{\psi}_{\Sigma}:=\varphi\circ\bar{\psi}\) yields the required homotopy between \(\bar{H}_{\Sigma}\) and \(\mathrm{id}_{\Sigma(L)}\). Hence, \(\bar{H}_{\Sigma}\) is contained in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\).
Proof of Proposition 4.3.: On the one hand, let \(\bar{H}\in\mathrm{Homeo}_{n}^{\mathrm{id}}(D(L,N))\). Then, by definition, \(\varphi(\bar{H})=\bar{H}_{\Sigma}\) maps \(\gamma(x)\) onto \(\gamma\gamma_{\bar{H},x}(\bar{H}(\bar{x}))\). This implies that \(\pi\circ\varphi(\bar{H})=\bar{H}\) and consequently, \(\pi_{\mathrm{Map}}\circ\varphi_{\mathrm{Map}}=\mathrm{id}_{\mathrm{Map}_{n}^{ \mathrm{id}}(D(L,N))}\).
On the other hand, for each homeomorphism \(H\) that represents an element in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\), there exists a group element \(\gamma_{H,x}\) with \(H(x)=\gamma_{H,x}(\bar{H}(\bar{x}))\) for each \(x\in\Sigma(L)\). Further, recall from Definition 4.1 that if we forget the marked points \(p_{1},...,p_{n}\), we obtain an ambient isotopy \(\phi_{t}\) from \(\mathrm{id}_{\Sigma(L)}\) to \(H\). The induced ambient isotopy \(\bar{\phi}_{t}\) yields \(\gamma_{\bar{H},\bar{x}}\). More precisely, \(\gamma_{\bar{H},\bar{x}}\) is determined by the sequence of algebraic intersections of \(\bar{\phi}_{t}(\bar{x})\) with \(\bar{s}_{\nu}\) for \(1\leq\nu\leq N\). The induced homotopy \(\bar{\phi}_{t}(\bar{x})\) intersects \(\bar{s}_{\nu}\) if and only if \(\phi_{t}(x)\) is going over to another \(\Gamma\)-translate of \(F(L,N)\) passing a \(\Gamma\)-translate of \(s_{\nu}\). Hence, \(\gamma_{\bar{H},\bar{x}}\) coincides with \(\gamma_{H,x}\), i.e. \(\bar{H}_{\Sigma}=H\). This implies \(\varphi\circ\pi(H)=H\) and consequently \(\varphi_{\mathrm{Map}}\circ\pi_{\mathrm{Map}}=\mathrm{id}_{\mathrm{Map}_{n}^{ \mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)}\).
Hence, the groups \(\mathrm{Map}_{n}^{\mathrm{id}}\left(D(L,N)\right)\) and \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) are isomorphic. Obviously, the constructed isomorphisms restrict to the pure subgroups.
In particular, Proposition 4.3 yields a proof that \(\mathrm{Map}_{1}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) is infinite cyclic, which was the object of Example 3.6.
Proof of Example 3.6.: The Alexander trick shows that the groups \(\mathrm{Map}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) and \(\mathrm{Map}\left(D(0,1)\right)\) are both trivial, see Example 3.6 and [5, Lemma 2.1]. This implies that
\[\mathrm{Map}_{1}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\cong\mathrm{Map}_{1}^{ \mathrm{id},orb}\left(D_{\mathbb{Z}_{m}}\right)\ \text{ and }\ \mathrm{Map}_{1}\left(D(0,1)\right)\cong\mathrm{Map}_{1}^{\mathrm{id}}\left(D(0,1 )\right).\]
By Proposition 4.3, we further obtain that \(\mathrm{Map}_{1}^{\mathrm{id},orb}\left(D_{\mathbb{Z}_{m}}\right)\cong\mathrm{ Map}_{1}^{\mathrm{id}}\left(D(0,1)\right)\). Consequently, the group \(\mathrm{Map}_{1}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) is isomorphic to \(\mathrm{Map}_{1}\left(D(0,1)\right)\). Moreover, the point-pushing map
\[\mathrm{Push}_{1}:\pi_{1}(D(0,1),p_{1})\to\mathrm{Map}_{1}\left(D(0,1)\right)\]
defined in [5, Theorem 4.6] yields an isomorphism. Since the fundamental group of the punctured disk \(D(0,1)\) is isomorphic to \(\mathbb{Z}\), this implies that \(\mathrm{Map}_{1}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) is infinite cyclic.
A generator of \(\pi_{1}(D(0,1),p_{1})\) is represented by a loop that encircles the puncture. The image of this loop under \(\mathrm{Push}_{1}\) is a twist of \(p_{1}\) around the puncture in \(D(0,1)\). In \(\mathrm{Map}_{1}^{orb}\left(D_{\mathbb{Z}_{m}}\right)\) this twist corresponds to the \(\frac{2\pi}{m}\)-twist \(U\) from Figure 3.1.
### The generalized Birman exact sequence for orbifold mapping class groups
The identification of \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) and \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\) leads to the Birman exact sequence for the orbifold mapping class groups from Theorem A.
**Theorem 4.10** (Birman exact sequence).: _The following diagram commutes and each row is exact:_
Proof.: The upper row is the generalized Birman exact sequence for the disk with \(L+N\) punctures. The map \(\operatorname{Forget}_{n}\) is defined by forgetting the marked points \(\bar{p}_{1},...,\bar{p}_{n}\). The map \(\operatorname{Push}_{n}\) comes from the extension of a braid \(b\) in \(\operatorname{Conf}_{n}\left(D(L,N)\right)\) to an ambient isotopy \(I\to D(L,N)\) from \(\operatorname{id}_{D(L,N)}\) to \(H_{b}\). Then the image of \(\operatorname{Push}_{n}\) is defined by \(\operatorname{Push}_{n}([b])=[H_{b}]\). Intuitively, the ambient isotopy is obtained by placing our fingers at the initial points of the strand of \(b\) and pushing along the braid. The disk \(D(L,N)\) drags along as we push. Formally, the point-pushing map is induced by the fiber bundle
\[\operatorname{Homeo}_{n}(D(L,N),\partial D(L,N))\to\operatorname{Homeo}(D(L,N), \partial D(L,N))\to\operatorname{Conf}_{n}\left(D^{\circ}(L,N)\right).\]
See [5, Sections 4.2, 9.1.4] for details.
The ambient isotopy from \(\operatorname{id}_{D(L,N)}\) to \(H_{b}\) that defines \(\operatorname{Push}_{n}\) is relative \(\bar{c}_{1},...,\bar{c}_{N}\) and \(\bar{r}_{1},...,\bar{r}_{L}\). Hence, the image of \(\operatorname{Push}_{n}\) is contained in \(\operatorname{Map}_{n}^{\operatorname{id}}\left(D(L,N)\right)\) and by Definition 4.2 the image of \(\operatorname{Forget}|_{\operatorname{Map}_{n}^{\operatorname{id}}(D(L,N))}\) is trivial. Hence, the first row restricts to the short exact sequence in the second row.
The map \(\operatorname{Push}_{n}^{orb}\) in the last row is given by the composition \(\varphi_{\operatorname{Map}}\circ\operatorname{Push}_{n}\). As a composition of isomorphisms, this map is an isomorphism. Hence, the third row is a short exact sequence, i.e. the short exact sequence from Theorem A, and the diagram commutes.
As in the classical case, the Birman exact sequence restricts to pure subgroups. The following two lemmas are required for the proof. We recall from (1), that we endowed \(\Sigma\) with a metric such that the \(\Gamma\)-action is isometric. The _\(\varepsilon\)-collar_ of \(\partial\Sigma(L)\) is the subsurface \(\{x\in\Sigma(L)\mid d(x,y)\leq\varepsilon\text{ for some }y\in\partial\Sigma(L)\}\).
**Lemma 4.11**.: _Let \(\varepsilon>0\) such that each marked point in \(\Gamma(\{p_{1},...,p_{n-1}\})\) and each puncture \(\Gamma(\{r_{1},...,r_{L}\})\) has distance greater than \(\varepsilon\) from the boundary. Then each homeomorphism \(H\) that represents a mapping class in \(\operatorname{Map}_{n-1}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is ambient isotopic (relative \(p_{1},...,p_{n-1},r_{1},...,r_{L}\)) to a homeomorphism \(H_{\varepsilon}\) that coincides with the identity on an \(\varepsilon\)-collar of \(\partial\Sigma(L)\)._
Proof.: The claimed ambient isotopy is a variant of the Alexander trick: Let \(\partial\Sigma(\varepsilon)^{\varepsilon}\) be a system of boundary parallel arcs with distance \(\varepsilon\) to the boundary. Using that the \(\varepsilon\)-collar bounded by \(\partial\Sigma(L)^{\varepsilon}\) and \(\partial\Sigma\) neither contains marked points nor punctures, we can push the boundary along the collar to \(\partial\Sigma(L)^{\varepsilon}\) and replace \(H\) with the identity outside \(\partial\Sigma(L)^{\varepsilon}\) (see Figure 4.2). This ambient isotopy connects \(H\) to a homeomorphism \(H_{\varepsilon}\) that coincides with the identity on the \(\varepsilon\)-collar of \(\partial\Sigma(L)\)
Formally, the following lemma is independent from the previous one. However, given marked points \(p_{1},...,p_{n-1}\), our goal is to apply it to add another marked point lying in a fixed \(\varepsilon\)-collar.
**Lemma 4.12**.: _Let \(\varepsilon>0\) and \(p_{n(\varepsilon)}\) be a point in \(F^{\circ}(L,N)\) with \(p_{n(\varepsilon)}\neq p_{j}\) for \(1\leqslant j<n\) and distance less than \(\varepsilon\) to the boundary. The mapping class group \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) with respect to marked points \(p_{1},...,p_{n}\) is isomorphic to \(\mathrm{Map}_{n(\varepsilon)}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) with respect to marked points \(p_{1},...,p_{n-1},p_{n(\varepsilon)}\). The isomorphism restricts to the pure subgroups._
Proof.: Since \(p_{n}\) and \(p_{n(\varepsilon)}\) are both contained in \(F^{\circ}(L,N)\), there exists a connecting arc contained in \(F^{\circ}(L,N)\backslash\{p_{1},...,p_{n-1}\}\). Via point pushing along this arc, we find a homeomorphism \(P_{n}^{e}\) in \(\mathrm{Homeo}_{n-1}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) which maps \(p_{n}\) to \(p_{n(\varepsilon)}\). The homeomorphism \(P_{n}^{e}\) induces an isomorphism
\[\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to \mathrm{Map}_{n(\varepsilon)}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L) \right),[H]\mapsto[P_{n}^{\varepsilon}\circ H\circ(P_{n}^{\varepsilon})^{-1}].\]
**Corollary 4.13**.: _The following diagram is a short exact sequence that splits:_
Proof.: For \(n=1\), the point-pushing along loops in \(\pi_{1}\left(\mathrm{Conf}_{1}\left(D(L,N)\right)\right)\) from Theorem 4.10 yields pure homeomorphisms that represent elements in \(\mathrm{PMap}_{1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Hence, the short exact sequence for orbifold mapping class groups restricts to a short exact sequence of pure subgroups:
\[1\to\pi_{1}(\mathrm{Conf}_{1}(D(L,N))\xrightarrow{\mathrm{Push}_{1}^{orb}} \mathrm{PMap}_{1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right) \xrightarrow{\mathrm{Forget}_{1}^{orb}}\underbrace{\mathrm{PMap}^{\mathrm{ id},orb}\left(\Sigma_{\Gamma}(L)\right)}_{=1}\to 1.\]
Since \(\mathrm{Conf}_{1}\left(D(L,N)\right)\) is homeomorphic to a disk with \(L+N\) punctures, its fundamental group is the free group \(F_{L+N}\). Replacing \(L\) by \(n-1+L\), yields the first row of the following diagram:
Every homeomorphism that is \(\Gamma\)-equivariantly homotopic to the identity relative \(r_{1},...,r_{L},p_{1},...,p_{n-1}\) in particular is \(\Gamma\)-equivariantly homotopic to \(\mathrm{id}_{\Sigma(L)}\) relative \(r_{1},...,r_{L}\). This allows us to define the vertical maps in the above diagram as embeddings. If we compose the \(\mathrm{Push}_{1}^{orb}\)-map with the embedding
\[\mathrm{PMap}_{1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(n-1+L)\right) \hookrightarrow\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L) \right),\]
Figure 4.2. A variant of the Alexander trick.
we obtain \(\mathrm{Push}_{\mathrm{PMap}_{n}}^{orb}\). As in the first row, \(\mathrm{Forget}_{\mathrm{PMap}_{n}}^{orb}\) is given by forgetting the marked point \(p_{n}\). Using these maps, the above diagram commutes.
It remains to show that the bottom row is a short exact sequence. As a composition of embeddings, \(\mathrm{Push}_{\mathrm{PMap}_{n}}^{orb}\) is injective. Since the diagram commutes, its image is contained in the kernel of \(\mathrm{Forget}_{\mathrm{PMap}_{n}}^{orb}\). On the other hand, if \([H]\) is contained in the kernel of \(\mathrm{Forget}_{\mathrm{PMap}_{n}}^{orb}\), \(H\) is homotopic to \(\mathrm{id}_{\Sigma(L)}\) relative marked points \(r_{1},...,r_{L},p_{1},...,p_{n-1}\). This implies that \([H]\) is contained in the pure subgroup \(\mathrm{PMap}_{1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(n-1+L)\right)\) and the exactness of the first row implies that an element of \(F_{n-1+L+N}\) maps onto \([H]\).
Further, we need to show that \(\mathrm{Forget}_{\mathrm{PMap}_{n}}^{orb}\) is surjective. For this purpose, we construct a map \(\mathrm{Res}_{\varepsilon}:\mathrm{Map}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{ \Gamma}(L)\right)\rightarrow\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{ \Gamma}(L)\right)\) that restricts to a section of \(\mathrm{Forget}_{\mathrm{PMap}_{n}}^{orb}\). Therefore, we use Lemma 4.12 to assume that the marked points \(\Gamma(p_{n})\) are contained in the \(\varepsilon\)-collar of \(\partial\Sigma(L)\) but no other marked point or puncture is contained in this \(\varepsilon\)-collar. By Lemma 4.11, each mapping class \([H]\) in \(\mathrm{Map}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is represented by a homeomorphism \(H_{\varepsilon}\) that is the identity on the \(\varepsilon\)-collar. Since all \(\Gamma\)-translates of \(p_{n}\) are contained in the \(\varepsilon\)-collar, the homeomorphism \(H_{\varepsilon}\) in particular fixes the \(\Gamma\)-translates of the \(n\)-th marked point, i.e. \(H_{\varepsilon}\) represents an element in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). We define
\[\mathrm{Res}_{\varepsilon}:\mathrm{Map}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{ \Gamma}(L)\right)\rightarrow\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{ \Gamma}(L)\right),\;[H]\mapsto[H_{\varepsilon}].\]
To obtain that \(\mathrm{Res}_{\varepsilon}\) is well-defined let \(H\) and \(H^{\prime}\) be two homeomorphisms that are ambient isotopic (relative marked points \(r_{1},...,r_{L},p_{1},...,p_{n-1}\)) via \(H_{t}\). We can apply the pushing from Lemma 4.11 to \(H_{t}\). This yields an ambient isotopy \((H_{t})_{\varepsilon}\) that fixes \(r_{1},...,r_{L},p_{1},...,p_{n-1}\) and restricts to the identity on the \(\varepsilon\)-collar. This ambient isotopy connects \(H_{\varepsilon}\) and \(H^{\prime}_{\varepsilon}\) which implies that the map \(\mathrm{Res}_{\varepsilon}\) is well-defined. It is immediate that the restricted map \(\mathrm{Res}_{\varepsilon}|_{\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_ {\Gamma}(L)\right)}\) is a section of \(\mathrm{Forget}_{\mathrm{PMap}_{n}}^{orb}\). Moreover, the pushing procedure from Lemma 4.11 is compatible with the group structure: If \(H\), \(H^{\prime}\) represent elements in \(\mathrm{Map}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\), we have \((H^{\prime}\circ H)_{\varepsilon}=H^{\prime}_{\varepsilon}\circ H_{\varepsilon}\). This implies that \(\mathrm{Res}_{\varepsilon}\) is a homomorphism. Thus, the second row of the above diagram is a short exact sequence that splits.
**Definition 4.14**.: A group \(G\) is a _semidirect product_ with _normal subgroup_\(N\) and _quotient_\(H\) if there exists a short exact sequence
\[1\to N\xrightarrow{\iota}G\xrightarrow{\pi}H\to 1\]
that has a section \(s:H\to G\). In this case, we denote \(G=N\rtimes H\).
In particular, Corollary 4.13 shows: \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is a semidirect product
\[F_{n-1+L+N}\rtimes\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L) \right).\]
In the following, presentations of groups will be an important tool for us. In particular, presentations allow us to define group homomorphisms by assignments defined on generating sets by the Theorem of von Dyck, see [10, p. 346].
**Lemma 4.15** ([7, Lemma 5.17]).: _Let \(N\) and \(H\) be groups given by presentations \(N=\left\langle X\mid R\right\rangle\) and \(H=\left\langle Y\mid S\right\rangle\). Then the following are equivalent:_
1. \(G\) _is a semidirect product with normal subgroup_ \(N\) _and quotient_ \(H\)_._
2. \(G\) _has a presentation_ \[G=\left\langle X,Y\mid R,S,y^{\pm 1}xy^{\mp 1}=\phi_{y^{\pm 1}}(x)\text{ for all }x\in X,y\in Y\right\rangle\]
_such that_ \(\phi_{y^{\pm 1}}(x)\) _is a word in the alphabet_ \(X\) _for all_ \(x\in X\) _and_ \(y\in Y\)_. Moreover, for each_ \(y\in Y\)_, the assignments_ (8) \[x\mapsto\phi_{y}(x)\] _induce an automorphism_ \(\phi_{y}\in\operatorname{Aut}(N)\) _and the assignments_ (9) \[y\mapsto\phi_{y}\] _induce a homomorphism_ \(H\to\operatorname{Aut}(N)\)_._
Generating set and presentation of \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\)
Endowed with the Birman exact sequence for pure subgroups, our next step is to deduce a generating set and a presentation of \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Therefore, we recall the shape of the fundamental domain \(F\) from Figure 2.2. This allows us to embed \(F\) in \(\mathbb{C}\) as the disk of radius \(\frac{n+L+N+2}{2}\) centered at \(\frac{n-L-N}{2}\). For each \(1\leqslant\nu\leqslant N\), let \(c_{\nu}\) be the upper boundary point of \(\partial F\) with \(\operatorname{Re}(c_{\nu})=-L-\nu\), for each \(1\leqslant\lambda\leqslant L\), let \(r_{\lambda}\) be the point \(-\lambda\in\mathbb{R}\) and for each \(1\leqslant j\leqslant n\), let \(p_{j}\) be the point \(j\) in \(\mathbb{R}\) (see Figure 4.3). Moreover, recall that each cone point in \(\partial F\) has two adjacent arcs that lie in \(\partial F\). For technical reasons, let us assume that the arcs adjacent to \(c_{\nu}\) embed into \(\partial F\) as the boundary arcs with positive imaginary part and real part between \(-L-\nu-\frac{1}{2}\) and \(-L-\nu\) or \(-L-\nu\) and \(-L-\nu+\frac{1}{2}\), respectively.
For every \(1\leqslant i<j\leqslant n\), let \(D_{i,j}\subseteq F(L)\) be the disk
\[\left(B_{\frac{1}{4}}(p_{i})\cup B_{\frac{1}{4}}(p_{j})\right)\cap\{x\in \mathbb{C}\mid\operatorname{Im}(x)\geqslant 0\}\]
\[\cup A_{\frac{j-i}{2}-\frac{1}{4},\frac{j-i}{2}+\frac{1}{4}}\left(\frac{p_{i}+ p_{j}}{2}\right)\cap\{x\in\mathbb{C}\mid\operatorname{Im}(x)\leqslant 0\}\]
where \(A_{r,R}(x)\) denotes the annulus with inner radius \(r\) and outer radius \(R\) centered around \(x\). The disk \(D_{i,j}\) contains precisely the marked points \(p_{i}\) and \(p_{j}\). See Figure 4.3 (left) for a picture of \(D_{i,j}\).
Moreover, for every \(1\leqslant k\leqslant n\) and \(1\leqslant\lambda\leqslant L\), let \(D_{r_{\lambda},k}\subseteq F(L)\) be the disk
\[\left(B_{\frac{1}{4}}(r_{\lambda})\cup B_{\frac{1}{4}}(p_{k})\right)\cap\{x \in\mathbb{C}\mid\operatorname{Im}(x)\geqslant 0\}\]
\[\cup A_{\frac{k+\lambda}{2}-\frac{1}{4},\frac{k+\lambda}{2}+\frac{1}{4}}\left( \frac{r_{\lambda}+p_{k}}{2}\right)\cap\{x\in\mathbb{C}\mid\operatorname{Im}(x )\leqslant 0\}.\]
The disk \(D_{r_{\lambda},k}\) contains precisely the marked point \(r_{\lambda}\) and \(p_{k}\). See Figure 4.3 (right) for a picture of \(D_{r_{\lambda},k}\).
Figure 4.3. The disks \(D_{i,j}\) (left) and \(D_{r_{\lambda},k}\) (right).
The homeomorphisms \(A_{ji}\) and \(B_{k\lambda}\) perform the twists pictured in Figure 4.4 on each \(\Gamma\)-translate of \(D_{i,j}\) and \(D_{r_{\lambda},k}\).
Moreover, for every \(1\leq k\leq n\) and \(1\leq\nu\leq N\), let \(\tilde{D}_{c_{\nu},k}\) be the disk
\[B_{\frac{1}{4}}(p_{k})\cap\{x\in\mathbb{C}\mid\operatorname{Im}(x)\geq 0\}\]
Then \(D_{c_{\nu},k}:=\mathbb{Z}_{m_{\nu}}.\tilde{D}_{c_{\nu},k}\) is a \(\mathbb{Z}_{m_{\nu}}\)-invariant disk that contains the cone point \(c_{\nu}\) and the adjacent marked points \(\mathbb{Z}_{m_{\nu}}(p_{k})\). See Figure 4.5 for a picture of \(\tilde{D}_{c_{\nu},k}\) (left) and an example of the disk \(D_{c_{\nu},k}\subseteq\Sigma(L)\) (right).
Let \(C_{k\nu}\) be the homeomorphism that performs a \(\frac{2\pi}{m_{\nu}}\)-twist as in Figure 3.1 on each \(\Gamma\)-translate of \(D_{c_{\nu},k}\). For the homeomorphisms \(A_{ji},B_{k\lambda}\) and \(C_{k\nu}\), we will use their names as acronyms of the corresponding mapping classes.
**Corollary 4.16**.: _The pure mapping class group \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is generated by_
\[A_{ji},B_{k\lambda}\ \text{ and }\ C_{k\nu}\]
_for \(1\leq i,j,k\leq n\) with \(i<j\), \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\)._
Proof.: The proof that the above elements generate \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) proceeds by induction on \(n\).
For \(n=0\), the group coincides with \(\operatorname{PMap}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). By Definition 4.1, this group is trivial. For \(n\geq 1\), we recall the split short exact sequence from Corollary 4.13:
\[1\to F_{n-1+L+N}\xrightarrow{\operatorname{Push}_{\operatorname{PMap}_{n}}^{ \operatorname{orb}}}\operatorname{PMap}_{n}^{\operatorname{id},orb}\left( \Sigma_{\Gamma}(L)\right)\xrightarrow{\operatorname{Forget}_{\operatorname{ PMap}_{n}}^{\operatorname{orb}}}\operatorname{PMap}_{n-1}^{ \operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to 1.\]
Figure 4.4. The twists induced by \(A_{ji}\) (left) and \(B_{k\lambda}\) (right).
The included free group \(F_{n-1+L+N}\) stems from \(\pi_{1}(D(n-1+L+N))\). This group embeds into \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) via \(\mathrm{Push}_{\mathrm{PMap}_{n}}^{orb}\). Recall that the natural generators of \(\pi_{1}(D(n-1+L+N))\) are represented by loops that encircle precisely one puncture of \(D(n-1+L+N)\). These generators can be chosen such that \(\mathrm{Push}_{\mathrm{PMap}_{n}}^{orb}\) maps them to the mapping classes represented by \(A_{nj},B_{n\lambda}\) and \(C_{n\nu}\) for \(1\leqslant j<n,1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\). For \(n=1\), this implies that \(\mathrm{PMap}_{1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is isomorphic to \(F_{L+N}\) and the elements \(B_{1\lambda},C_{1\nu}\) with \(1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\) form a basis of the free group.
By induction, this allows us to assume that \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is generated by
\[A_{ji},B_{k\lambda}\ \text{ and }\ C_{k\nu}\]
with \(1\leqslant i,j,k<n,i<j,1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\). The section from Corollary 4.13 embeds \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) into \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) sending these generators of \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) to their homonyms in \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Moreover, as described above, \(F_{n-1+L+N}\) embeds into \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) as the subgroup generated by
\[A_{nj},B_{n\lambda}\ \text{ and }\ C_{n\nu}\]
with \(1\leqslant j<n,1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\). Hence, the short exact sequence from Corollary 4.13 implies that the above set generates \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\).
The next step to deduce a presentation from Corollary 4.13 is to observe relations for the above generators.
**Lemma 4.17**.: _Let \(A\) be a homeomorphism that represents an element in the group \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Further, let \(H\) be one of the homeomorphisms_
\[A_{ji},B_{k\lambda}\ \text{ and }\ C_{k\nu}\]
_for some \(1\leqslant i,j,k\leqslant n\) with \(i<j\), \(1\leqslant\nu\leqslant N\) and \(1\leqslant\lambda\leqslant L\). If \(D_{H}\subseteq F(L)\) is the associated supporting disk of \(H\) and \(A(\Gamma(D_{H}))=\Gamma(D_{H})\), then \([AHA^{-1}]=[H]\)._
Proof.: In general, a conjugate \(AHA^{-1}\) is related to \(H\) by the following commuting diagram
Since we further assume \(A(\Gamma(D_{H}))=\Gamma(D_{H})\), both elements \(H\) and \(AHA^{-1}\) induce an element in \(\mathrm{Map}_{n(H)}^{orb}\left(D_{H}\right)\), respectively.
If the supporting disk contains no cone point, it either is a disk \(D_{i,j}\) with two marked points \(p_{i}\) and \(p_{j}\) or a disk \(D_{r_{\lambda},k}\) with one puncture \(r_{\lambda}\) and one marked point \(p_{k}\). In the first case, \(\mathrm{Map}_{n(H)}^{orb}\left(D_{H}\right)\) is isomorphic to the braid group on two strands by an application of the Alexander trick, see [5, Section 9.1.3]. Since the Alexander trick also applies to a disk with one puncture, the group in the second case is isomorphic to the fundamental group of a punctured disk. In both cases this implies that \(\mathrm{Map}_{n(H)}^{orb}\left(D_{H}\right)\) is infinite cyclic.
Otherwise \(D_{H}\) contains a cone point \(c_{\nu}\) and \(m_{\nu}\) translates of marked points, i.e. \((D_{H})_{\mathbb{Z}_{m_{\nu}}}\) is the orbifold described in Example 3.6. As determined in the example, \(\mathrm{Map}_{n(H)}^{orb}\left(D_{H}\right)\) is also infinite cyclic. In particular, the group \(\mathrm{Map}_{n(H)}^{orb}\left(D_{H}\right)\) is abelian in each case.
Since \(A(\Gamma(D_{H}))=\Gamma(D_{H})\), there exists an element \(\gamma\in\Gamma\) such that
\[\gamma A:x\mapsto\gamma(A(x))\]
defines a homeomorphism of \(D_{H}\) that represents an element in \(\mathrm{Map}_{n(H)}^{orb}\left(D_{H}\right)\). Using that these groups are abelian, this implies \([\gamma AH(\gamma A)^{-1}]=[H|_{D_{H}}]\). This is equivalent to \([AHA^{-1}|_{\gamma^{-1}(D_{H})}]=[\gamma^{-1}H\gamma]\) with
\[\gamma^{-1}H\gamma:\gamma^{-1}(D_{H})\rightarrow\gamma^{-1}(D_{H}),\quad x \mapsto\gamma^{-1}(H(\gamma(x))).\]
Since \(H\) and \(AHA^{-1}\) are \(\Gamma\)-equivariant, this implies that \([AHA^{-1}|_{D_{H}}]=[H|_{D_{H}}]\) and consequently \([AHA^{-1}]=[H]\).
Using Lemma 4.17, we observe the following relations in \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\):
**Lemma 4.18**.: _Let \(1\leqslant h,i,j,k,l<n\) with \(h<i<j<k<l\), \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N\) with \(\mu<\nu\). Then the following relations hold:_
1. \(A_{lj}A_{nj}A_{lj}^{-1}=A_{nj}^{-1}A_{nl}^{-1}A_{nj}A_{nl}A_{nj}\) and a') \(A_{lj}^{-1}A_{nj}A_{lj}=A_{nl}A_{nj}A_{nl}^{-1}\)_, b._ \(A_{ji}A_{nj}A_{ji}^{-1}=A_{ni}^{-1}A_{nj}A_{ni}\) and b') \(A_{ji}^{-1}A_{nj}A_{ji}=A_{nj}A_{ni}A_{nj}A_{ni}^{-1}A_{nj}^{-1}\)_, c._ \(B_{j\lambda}A_{nj}B_{j\lambda}^{-1}=B_{n\lambda}^{-1}A_{nj}B_{n\lambda}\) and c') \(B_{j\lambda}^{-1}A_{nj}B_{j\lambda}=A_{nj}B_{n\lambda}A_{nj}B_{n\lambda}^{-1}A_ {nj}^{-1}\)_, d._ \(C_{j\nu}A_{nj}C_{j\nu}=C_{n\nu}^{-1}A_{nj}C_{n\nu}\) and d') \(C_{j\nu}^{-1}A_{nj}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1}\)_, e._ \(B_{j\lambda}B_{n\lambda}B_{j\lambda}=B_{n\lambda}B_{j\lambda}=B_{n\lambda}B_{ n\lambda}A_{nj}^{-1}\)_, f._ \(C_{j\nu}C_{n\nu}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}C_{n\nu}\) and f') \(C_{j\nu}^{-1}C_{n\nu}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}\)_,_
2. \([A_{ih},A_{nj}]=1,\quad[B_{i\lambda},A_{nj}]=1\) and \([C_{i\nu},A_{nj}]=1\)_,_ b._ \([A_{lk},A_{nj}]=1,\quad[A_{ji},B_{n\lambda}]=1\)_, \([B_{j\theta},B_{n\lambda}]=1\)_,_ \([A_{ji},C_{n\nu}]=1\)_,_ \([B_{j\lambda},C_{n\nu}]=1\) _and_ \([C_{j\mu},C_{n\nu}]=1\)_,_ c._ \([A_{nl}A_{nj}A_{nl}^{-1},A_{li}]=1\)_,_ \([A_{nl}A_{nj}A_{nl}^{-1},B_{l\lambda}]=1\) and \([A_{nl}A_{nj}A_{nl}^{-1},C_{l\nu}]=1\)_, d._ \([A_{nj}B_{n\theta}A_{nj}^{-1},B_{j\lambda}]=1\) _and_ \([A_{nj}B_{n\theta}A_{nj}^{-1},C_{j\nu}]=1\)_, e._ \([A_{nj}C_{n\mu}A_{nj}^{-1},C_{j\nu}]=1\)_._
_In particular, these relations imply:_
1. \(A_{li}A_{nj}A_{li}^{-1}=A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{nl}A_{nj}A_{nl}^{-1}A_{ nl}^{-1}A_{nl}A_{ni}\)_,_
2. \(A_{li}^{-1}A_{nj}A_{li}=A_{nl}A_{ni}A_{nl}^{-1}A_{nj}A_{ni}A_{nl}A_{ni}A_{nl}A_{ nl}^{-1}\)_,_
3. \(B_{l\lambda}A_{nj}B_{l\lambda}=B_{n\lambda}^{-1}A_{nl}^{-1}B_{n\lambda}A_{nl}A_{nj}A_{ nl}^{-1}B_{n\lambda}^{-1}A_{nl}\)_,_
4. \(B_{l\lambda}^{-1}A_{nj}B_{l\lambda}=A_{nl}B_{n\lambda}A_{nl}^{-1}B_{n\lambda}A_{ nl}B_{n\lambda}^{-1}A_{nl}^{-1}\)_,_
5. \(C_{l\nu}A_{nj}C_{l\nu}^{-1}=C_{n\nu}^{-1}A_{nl}^{-1}C_{n\nu}A_{nl}A_{nj}A_{nl}^{-1}C _{n\nu}^{-1}A_{nl}C_{n\nu}\)_,_
6. \(C_{l\nu}^{-1}A_{nj}C_{l\nu}=A_{nl}C_{n\nu}A_{nl}^{-1}C_{n\nu}^{-1}A_{nj}C_{n\nu}A_{ nl}C_{n\nu}^{-1}A_{nl}^{-1}C_{n\nu}\)_,_
7. \(C_{l\nu}^{-1}A_{nj}C_{l\nu}=A_{nl}C_{n\nu}A_{nl}^{-1}C_{n\nu}^{-1}A_{nj}C_{n\nu}A_{ nl}C_{n\nu}^{-1}A_{nl}^{-1}\)_,_
8. \(B_{j\lambda}B_{n\theta}B_{j\lambda}=B_{n\lambda}A_{nj}B_{n\lambda}A_{nj}B_{n \lambda}A_{nj}B_{n\lambda}^{-1}B_{n\lambda}^{-1}A_{nj}^{-1}\)_,_
9. \(C_{j\nu}B_{n\lambda}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}B_{n \lambda}A_{nj}B_{n\lambda}A_{nj}B_{n\lambda}^{-1}A_{nj}^{-1}\)_,_
10. \(C_{j\nu}C_{n\mu}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}C_{n\nu}^{-1}B_{n\lambda}C_{n \nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1}\)_,_
11. \(C_{j\nu}C_{n\mu}A_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}C_{n\mu}A_{nj}C _{n\nu}^{-1}C_{n\nu}\)_,_
11. \(C_{j\nu}^{-1}C_{n\mu}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}C_{n\mu}C_{n\nu}A_{nj}C_{n \nu}^{-1}A_{nj}C_{n\nu}\)_._
11. \(C_{j\nu}^{-1}C_{n\mu}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}C_{n\nu}^{-1}C_{n\mu}C_{n \nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1}\)_._
Proof.: The relations in 4.18(1) are of the form \(AHA^{-1}=\tilde{A}H\tilde{A}^{-1}\) with \(H\) as in Lemma 4.17 and \(A,\tilde{A}\) pure \(\Gamma\)-equivariant homeomorphisms. This is equivalent to \(\tilde{A}^{-1}AHA^{-1}\tilde{A}=H\). Thus, by Lemma 4.17, it remains to observe that \(\tilde{A}^{-1}A(\Gamma(D_{H}))=\Gamma(D_{H})\) or equivalent \(A(\Gamma(D_{H}))=\tilde{A}(\Gamma(D_{H}))\). This is elaborated in Figure 4.6 for the relations 4.18(1)a)-f). The remaining relations 4.18(1)a)-f') follow from analogous pictures where we twist along the red and blue arrows, respectively, in the opposite direction.
For the commutator relations in 4.18(2), we check that the commuting mapping classes can be realized by homeomorphisms with disjoint support. In the cases when two generators commute, this follows directly from the definition of the generators on page 34. Further, in Figure 4.7, we determine the supporting disks of \(A_{nl}A_{nj}A_{nl}^{-1}\), \(A_{nj}B_{n\theta}A_{nj}^{-1}\) and \(A_{nj}C_{n\mu}A_{nj}^{-1}\). For each of these disks, the \(\Gamma\)-orbit is
Figure 4.6. Observation of the relations 4.18(1)a)-f) (from top to bottom) by consideration of the supporting disks.
disjoint from the support of the commuting generators. Thus, the commutator relations follow. In Figure 4.7 the support of exemplary commuting generators are depicted grayed out.
For the additional conjugation relations, we work out the first two examples. The remaining relations follow analogously. For this purpose, we emphasize that the relations from 4.18(1) imply:
\[A_{li}A_{nl}A_{ni}\overset{\ref{eq:A_n}(1)\text{\tiny{$\alpha$}}\cdot}{=}A_{ni}A _{li}A_{nl}\overset{\ref{eq:A_n}(1)\text{\tiny{b}})}{=}A_{nl}A_{ni}A_{li}. \tag{10}\]
Based on the commutator relation \([A_{nl}A_{nj}A_{nl}^{-1},A_{li}]=1\) from 4.18(2) and relation (10), the first of the missing relations follows:
\[A_{li}A_{nl}A_{nj}A_{nl}^{-1}=A_{nl}A_{nj}A_{nl}^{-1}A_{li}\qquad \qquad\qquad\qquad\qquad\mid A_{ni}\cdot\cdot\cdot A_{nl}\] \[\Leftrightarrow A_{ni}A_{li}A_{nl}A_{nj}=A_{ni}A_{nl}A_{nj}A_{nl}^{-1}A_{li}A_{nl}\] \[\overset{\vee}{\Leftrightarrow} A_{ni}A_{li}A_{nl}A_{nj}=A_{ni}A_{nl}A_{nj}A_{nl}^{-1}(A_{ni}^{-1}A_{ni})A_{li}A_{nl}\] \[\overset{\eqref{eq:A_n}}{\Leftrightarrow} A_{nl}A_{ni}A_{li}A_{nj}=A_{ni}A_{nl}A_{nj}A_{nl}^{-1}A_{nl}A_{ni}A_{li} \qquad\mid A_{ni}^{-1}A_{nl}^{-1}\cdot\cdot A_{li}^{-1}\] \[\Leftrightarrow A_{li}A_{nj}A_{li}^{-1}=A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{nl}A_{nj}A_{ nl}^{-1}A_{ni}^{-1}A_{nl}A_{ni}.\]
From the last equation we also obtain the second conjugation relation:
\[A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{nl}A_{nj}A_{nl}^{-1}A_{ni}^{-1}A_{ nl}A_{ni}=A_{li}A_{nj}A_{li}^{-1}\quad\mid A_{nl}^{-1}A_{ni}^{-1}A_{nl}A_{ni} \cdot\cdot A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{nl}\] \[\Leftrightarrow A_{nj}=A_{nl}^{-1}A_{ni}^{-1}A_{nl}A_{li}A_{nj}A_{lj}^{-1}A_{ li}^{-1}A_{nl}^{-1}A_{ni}A_{nl}\] \[\overset{\eqref{eq:A_n}}{\Leftrightarrow} A_{nj}=A_{nl}^{-1}A_{ni}^{-1}A_{li}A_{nl}A_{ni}A_{nj}A_{ni}^{-1}A_{nl}^{-1}A_{li}^{-1}A_{ ni}A_{nl}\] \[\overset{\vee}{\Leftrightarrow} A_{nj}=(A_{li}A_{li}^{-1})A_{nl}^{-1}A_{li}^{-1}A_{li}A_{nl}A_{ni}A_{nj}A_{ni}^{-1}A_{ nl}^{-1}A_{li}^{-1}A_{ni}A_{nl}(A_{li}A_{li}^{-1})\quad\mid A_{li}^{-1} \cdot\cdot A_{li}\] \[\Leftrightarrow A_{li}^{-1}A_{nj}A_{li}=A_{li}^{-1}A_{nl}^{-1}A_{ni}^{-1}A_{ li}A_{nl}A_{ni}A_{nj}A_{ni}^{-1}A_{nl}^{-1}A_{li}^{-1}A_{ni}A_{nl}A_{li}\] \[\overset{\ref{eq:A_n}}{\Leftrightarrow} A_{li}^{-1}A_{nj}A_{li}=A_{nl}A_{ni}A_{nl}^{-1}A_{ni}^{-1}A_{nl}A_{ nl}^{-1}A_{nl}^{-1}A_{nl}A_{ni}A_{nj}\] \[\overset{\ref{eq:A_n}}{\Leftrightarrow} A_{li}^{-1}A_{nl}A_{ni}A_{nl}^{-1}A_{nl}^{-1}A_{nl}^{-1}A_{nl}^{-1}A_{ nl}^{-1}A_{nl}^{-1}A_{nl}A_{ni}A_{nj}\] \[\overset{\ref{eq:A_n}}{\Leftrightarrow} A_{li}^{-1}A_{nj}A_{li}=A_{nl}A_{ni}A_{nl}^{-1}A_{ni}^{-1}A_{nj}A_{ni}A_{ nl}A_{nl}^{-1}A_{nl}^{-1}.\]
Now the semidirect product structure of \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) allows us to deduce a finite presentation in terms of the above generators.
**Corollary 4.19**.: _The pure mapping class group \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has a presentation with generators_
\[A_{ji},B_{k\lambda}\ \text{ and }\ C_{k\nu},\]
_for \(1\leq i,j,k\leq n\) with \(i<j\), \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\) and the following defining relations for \(1\leq i,j,k,l\leq n\) with \(i<j<k<l\), \(1\leq\theta,\lambda\leq L\) with \(\theta<\lambda\) and \(1\leq\mu,\nu\leq N\) with \(\mu<\nu\):_
1. \(\left[A_{ji},A_{lk}\right]=1\)_,_ \(\left[B_{j\lambda},A_{lk}\right]=1\) _and_ \(\left[C_{j\nu},A_{lk}\right]=1\)_,_
2. \(\left[A_{li},A_{kj}\right]=1\)_,_ \(\left[B_{l\lambda},A_{kj}\right]=1\)_,_ \(\left[B_{l\lambda},B_{k\theta}\right]=1\)_,_ \(\left[C_{l\nu},A_{kj}\right]=1\)_,_ \(\left[C_{l\nu},B_{k\lambda}\right]=1\) _and_ \(\left[C_{l\nu},C_{k\mu}\right]=1\)_,_
3. \(\left[A_{lk}A_{lj}A_{lk}^{-1},A_{ki}\right]=1\)_,_ \(\left[A_{kj}A_{ki}A_{kj}^{-1},B_{j\lambda}\right]=1\)_,_ \(\left[A_{kj}B_{k\theta}A_{kj}^{-1},B_{j\lambda}\right]=1\)_,_ \(\left[A_{kj}A_{ki}A_{kj}^{-1},C_{j\nu}\right]=1\)_,_ \(\left[A_{kj}A_{ki}A_{kj}^{-1},C_{j\nu}\right]=1\)_,_ \(\left[A_{kj}C_{k\mu}A_{kj}^{-1},C_{j\nu}\right]=1\) _and_ \(\left[A_{kj}B_{k\lambda}A_{kj}^{-1},C_{j\nu}\right]=1\)_,_ \(\left[A_{kj}A_{kj}A_{ki}=A_{ki}A_{ji}A_{kj}=A_{kj}A_{ki}A_{ji}\right.\)_ \(\left.A_{ji}B_{j\lambda}B_{i\lambda}=B_{i\lambda}A_{ji}B_{j\lambda}=B_{j\lambda}B _{i\lambda}A_{ji}\right.\) _and_ \(\left.A_{ji}C_{j\nu}C_{i\nu}=C_{i\nu}A_{ji}C_{j\nu}=C_{j\nu}C_{i\nu}A_{ji}\)_._
Proof.: We prove the claimed presentation of \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) by induction on \(n\).
For \(n=0\) and \(n=1\), the same arguments as in the proof of Corollary 4.16 show that \(\mathrm{PMap}_{0}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is trivial and \(\mathrm{PMap}_{1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is a free group of rank \(L+N\) with basis elements \(B_{1\lambda},C_{1\nu}\) for \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\). That is, \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has the above presentation for \(n=0\) and \(n=1\).
By induction hypothesis, we assume that \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has a presentation as claimed above. Moreover, we recall that the group \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) by Corollary 4.13 has a semidirect product structure \(F_{n-1+L+N}\rtimes\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Invoking Lemma 4.15, we obtain a presentation by generators \(A_{ji}\) for \(1\leq i<j\leq n\) and \(C_{k\nu},B_{k\lambda}\) for \(1\leq k\leq n,1\leq\nu\leq N,1\leq\lambda\leq L\) with the following relations: Since \(F_{n-1+L+N}\) is a free group, the set of relations named \(R\) in Lemma 4.15 is empty in this case. From \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) the group \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) inherits the following relations for \(1\leq i,j,k,l<n\) with \(i<j<k<l\), \(1\leq\theta,\lambda\leq L\) with \(\theta<\lambda\) and \(1\leq\mu,\nu\leq N\) with \(\mu<\nu\) named \(S\) in Lemma 4.15:
* \(\left[A_{ji},A_{lk}\right]=1\), \(\left[B_{j\lambda},A_{lk}\right]=1\) and \(\left[C_{j\nu},A_{lk}\right]=1\),
* \(\left[A_{li},A_{kj}\right]=1\), \(\left[B_{l\lambda},A_{kj}\right]=1\), \(\left[B_{l\lambda},B_{k\theta}\right]=1\), \(\left[C_{l\nu},A_{kj}\right]=1\), \(\left[C_{l\nu},B_{k\lambda}\right]=1\) and \(\left[C_{l\nu},C_{k\mu}\right]=1\),
* \(\left[A_{lk}A_{lj}A_{lk}^{-1},A_{ki}\right]=1\), \(\left[A_{kj}A_{ki}A_{kj}^{-1},B_{j\lambda}\right]=1\), \(\left[A_{kj}B_{k\theta}A_{kj}^{-1},B_{j\lambda}\right]=1\), \(\left[A_{kj}A_{ki}A_{kj}^{-1},C_{j\nu}\right]=1\), \(\left[A_{kj}C_{k\mu}A_{kj}^{-1},C_{j\nu}\right]=1\) and \(\left[A_{kj}B_{k\lambda}A_{kj}^{-1},C_{j\nu}\right]=1\),
* \(A_{ji}A_{kj}A_{ki}=A_{ki}A_{ji}A_{kj}=A_{kj}A_{ki}A_{ji}\), \(A_{ji}B_{j\lambda}B_{i\lambda}=B_{i\lambda}A_{ji}\) and \(A_{ji}C_{j\nu}C_{i\nu}=C_{i\nu}A_{ji}C_{j\nu}=C_{j\nu}C_{i\nu}A_{ji}\).
Additionally, the generators satisfy the conjugation relations from Lemma 4.18. Below we list them in an order, that allows us to observe that these relations guarantee that \(\mathrm{Push}_{\mathrm{PMap}_{n}}^{orb}\) embeds \(F_{n-1+L+N}\) as a normal subgroup, i.e. the conjugation relations and the relations from \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) yield a presentation as in Lemma 4.15(2).
It remains to show that this presentation is equivalent to the claimed presentation. Therefore, we explain how the conjugation relations fit into the presentation of \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) claimed above. This requires the observations described below. In (12), we use:
\[A_{lj}A_{nj}A_{lj}^{-1}=A_{nj}^{-1}A_{nl}^{-1}A_{nj}A_{nl}A_{nj} \qquad\qquad\left|A_{nl}A_{nj}\cdot\,A_{lj}\right.\] \[\Leftrightarrow\]
\[\stackrel{{\eqref{eq:13-1}}}{{\Leftrightarrow}} A_{nj}A_{nl}A_{nj}=A_{nj}A_{nj}A_{nl}\qquad\qquad\qquad\mid A_{nj}^{-1}.\] \[\Leftrightarrow A_{lj}A_{nl}A_{nj}=A_{nj}A_{lj}A_{nl}\text{ from \ref{eq:13-1}}(4).\]
The observation in (13) is immediate. For (14), we observe:
\[A_{li}A_{nj}A_{li}^{-1}=A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{nl}A_{nj}A_ {nl}^{-1}A_{ni}^{-1}A_{nl}A_{ni}\quad\mid A_{nl}A_{ni}\ \cdot\ A_{li}\] \[\Leftrightarrow A_{nl}A_{ni}A_{li}A_{nj}=A_{ni}A_{nl}A_{nj}A_{nl}^{-1}A_{nj}^{-1}A_ {nl}A_{ni}A_{li}\] \[\stackrel{{\eqref{eq:13-1}}}{{\Leftrightarrow}} A_{ni}A_{li}A_{nl}A_{nj}=A_{ni}A_{nl}A_{nj}A_{nl}^{-1}A_{nj}^{-1}A_{ni}A_{li}A_{ nl}\qquad\mid A_{ni}^{-1}.\] \[\Leftrightarrow A_{li}A_{nl}A_{nj}=A_{nl}A_{nj}A_{nl}^{-1}A_{li}A_{nl} \qquad\qquad\qquad\mid\cdot A_{nl}^{-1}\] \[\Leftrightarrow A_{li}A_{nl}A_{nj}A_{nl}^{-1}=A_{nl}A_{nj}A_{nl}^{-1}A_{li}\text{ from \ref{eq:13-1}}(3).\]
To deduce (15), we additionally observe
\[A_{lj}A_{nl}A_{nj}\qquad\stackrel{{\eqref{eq:12}}}{{=}} A_{nj}A_{lj}A_{nl}\quad\stackrel{{\eqref{eq:13-1}}}{{=}} A_{nl}A_{nj}A_{lj}. \tag{11}\]
This implies:
\[\Leftrightarrow A_{nj} = A_{nl}A_{ni}A_{nl}^{-1}A_{ni}^{-1}A_{nj}A_{ni}A_{nl}A_{nl}^{-1}A _{nl}^{-1}\qquad\mid A_{li}\cdot\ \cdot A_{li}^{-1}\] \[\stackrel{{\eqref{eq:11}}}{{=}} A_{nl}A_{ni}A_{nl}^{-1}A_{ni}^{-1}A_{nj}A_{ni}A_{nl}A_{nl}^{-1}A_{ni}^{-1}A_{ nl}^{-1}A_{nl}^{-1}\] \[\stackrel{{\eqref{eq:11}}}{{=}} A_{nl}A_{ni}A_{li}A_{nl}^{-1}A_{ni}^{-1}A_{nj}A_{ni}A_{nl}A_{li}^{-1}A_{ni}^{-1}A_{ nl}^{-1}\] \[\stackrel{{\omega}}{{=}} A_{nl}A_{ni}A_{li}A_{nl}^{-1}A_{ni}^{-1}A_{ni}^{-1}(A_{li}^{-1}A_{li})A_{nj}\] \[(A_{li}^{-1}A_{li})A_{ni}A_{nl}A_{li}^{-1}A_{ni}^{-1}A_{nl}^{-1}\] \[\stackrel{{\eqref{eq:12},\eqref{eq:13-1}}}{{=}} A_{nl}A_{ni}A_{ni}^{-1}A_{nl}^{-1}A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{li}A_{nj}\] \[A_{li}^{-1}A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{nl}A_{ni}A_{nl}A_{ni}^{-1 }A_{nl}A_{ni}A_{ni}^{-1}\] \[= A_{nl}^{-1}A_{ni}^{-1}A_{nl}A_{ni}A_{li}A_{nj}A_{li}^{-1}A_{ni}^{- 1}A_{nl}^{-1}A_{ni}A_{nl}\] \[\Leftrightarrow A_{li}A_{nj}A_{li}^{-1}\qquad\qquad\qquad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
(14-2)
\[C_{l\nu}A_{nj}C_{l\nu}^{-1}=C_{n\nu}^{-1}A_{nl}^{-1}C_{n\nu}A_{nl}A_{nj}A_{nl}^{-1 }C_{n\nu}^{-1}A_{nl}C_{n\nu}\quad\stackrel{{\eqref{eq:13-3}}}{{ \Leftrightarrow}}\quad[A_{nl}A_{nj}A_{nl}^{-1},C_{l\nu}]=1,\] (15-2)
\[C_{l\nu}^{-1}A_{nj}C_{l\nu}=A_{nl}C_{n\nu}A_{nl}^{-1}C_{n\nu}^{-1}A_{nj}C_{n \nu}A_{nl}C_{n\nu}^{-1}A_{nl}^{-1}\quad\stackrel{{\eqref{eq:12-5, 13-3}}}{{\Leftrightarrow}}\quad[A_{nl}A_{nj}A_{nl}^{-1},C_{l\nu}]=1,\] (13-1)
\[A_{ji}A_{nj}A_{ji}^{-1}=A_{ni}^{-1}A_{nj}A_{ni}\quad\Leftrightarrow A_{ni}A_{ji}A_{nj}=A_{nj}A_{ni}A_{ji},\] (12-1)
\[A_{ji}^{-1}A_{nj}A_{ji}=A_{nj}A_{ni}A_{nj}A_{ni}^{-1}A_{nj}^{-1} \quad\stackrel{{\eqref{eq:13-3}}}{{\Leftrightarrow}}\quad A_{ji}A_{nj}A_{ni}=A_{nj}A_{ni}A_{ji},\] (13-2)
\[B_{j\lambda}A_{nj}B_{j\lambda}^{-1}=B_{n\lambda}^{-1}A_{nj}B_{n\lambda} \quad\Leftrightarrow B_{n\lambda}B_{j\lambda}A_{nj}=A_{nj}B_{n\lambda}B_{j\lambda},\] (12-2)
\[B_{j\lambda}^{-1}A_{nj}B_{j\lambda}=A_{nj}B_{n\lambda}A_{nj}B_{n\lambda}=A_{nj} B_{n\lambda}B_{j\lambda},\] (13-3)
\[C_{j\nu}A_{nj}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}C_{n\nu}\quad\Leftrightarrow C_{n\nu}C_{j\nu}A_{nj}=A_{nj}C_{n\nu}C_{j\nu},\] (13-3)
\[C_{j\nu}^{-1}A_{nj}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1} \quad\stackrel{{\eqref{eq:13-5}}}{{\Leftrightarrow}}\quad C_{j \nu}A_{nj}C_{n\nu}=A_{nj}C_{n\nu}C_{j\nu},\]
\[A_{ih}A_{nj}A_{ih}^{-1}=A_{ih}^{-1}A_{nj}A_{ih}=A_{nj} \quad\Leftrightarrow\quad[A_{ih},A_{nj}]=1,\] \[B_{i\lambda}A_{nj}B_{i\lambda}^{-1}=B_{i\lambda}^{-1}A_{nj}B_{i \lambda}=A_{nj} \quad\Leftrightarrow\quad[B_{i\lambda},A_{nj}]=1,\] \[C_{i\nu}A_{nj}C_{i\nu}^{-1}=C_{i\nu}^{-1}A_{nj}C_{i\nu}=A_{nj} \quad\Leftrightarrow\quad[C_{i\nu},A_{nj}]=1.\]
\[A_{ji}B_{n\iota}A_{ji}^{-1}=A_{ji}^{-1}B_{n\iota}A_{ji}=B_{n\iota} \quad\Leftrightarrow\quad[A_{ji},B_{n\iota}]=1,\] \[B_{j\theta}B_{n\iota}B_{j\theta}^{-1}=B_{j\theta}^{-1}B_{n\iota}B _{j\theta}=B_{n\iota} \quad\Leftrightarrow\quad[B_{j\theta},B_{n\iota}]=1,\] (12-4)
\[B_{j\iota}B_{n\iota}B_{j\iota}^{-1}=B_{n\iota}^{-1}A_{nj}^{-1}B_{n\iota}A_{nj}B _{n\iota}\quad\stackrel{{\eqref{eq:13-2}}}{{\Leftrightarrow}}\quad A_{nj}B_{n \iota}B_{j\iota}=B_{j\iota}A_{nj}B_{n\iota},\] (13-4)
\[B_{j\lambda}B_{n\iota}B_{j\lambda}^{-1}=B_{n\lambda}^{-1}A_{nj}^{-1}B_{n\lambda} A_{nj}B_{n\iota}A_{nj}^{-1}B_{n\lambda}^{-1}A_{nj}B_{n\lambda}\quad\stackrel{{ \eqref{eq:13-2}}}{{\Leftrightarrow}}\quad[A_{nj}B_{n\iota}A_{nj}^{-1},B_{j \lambda}]=1,\] (15-3)
\[B_{j\lambda}^{-1}B_{n\iota}B_{j\lambda}=A_{nj}B_{n\lambda}A_{nj}^{-1}B_{n \lambda}B_{n\lambda}A_{nj}B_{n\lambda}^{-1}A_{nj}^{-1}\quad\stackrel{{ \eqref{eq:12-4,13-2}}}{{\Leftrightarrow}}\quad[A_{nj}B_{n\iota}A_{nj}^{-1},B_{j \lambda}]=1,\] (14-4)
\[C_{j\nu}B_{n\iota}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}B_{n \iota}A_{nj}^{-1}C_{n\nu}^{-1}A_{nj}C_{n\nu}\quad\stackrel{{ \eqref{eq:13-3}}}{{\Leftrightarrow}}\quad[A_{nj}B_{n\iota}A_{nj}^{-1},C_{j\nu}]=1,\] (15-4)
\[C_{j\nu}^{-1}B_{n\iota}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}C_{n\nu}^{-1}B_{n \iota}C_{n\nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1}\quad\stackrel{{ \eqref{eq:12-5,13-3}}}{{\Leftrightarrow}}\quad[A_{nj}B_{n\iota}A_{nj}^{-1},C_{ j\nu}]=1.\]
\[A_{ji}C_{n\nu}A_{ji}^{-1}=A_{ji}^{-1}C_{n\nu}A_{ji}=C_{n\nu} \quad\Leftrightarrow\quad[A_{ji},C_{n\nu}]=1,\] \[B_{j\lambda}C_{n\nu}B_{j\lambda}^{-1}=B_{j\lambda}^{-1}C_{n\nu} B_{j\lambda}=C_{n\nu} \quad\Leftrightarrow\quad[B_{j\lambda},C_{n\nu}]=1,\] (15-5)
\[C_{j\mu}C_{n\nu}C_{j\mu}^{-1}=C_{j\mu}^{-1}C_{n\nu}C_{j\mu}=C_{n\nu} \quad\Leftrightarrow\quad[C_{j\mu},C_{n\nu}]=1,\] (12-5)
\[C_{j\nu}^{-1}C_{n\nu}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}C_{n \nu}\quad\stackrel{{\eqref{eq:13-3}}}{{\Leftrightarrow}}\quad C_{j \nu}A_{nj}C_{n\nu}=C_{n\nu}C_{j\nu}A_{nj},\] (13-5)
\[C_{j\nu}^{-1}C_{n\nu}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}\quad\Leftrightarrow \quad[A_{nj}C_{n\nu}A_{nj}^{-1},C_{j\nu}]=1.\]
Together, all these relations yield the presentation from above.
Generating set and presentation of \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\)
Now we want to deduce a presentation of \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). For this purpose, let \(H_{j}\) for \(1\leq j<n\) be the homeomorphism that performs the following half-twist on each \(\Gamma\)-translate of the disk \(D_{j,j+1}\), i.e. the disk \(D_{i,j+1}\) from Figure 4.3 for \(i=j\):
For each \(1\leq j<n\), let \(T_{\lambda}:=B_{1\lambda}\) and \(U_{\nu}:=C_{1\nu}\). As for the pure generators, we will use the names \(H_{j},T_{\lambda}\) and \(U_{\nu}\) as acronyms for the represented mapping classes.
**Lemma 4.20**.: _The group \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is generated by_
\[H_{j},T_{\lambda}\;\;\text{and}\;\;U_{\nu}\]
_with \(1\leq j<n\), \(1\leq j\leq L\) and \(1\leq\nu\leq N\)._
Proof.: Definition 4.1 yields a short exact sequence
\[1\to\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to \mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to\mathrm{S }_{n}\to 1.\]
Further, the elements \(H_{j}\) for \(1\leq j<n\) map to the generating set of adjacent transpositions in \(\mathrm{S}_{n}\). Hence, the elements \(H_{j}\) and the pure mapping classes generate \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Using Lemma 4.19, the subgroup \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is generated by
\[A_{ji},B_{k\lambda}\;\;\text{and}\;\;C_{k\nu}\]
with \(1\leq i<j\leq n,1\leq k\leq n,1\leq\lambda\leq L\) and \(1\leq\nu\leq N\). Moreover, we observe
\[H_{j-1}^{-1}...H_{i+1}^{-1}(D_{i,i+1}) =D_{i,j},\] \[H_{j-1}^{-1}...H_{1}^{-1}(D_{r\lambda,1}) =D_{r\lambda,j}\;\text{and}\] \[H_{k-1}^{-1}...H_{1}^{-1}(D_{c_{\nu},1}) =D_{c_{\nu},k},\]
see Figure 4.9. Using Lemma 4.17, this implies
\[A_{ji} =H_{j-1}^{-1}...H_{i+1}^{-1}H_{i}^{2}H_{i+1}...H_{j-1}, \tag{17}\] \[B_{k\lambda} =H_{k-1}^{-1}...H_{1}^{-1}T_{\lambda}H_{1}...H_{k-1}\text{ and}\] (18) \[C_{k\nu} =H_{k-1}^{-1}...H_{1}^{-1}U_{\nu}H_{1}...H_{k-1}. \tag{16}\]
Consequently, \(H_{j},T_{\lambda}\) and \(U_{\nu}\) with \(1\leq j<n,1\leq\lambda\leq L\) and \(1\leq\nu\leq N\) generate \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\).
Figure 4.8. The half-twist \(H_{j}\).
Subject to the above generating set, we will give a suitable set of relations that defines \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). For this purpose, we introduce the abbreviations
\[X_{j} :=A_{nj}=H_{n-1}^{-1}...H_{j+1}^{-1}H_{j}^{2}H_{j+1}...H_{n-1}, \tag{20}\] \[Y_{\lambda} :=B_{n\lambda}=H_{n-1}^{-1}...H_{1}^{-1}T_{t}H_{1}...H_{n-1}\text{ and}\] (21) \[Z_{\nu} :=C_{n\nu}=H_{n-1}^{-1}...H_{1}^{-1}U_{\nu}H_{1}...H_{n-1}. \tag{19}\]
We recall that these elements generate the image of \(\operatorname{Push}_{\operatorname{PMap}_{n}}^{orb}\), i.e. they generate a free subgroup \(F_{n-1+L+N}\) in \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\).
In the following, the relations \(H_{i}H_{i+1}H_{i}=H_{i+1}H_{i}H_{i+1}\) for \(1\leqslant i\leqslant n-2\) and \([H_{j},H_{k}]=1\) for \(1\leqslant j,k<n\) with \(|j-k|\geqslant 2\) are called _braid and commutator relations_ for \(H_{1},...,H_{n-1}\).
**Lemma 4.21**.: _The generators \(H_{1},...,H_{n-1},T_{1},...,T_{L},U_{1},...,U_{N}\) of \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) satisfy the following relations for \(2\leqslant j<n\), \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N\) with \(\mu<\nu\):_
1. _braid and commutator relations for the generators_ \(H_{1},...,H_{n-1}\)_,_
2. 1. \([T_{\lambda},H_{j}]=1\)_,_ 2. \([U_{\nu},H_{j}]=1\)_,_ 3. \([H_{1}T_{\lambda}H_{1},T_{\lambda}]=1\)_,_ 4. \([H_{1}U_{\nu}H_{1},U_{\nu}]=1\) _and_ 5. \([T_{\theta},B_{2\lambda}]=1\quad\text{for}\quad B_{2\lambda}=H_{1}^{-1}T_{ \lambda}H_{1}\)_,_ 6. \([U_{\mu},C_{2\nu}]=1\quad\text{for}\quad C_{2\nu}=H_{1}^{-1}U_{\nu}H_{1}\) _and_ 7. \([T_{\lambda},C_{2\nu}]=1\quad\text{for}\quad C_{2\nu}=H_{1}^{-1}U_{\nu}H_{1}\)_._
_In particular, these relations imply that_
\[\operatorname{Map}_{n-1}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L) \right)=\left\langle H_{1},...,H_{n-2},T_{\lambda},U_{\nu}\mid 1\leqslant \lambda\leqslant L,1\leqslant\nu\leqslant N\right\rangle\]
acts on \(F_{n-1+L+N}\) via conjugation. More precisely, for \(1\leq i,j,k<n\) with \(i<j<k\), \(1\leq\theta,\iota,\lambda\leq L\) with \(\theta<\iota<\lambda\) and \(1\leq\mu,\nu,o\leq N\) with \(\mu<\nu<o\), we have:_
\[H_{k}X_{j}H_{k}^{-1} =H_{k}^{-1}X_{j}H_{k}=X_{j} \text{for }k\leq n-2,\] \[H_{j}X_{j}H_{j}^{-1} =X_{j}^{-1}X_{j+1}X_{j} \text{for }j\leq n-2,\] \[H_{j-1}^{-1}X_{j}H_{j} =X_{j+1} \text{for }j\leq n-2,\] \[H_{j-1}X_{j}H_{j-1} =X_{j-1} \text{for }j<n,\] \[H_{j-1}^{-1}X_{j}H_{j-1} =X_{j}X_{j-1}X_{j}^{-1} \text{for }j<n,\] \[H_{i}X_{j}H_{i}^{-1} =H_{i}^{-1}X_{j}H_{i}=X_{j} \text{for }i\leq j-2,\] \[T_{\lambda}X_{j}T_{\lambda}^{-1} =T_{\lambda}^{-1}X_{j}T_{\lambda}=X_{j} \text{for }2\leq j,\] \[U_{\nu}X_{j}U_{\nu}^{-1} =U_{\nu}^{-1}X_{j}U_{\nu}=X_{j} \text{for }2\leq j,\] \[T_{\lambda}X_{1}T_{\lambda}^{-1} =Y_{\lambda}^{-1}X_{1}Y_{\lambda},\] \[T_{\lambda}^{-1}X_{1}T_{\lambda} =X_{1}Y_{\lambda}X_{1}Y_{\lambda}^{-1}X_{1}^{-1},\] \[U_{\nu}X_{1}U_{\nu}^{-1} =Z_{\nu}^{-1}X_{1}Z_{\nu},\] \[U_{\nu}^{-1}X_{1}U_{\nu} =X_{1}Z_{\nu}X_{1}Z_{\nu}^{-1}X_{1}^{-1},\] \[H_{j}Y_{\iota}H_{j}^{-1} =H_{j}^{-1}Y_{\iota}H_{j}=Y_{\iota} \text{for }j\leq n-2,\] \[T_{\theta}Y_{\iota}T_{\theta}^{-1} =T_{\theta}^{-1}Y_{\iota}T_{\theta}=Y_{\iota},\] \[T_{i}Y_{\iota}T_{\iota}^{-1} =Y_{\iota}^{-1}X_{1}^{-1}Y_{\iota}X_{1}Y_{\iota},\] \[T_{\iota}^{-1}Y_{\iota}T_{\iota} =X_{1}Y_{\iota}X_{1}^{-1},\] \[T_{\lambda}Y_{\iota}T_{\lambda}^{-1} =Y_{\lambda}^{-1}X_{1}^{-1}Y_{\lambda}X_{1}Y_{\lambda}X_{1}^{-1}Y_ {\lambda}^{-1}X_{1}Y_{\lambda},\] \[T_{\lambda}^{-1}Y_{\iota}T_{\lambda} =X_{1}Y_{\lambda}X_{1}^{-1}Y_{\lambda}^{-1}Y_{\iota}Y_{\lambda}X_ {1}Y_{\lambda}^{-1}X_{1}^{-1},\] \[U_{\nu}Y_{\iota}U_{\nu}^{-1} =Z_{\nu}^{-1}X_{1}^{-1}Z_{\nu}X_{1}Y_{\iota}X_{1}^{-1}Z_{\nu}^{-1} X_{1}Z_{\nu},\] \[U_{\nu}^{-1}Y_{\iota}U_{\nu} =X_{1}Z_{\nu}X_{1}^{-1}Z_{\nu}^{-1}Y_{\iota}Z_{\nu}X_{1}Z_{\nu}^{ -1}X_{1}^{-1},\] \[H_{j}Z_{\nu}H_{j}^{-1} =H_{j}^{-1}Z_{\nu}H_{j}=Z_{\nu} \text{for }j\leq n-2,\] \[T_{\lambda}Z_{\nu}T_{\lambda}^{-1} =T_{\lambda}^{-1}Z_{\nu}T_{\lambda}=Z_{\nu},\] \[U_{\mu}Z_{\nu}U_{\mu}^{-1} =U_{\mu}^{-1}Z_{\nu}U_{\mu}=Z_{\nu},\] \[U_{\nu}Z_{\nu}U_{\nu}^{-1} =Z_{\nu}^{-1}X_{1}^{-1}Z_{\nu}X_{1}Z_{\nu},\] \[U_{\nu}^{-1}Z_{\nu}U_{\nu} =X_{1}Z_{\nu}X_{1}^{-1},\] \[U_{o}Z_{\nu}U_{o}^{-1} =Z_{o}^{-1}X_{1}^{-1}Z_{o}X_{1}Z_{\nu}X_{1}^{-1}Z_{o}^{-1}X_{1}Z_ {o},\] \[U_{o}^{-1}Z_{\nu}U_{o} =X_{1}Z_{o}X_{1}^{-1}Z_{o}^{-1}Z_{\nu}Z_{o}X_{1}Z_{o}^{-1}X_{1}^ {-1}.\]
Proof.: The braid and commutator relations for \(H_{1},...,H_{n-1}\) in 4.19(1) follow as in the surface case.
The relations in 4.21(3) are reformulations of
\[A_{21}C_{2\nu}C_{1\nu}=C_{1\nu}A_{21}C_{2\nu}\text{ and }A_{21}B_{2\lambda}B_{1 \lambda}=B_{1\lambda}A_{21}C_{2\lambda}\]
from relation 4.19(4). Using the definitions of \(A_{21}\), \(B_{2\lambda}\) and \(C_{2\nu}\) from (16), (17) and (18), these relations are equivalent to
\[(H_{1}^{2})(H_{1}^{-1}U_{\nu}H_{1})U_{\nu} =U_{\nu}(H_{1}^{2})(H_{1}^{-1}U_{\nu}H_{1})\text{ and }\] \[(H_{1}^{2})(H_{1}^{-1}T_{\lambda}H_{1})T_{\lambda} =T_{\lambda}(H_{1}^{2})(H_{1}^{-1}T_{\lambda}H_{1}).\]
These are equivalent to \([U_{\nu},H_{1}U_{\nu}H_{1}]=1\) and \([T_{\lambda},H_{1}T_{\lambda}H_{1}]=1\).
The commutator relations 4.21(2) and 4.21(4) are a direct consequence of the fact that the commuting mapping classes can be realized by homeomorphisms with disjoint supports (see Figure 4.10).
For the proof that the above conjugation relations follow from the relations 4.21(1) to 4.21(4), we refer to [7, Lemma 5.24].
The above relations and the semidirect product structure together allow us to deduce a presentation of \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). The argument is motivated by the braid combing in [8, Proposition 3.1].
**Proposition 4.22**.: _For \(n\geqslant 1\), the group \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is presented by generators_
\[H_{1},...,H_{n-1},T_{1},...,T_{L},U_{1},...,U_{N}\]
_and defining relations for \(2\leqslant j<n\), \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N\) with \(\mu<\nu\):_
1. _braid and commutator relations for the generators_ \(H_{1},...,H_{n-1}\)_,_
2. 1. \([T_{\lambda},H_{j}]=1\)_,_ 2. \([U_{\nu},H_{j}]=1\)_,_ 3. \([H_{1}T_{\lambda}H_{1},T_{\lambda}]=1\)_,_ 4. \([H_{1}U_{\nu}H_{1},U_{\nu}]=1\) _and_ 5. \([T_{\theta},B_{2\lambda}]=1\quad\text{for}\quad B_{2\lambda}=H_{1}^{-1}T_{ \lambda}H_{1}\)_,_ 6. \([U_{\mu},C_{2\nu}]=1\quad\text{for}\quad C_{2\nu}=H_{1}^{-1}U_{\nu}H_{1}\) _and_ 7. \([T_{\theta},C_{2\nu}]=1\quad\text{for}\quad C_{2\nu}=H_{1}^{-1}U_{\nu}H_{1}\)_._
Proof.: For \(n=1\), the group presented above is free of rank \(L+N\). On the other hand, for \(n=1\), we have \(\operatorname{Map}_{1}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L) \right)=\operatorname{PMap}_{1}^{\operatorname{id},orb}\left(\Sigma_{\Gamma} (L)\right)\) and the latter group is free of rank \(L+N\) by Corollary 4.13.
For \(n\geqslant 2\), we suppose that \(\operatorname{Map}_{n-1}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has a presentation as claimed above. Further, we know from Lemma 4.20 that \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is generated by \(H_{1},..,H_{n-1},T_{1},...,T_{L},U_{1},...,U_{N}\). Due to Lemma 4.21, these generators satisfy the above relations. Hence, we have a surjective homomorphism \(\varphi\) from the group
Figure 4.10. Observation of the relations 4.21(2)-4.21(4) by consideration of the supporting disks.
with the above presentation onto \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). It remains to check that this homomorphism is also injective.
Let \(W=\sigma_{1}^{\varepsilon_{1}}...\sigma_{p}^{\varepsilon_{p}}\) with \(\sigma_{i}\in\left\{H_{j},T_{\lambda},U_{\nu}\mid 1\leqslant j<n,1\leqslant \lambda\leqslant L,1\leqslant\nu\leqslant N\right\}\) and \(\varepsilon_{i}\in\left\{\pm 1\right\}\) be a word in the kernel of the homomorphism \(\varphi\). Using that the word represents a pure mapping class, we can rewrite \(W\) as a word in letters \(A_{ji},B_{k\lambda}\) and \(C_{k\nu}\) for \(1\leqslant i,j,k\leqslant n,i<j\), \(1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\) using the abbreviations from (16) to (18), see [7, Theorem 3.15] for details on an analogous rewriting. In particular, this rewriting only uses the above relations. If we further use the abbreviations \(X_{j},Y_{\lambda}\) and \(Z_{\nu}\) as introduced in (19) to (21), then the conjugation relations from Lemma 4.21 allow us to rewrite \(W\) as a product
\[W_{1}(X_{1},...,X_{n-1},Y_{1},...,Y_{L},Z_{1},...,Z_{N})\cdot W_{2}(H_{1},...,H_{n-2},T_{1},...,T_{L},U_{1},...,U_{N}).\]
Since \(\varphi(W)=1\) is contained in the pure orbifold mapping class group, we can use the semidirect product structure
\[\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)=F_{n-1+L+ N}\rtimes\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\]
proven in Theorem 4.13. The letters used in \(W_{1}\) imply that \(\varphi(W_{1})\) is contained in the free group \(F_{n-1+L+N}\). In particular, \(\varphi(W_{1})\) is pure. Thus, the word \(\varphi(W_{2})\) is also pure and contained in \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). This implies that \(\varphi(W_{1})\cdot\varphi(W_{2})\) is the unique decomposition of \(\varphi(W)=1\) into the normal subgroup and quotient of the semidirect product, i.e. \(\varphi(W_{1})=1\) in \(F_{n-1+L+N}\) and \(\varphi(W_{2})=1\) in \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). The first equation directly implies \(W_{1}=1\). Using the induction hypothesis, we further obtain \(W_{2}=1\). Thus, \(\varphi\) is an isomorphism and \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has the above presentation.
|
2308.05190 | Doppler-Enhanced Quantum Magnetometry with thermal Rydberg atoms | We report experimental measurements showing how one can combine quantum
interference and thermal Doppler shifts at room temperature to detect weak
magnetic fields. We pump ${}^{87}$Rb atoms to a highly-excited, Rydberg level
using a probe and a coupling laser, leading to narrow transmission peaks of the
probe due to destructive interference of transition amplitudes, known as
Electromagnetically Induced Transparency (EIT). While it is customary in such
setups to use counterpropagating lasers to minimize the effect of Doppler
shifts, here we show, on the contrary, that one can harness Doppler shifts in a
copropagating arrangement to produce an enhanced response to a magnetic field.
In particular, we demonstrate an order-of-magnitude bigger splitting in the
transmission spectrum as compared to the counterpropagating case. We explain
and generalize our findings with theoretical modelling and simulations based on
a Lindblad master equation. Our results pave the way to using quantum effects
for magnetometry in readily deployable room-temperature platforms. | Shovan Kanti Barik, Silpa B S, M Venkat Ramana, Shovan Dutta, Sanjukta Roy | 2023-08-09T18:58:20Z | http://arxiv.org/abs/2308.05190v1 | # Doppler-Enhanced Quantum Magnetometry with thermal Rydberg atoms
###### Abstract
We report experimental measurements showing how one can combine quantum interference and thermal Doppler shifts at room temperature to detect weak magnetic fields. We pump \({}^{87}\)Rb atoms to a highly-excited, Rydberg level using a probe and a coupling laser, leading to narrow transmission peaks of the probe due to destructive interference of transition amplitudes, known as Electromagnetically Induced Transparency (EIT). While it is customary in such setups to use counterpropagating lasers to minimize the effect of Doppler shifts, here we show, on the contrary, that one can harness Doppler shifts in a copropagating arrangement to produce an enhanced response to a magnetic field. In particular, we demonstrate an order-of-magnitude bigger splitting in the transmission spectrum as compared to the counterpropagating case. We explain and generalize our findings with theoretical modeling and simulations based on a Lindblad master equation. Our results pave the way to using quantum effects for magnetometry in readily deployable room-temperature platforms.
## I Introduction
In recent years Rydberg atoms have opened up novel avenues and provided a powerful and versatile platform for exploring both fundamental problems of many-body physics through quantum simulation [1] and important technological applications [2] such as quantum information processing [3], photon-photon quantum gates [4], precision measurements using quantum sensing [5; 6], and quantum communication using cold Rydberg atoms as single-photon sources [7; 8].
Quantum sensors [9] utilize quantum properties such as interference, entanglement, or squeezed states to precisely measure a physical quantity. In particular, precision magnetometry has a wide variety of applications in archaeology, space explorations, geophysics, detection of brain activity, and detection of mineralisations. While cryogenically cooled superconducting devices set the bar for sensing ultraweak magnetic fields, room-temperature vapor-cell experiments are much more convenient to deploy for practical applications [10] due to the simplified experimental system and non-requirement of atom cooling or ultra-high vacuum. Here we use the quantum interference phenomenon of Electromagnetically Induced Transparency (EIT) [11], in conjunction with thermal Doppler shifts, to demonstrate enhanced response to magnetic fields of a room-temperature atomic vapor.
In a typical Rydberg EIT configuration a probe beam couples the ground state to an intermediate, excited state of finite lifetime, which in turn is coupled to a highly-excited, metastable Rydberg state by a coupling beam. At two-photon resonance the transition amplitudes from the ground and Rydberg levels to the intermediate level interfere destructively, leading to vanishing absorption of the probe beam over a narrow transparency window. EIT has important applications in slowing of light [12], quantum memory [13], light storage [14], and accurate, nondestructive mapping of Rydberg levels [15; 16].
At room temperature the motion of atoms leads to Doppler shifts for both the probe and the coupling lasers. When the two are counterpropagating and have the same frequency, these Doppler shifts cancel out in the two-photon resonance, and one obtains a narrow EIT signal. Generically, the frequencies are different and the cancellation is only partial [17]. However, it is still conventional to use the counterpropagating arrangement to minimize Doppler broadening. Magnetometry using this EIT configuration have been explored in Refs. [18; 19; 20; 21; 22].
In contrast, we demonstrate an order of magnitude enhancement of the magnetic-field response in a copropagating arrangement for various polarizations of the two lasers. Furthermore, we show that such an enhancement is obtained whenever the Zeeman shift of the intermediate level is larger (or smaller) than those of both the ground and the Rydberg levels, and the enhancement factor is controlled by the ratio \(\omega_{p}/\omega_{c}\) where \(\omega_{p(c)}\) is the probe (coupling) frequency. These findings have important applications toward quantum sensing of magnetic fields using thermal Rydberg atoms.
## II Experimental setup and methods
The schematic diagram of the experimental setup is shown in Fig. 1. For the Rydberg EIT we utilize three hyperfine manifolds \(\{5S_{1/2},F=2\}\rightarrow\{5P_{3/2},F^{\prime}=3\}\rightarrow\{35S_{1/2},F^ {\prime\prime}=2\}\) of \({}^{87}\)Rb, as shown in Fig. 2. The hyperfine states of the Rydberg level \(\{35S_{1/2},F^{\prime\prime}=2\}\) are mixed with the \(F^{\prime\prime}=1\) hyperfine sector at magnetic field \(B\gtrsim 1\) G, resulting in 8 sublevels. The probe beam is derived from an External Cavity Diode Laser (ECDL) (Toptica DL 100) and tuned to the D2 atomic transition \(\{5S_{1/2},F=2\}\rightarrow\{5P_{3/2},F^{\prime}=3\}\) at a wavelength of 780 nm. The coupling beam is derived from a tunable frequency-doubled laser (Toptica TA SHG pro) and tuned to the transition \(\{5P_{3/2},F^{\prime}=3\}\rightarrow\{35S_{1/2},F^{\prime\prime}=2\}\) at a wavelength of 480 nm by tuning the seed laser of the frequency-doubled laser to 960 nm. The frequencies of the seed laser and the frequency-doubled coupling
laser are monitored using a commercial wavelength meter based on the Fizeau interferometer (HighFinesse WS8-2), having an absolute accuracy of 2 MHz and a frequency resolution of 200 kHz. In the experiment the probe laser is scanned across the \(\{5S_{1/2},F=2\}\rightarrow\{5P_{3/2},F^{\prime}=3\}\) transition and the coupling laser is kept fixed at the \(\{5P_{3/2},F^{\prime}=3\}\rightarrow\{35S_{1/2},F^{\prime\prime}=2\}\) transition.
As shown in Fig. 1, the laser beam at 780 nm is split into two parts with a polarizing beam splitter (PBS 2), one used as a probe and the other as a reference beam. Both the probe and the reference beams with a Gaussian radius of 0.7 mm are collimated and aligned through a Rb vapor cell of length 75 mm. The coupling laser beam is divided into two parts using a polarizing beamsplitter (PBS 1), with one aligned counterpropagating with the probe and the other aligned copropagating with the probe. The Gaussian radius of these two beams are 1.2 mm and 0.7 mm, respectively. The reference beam is propagated through the vapor cell without overlapping with any of the coupling beams to provide a Doppler-broadened background spectrum. The absorption spectra of the probe and the reference beams are recorded on a homemade balanced photodiode, and the difference signal is obtained to record the Rydberg EIT signal on a flat background with a good signal-to-noise ratio.
A solenoid coil is used to generate magnetic field along its axis (along z-axis in Fig. 1). The magnetic field along the axis of the solenoid coil is measured with a Gauss-meter Model 460 3-channel from Lake Shore Cryptonics. The magnetic field gradient of the solenoid coil with current is measured as 33.616 Gauss/Amp. The Rb vapor cell is placed along the axis of the solenoid coil where the magnetic field is almost uniform across the length of the vapor cell. We vary the magnetic field from 0 to 10 Gauss. The measured standard deviation of magnetic field along the vapor cell is around 0.2 Gauss at 10 Gauss magnetic field. The vapor cell and the solenoid are enclosed in a \(\mu\)-metal magnetic shield to prevent influence of ambient stray magnetic fields on the Rydberg EIT signals. Three layers of \(\mu\)-metal sheets are used to reduce the ambient stray magnetic field below 0.05 Gauss.
## III Theoretical model
The absorption spectra result from the complex susceptibility at the probe frequency in the steady state arising from the optical drive and the spontaneous decay of the excited states, as sketched in Fig. 2. We obtain the steady state from a Lindblad master equation governing the density matrix of an atom, and average the resulting susceptibility over a thermal distribution.
The ground manifold consists of 5 Zeeman sublevels \(|m_{F}^{g}\rangle\) with energies \(\varepsilon(m_{F}^{g})=g_{F}^{g}m_{F}^{g}\mu_{B}B\), where \(B\) is the magnetic field, \(\mu_{B}\) is the Bohr magneton, \(g_{F}^{g}\) is the Lande-\(g\) factor, and \(m_{F}^{g}\in\{-2,\ldots,2\}\). Similarly, the excited manifold is split into 7 sublevels \(|m_{F}^{e}\rangle\) with energies \(\varepsilon(m_{F}^{e})=\varepsilon_{g,e}+g_{F}^{e}m_{F}^{e}\mu_{B}B\), where \(m_{F}^{e}\in\{-3,\ldots,3\}\) and \(\varepsilon_{g,e}\) is the energy from the ground level at \(B=0\). In the Rydberg manifold the splittings for \(B\sim 1\)G are large enough to mix the hyperfine levels \(F^{\prime\prime}=1\) and \(F^{\prime\prime}=2\), decoupling the nuclear spin \(I=3/2\) from the electronic
Figure 1: Schematic diagram of the experimental setup. Red beam represents the probe laser (780 nm) which is split into two parts by using a polarizing beamsplitter (PBS 2) and focused onto a homemade balanced detector after traveling through the vapor cell. Blue beam is divided into two coupling beams (480 nm) using a polarizing beamsplitter (PBS 1), which are then counterpropagated and overlapped with the probe beam using two dichoric mirrors (DMs). The vapor cell is placed along the axis of a solenoid coil and magnetically shielded by \(\mu\)-metal. M: mirror; PBS: polarizing beam splitter; HWP: half wave plate; QWP: quarter wave plate.
Figure 2: Energy-level diagram showing the ground manifold \(\{5S_{1/2},F=2\}\), the excited manifold \(\{5P_{3/2},F^{\prime}=3\}\), and the Rydberg manifold \(\{35S_{1/2},I=3/2\}\) of \({}^{87}\)Rb in the presence of a magnetic field. The red and blue arrows denote coherent pumping by the probe and coupling lasers, respectively, for \(\sigma^{\pm}\) polarizations. The dashed lines represent incoherent decay of the excited states back to the ground manifold.
angular momentum \(J=1/2\). This Paschen-Back effect gives 8 sublevels \(|m_{F}^{r},m_{J}^{r}\rangle\) with energies \(\varepsilon(m_{I}^{r},m_{J}^{r})=\varepsilon_{g,r}+A_{\rm hf}m_{I}^{r}m_{J}^{r}+ (g_{I}^{r}m_{I}^{r}+g_{J}^{r}m_{J}^{r})\mu_{B}B\), where \(\varepsilon_{g,r}\) is the reference from the ground level, \(A_{\rm hf}\) is the hyperfine interaction energy, \(m_{I}^{r}\in\{\pm 1/2,\pm 3/2\}\), and \(m_{J}^{r}\in\{\pm 1/2\}\). The Lande-\(g\) factors are related to the angular momentum quantum numbers, which give \(g_{F}^{g}=1/2\), \(g_{F}^{r}=2/3\), and \(g_{J}^{r}=2\). We also have \(A_{\rm hf}=2\pi\hbar\times 573\) kHz [23] and \(g_{I}^{r}=-0.001\)[24] from prior experimental measurements. Hence, the largest contribution to the Rydberg splitting comes from \(g_{J}^{r}\). Note the splittings are in MHz, whereas the gaps \(\varepsilon_{g,e}\) and \(\varepsilon_{g,r}\) are in PHz.
The coupling between the energy levels is given by the Hamiltonian \(-\hat{\bf d}.{\bf E}(t)\), where \(\hat{\bf d}\coloneqq-e\hat{\bf r}\) is the dipole operator and \({\bf E}(t)\) is the net electric field of the lasers. The latter can be expressed as \({\bf E}(t)={\rm Re}[{\mathbf{\mathcal{E}}}_{p}e^{-{\rm i}\omega_{p}t}+{\mathbf{ \mathcal{E}}}_{e}e^{-{\rm i}\omega_{e}t}]\), where \({\mathbf{\mathcal{E}}}_{p(c)}\) is the complex amplitude and \(\omega_{p(c)}\sim{\rm PHz}\) is the frequency of the probe (coupling) laser. To extract the time-averaged dynamics we move to a rotating frame with the transformation \(\hat{U}=e^{{\rm i}\omega_{p}t\hat{P}_{c}+(\omega_{p}+\omega_{c})t\hat{P}_{c}}\), where \(\hat{P}_{e(r)}\) projects onto the excited (Rydberg) manifold. The transformed Hamiltonian \(\hat{H}_{\circ}\) is given by \(\hat{U}({\rm i}\partial_{t}-\hat{H})\hat{U}^{\dagger}={\rm i}\partial_{t}- \hat{H}_{\circ}\). Averaging out the oscillating terms in \(\hat{H}_{\circ}\) yields a time-independent Hamiltonian with the rescaled energies \(\varepsilon_{\circ}(m_{F}^{g})=\varepsilon(m_{F}^{g})\), \(\varepsilon_{\circ}(m_{F}^{e})=g_{F}^{e}m_{F}^{e}\mu_{B}B-\Delta_{p}\), and \(\varepsilon_{\circ}(m_{I}^{r},m_{J}^{r})=(g_{I}^{r}m_{F}^{r}+g_{J}^{r}m_{J}^{r })\mu_{B}B+A_{\rm hf}(m_{I}^{r}m_{J}^{r}-3/4)-\Delta_{p}-\Delta_{c}\), where \(\Delta_{p}=\hbar\omega_{p}-\varepsilon_{g,e}\) is the detuning of the probe laser and \(\Delta_{c}\coloneqq\hbar\omega_{c}+\varepsilon_{g,e}-\varepsilon_{g,r}-(3/4)A_ {\rm hf}\) is the detuning of the coupling laser, defined with respect to the \(F^{\prime\prime}=2\) Rydberg level at \(B=0\). The Rabi frequencies are given by \(\Omega(m_{F}^{g},m_{F}^{e})=-\langle m_{F}^{e}|\hat{\bf d}.{\mathbf{\mathcal{E}}}_ {p}|m_{F}^{g}\rangle\) and \(\Omega(m_{F}^{e},m_{I}^{r},m_{J}^{r})=-\langle\{m_{I}^{r},m_{J}^{r}\}|\hat{ \bf d}.{\mathbf{\mathcal{E}}}_{c}|m_{F}^{e}\rangle\), which lead to selection rules depending on the polarization of \({\mathbf{\mathcal{E}}}_{p}\) and \({\mathbf{\mathcal{E}}}_{c}\). In particular, for circular polarization \(\sigma^{\pm}\) perpendicular to \({\bf B}\), \(m_{F}\) changes only by \(\pm 1\) (Fig. 2).
The optical drive competes with the spontaneous decay of the excited states back to the ground manifold, which occurs over a lifetime of \(\sim 0.1\)\(\mu\)s. Radiative decay can happen through all polarization channels, so \(m_{F}\) changes by \(0\) or \(\pm 1\) with different branching ratios (Fig. 2). These individual decay rates as well as the dipole matrix elements entering the Rabi frequencies are independent of the drive parameters and are calculated using the ARC library [25]. Note that the Rydberg levels have a long lifetime (\(\sim 20\)\(\mu\)s [26]) which does not significantly alter the steady state.
Under the usual Born-Markov approximation [27] the driven-dissipative dynamics are governed by a Lindblad master equation
\[{\rm d}\hat{\rho}_{\circ}/{\rm d}t={\rm i}[\hat{\rho}_{\circ},\hat{H}_{\circ} /\hbar]+\sum\nolimits_{k}\gamma_{k}\big{(}\hat{L}_{k}\hat{\rho}_{\circ}\hat{L} _{k}^{\dagger}-\{\hat{L}_{k}^{\dagger}\hat{L}_{k},\hat{\rho}_{\circ}\}/2\big{)},\]
where \(\hat{\rho}_{\circ}\) is the density matrix in the rotating frame and \(\hat{L}_{k}\) describe the transitions \(|m_{F}^{g}\rangle\langle m_{F}^{e}|\) with corresponding decay rates \(\gamma_{k}\). We find a unique steady state for \(\hat{\rho}_{\circ}\) using exact diagonalization. Moving back to the lab frame, \(\hat{\rho}=\hat{U}^{\dagger}(t)\hat{\rho}_{\circ}\hat{U}(t)\), one finds that the coherences between ground and excited (excited and Rydberg) manifold oscillate at frequency \(\omega_{p}\) (\(\omega_{c}\)).
The probe absorption originates from a complex refractive index \(n(\omega_{p})=\sqrt{1+\chi(\omega_{p})}\)[28], where the susceptibility \(\chi\) is defined in terms of the dipole moment per unit volume, \(N\langle\hat{\bf d}\rangle=\varepsilon_{0}\chi{\bf E}\). Here \(N\) is the atom number density and \(\varepsilon_{0}\) is the permittivity of free space. Writing \(\langle\hat{\bf d}\rangle={\rm Tr}[\hat{\bf d}\hat{\rho}(t)]\) and comparing Fourier coefficients yield \(\chi(\omega_{p})=[2N/(\varepsilon_{0}{\mathbf{\mathcal{E}}}_{p})]\ {\rm Tr} \big{(}\hat{P}_{g}\hat{d}_{p}\hat{P}_{e}\hat{\rho}_{\circ}\big{)}\), where \(\hat{d}_{p}\) is the component of \({\rm d}\) along the probe field, \({\bf d}.{\mathbf{\mathcal{E}}}_{p}=d_{p}{\mathbf{\mathcal{E}}}_{p}\), and \(\hat{P}_{g}\) projects onto the ground manifold.
In the absence of Doppler shift, the zero-temperature susceptibility \(\chi_{0}\) does not depend on the directions of the probe or the coupling lasers. At room temperature, however, one has to average \(\hat{\rho}\) over a thermal distribution of atoms. An atom moving towards the probe at speed \(v\) sees it blue shifted by \(\omega_{p}v/c\). For the same atom the coupling frequency is red (blue) shifted by \(\omega_{c}v/c\) if it is counterpropagating (copropagating). Hence, for the two cases the thermal susceptibility is given by
\[\chi=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}{\rm d}u\ e^{-u^{2}}\chi_{0} \big{(}\Delta_{p}+\Delta_{p}^{T}u,\Delta_{c}\mp\Delta_{c}^{T}u\big{)}\, \tag{1}\]
with \(u=v/v_{T}\), \(\Delta_{p(c)}^{T}\coloneqq\omega_{p(c)}v_{T}/c\), and \(v_{T}=\sqrt{2k_{B}T/m}\), \(m\) being the mass of a \({}^{87}\)Rb atom and \(k_{B}\) the Boltzmann constant. At room temperature \(T=300\) K the Doppler shifts \(\Delta_{p(c)}^{T}\) are hundreds of MHz, which strongly rescale the susceptibility. However, since \(v_{T}/c\sim 10^{-6}\) relativistic Doppler shifts are negligible.
To clearly observe the transmission peaks in the experiment, we subtract out a broad reference signal resulting from only the probe beam (see Fig. 1). This corresponds to the difference susceptibility \(\bar{\chi}\coloneqq\chi^{g,e}-\chi\), where \(\chi^{g,e}\) is the susceptibility without the Rydberg levels. One can obtain \(\bar{\chi}\) by thermally averaging its zero-temperature counterpart \(\bar{\chi}_{0}\coloneqq\chi_{0}^{g,e}-\chi_{0}\) following Eq. (1). Crucially, as a function of the probe detuning \(\chi^{g,e}\) exhibits a single peak whose width is set by \(\Delta_{p}^{T}\sim 300\) MHz, whereas the linewidths in \(\bar{\chi}\) are limited by the excited-state lifetime (tens of MHz). Thus, employing the EIT scheme allows us to resolve magnetic fields that are orders of magnitude smaller than what would be possible otherwise.
## IV Results and discussion
The simplest scenario that gives EIT is when the two lasers have opposite circular polarizations, e.g., probe \(\sigma^{+}\) and coupling \(\sigma^{-}\). Then, as sketched in Fig. 2, the drive and loss bring the atoms to the manifold of \(|m_{F}^{g}=2\rangle\), \(|m_{F}^{g}=3\rangle\), and \(|m_{I}^{r}=3/2,m_{J}^{r}=1/2\rangle\), so the steady state reduces to that of a 3-level EIT [11]. In particular, Im \(\chi_{0}\) drops to zero at two-photon resonance, signaling a sharp transparency window [Fig. 3(a)]. Away from resonance \(\chi_{0}\) matches the Lorentzian profile of Im \(\chi_{0}^{g,e}\) obtained without the coupling laser. Hence, the difference \(\bar{\chi}_{0}\) has an imaginary part that is peaked at
photon resonance, i.e., \(|\Delta_{p}+\Delta_{c}+\big{(}\Delta_{p}^{T}\mp\Delta_{c}^{T}\big{)}u^{*}|<w_{ \text{EIT}}\) [see Eq. (1)], where \(w_{\text{EIT}}\) is the EIT window. For detunings of order tens of MHz, \(u^{*}<1\) and the integral can be approximated as
\[\text{Im }\bar{\chi}\approx\frac{A}{\omega_{c}\mp\omega_{p}}\text{ Im }\bar{\chi}_{0}^{g,e}\bigg{(}\frac{\omega_{c}\Delta_{p}\pm\omega_{p}\Delta_{c}}{ \omega_{c}\mp\omega_{p}}\bigg{)}\,, \tag{2}\]
where \(A\) is a slowly-varying function of \(\Delta_{p}\) and \(\Delta_{c}\). Here we assume \(|\omega_{c}-\omega_{p}|\gg\omega_{p}w_{\text{EIT}}/w_{g,e}\), where \(w_{g,e}\) is the two-level absorption window (width of \(\delta_{c}^{g,e}\)), set by the lifetime of the excited state. For sharp EIT \(w_{\text{EIT}}\ll w_{g,e}\), the assumption is usually well justified. From Eq. (2) the transmission peak occurs at \(\Delta_{p}=\mp(\omega_{p}/\omega_{c})\Delta_{c}\) for the counter- and copropagating cases, respectively, with linewidths \((1\mp\omega_{p}/\omega_{c})w_{g,e}\) and heights \(\propto 1/(\omega_{c}\mp\omega_{p})\). So the peak is narrower and taller for counterpropagating lasers [see Fig. 3(b)]. This can be understood by noting that a broader range of velocities contribute to the signal in this configuration [Figs. 3(c, d)], but this range falls off rapidly as one deviates from \(\Delta_{p}=-(\omega_{p}/\omega_{c})\Delta_{c}\).
Figure 4 shows a side-by-side comparison of experimental measurements and numerical simulations with \(B=0\) for the case where both lasers are linearly polarized. Here the physics does not reduce to three levels as either laser can induce \(\sigma^{+}\) or \(\sigma^{-}\) transitions (see Fig. 2). However, as the sublevels in each manifold are degenerate, the transmission spectrum is similar: In both theory and experiment we observe a narrow peak at \(\Delta_{p}=-(\omega_{p}/\omega_{c})\Delta_{c}\) for counterpropagating lasers and a relatively broad peak at \(\Delta_{p}=(\omega_{p}/\omega_{c})\Delta_{c}\) for copropagating lasers. This is seen for a wide range of blue and red detuning.
The main difference between the measurements and simulations is that the relative signal height for the copropagating case is smaller in the simulations. (Note that in the experiment the power of the coupling laser was adjusted to match the heights for the two configurations.) This discrepancy originates from the fact that as the coupling power is reduced the experimental signals go to zero but the steady-state susceptibilities do not (see Fig. 10 in Appendix B). This is because the relaxation time diverges and effects such as dephasing due to laser linewidths and stochastic resetting due to atoms leaving the probe beam come into play [22]. Note, however, these do not affect the locations or widths of the peaks.
The response to a magnetic field \(B\) is again easiest to understand for the \(\sigma^{+}\)-\(\sigma^{-}\) configuration. Here, \(B\) shifts the ground level \(|m_{F}^{g}=2\rangle\) by \(\Delta_{g}=g_{F}^{g}m_{F}^{g}\mu_{B}B=\mu_{B}B\), the excited level \(|m_{F}^{g}=3\rangle\) by \(\Delta_{e}=g_{F}^{g}m_{F}^{g}\mu_{B}B=2\mu_{B}B\)
Figure 3: (a) Imaginary part of the zero-temperature, steady-state susceptibility \(\chi_{0}\) as a function of the probe detuning \(\Delta_{p}\) for magnetic field \(B=0\), coupling detuning \(\Delta_{c}=2\) MHz, and \(\sigma^{+}\)-\(\sigma^{-}\) polarizations of the two lasers. The dotted curve corresponds to the susceptibility without the coupling beam. The vertical line shows two-photon resonance condition \(\Delta_{p}=-\Delta_{c}\). Here \(\chi_{\text{ref}}=2Ne_{0}/(\varepsilon_{0}\varepsilon_{p})\approx 2\times 10^{-4}\). (b) Thermally averaged difference susceptibility, \(\bar{\chi}=\chi^{g,e}-\chi\), for counter- and copropagating lasers for the same setup with \(\Delta_{c}=-51\) MHz. Vertical lines show the peak locations \(\Delta_{p}=\mp\Delta^{*}\) where \(\Delta^{*}\coloneqq(\omega_{p}/\omega_{c})\Delta_{c}\) and \(\omega_{p(c)}\) is the probe coupling) frequency. (c) Zero-temperature difference susceptibility \(\bar{\chi}_{0}\) showing a narrow EIT window. Green and blue lines represent thermal averaging for counter- and copropagating peaks, \(\Delta_{p}=\mp\Delta^{*}\). (d) Contribution of different velocities to the susceptibility, \(\chi_{0}(v)\coloneqq\chi_{0}(\Delta_{p}+\omega_{p}v/c,\Delta_{c}\mp\omega_{c}v/c)\), for \(\Delta_{c}=-10\) MHz at \(\Delta_{p}=\mp\Delta^{*}\) (solid lines) and at \(\Delta_{p}=\mp\Delta^{*}/2\) (dashed lines).
Figure 4: Measurements (left) and simulations (right) for different values of \(\Delta_{c}\) and \(B=0\) when the lasers are both linearly polarized along \(x\) (perpendicular to \(\mathbf{B}\)). Green and blue curves show the counter- and copropagating cases, respectively. Vertical lines show the corresponding peak locations at \(\Delta_{p}=\mp(\omega_{p}/\omega_{c})\Delta_{c}\). For clarity the numerical susceptibility for the copropagating case is multiplied by a factor of 3 (see text). Note, all parameter values are listed in Appendix C.
and the Rydberg level \(|m_{I}^{r}=3/2,m_{J}^{r}=1/2\rangle\) by \(\Delta_{r}\approx g_{J}^{r}m_{J}^{r}\mu_{B}B=\mu_{B}B\) (Fig. 2). Hence, the effective detuning of the probe is \(\Delta_{p}^{\text{eff}}=\Delta_{p}+\Delta_{g}-\Delta_{e}=\Delta_{p}-\mu_{B}B\) and that of the coupling is \(\Delta_{e}^{\text{eff}}=\Delta_{c}+\Delta_{e}-\Delta_{r}\approx\Delta_{c}+ \mu_{B}B\). The condition for a transmission peak is the same as before, namely, \(\Delta_{p}^{\text{eff}}=\mp(\omega_{p}/\omega_{c})\Delta_{e}^{\text{eff}}\), which gives
\[\Delta_{p}(B)\approx\mp(\omega_{p}/\omega_{c})\Delta_{c}+(1\mp\omega_{p}/ \omega_{c})\mu_{B}B. \tag{3}\]
Therefore, as \(B\) is varied the peak for the copropagating case moves faster by a factor \((\omega_{c}+\omega_{p})/(\omega_{c}-\omega_{p})\approx 4.2\). The same is true for a \(\sigma^{-}\)-\(\sigma^{+}\) configuration with \(B\) replaced by \(-B\). This enhanced response is shown in Fig. 5. Note that, in general, the peak varies linearly with a slope
\[\frac{\partial\Delta_{p}}{\partial(\mu_{B}B)}\approx g_{F}^{e}m_{F}^{e}-g_{F}^ {g}m_{F}^{g}\mp\frac{\omega_{p}}{\omega_{c}}(g_{F}^{e}m_{F}^{e}-g_{J}^{r}m_{J} ^{r}). \tag{4}\]
Thus, copropagating lasers give an advantage whenever the Zeeman shift of the excited level is higher (or lower) than those of both the ground and the Rydberg levels.
When the lasers are linearly polarized, different paths connecting the ground and Rydberg manifolds acquire different Zeeman shifts (see Fig. 2), which splits the corresponding two-photon resonances, producing multiple
Figure 5: Top: Measurements showing the variation of the EIT signal with \(B\) for \(\sigma^{-}\)-\(\sigma^{+}\) polarizations and \(\Delta_{c}=-36\) MHz for (a) copropagating and (b) counterpropagating lasers. Vertical lines show the predicted peak locations at \(\Delta_{p}(-B)\) given by Eq. (3). Bottom: Displacement of the fitted peaks with \(B\). Solid lines show theoretical predictions.
Figure 7: Top: Measurements (left) and simulations (right) for increasing \(B\) with \(\Delta_{c}=-36\) MHz and \(x\)-\(x\) polarizations. As before, green and blue curves stand for counter- and copropagating cases. Vertical lines show \(\Delta_{p}(\pm B)\) from Eq. (3). Measurements are averaged over \(x\)-\(x\) and \(y\)-\(y\) polarizations to correct for any misalignment of lasers. As in Fig. 4, \(\bar{\chi}\) for the copropagating case is enlarged by a factor of 3. Bottom: Fitted peak separation as a function of \(B\), showing an enhanced response for copropagating lasers. Solid lines are numerical fits and dashed lines are predicted using Eq. (3).
transmission peaks at zero temperature. This is shown in Fig. 6(a), where we find numerically that Im \(\chi_{0}\) drops to zero whenever the resonant levels satisfy \(m_{I}^{r}+m_{J}^{r}=m_{P}^{g}\) and the lasers have identical polarization. Upon thermal averaging, the difference susceptibility \(\bar{\chi}\) generically has a two-peaked Doppler-broadened profile as in Fig. 6(b). For polarizations perpendicular to the magnetic field, as in our experimental setup, the response is dominated by extreme \(m_{F}\) values corresponding to the \(\sigma^{+}\)-\(\sigma^{-}\) and \(\sigma^{-}\)-\(\sigma^{+}\) configurations. Hence, from Eq. (3) the peak separation is given by \(\Delta_{pp}\approx 2(1\mp\omega_{p}/\omega_{c})\mu_{B}B\) for counter- and copropagating cases, respectively.
Figure 7 shows experimental measurements which corroborate these predictions. We observe an order of magnitude bigger splitting in the transmission spectrum for copropagating lasers. The peak separations, plotted in the lower panel, match numerical simulations and are indeed magnified by the factor \((\omega_{c}+\omega_{p})/(\omega_{c}-\omega_{p})\approx 4.2\) relative to the counterpropagating setup. We attribute the slight mismatch between the predicted and measured peak locations for the counterpropagating case to a slow frquency drift of the lasers. In Appendix A we present simulations that show for other polarizations the spectrum can also exhibit multiple peaks.
## V Conclusion and outlook
We have performed quantum magnetometry with thermal \({}^{87}\)Rb atoms using a Rydberg EIT scheme. We have demonstrated that, contrary to the prevailing custom of using counterpropagating lasers to mitigate Doppler effects, one can harness such effects in a copropagating arrangement to obtain a greatly enhanced response to magnetic fields. The enhancement depends on the ratio of the laser frequencies, which can be varied by targeting different energy levels. Our measurements are in good agreement with theoretical modeling based on a Markovian master equation, and pave the way to precision magnetometry in room-temperature setups.
For simplicity we have focused exclusively on a one-dimensional geometry. It would be interesting to see if varying the relative orientations of the magnetic field and the probe and coupling beams produces more dramatic effects. A limitation of the current approach is that the transmission peaks are broader for the copropagating case, which limits the resolution. Thus, it would be very useful to explore if there are ways to control the linewidth independently of the peak position. Our work provides strong motivations for these developments which can lead to important applications in quantum sensing with Rydberg atoms and [29] and possibly magnetometry beyond the standard quantum limit [30].
###### Acknowledgements.
S.R. acknowledges funding from the Department of Science and Technology, India, via the WOS-A project grant no. SR/WOS-A/PM-59/2019. We acknowledge the contribution of Meena M. S. and the RRI mechanical workshop for their assistance with the experiments. S.D. acknowledges the hospitality of the Max Planck Institute for the Physics of Complex Systems, where part of the calculations was done. S.K.B. acknowledges the funding from the I-HUB Quantum Technology Foundation via the SPIKE Project grant no. I-HUB/SPIKE/2023-24/004.
## Appendix A Simulations for other polarizations of the probe and the coupling lasers
In Fig. 8 we show the response for the \(\sigma^{+}\)-\(x\) and \(x\)-\(\sigma^{+}\) polarizations. For the former the steady state is almost identical to that of the \(\sigma^{+}\)-\(\sigma^{-}\) case discussed in the main text. Hence, both counter- and copropagating configurations exhibit a single peak given by Eq. (3). For the latter the steady state is close to that of the \(\sigma^{-}\)-\(\sigma^{+}\) case, so we see a single dominant peak in both configurations; however, we can also discern a satellite peak for the counterpropagating case resulting from other two-photon resonances. Note that such a peak was observed in the experiment for the \(\sigma^{-}\)-\(\sigma^{+}\) setting [see Fig. 5(b)], which we attribute to an imperfect alignment of the lasers.
Figure 9 shows when the probe is \(\pi\) polarized (parallel to the magnetic field) and the coupling laser is linearly polarized, the counterpropagating arrangement produces multiple peaks whereas the copropagating case exhibits
only two shallow peaks. Now the dominant contribution comes from paths with \(m_{F}^{a}=0\), \(m_{F}^{e}=0\) and \(m_{J}^{r}=\pm 1/2\) (cf. Fig. 2), hence from Eq. (4) the central peak separation is comparable in the two cases. This example illustrates the response can be sensitive to the polarizations.
## Appendix B Signal strength vs coupling laser power
Figure 10 shows how the EIT signal strength changes as a function of the coupling laser power (\(P_{c}\)), which enters the Rabi frequencies between the excited and the Rydberg levels. While the experimental signal vanishes for \(P_{c}\to 0\) in both counter- and copropagating configurations, the numerically computed steady-state susceptibilities remain largely unaltered. As discussed in the main text, this mismatch arises from a diverging relaxation time at weak coupling, where slow processes such as dephasing and stochastic resetting become significant.
## Appendix C Parameter values used for measurements and numerical simulations
The power and beam radius of the probe laser were set to 0.015 mW and 699 \(\mu\)m, respectively. For the coupling laser we used a beam radius of 731.5 \(\mu\)m for copropagating arrangements and 1227 \(\mu\)m for counterpropagating arrangements as well as for the zero-temperature calculations in Figs. 3 and 6. Table 1 lists the values of the coupling laser power used for the different cases.
|
2307.15506 | Improving image quality of sparse-view lung tumor CT images with U-Net | Background: We aimed at improving image quality (IQ) of sparse-view computed
tomography (CT) images using a U-Net for lung metastasis detection and
determining the best tradeoff between number of views, IQ, and diagnostic
confidence.
Methods: CT images from 41 subjects aged 62.8 $\pm$ 10.6 years (mean $\pm$
standard deviation), 23 men, 34 with lung metastasis, 7 healthy, were
retrospectively selected (2016-2018) and forward projected onto 2,048-view
sinograms. Six corresponding sparse-view CT data subsets at varying levels of
undersampling were reconstructed from sinograms using filtered backprojection
with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and
evaluated for each subsampling level on 8,658 images from 22 diseased subjects.
A representative image per scan was selected from 19 subjects (12 diseased, 7
healthy) for a single-blinded multireader study. These slices, for all levels
of subsampling, with and without U-Net postprocessing, were presented to three
readers. IQ and diagnostic confidence were ranked using predefined scales.
Subjective nodule segmentation was evaluated using sensitivity and Dice
similarity coefficient (DSC); clustered Wilcoxon signed-rank test was used.
Results: The 64-projection sparse-view images resulted in 0.89 sensitivity
and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had
improved metrics (0.94 sensitivity and 0.85 DSC) (p = 0.400). Fewer views led
to insufficient IQ for diagnosis. For increased views, no substantial
discrepancies were noted between sparse-view and postprocessed images.
Conclusions: Projection views can be reduced from 2,048 to 64 while
maintaining IQ and the confidence of the radiologists on a satisfactory level. | Annika Ries, Tina Dorosti, Johannes Thalhammer, Daniel Sasse, Andreas Sauter, Felix Meurer, Ashley Benne, Tobias Lasser, Franz Pfeiffer, Florian Schaff, Daniela Pfeiffer | 2023-07-28T12:03:55Z | http://arxiv.org/abs/2307.15506v4 | # Improving Image Quality of Sparse-view Lung Cancer CT Images with a Convolutional Neural Network
###### Abstract
Artifacts in sparse-view lung cancer computed tomography images were corrected with a U-Net model, and the best trade-off between the number of projection views, image quality, and diagnostic confidence was determined to be 64 views in a reader study.
Keywords:Artifact Correction Lung Cancer Post-processing Sparse-view CT Dual-frame U-Net
## 1 Summary statement:
Artifacts in sparse-view lung cancer computed tomography images were corrected with a U-Net model, and the best trade-off between the number of projection views, image quality, and diagnostic confidence was determined to be 64 views in a reader study.
## 2 Key points:
1. The results of our reader study suggest that with a post-processing method by the dual-frame U-Net, the number of views used for CT reconstruction can be reduced from 2048 to 64 while maintaining diagnostically accurate image quality for lung nodule detection (sensitivity = 0.94).
2. Dice Similarity Coefficients show that lung nodule markings by the readers did not significantly improve for the images post-processed by the dual-frame U-Net compared to the unprocessed sparse-view images.
3. Correcting sparse-view artifacts in CT scans with the dual-frame U-Net drastically increased the readers' confidence in the diagnosis for the detection of lung nodules.
**Abbreviations:** Confidence Interval (CI), Convolutional Neural Network (CNN), Computed Tomography (CT), Dice Similarity Coefficient (DSC), Mean Squared Error (MSE), Sensitivity (Se), Specificity (Sp)
Artifact Correction Lung Cancer Post-processing Sparse-view CT Dual-frame U-Net
###### Abstract
**Purpose:** To improve the image quality of sparse-view computed tomography (CT) images with a U-Net for lung cancer detection and to determine the best trade-off between number of views, image quality, and diagnostic confidence.
**Methods:** CT images from 41 subjects (34 with lung cancer, seven healthy) were retrospectively selected (\(01.2016-12.2018\)) and forward projected onto 2048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views, respectively. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, seven healthy) for a single-blinded reader study. The selected slices, for all levels of subsampling, with and without post-processing by the U-Net model, were presented to three readers. Image quality and diagnostic confidence were ranked using pre-defined scales. Subjective nodule segmentation was evaluated utilizing sensitivity (Se) and Dice Similarity Coefficient (DSC) with 95% confidence intervals (CI).
**Results:** The 64-projection sparse-view images resulted in \(\text{Se}=0.89\) and \(\text{DSC}=0.81[0.75,0.86]\) while their counterparts, post-processed with the U-Net, had improved metrics (\(\text{Se}=0.94\), DSC \(=0.85[0.82,0.87]\)). Fewer views lead to insufficient quality for diagnostic purposes. For increased views, no substantial discrepancies were noted between the sparse-view and post-processed images.
**Conclusion:** Projection views can be reduced from 2048 to 64 while maintaining image quality and the confidence of the radiologists on a satisfactory level.
## 1 Introduction
Lung cancer maintains the highest mortality rate for malignancies around the globe, with more than 2.2 million new cases recorded worldwide in 2020 [1][2]. More than half of all lung cancer diagnoses present as symptomatic once the patient has reached a progressive stage [3]. Regular screenings enable early detection and thereby increase survival rates [3][4].
X-ray computed tomography (CT) is considered standard practice in present-day medicine for diagnosing lung nodules [4][5][6], yet it comes at the cost of radiation exposure [7][8]. To make regular screenings possible, a trade-off between dose and image quality must be found [4]. Sparse-view CT is a technique for dose reduction. However, this technique leads to a degradation of image quality due to distinct streak artifacts caused by a limited number of projection views in the reconstruction process [9][10].
Machine learning approaches have shown promising results for sparse-view artifact correction [9][10][11][12][13][14]. Specifically, residual learning has delivered superior results compared to the direct approach [11][12]. The goal of the network in residual learning is to estimate the difference between sparse-view and full-view images. In a direct approach, the network aims to predict the artifact-free image. The simpler topological structure of residual images allows for more efficient learning [12]. A popular network architecture for such artifact-correction tasks is the U-Net [15]. With a large receptive field, the model is capable of handling global artifacts such as the given sparse-view streak artifacts [11][12]. The dual-frame U-Net was proposed as a more robust variant of the standard U-Net for the task at hand [13].
We hypothesize that by post-processing sparse-view lung cancer CT images with the dual-frame U-Net, the image quality can be substantially improved, allowing radiologists to diagnose lung tumor nodules at greatly reduced radiation exposure. In this pilot study, we assess the performance of the given architecture on correcting for streak artifacts present in sparse-view lung cancer CT scans. An image reconstructed from 2048 views, later referred to as a full-view image, was taken to calculate the residual image. Six levels of subsampled input images were reconstructed from 16, 32, 64, 128, 256, and 512 views, respectively. By conducting a reader study with the unprocessed sparse-view images and their U-Net post-processed counterpart images, we aim to find the best trade-off between the number of views, image quality, and confidence of the participating radiologists on their diagnosis.
## 2 Methods
### Dataset
The dataset consisted of 8,677 images from 41 subjects (seven healthy, 34 suffering from lung cancer). Approval from an ethics committee and patient informed consent were obtained. All data was selected retrospectively (\(01.2016-12.2018\)) and anonymized. The data selection flowchart is given in Figure 1. Independent datasets were utilized for model assessment and the reader study. Additional 9,481 images from the Luna16 external dataset were utilized for testing the model's robustness [16][17]. Table 1 shows the subject demographics for the internal datasets.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & \multicolumn{3}{c}{Model assessment} & \multicolumn{3}{c}{Reader study} \\ \hline Parameter \(\backslash\)subset & Train & Validation & Test & Healthy & Diseased \\ \hline \hline Male & 5 & 2 & 4 & 5 & 7 \\ Female & 7 & **0** & 4 & 2 & 5 \\ Age 1 (years) & 65.8 [37, 79] & 77.0 [73, 81] & 61.9 [46, 83] & 44.3 [28, 70] & 64.8 [44, 79] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Subject Demographics for Internal Datasets (n = 41)
Figure 1: Flowchart of the data selection process. A total of 8,658 computed tomography (CT) images of lung cancer subjects were retrospectively selected in our clinic for training and testing of the U-Net model. For the reader study, 19 images were selected, corresponding to one image per subject for seven healthy subjects and 12 subjects suffering from lung cancer. The robustness of the model was further tested with 9,481 lung CT images from the Luna16 external public dataset [16][17].
### Data Preparation
The CT images were forward projected onto 2048-view sinograms. Sparse-view CT data subsets at varying levels of undersampling were generated using the filtered backprojection algorithm with 16, 32, 64, 128, 256, and 512 views, respectively. The full-view data was generated using 2048 views. All operations were performed using the Astra toolbox (2.1.1) [18][19][20]. Images were of size 512x512 pixels. The intensity values of all images were clipped to the lung window, [-1450, 250] HU, and normalized between zero and one.
22 of the diseased subjects were split on CT scan level into train (n=12, images=4,723), validation (n=2, images=787), and test sets (n=8, images=3,148). The residual ground truth label images were calculated as the difference between the full-view and the sparse-view images for each projection view. The final post-processed image was the pure-artifact U-Net prediction subtracted from the sparse-view input.
### Network Architecture
The dual-frame U-Net was utilized, as depicted in Figure 2. The contracting path consists of four subsequently applied encoder blocks, each with two convolution layers (3x3 kernels, followed by batch normalization and a rectified linear unit activation). A 2x2 max pooling layer is applied after each encoder block. Following the two convolution layers in the bottleneck, the features are upsampled with four subsequently applied decoder blocks mirroring the contracting path, followed by a 2x2 unpooling before each decoder block. The dual-frame U-Net introduces additional skip connections, bridging the output of each encoder block after pooling to the input of the associated decoder block before unpooling. These additional connections ensure the frame condition is met, thereby reducing blurring and image artifacts. The final image is obtained with a 1x1 convolution [13].
The dual-frame U-Net was chosen as it generated robust outputs and had a comparable computational effort as the standard U-Net. The network was implemented with Keras (2.4.0) in TensorFlow, randomly initialized, and trained individually for each number of projection views [21][22]. The sparse-view images were taken as input, and the residual images were taken as labels. No data augmentation was applied. Mean squared error (MSE) loss with an adaptive moment estimation optimizer was utilized. Early stopping was implemented if validation loss did not improve. Training took place for a maximum of \(n=30\) epochs and a batch size of six. The initial learning rate \(lr_{0}\) was set to \(lr_{0}=0.001\) and decayed exponentially per epoch following \(lr_{n}=lr_{n-1}\cdot e^{-0.1}\). The model with the smallest validation loss among all epochs was chosen for inference on the test sets and the reader study. The quality of post-processed images was evaluated with the MSE and the structural similarity index measure (SSIM) metrics [23].
### Reader Study
CT scans from 19 subjects (seven healthy, 12 diseased) were considered for this single-blinded study. Three board-certified radiologists and an in-training radiologist, respectively with 15 (DP), 11, 10, and 5 years of experience in chest radiology, participated in the study. Using the full-view images, DP selected a representative slice per subject and marked the ground truth lung nodule segmentation (mean diameter = \(1.11[0.91,1.31]cm\)) for the diseased subjects. All nodules were metastases. The sparse-view images reconstructed from 16, 32, 64, 128, and 256 views and
Figure 2: The architecture of the dual-frame U-Net. The model takes as input the unprocessed sparse-view images and outputs the pure artifact residual image. An example of 16 projection sparse-view input and corresponding residual output is shown. The number of channels is provided above each layer.
post-processed by the U-Net were presented to the other three radiologists, resulting in a total of 190 evaluated images.
Full-view and all sparse-view images of an exemplary slice are shown in Figure 3. Slices reconstructed and post-processed using 512 views were excluded from the study as DP determined that even without any post-processing, they are of comparable quality to the full-view images.
Readers were asked to independently annotate each slice using our in-house tool by rating every image on quality, the confidence of diagnosis, and the severity of artifacts present in the image according to pre-defined labels in Table 2. Furthermore, the radiologists were asked to independently segment perceived suspect pulmonary nodules. The following metrics were considered to compare the diagnostic reliability of images for different views. Sensitivity (Se) and specificity (Sp) values were calculated using true positive (TP), true negative (TN), false positive (FP), and false negative (FN) cases as [24]:
\[Se=\frac{TP}{TP+FN}\quad Sp=\frac{TN}{TN+FP}\quad. \tag{1}\]
For all TP cases, the segmentation overlaps were calculated with the Dice Similarity Coefficient (DSC) [25][26]. For two segmentations, X and Y, the DSC was calculated as
\[DSC=\frac{2|X\cap Y|}{|X|+|Y|}\quad. \tag{2}\]
Here, \(|X\cap Y|\) is the number of pixels marked in both segmentations. In case of no overlap, or if one of the segmentations was empty, the resulting DSC was zero. 95% confidence intervals (CI) and P-values from the clustered Wilcoxon signed-rank test were calculated with Python's SciPy library (1.4.1) [27][28].
\begin{table}
\begin{tabular}{c c c c} \hline \hline Scale & Quality & Confidence & Artifacts \\ \hline \hline
1 & Not diagnostic & Not confident at all & No artifacts \\
2 & Highly impaired & Slightly confident & Few artifacts and quality not impaired \\
3 & Impaired & Somewhat confident & Some artifacts and reduced quality \\
4 & Sufficient & Fairly confident & A lot of artifacts and reduced quality \\
5 & High & Very confident & \(=\) \\
6 & Very high & Surely confident & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Defined Scale for Describing the Image Quality, Confidence of Diagnosis, and Artifact Severity for Sparse-view CT Images with and without Post-processing
Figure 3: An example computed tomography (CT) image reconstructed with full-view and sparse-view projections, with and without post-processing by the dual-frame U-Net. The image on the left demonstrates the ground truth full-view image without post-processing. The top row shows the CT image reconstructed with different sparse-view projections without post-processing. The bottom row depicts the respective sparse-view images post-processed by the U-Net model for each projection view. The region of interest (blue box) shows the metastasis (highlighted by the yellow arrow). All images are clipped to the lung window and include a contrast medium (Iodine). Scale bar = \(5cm\).
Results
The following results show the model's performance on 3,148 images from eight diseased subjects and 9,481 images from the Luna16 dataset. Furthermore, results of the reader study on 19 CT-wise images from 12 diseased and seven healthy subjects are described.
### Network Performance
Figure 3 shows an example slice with varying levels of sub-sampling alongside the corresponding U-Net post-processed results. It can be observed that fewer projection views result in more artifacts. The sparse-view images from extremely limited views also lead to a loss of structural integrity in their post-processed counterparts. This was especially prominent for 16 views, as lung nodule distortion and microvascular structures generate diminished performance capabilities. Tumor composition and primary anatomical characteristics can better be amassed once reconstruction views have increased to 32. For 64 views, streak artifacts do not impact the tumor's visibility due to tissue density, but minimalistic structural identification, such as small vessels, are not clearly portrayed. Minor features are displayed for 128 and 256 views; however, for 128 views, some streak artifacts remain present. For the post-processed image of 32 views, the tumor's shape is mostly correct, and the display of the vascular structures is improved. For 64 or more views, the tumor appearance in the post-processed image is similar to the full-view image. Furthermore, vascular distinction on imaging can be detected with the post-processed 128 views image. The post-processed image from 256 views is very close in quality to the full-view image. For 512 views, no qualitative differences can be detected.
A directly proportional relationship is observed between improved image quality and higher views. As shown in Table 3, calculated mean MSE values decrease and mean SSIM values increase with more projection views for the internal test set and the external Luna16 dataset. Although mean MSE and SSIM values are marginally better for the internal test set, the model achieves comparable results on the external Luna16 dataset.
### Reader Study
The resulting mean values for quality, confidence, and artifacts reported by the readers are shown in Figure 4. The labeled mean quality for sparse-view images decreases linearly from roughly "sufficient" to approximately "not diagnostic" for decreasing number of projection views. The mean quality for the post-processed images is similar for 256, 128, and 64 views at "sufficient" or "high." For 32 views, the mean quality decreases to below "sufficient," and for 16 to "sufficient" or "impaired." The tendency for the mean confidence is similar for both sparse-view and post-processed images. For the sparse-view images, the confidence again decreases linearly with decreasing number of views, ranging from "fairly confident" or "very confident" to "not confident at all" or "slightly confident." The post-processed 256, 128, and 64 view images lead to "fairly" or "very" confident results. For 32 and 16 views, the confidence decreases respectively to below "fairly confident." The subjective quality and confidence of post-processed images are significantly higher than their unprocessed pairs for 64 and fewer views (_P_<0.05). The presence of artifacts increases for the sparse-view images with fewer views. Fewer sparse-view artifacts are visible for 256 views ("few artifacts"). More artifacts are present in 32 and 16 views (approximately "a lot of artifacts"). For the post-processed images, 256 and 128 views lead to a similar representation of artifacts ("no" or "few artifacts"). The artifacts are further
\begin{table}
\begin{tabular}{l|c|c|c c} \hline \hline Metric & Dataset & 16 projections & 32 projections & 64 projections \\ \hline \hline MSE & Test set & \(2.36[1.95,2.78]\cdot 10^{-3}\) & \(7.95[6.51,9.39]\cdot 10^{-4}\) & \(2.40[1.96,2.84]\cdot 10^{-4}\) \\ \cline{2-5} & Luna16 & \(6.52[4.93,8.10]\cdot 10^{-3}\) & \(3.46[2.49,4.43]\cdot 10^{-3}\) & \(1.04[0.746,1.34]\cdot 10^{-3}\) \\ \cline{2-5} SSIM & Test set & \(0.799[0.749,0.809]\) & \(0.834[0.808,0.861]\) & \(0.895[0.873,0.917]\) \\ \cline{2-5} & Luna16 & \(0.782[0.758,0.805]\) & \(0.816[0.792,0.840]\) & \(0.873[0.852,0.895]\) \\ \hline \hline MSE & & 128 projections & 256 projections & 512 projections \\ \cline{2-5} & Test set & \(8.46[6.31,10.6]\cdot 10^{-6}\) & \(2.28[1.54,3.01]\cdot 10^{-5}\) & \(3.78[2.81,4.75]\cdot 10^{-6}\) \\ \cline{2-5} & Luna16 & \(6.19[4.18,8.19]\cdot 10^{-4}\) & \(1.07[0.810,1.32]\cdot 10^{-4}\) & \(5.34[3.93,6.75]\cdot 10^{-5}\) \\ \cline{2-5} SSIM & Test set & \(0.938[0.920,0.955]\) & \(0.979[0.973,0.985]\) & \(0.997[0.996,0.997]\) \\ \cline{2-5} & Luna16 & \(0.908[0.889,0.927]\) & \(0.96[0.950,0.969]\) & \(0.983[0.980,0.986]\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean Squared Error (MSE) and Structural Similarity Index Measure (SSIM) Metrics for Post-processed Images in the Internal Test Set and the External Luna16 Dataset for All Projection Views Presented by the Mean values and the Corresponding 95% Confidence Intervals
reduced for 64, 32, and 16 views ("no artifacts"). Post-processed images have significantly fewer subjective artifacts than their unprocessed pairs for 128 and fewer projection views (_P_<0.001).
The confusion matrices and the corresponding Se and Sp values are shown in Tables 4 and 5, respectively. In some images, incorrect subjective segmentation by the readers resulted in falsely marked pixels in an alternate location. Such cases are counted as FN and mostly appeared for the sparse-view images reconstructed with 16 views. An example of such an inaccurately marked image, as well as a correctly marked image, and an image with an extra perceived nodule, are shown in Figure 5.
The confusion matrices in Table 4 show increasing FN cases with decreasing number of views for the sparse-view images and their post-processed counterparts. This leads to a decreased Se, as seen in Table 5. The number of FP is mostly independent of the number of views, which leads to Sp values between 0.86 and 1.00. Regarding Se and Sp, there are no notable differences between the tendencies of sparse-view and post-processed images. 256 and 128 views lead to Se values between 0.97 and 1.00 and Sp values between 0.86 and 0.95. For 64 views, there are four falsely classified instances for sparse-view and post-processed cases each. In the case of the sparse-view images, they all count as FN, leading to Se = 0.89 and Sp = 1.00. In the case of the post-processed images, two count as FN and two as FP, resulting in a comparably higher Se and lower Sp (Se = 0.94, Sp = 0.90). For 32 views, the number of FN instances increases to six in both cases, leading to Se = 0.83. For 16 views, the FN value rises to 20 (Se = 0.44) and 29 (Se = 0.19) for sparse-view and post-processed images, respectively.
Figure 6 shows the mean DSC and 95% CI for sparse-view images with and without post-processing by the model. The mean DSC shows only slight differences between sparse-view images with and without post-processing for 32 or more views. For instance, in the case of 64 views, sparse-view images without post-processing resulted in DSC = 0.81 [0.75,0.86], while images post-processed by the model had improved DSC = 0.85 [0.82,0.87] (_P_>0.1). Additionally, the 95% CI lines show that neither of those differences is statistically significant, as the individual intervals overlap in almost all cases. It must be noted that although no statistically significant discrepancy in segmentation overlap is observed, subjective quality and confidence assessment was markedly higher in the post-processed images of 64 views and fewer (_P_<0.05).
It must be noted that every image labeled as "not diagnostic" in terms of image quality or "not confident at all" in terms of confidence of diagnosis would not be considered in a clinical workflow. This is especially the case for sparse-view images reconstructed from 16 views but also for some sparse-view images reconstructed from 32 views. Thus, these
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{16} & \multicolumn{3}{c}{32} & \multicolumn{3}{c}{64} & \multicolumn{3}{c}{128} & \multicolumn{3}{c}{256} \\ & & \multicolumn{3}{c}{projections} & \multicolumn{3}{c}{projections} & \multicolumn{3}{c}{projections} & \multicolumn{3}{c}{projections} & \multicolumn{3}{c}{projections} & \multicolumn{3}{c}{projections} \\ \hline \hline & reader & true & + & - & sum & + & - & sum & + & - & sum & + & - & sum \\ \hline \hline & + & 7 & 2 & 9 & 30 & 1 & 31 & 34 & 2 & 36 & 35 & 2 & 37 & 35 & 2 & 37 \\ Processed & - & 29 & 19 & 48 & 6 & **20** & 26 & 2 & 19 & 21 & 1 & 19 & 20 & 1 & **19** & 20 \\ \hline & sum & 36 & 21 & & 36 & 21 & & 36 & 21 & & 36 & 21 & & 36 & 21 & \\ & + & 16 & **0** & 16 & 30 & **0** & 30 & 32 & 0 & 32 & 36 & 1 & 37 & 36 & 3 & 39 \\ Sparse & - & 20 & **21** & 41 & 6 & **21** & 27 & 4 & **21** & 25 & 0 & 20 & 20 & 0 & 18 & 18 \\ & sum & 36 & 21 & & 36 & 21 & & 36 & 21 & & 36 & 21 & & 36 & 21 & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Confusion Matrices for Sparse-view CT Images and their Post-processed Counterpart Images for All Projection Views Calculated Over 19 Subject-wise Images Presented to Three Readers (n = 57)
Figure 5: Examples of metastasis segmentations. A correctly marked nodule, true positive (TP), and two incorrectly segmented regions, namely false negative (FN) and false positive (FP), are shown. FP refers to the case where the perceived metastasis was non-existent. FN refers to the case where the perceived nodule had no overlap with the ground truth segmentation. The top row shows the overlay of the ground truth segmentation (yellow) and the segmentation marked by the reader (blue) over the full-view image. The bottom row shows the sparse-view image, reconstructed from 16 projection views with or without post-processing, presented to the readers for marking lung nodules. All slices are clipped to the lung window and include a contrast medium (Iodine). Scale bar =\(5cm\).
instances will not be considered for further discussion.
All images post-processed by the model are labeled with better image quality and confidence of diagnosis. More precisely, the difference between sparse-view images with and without post-processing is the most prominent result for all assigned labels. It indicates that the radiologists prefer working with the post-processed images over the unprocessed sparse-view ones: They rate their quality higher, see fewer artifacts in the images, and, most importantly, are more confident in their diagnosis. Especially the higher quality and the increased confidence could be accompanied by a shorter processing time and, in the long run, lead to fewer signs of fatigue compared to working with unprocesssed sparse-view images. Since 256, 128, and 64 views lead to very similar results regarding the quality and confidence labels and worse results are achieved with 32 views, 64-view images appear to be the best choice.
To define a threshold providing a reasonable trade-off between a reduced number of projection views and diagnostic value, Sp and Se values should be maximized. Typically values of about 0.95 are chosen as acceptable thresholds for Sp and Se values. Accordingly, FP and FN values should be minimized: FP cases should be avoided as these cause unnecessary follow-up procedures, potentially exposing the patient to more radiation if a full-view scan is required. However, it is of utmost importance to avoid FN cases since these would lead to afflicted patients not getting diagnosed. Low FP cases are correlated with high Sp, and low FN values are associated with high Se. Therefore, Sp values of about 0.90 were deemed acceptable in this work. The overall sample size was comparatively small, leading to a considerable decrease of either Sp or Se given an FP or FN case, respectively. According to these thresholds, the lowest possible number of projection views allowing reliable diagnosis would be achieved for post-processed images of 64 views, leading to Se = 0.94 and Sp = 0.90. The selected Se value is slightly lower than the typical threshold of 0.95. Nonetheless, we consider 64 views an acceptable threshold due to our small sample size.
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \hline & \multicolumn{2}{c}{16} & 32 & 64 & 128 & 256 \\ & \multicolumn{2}{c}{projections} & projections & projections & projections & projections \\ \hline \hline Processed & Sensitivity & 0.19 & 0.83 & 0.94 & 0.97 & 0.97 \\ & Specificity & 0.90 & 0.95 & 0.90 & 0.90 & 0.90 \\ Sparse & Sensitivity & 0.44 & 0.83 & 0.89 & 1.00 & 1.00 \\ & Specificity & 0.86 & 1.00 & 1.00 & 0.95 & 0.86 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Sensitivity and Specificity for Sparse-view CT Images and their Post-processed Counterpart Images for All Projection Views Calculated Over 19 Subject-wise Images Presented to Three Readers (n = 57)
Figure 6: Mean over the Dice Similarity Coefficient values and 95% confidence intervals (CI) for lung nodule segmentations marked over the sparse-view images with and without post-processing.
The mean DSC values did not consistently show a trend of improvement between the post-processed and the unprocessed sparse-view images. Yet, these findings support the choice of the trade-off threshold at 64 views: The mean DSC values for the post-processed images of 64 views result in the greatest improvement over the mean DSC values of their unprocessed counterparts in comparison to the other projection views.
Some limitations were present in this study. In clinical practice, radiologists often search the entire stack of images for malignancies. The present reader study could have modeled the clinical workflow more precisely as it only considered single CT images. Including neighboring slices would come closer to clinical diagnosis based on CT scans and most likely reduce the amount of falsely classified patients. Furthermore, the sparse-view data generated for this study was obtained using simplified conditions not reflective of the complex reconstruction processes in clinical settings. Therefore, only the reduced number of projection views compared to the full-view images can be reported, and an exact measure of dose reduction is hence unachievable.
Overall, the amount of projection views can be reduced by a factor of 32 compared to the full-view image with post-processing by a dual-frame U-Net, while keeping the diagnostic value and the confidence of the radiologists at a satisfactory level. Regarding the radiologists' confidence, the images post-processed with the model lead to drastically better results than the unprocessed sparse-view images. These findings suggest that post-processed sparse-view CT images by the dual-frame U-Net could help enable dose-efficient screening for lung cancer detection.
## Acknowledgments
This work was funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the Federal Government and the Lander, the German Research Foundation (GRK2274), as well as by the Technical University of Munich-Institute for Advanced Study. |
2304.03634 | Hydrodynamic limit of the multi-component slow boundary WASEP with
collisions | In this article, we study the hydrodynamic limit for a stochastic interacting
particle system whose dynamics consists in a superposition of several dynamics:
the exclusion rule, that dictates that no more than a particle per site with a
fixed velocity is allowed; a collision dynamics, that dictates that particles
at the same site can collide and originate particles with new velocities such
that the linear momentum is conserved; a boundary dynamics that injects and
removes particle in the system. This last dynamics destroys the conservation
law, and its strength is regulated by a parameter $\theta$. The goal is the
derivation of the hydrodynamic limit, and the boundary conditions change
drastically according to the value of $\theta$. | Oslenne Araújo, Patrícia Gonçalves, Alexandre B. Simas | 2023-04-07T13:28:50Z | http://arxiv.org/abs/2304.03634v2 | # Hydrodynamic limit of the multi-component
###### Abstract
In this article, we study the hydrodynamic limit for a stochastic interacting particle system whose dynamics consists in a superposition of several dynamics: the exclusion rule, that dictates that no more than a particle per site with a fixed velocity is allowed; a collision dynamics, that dictates that particles at the same site can collide and originate particles with new velocities such that the linear momentum is conserved; a boundary dynamics that injects and removes particle in the system. This last dynamics destroys the conservation law, and its strength is regulated by a parameter \(\theta\). The goal is the derivation of the hydrodynamic limit, and the boundary conditions change drastically according to the value of \(\theta\).
_Keywords:_ Hydrodynamic limit, Stochastic reservoirs, Boundary conditions, Exclusion process.
## 1 Introduction
Stochastic interacting particle systems (SIPS) is an area of probability theory devoted to the mathematical analysis of a collective behavior of continuous time random walks random models subject to constraints. These systems arise in statistical physics, biology, and many other fields of science and they were introduced in the mathematics community in the 1970s by Spitzer [18]. A classic problem in this field is to derive the macroscopic laws of the thermodynamic quantities of a given physical system, considering microscopic dynamics composed of particles that move according to some prescribed stochastic law. These macroscopic laws are governed by Partial Differential Equations (PDEs) or stochastic PDEs, depending one whether one is looking at the convergence to the mean or at the fluctuations around that mean. Convergence to the mean is a scaling limit, called the _Hydrodynamic Limit_ which is described by a solution to a PDE, called the hydrodynamic equation, see [14].
To make the reading as enjoyable as possible, we shall outline informally the model that we investigate in this article when the dimension \(d=1\) (see Figures 1 and 2), but our results also extend to any dimension \(d\). We consider our particle model evolving on the discrete set of sites \(\{1,\ldots,N-1\}\) to which we call bulk. Consider now a finite set of possible velocities \(\mathcal{V}\subset\mathbb{R}\) and fix a velocity \(v\in\mathcal{V}\). At any given time \(t\), each site of the bulk is either empty or occupied by one particle with velocity \(v\) and each particle attempts to jump to one of its neighbors with the same velocity, under a weakly asymmetric rate. To prevent the occurrence of more than one particle per site with the same velocity \(v\), we introduce an exclusion rule that suppresses jumps to an already occupied site by a particle with the given velocity \(v\). The boundary dynamics is given by the following birth and death process at the sites \(1\) or \(N-1\): a particle is inserted into the system at site \(1\) with rate \(\frac{\alpha_{v}}{N^{\theta}}\) if the site is empty; while if the site \(1\) is occupied, a particle is removed from there at rate \(\frac{1-\alpha_{v}}{N^{\theta}}\). On the other hand, at site \(N-1\) a particle can be inserted into the system, at rate \(\frac{\beta_{v}}{N^{\theta}}\), if the site \(N-1\) is empty; while a particle can be removed from the site \(N-1\) at the rate \(\frac{1-\beta_{v}}{N^{\theta}}\). Superposed to these dynamics, there is also a collision process that exchanges velocities of particles at the same site in such a way that the moment is conserved.
In Figure 1, we have an illustration of our dynamics, the particles at the bulk are colored in gray, and the particles at the two reservoirs are colored in blue. Note that if a particle at site \(x\) with velocity \(v\), attempts to jump to an already occupied site \(y\) with velocity \(v\), the jump is not allowed. In this case, the particle does not move, see for example in Figure 1, the particle at site \(3\) with velocity \(v_{3}\) is not allowed to jump to site \(2\) since there is a particle with velocity \(v_{3}\) at site \(2\). On the other hand, if the destination site is empty the jump is performed, see for example in Figure 1, the particle at site \(1\) with
velocity \(v_{2}\) is allowed to jump to site 2 keeping the same velocity \(v_{2}\). Let us suppose that the clock associated with the leftmost reservoir rings. Since there exists no particle at site 1 with velocity \(v_{1}\), a particle can be injected into the system at the site 1 with velocity \(v_{1}\) at rate \(\frac{\alpha_{v_{1}}}{N^{\theta}}\). Also, if the clock associated to the site 1 with velocity \(v_{5}\) rings, the particle leaves the system at rate \(\frac{1-\alpha_{v_{5}}}{N^{\theta}}\) (See Figure 1). Analogously, suppose that the clock associated with the rightmost reservoir rings. Since there is no particle at site \(N-1\) with velocity \(v_{1}\), a particle can be injected into the system at the site \(N-1\) with the velocity \(v_{1}\) at rate \(\frac{\beta_{v_{1}}}{N^{\theta}}\). Also, if the clock associated with the site \(N-1\) with velocity \(v_{5}\) rings, the particle leaves the system at rate \(\frac{1-\beta_{v_{5}}}{N^{\theta}}\) (See Figure 1).
Now let us suppose that the clock associated to \(x_{k}\) rings (see Figure 2). There are two particles at \(x_{k}\), one with velocity \(v_{2}\) and another one with velocity \(v_{4}\). Then, they collide at rate one and produce two particles at the same site \(x_{k}\), but with velocities \(v_{1}\) and \(v_{5}\), satisfying \(v_{2}+v_{4}=v_{1}+v_{5}\), i.e. the preservation of momentum.
Figure 1: Illustration of the dynamics
Figure 2: Illustration of the collision dynamics
Our goal is to show that the conserved quantities of the system can be described by a hydrodynamic equation. To that end, fix a finite time horizon \([0,T]\), and consider the dynamical behavior of the empirical density and momentum over such interval. The law of large numbers for the empirical density and momentum (the hydrodynamic limit), when the system is taken in the diffusive scaling limit, is given by a system of parabolic evolution equations of the form
\[\partial_{t}(\rho,\varrho)+\sum_{v\in\mathcal{V}}\tilde{v}[v\cdot\nabla F(\rho, \varrho)]=\frac{1}{2}\Delta(\rho,\varrho), \tag{1}\]
where \(\tilde{v}=(1,v_{1},\dots,v_{d})\), \(\rho\) stands for the density and \(\varrho=(\varrho_{1},\dots,\varrho_{d})\) for the linear momentum. Above \(F\) is a thermodynamical quantity determined by the ergodic properties of the dynamics.
Now we say some words about the proof. We follow the entropy method first introduced in [13] and for that we prove tightness of the sequence of empirical measures and then we characterize uniquely the limit point. For the latter, due to the fact that our rates are weakly asymmetric and the system is in contact with slow reservoirs we need several technical replacement lemmas in order to write down the notion of weak solutions to each one of the hydrodynamic equations.
The outline of this article is described as follows: in Section 2 we establish the notation adopted in this work and state some useful results and our main theorem, whose proof is postponed to Section 3 and 4. In Section 5, we prove the Replacement Lemmas needed for the proof of the hydrodynamic limit. In Appendix A, we prove the uniqueness of weak solutions of the hydrodynamic equations.
## 2 Statement of Results
We start by introducing some notation to be used in this work. Let \(\mathbb{T}^{d}_{N}=\{0,\dots,N-1\}^{d}=(\mathbb{Z}/N\mathbb{Z})^{d}\) be the \(d\)-dimensional discrete torus and let \(D^{d}_{N}=S_{N}\times\mathbb{T}^{d-1}_{N}\), which will henceforth be called _bulk_, with \(S_{N}=\{1,\dots,N-1\}\). Further, let \(\mathbb{T}^{d}=[0,1)^{d}=(\mathbb{R}/\mathbb{Z})^{d}\) denote the \(d\)-dimensional torus and let \(D^{d}=[0,1]\times\mathbb{T}^{d-1}\). Moreover, let \(\mathcal{V}\subset\mathbb{R}^{d}\) be a finite set of velocities \(v=(v_{1},\dots,v_{d})\). We will assume that \(\mathcal{V}\) is invariant under reflections and permutations of the coordinates, i.e.,
\[(v_{1},\dots,v_{i-1},-v_{i},v_{i+1},\dots,v_{d})\ \ \text{and}\ \ (v_{\sigma(1)},\dots,v_{\sigma(d)})\]
belong to \(\mathcal{V}\) for all \(1\leq i\leq d\), and all permutations \(\sigma\) of \(\{1,\dots,d\}\), provided that \((v_{1},\dots,v_{d})\in\mathcal{V}\).
The dynamics is chosen in such a way that at most one particle with a certain velocity is allowed at each site of \(D^{d}_{N}\). We let \(\eta(x,v)\in\{0,1\}\) denote the number of particles with velocity \(v\in\mathcal{V}\) at site \(x\in D^{d}_{N}\); \(\eta_{x}=\{\eta(x,v);\,v\in\mathcal{V}\}\) denote the number of particles with velocity \(v\) at site \(x\); and \(\eta=\{\eta_{x};\,x\in D^{d}_{N}\}\) denote a configuration. Finally, we let \(X_{N}=(\{0,1\}^{V})^{D^{d}_{N}}\) denote the set of particle configurations.
In the interior of the domain \(D^{d}_{N}\), the dynamics consists of two parts:
1. particles in the bulk evolve according to nearest-neighbor weakly asymmetric random walks with exclusion among particles with the same velocity,
2. binary collisions, that preserve linear momentum, between particles with different velocities.
### The Model
Our chief goal is to study the stochastic lattice gas model induced by the generator \(\mathcal{L}_{N}\), which is the superposition of the Glauber dynamics with collision among particles of different velocities and an exclusion dynamics:
\[\mathcal{L}_{N,\theta}=N^{2}\{\mathcal{L}^{b}_{N,\theta}+\mathcal{L}^{c}_{N}+ \mathcal{L}^{ex}_{N}\}, \tag{2}\]
where \(\theta\geq 0\) is a parameter related to Glauber dynamics. In (2), \(\mathcal{L}^{b}_{N,\theta}\) denotes the generator of the Glauber dynamics, modelling insertion or removal of particles; \(\mathcal{L}^{c}_{N}\) denotes the generator that models the collision part of the dynamics; and lastly, \(\mathcal{L}^{ex}_{N}\) models the exclusion part of the dynamics. Note that in (2) time has been speeded up diffusively due to the factor \(N^{2}\).
The generator of the exclusion part of the dynamics, \(\mathcal{L}^{ex}_{N}\), acts on \(f:X_{N}\to\mathbb{R}\) as
\[(\mathcal{L}^{ex}_{N}f)(\eta)=\sum_{v\in\mathcal{V}}\sum_{x,z\in D^{d}_{N}} \eta(x,v)(1-\eta(z,v))P_{N}(z-x,v)[f(\eta^{z,z,v})-f(\eta)],\]
where
\[\eta^{x,y,v}(z,w)=\left\{\begin{array}{l}\eta(y,v)\text{ if }w=v\text{ and }z=x,\\ \eta(x,v)\text{ if }w=v\text{ and }z=y,\\ \eta(z,w)\text{ otherwise.}\end{array}\right. \tag{3}\]
In view of (4), we write the decomposition \(\mathcal{L}_{N}^{ex}=\mathcal{L}_{N}^{ex,1}+\mathcal{L}_{N}^{ex,2}\) in terms of symmetric and weakly asymmetric part of the generator, where for \(f:X_{N}\rightarrow\mathbb{R}\) it holds
\[(\mathcal{L}_{N}^{ex,1}f)(\eta)= \frac{1}{v}\sum_{\begin{subarray}{c}x,v\in\mathcal{V}\\ |x-x|=1\end{subarray}}\eta(x,v)(1-\eta(z,v))[f(\eta^{x,z,v})-f(\eta)],\] \[(\mathcal{L}_{N}^{ex,2}f)(\eta)= \frac{1}{N}\sum_{v\in\mathcal{V}}\sum_{x,z\in\mathcal{D}_{N}^{d} }\eta(x,v)(1-\eta(z,v))p(z-x,v)[f(\eta^{x,z,v})-f(\eta)].\]
Above \(p(x,v)\) is an irreducible transition probability with finite range and mean velocity \(v\), i.e., \(\forall v\in\mathcal{V}\), \(0\leq p(x,v)\leq 1\), and \(\sum_{x\in\mathbb{Z}^{d}}p(x,v)=1.\) Moreover, \(\exists\,\mathcal{R}\in\mathbb{N}:\|x\|>\mathcal{R}\)1 implies \(p(x,v)=0\), and
Footnote 1: \(\|\cdot\|\) stands for the Euclidian norm in \(\mathbb{R}^{d}\).
\[\sum_{x\in\mathbb{Z}^{d}}xp(x,v)=v.\]
Note that, since we have finitely many velocities, we can choose a \(\mathcal{R}\) that suits all \(v\in\mathcal{V}\). The jump law and the waiting times are chosen so that the jump rate from site \(x\), with velocity \(v\), to site \(x+y\), with the same velocity \(v\), is given by
\[P_{N}(y,v)=\frac{1}{2}\sum_{j=1}^{d}(\delta_{y,e_{j}}+\delta_{y,-e_{j}})+\frac {1}{N}p(y,v), \tag{4}\]
where \(\delta_{x,y}\) stands for the Kronecker delta, which is equal to one if \(x=y\) and \(0\) otherwise, and \(\{e_{1},\ldots,e_{d}\}\) is the canonical basis in \(\mathbb{R}^{d}\).
The generator of the collision part of the dynamics, \(\mathcal{L}_{N}^{c}\), acts on \(f:X_{N}\rightarrow\mathbb{R}\) as
\[(\mathcal{L}_{N}^{c}f)(\eta)=\sum_{y\in\mathcal{D}_{N}^{c}}\sum_{q\in Q}p_{c}( y,q,\eta)[f(\eta^{y,q})-f(\eta)],\]
where \(Q\) is the set of all the possible collisions that preserve linear momentum, i.e.
\[Q=\{q=(v,w,v^{\prime},w^{\prime})\in\mathcal{V}^{4}:v+w=v^{\prime}+w^{\prime }\}. \tag{5}\]
The rate \(p_{c}(y,q,\eta)\) is given by
\[p_{c}(y,q,\eta)=\eta(y,v)\eta(y,w)[1-\eta(y,v^{\prime})][1-\eta(y,w^{\prime})], \tag{6}\]
and for \(q=(v_{0},v_{1},v_{2},v_{3})\), the configuration \(\eta^{y,q}\) after the collision is defined as
\[\eta^{y,q}(z,u)=\left\{\begin{array}{l}\eta(y,v_{j+2})\text{ if }z=y\text{ and }u=v_{j}\text{ for some }0\leq j\leq 3,\\ \eta(z,u)\text{ otherwise,}\end{array}\right. \tag{7}\]
where the index of \(v_{j+2}\) should be taken modulo \(4\). Particles of velocities \(v\) and \(w\) at the same site collide at rate one and produce two particles of velocities \(v^{\prime}\) and \(w^{\prime}\) at the same site but the velocities satisfy the identity \(v+w=v^{\prime}+w^{\prime}\).
Finally, the generator of the Glauber dynamics, with parameter \(\theta\geq 0\), is given by
\[(\mathcal{L}_{N,\theta}^{b}f)(\eta) =\frac{1}{N^{\theta}}\sum_{\begin{subarray}{c}x\in\mathcal{D}_{N} ^{d}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\left(\alpha_{v}\left(\tfrac{\bar{x} }{N}\right)(1-\eta(x,v))+(1-\alpha_{v}\left(\tfrac{\bar{x}}{N}\right))\eta(x,v )\right)[f(\sigma^{x,v}\eta)-f(\eta)] \tag{8}\] \[+\frac{1}{N^{\theta}}\sum_{\begin{subarray}{c}x\in\mathcal{D}_{N} ^{d}\\ x_{1}=N-1\end{subarray}}\sum_{\begin{subarray}{c}v\in\mathcal{V}\\ \end{subarray}}\left(\beta_{v}\left(\tfrac{\bar{x}}{N}\right)(1-\eta(x,v))+(1- \beta_{v}\left(\tfrac{\bar{x}}{N}\right))\eta(x,v)\right)[f(\sigma^{x,v}\eta) -f(\eta)], \tag{9}\]
where \(x=(x_{1},\tilde{x})\) with \(\tilde{x}=(x_{2},\ldots,x_{d})\), and
\[\sigma^{x,v}\eta(y,w)=\left\{\begin{array}{ll}1-\eta(x,v),\mbox{ if }w=v\mbox{ and }y=x,\\ \eta(y,w),\mbox{ otherwise,}\end{array}\right. \tag{10}\]
for every \(v\in\mathcal{V}\), \(\alpha_{v}\), \(\beta_{v}\in C^{2}(\mathbb{T}^{d-1})\). We also assume that, for every \(v\in\mathcal{V}\), the functions \(\alpha_{v}\) and \(\beta_{v}\), take values in a compact subset of \((0,1)\), and this implies that \(\alpha_{v}(\cdot)\) and \(\beta_{v}(\cdot)\) are bounded away from \(0\) and \(1\). Nevertheless, these conditions can be relaxed in the case \(\theta\geq 1\). The functions \(\alpha_{v}(\cdot)\) and \(\beta_{v}(\cdot)\) represent the density at the reservoirs. The parameter \(\theta\) controls the flow of the particles at the boundary of \(D^{d}\). This intuition will be made precise in Theorem 1.
In the text, sometimes it will be more convenient to write
\[r_{x}^{N}(\eta,\alpha)=\alpha_{v}\left(\tfrac{\tilde{x}}{N} \right)(1-\eta(x,v))+(1-\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right))\eta(x,v), \tag{11}\] \[r_{x}^{N}(\eta,\beta)=\beta_{v}\left(\tfrac{\tilde{x}}{N}\right) (1-\eta(x,v))+(1-\beta_{v}\left(\tfrac{\tilde{x}}{N}\right))\eta(x,v). \tag{12}\]
Let \(\{\eta_{t,\theta},\,t\geq 0\}\) be the Markov process with generator \(\mathcal{L}_{N,\theta}\) let denote \(\{S_{t}^{N,\theta},\,t\geq 0\}\) denote the semigroup associated to \(\mathcal{L}_{N,\theta}\). Let \(\mathcal{D}(\mathbb{R}_{+},X_{N})\) be the set of right continuous functions with left limits taking values in \(X_{N}\) endowed with the uniform topology. For a probability measure \(\mu\) on \(X_{N}\), denote by \(\mathbb{P}_{\mu,\theta}\) the probability measure on the path space \(\mathcal{D}(\mathbb{R}_{+},X_{N})\) induced by \(\{\eta_{\theta}(t):\,t\geq 0\}\) and the initial measure \(\mu\). The expectation with respect to \(\mathbb{P}_{\mu,\theta}\) is denoted by \(\mathbb{E}_{\mu,\theta}\).
### Invariant Measures
In this subsection, we consider the weakly asymmetric exclusion process among particles with the same velocity and collision between particles with different velocities but evolving on the \(d\)-dimensional discrete torus \(\mathbb{T}_{N}^{d}\) i.e. with periodic boundary conditions.
For each configuration \(\eta\in\{0,1\}^{\mathcal{V}}\), let \(I_{0}(\eta)\) denote the mass of \(\eta\) and \(I_{k}(\eta)\), \(k\in\{1,\ldots,d\}\), denote the \(k\)-th momentum of \(\eta\), i.e.,
\[I_{0}(\eta)=\sum_{v\in\mathcal{V}}\eta(v),\quad I_{k}(\eta)=\sum_{v\in \mathcal{V}}v_{k}\eta(v).\]
Set \(\mathbf{I}(\eta):=(I_{0}(\eta),\ldots,I_{d}(\eta))\). Fix \(L\geq 1\) and a configuration \(\eta\). Let \(\mathbf{I}^{L}(x,\eta)=(I_{0}^{L}(x),\ldots,I_{d}^{L}(x))\) be the average of the conserved quantities in a cube of length \(L\) centered at \(x\):
\[\mathbf{I}^{L}(x,\eta)=\frac{1}{|\Lambda_{L}|}\sum_{z\in x+\Lambda_{L}}\mathbf{ I}(\eta_{z}), \tag{13}\]
where, \(\Lambda_{L}=\{-L,\ldots,L\}^{d}\) and \(|\Lambda_{L}|=(2L+1)^{d}\) is the discrete volume of the box \(\Lambda_{L}\).
Assume that the set of velocities is chosen in such a way that the unique conserved quantities by the random walk dynamics described above are the mass and the momentum: \(\sum_{x\in D_{x}^{L}}\mathbf{I}(\eta_{x})\). The following examples of sets of velocities satisfying these conditions were introduced in [7]:
_Model I:_ Denote by \(\mathcal{E}=\{e=\pm e_{i}\mbox{ for some }i\in\{1,\ldots,d\}\}\) and let \(\mathcal{V}=\mathcal{E}\). Under these choices, the only possible collisions are \(q=(v,w,v^{\prime},w^{\prime})\) such that \(v+w=0\) and \(v^{\prime}+w^{\prime}=0\). So, we have \(\mathcal{V}=\{\pm e_{1},\ldots,\pm e_{d}\}\). _Model II:_ Let \(d=3\) denote by \(\sigma\) any element of \(\mathcal{J}\), the permutation group of \(\{1,2,3\}\). Let \(\mathcal{V}=\{v:\sigma v=\varpi\mbox{ for some }\sigma\in\mathcal{J}\}\), where \(\varpi\) is the positive solution of \(\varpi^{4}-6\varpi^{2}-1=0\).
The proof that the only collision invariants are the total mass and momentum is part of the ergodic theorem also proved in [7].
For each chemical potential \(\lambda=(\lambda_{0},\ldots,\lambda_{d})\in\mathbb{R}^{d+1}\), let \(m_{\lambda}\) denote the probability measure on \(\{0,1\}^{\mathcal{V}}\) given by
\[m_{\lambda}(\eta)=\frac{1}{Z(\lambda)}\exp\{\lambda\cdot I(\eta)\}, \tag{14}\]
where \(Z(\lambda)\) is a normalizing constant. Note that \(m_{\lambda}\) is a product measure on \(\{0,1\}^{\mathcal{V}}\), i.e., the variables \(\{\eta(v):\,v\in\mathcal{V}\}\) are independent under \(m_{\lambda}\).
Let \(\mu_{\lambda}^{N}\) denote the product measure on \(X_{N}\) with marginals given by
\[\mu_{\lambda}^{N}\{\eta:\,\eta(x,\cdot)=\eta\}=m_{\lambda}(\eta),\]
for each \(\eta\in\{0,1\}^{\mathcal{V}}\) and \(x\in D_{N}^{d}\). Note that \(\{\eta(x,v):x\in D_{N}^{d},\,v\in\mathcal{V}\}\) are independent random variables under \(\mu_{\lambda}^{N}\), and that the measure \(\mu_{\lambda}^{N}\) is invariant for the process when taken with periodic boundary conditions, i.e., with generator given by \(\mathcal{L}_{N}=N^{2}\{\mathcal{L}_{N}^{c}+\mathcal{L}_{N}^{\pi_{X}}\}\). The expectation under \(\mu_{\lambda}^{N}\) of the mass and \(k\)-th momentum are, respectively, given by
\[\rho(\lambda):=E_{\mu_{\lambda}^{N}}[I_{0}(\eta_{x})]=\sum_{v\in\mathcal{V}} \theta_{v}(\lambda)\quad\text{and}\quad\varrho_{k}(\lambda):=E_{\mu_{\lambda}^ {N}}[I_{k}(\eta_{x})]=\sum_{v\in\mathcal{V}}v_{k}\theta_{v}(\lambda),\,k\in\{ 1,\dots,d\}.\]
In the last formula, \(\theta_{v}(\lambda)\) is the expected value of the density of particles with velocity \(v\) under \(m_{\lambda}\):
\[\theta_{v}(\lambda):=E_{m_{\lambda}}[\xi(v)]=\frac{\exp\left\{\lambda_{0}+ \sum_{k=1}^{d}\lambda_{k}v_{k}\right\}}{1+\exp\left\{\lambda_{0}+\sum_{k=1}^{ d}\lambda_{k}v_{k}\right\}}. \tag{15}\]
Let \((\rho,\varrho)(\lambda):=(\rho(\lambda),\varrho_{1}(\lambda),\dots,\varrho_{d }(\lambda))\) be the map that associates the chemical potential to the vector of density and momentum. For the case without reservoirs, it is possible to prove that \((\rho,\varrho)\) is a diffeomorphism onto \(\mathfrak{U}\subset\mathbb{R}^{d+1}\), the interior of the convex envelope of \(\{I(\xi),\,\xi\in\{0,1\}^{\mathcal{V}}\}\). Denote by \(\Lambda=(\Lambda_{0},\dots,\Lambda_{d}):\mathfrak{U}\rightarrow\mathbb{R}^{d+1}\) the inverse of \((\rho,\varrho)\). This correspondence allows one to parameterize the invariant states by the density and momenta: for each \((\rho,\varrho)\in\mathfrak{U}\), we have a product measure \(\nu_{\rho,\varrho}^{N}=\mu_{\Lambda(\rho,\varrho)}^{N}\) on \(X_{N}\).
It is possible to show, after a long and tedious computation, that in _Model \(I\)_ for every \(v\in\mathcal{V}\), the function \(\Phi_{v}:\mathfrak{U}\rightarrow(0,1)\) given by \(\Phi_{v}(p)=\chi(\theta_{v}(\Lambda(p)))\) is Lipschitz. Therefore, we will also assume that for the chosen set of velocities \(\mathcal{V}\), for every \(v\in\mathcal{V}\), the function \(\Phi_{v}(\cdot)\) is Lipschitz.
**Remark 1**.: _Note that the previous discussion is restricted to considering the model evolving on the torus. For our model, due to the Glauber dynamics, these product measures are no longer invariant. Nevertheless, we introduced them and recalled their definitions since they will play a key role in the proof of the replacement lemmas, since they will be used as reference measures that are close to the (unknown) invariant measure of the system (see Section 5)._
### Hydrodynamic Equations
Recall that we fixed a finite time horizon \([0,T]\). Let \(C^{m,n}([0,T]\times D^{d})\) be the set of functions defined on \([0,T]\times D^{d}\) that are \(m\) times continuously differentiable on the first variable, the time variable, and \(n\) times continuously differentiable on the second variable, the space variable. For a function \(G:=G(t,u)\in C^{m,n}([0,T]\times D^{d})\), we let \(\partial_{t}G\) denote its derivative with respect to the time variable \(t\) and \(\partial_{u}G\) denote its derivative with respect to the space variable \(u_{i}\), \(i\in\{1,\dots,d\}\). For simplicity of notation, we set \(\Delta G:=\sum_{i=1}^{d}\frac{\partial^{2}G}{\partial u_{i}}\) and \(\nabla G=(\partial_{u_{1}},\dots,\partial_{u_{d}})\) represents the generalized gradient of the function \(G\). Finally, \(C^{m,n}_{0}([0,T]\times D^{d})\) denotes the set of functions \(G\in C^{m,n}([0,T]\times D^{d})\) such that for any time \(t\) the function \(G_{t}\) vanishes at the boundary of \(D^{d}\), that is, \(G_{t}(0,\tilde{u})=G_{t}(1,\tilde{u})=0\), where \(u\in D^{d}\) is decomposed as \((u_{1},\tilde{u})\), with \(\tilde{u}\in\mathbb{T}^{d-1}\).
Let \((B,\|\cdot\|_{B})\) be a separable Banach space. We let \(L^{2}([0,T],B)\) be the Banach space of measurable functions \(U:[0,T]\to B\) for which \(\|U\|^{2}_{L^{2}([0,T],B)}=\int_{0}^{T}\|U_{t}\|^{2}_{B}\,dt<\infty.\) Moreover, let \(\mathscr{H}^{1}(D^{d})\) be the Sobolev space of measurable functions in \(L^{2}(D^{d})\) that have generalized derivatives in \(L^{2}(D^{d})\).
We are now in a position to define the system of partial differential equations with different boundary conditions, and the respective notions of weak solutions that appear in the hydrodynamic limit of our model.
We begin by introducing the hydrodynamic equation:
**Definition 1**.: _Let \(\mathfrak{C}\in\{0,1,+\infty\}.\) Fix a measurable functions \(\rho_{0}:D^{d}\rightarrow\mathbb{R}_{+}\), \(\varrho_{0}:D^{d}\rightarrow\mathbb{R}^{d}\) such that \((\rho_{0},\varrho_{0})(u)\in\mathfrak{U}\). The following system of parabolic partial differential equations is called the hydrody
namic equation associated to the stochastic lattice gas model with generator (2):_
\[\left\{\begin{array}{l}\partial_{t}(\rho,\varrho)+\sum_{v\in\mathcal{V}}\tilde{v} [v\cdot\nabla\chi(\theta_{v}(\Lambda(\rho,\varrho)))]=\frac{1}{2}\Delta(\rho, \varrho),\\ \Big{(}\frac{\partial(\rho,\varrho)}{\partial u_{1}}(t,0,\tilde{u})-2 \sum_{v\in\mathcal{V}}\tilde{v}[v\cdot\chi(\theta_{v}(\Lambda(\rho,\varrho)))] \Big{)}=\mathfrak{C}\Big{(}(\rho,\varrho)(t,0,\tilde{u})-\sum_{v\in\mathcal{V }}v_{k}\alpha_{v}(\tilde{u})\Big{)},\\ \Big{(}\frac{\partial(\rho,\varrho)}{\partial u_{1}}(t,1,\tilde{u})-2 \sum_{v\in\mathcal{V}}\tilde{v}[v\cdot\chi(\theta_{v}(\Lambda(\rho,\varrho)))] \Big{)}=\mathfrak{C}\Big{(}\sum_{v\in\mathcal{V}}v_{k}\beta_{v}(\tilde{u})-( \rho,\varrho)(t,1,\tilde{u})\Big{)},\\ (\rho,\varrho)(0,u)=(\rho_{0},\varrho_{0})(u),\end{array}\right. \tag{16}\]
_where \(t\in(0,T],\tilde{u}\in\mathbb{T}^{d-1}\), \(u\in D^{d}\), \(\chi(r)=r(1-r)\) is the static compressibility of the system and for each velocity \(v=(v_{1},\ldots,v_{d})\), we let \(\tilde{v}=(1,v_{1},\ldots,v_{d})\)._
We will now make the notion of weak solution to the system (16) precise:
**Definition 2**.: _Let \(\mathfrak{C}\in\{0,1,+\infty\}\). Fix a measurable functions \(\rho_{0}:D^{d}\to\mathbb{R}_{+}\), \(\varrho_{0}:D^{d}\to\mathbb{R}^{d}\) such that \((\rho_{0},\varrho_{0})(u)\in\mathfrak{U}\). We say that \((\rho,\varrho):[0,T]\times D^{d}\to\mathbb{R}_{+}\times\mathfrak{U}\) is a weak solution of the system of parabolic partial differential equations (16) if the following two conditions hold:_
1. \((\rho,\varrho)\in L^{2}([0,T],\mathcal{H}^{1}(D^{d}));\)__
2. \((\rho,\varrho)\) _satisfies the weak formulation:_ \[\mathfrak{F}_{\mathfrak{C}}((\rho,\varrho),t,G) :=\int_{D^{d}}(\rho,\varrho)(T,u)G(T,u)\,du-\int_{D^{d}}(\rho, \varrho)(0,u)G(0,u)\,du\] (17) \[-\int_{0}^{T}\!\!\!\int_{D^{d}}(\rho,\varrho)(t,u)\left(\partial_ {t}G(t,u)+\frac{1}{2}\sum_{i=1}^{d}\frac{\partial^{2}G}{\partial u_{i}^{2}}(t,u)\right)\,du\,dt\] \[-\int_{0}^{T}\!\!\!\int_{D^{d}}\sum_{v\in\mathcal{V}}\tilde{v} \cdot\chi(\theta_{v}(\Lambda(\rho,\varrho)))\sum_{i=1}^{d}v_{i}\frac{\partial G }{\partial u_{i}}(t,u)\,du\,dt\] \[-\frac{\tilde{C}(\mathfrak{C})}{2}\int_{0}^{T}\!\!\!\int_{\{1 \}\times\mathbb{T}^{d-1}}\left(\sum_{v\in\mathcal{V}}v_{k}\beta_{v}(\tilde{u} )-(\rho,\varrho)(t,1,\tilde{u})\right)G(t,1,\tilde{u})\,dS\,dt\] \[+\frac{\tilde{C}(\mathfrak{C})}{2}\int_{0}^{T}\!\!\!\int_{\{0 \}\times\mathbb{T}^{d-1}}\left((\rho,\varrho)(t,0,\tilde{u})-\sum_{v\in \mathcal{V}}v_{k}\alpha_{v}(\tilde{u})\right)G(t,0,\tilde{u})\,dS\,dt\] \[+\frac{1}{2}\int_{0}^{T}\!\!\!\int_{\{1\}\times\mathbb{T}^{d-1}} (\rho,\varrho)(t,1,\tilde{u})\frac{\partial G}{\partial u_{1}}(t,1,\tilde{u}) \,dS\,dt\] \[-\frac{1}{2}\int_{0}^{T}\!\!\!\int_{\{0\}\times\mathbb{T}^{d-1}} (\rho,\varrho)(t,0,\tilde{u})\frac{\partial G}{\partial u_{1}}(t,0,\tilde{u}) \,dS\,dt=0\]
_for all \(t\in[0,T]\) and any function \(G:[0,T]\times D^{d}\to\mathbb{R}^{d+1}\) in \(C(\mathfrak{C})\), where_
\[C(\mathfrak{C}):=\textbf{1}_{\{0,1\}}(\mathfrak{C})C^{1,2}([0,T]\times D^{d})+ \textbf{1}_{\{+\infty\}}(\mathfrak{C})C_{0}^{1,2}([0,T]\times D^{d})\text{ and }\tilde{C}(\mathfrak{C})=\left\{\begin{array}{ll}0,&\text{if } \mathfrak{C}=+\infty,\\ 1,&\text{if }\mathfrak{C}=1,\\ 0,&\text{if }\mathfrak{C}=0.\end{array}\right. \tag{18}\]
**Remark 2**.: _In Definition 2, if \(\mathfrak{C}=0\), we say that \((\rho,\varrho)\) has Neumann boundary conditions, if \(\mathfrak{C}=1\), we say that \((\rho,\varrho)\) has Robin boundary conditions, and if \(\mathfrak{C}=+\infty\), we say that \((\rho,\varrho)\) has Dirichlet boundary conditions that we interpret as \((\rho,\varrho)(t,u)=d(u)\), \(u\in\{0,1\}\times\mathbb{T}^{d-1},\) where for \(u=(u_{1},\tilde{u})\in\{0,1\}\times\mathbb{T}^{d-1}\), \(\{0,1\}\times\mathbb{T}^{d-1}\),_
\[d(u)=\left\{\begin{array}{ll}\sum_{v\in\mathcal{V}}(\alpha_{v}(\tilde{u}),v_{1 }\alpha_{v}(\tilde{u}),\ldots,v_{d}\alpha_{v}(\tilde{u})),&\text{ if }u_{1}=0,\\ \sum_{v\in\mathcal{V}}(\beta_{v}(\tilde{u}),v_{1}\beta_{v}(\tilde{u}),\ldots,v_{d }\beta_{v}(\tilde{u})),&\text{ if }u_{1}=1.\end{array}\right. \tag{19}\]
Our proof boils down to applying carefully the entropy method developed by [13]. To that end we need to prove the uniqueness of the weak solutions of our hydrodynamic equations. We could not cover all the cases but we refer the reader to Appendix A for a proof.
### Hydrodynamic Limit
Let \(\mathcal{M}_{+}\) be the space of finite positive measures on \(D^{d}\) endowed with the weak topology and let \(\mathcal{M}\) be the space of bounded variation signed measures on \(D^{d}\) endowed with the weak topology. Let \(\mathcal{M}_{+}\times\mathcal{M}^{d}\) be the Cartesian product of these spaces endowed with the product topology, which is metrizable, since the space of continuous functions defined on \(D^{d}\), denoted by \(C(D^{d})\) is separable, and by the Riesz-Markov theorem, we have \(C(D^{d})^{*}=\mathcal{M}\).
Let \(\theta\geq 0\) and recall that \(\{\eta_{t,\theta}\}_{t\geq 0}\) is the Markov process with generator \(\mathcal{L}_{N,\theta}\). For \(k=0,\ldots,d\), let \(\pi_{t,\theta}^{k,N}\) denote the empirical measure associated to the \(k\)-th quantity of interest:
\[\pi_{t,\theta}^{k,N}(du)=\frac{1}{N^{d}}\sum_{x\in D^{d}_{N}}I_{k}(\eta_{t, \theta}(x))\delta_{\frac{x}{N}}(du), \tag{19}\]
where \(\delta_{u}(du)\) stands for the Dirac measure concentrated at \(u\in D^{d}\). We let \(\langle\pi_{t,\theta}^{k,N},G\rangle\) denote the integral of a test function \(G\) with respect to the empirical measure \(\pi_{t,\theta}^{k,N}\), and \(\langle f,g\rangle_{\nu}\) denote the inner product in \(L^{2}(D^{d},\nu)\) between \(f\) and \(g\): \(\langle f,g\rangle_{\nu}=\int_{D^{d}}fg\,d\nu\).
Let \(\mathcal{D}([0,T],\,\mathcal{M}_{+}\times\mathcal{M}^{d})\) be the set of right continuous functions with left limits taking values in \(\mathcal{M}_{+}\times\mathcal{M}^{d}\) and endowed with the uniform topology. We consider the sequence \((\mathbb{Q}_{N,\theta})_{N}\) of probability measures on \(\mathcal{D}([0,T],\,\mathcal{M}_{+}\times\mathcal{M}^{d})\) that corresponds to the Markov process \(\pi_{t,\theta}^{N}=(\pi_{t,\theta}^{0,N},\ldots,\pi_{t,\theta}^{d,N})\) starting from a probability measure \(\mu^{N}\). At this point we need to fix initial measurable profiles \(\rho_{0}:D^{d}\to\mathbb{R}_{+}\) and \(\varrho_{0}:D^{d}\to\mathbb{R}^{d}\), where \(\varrho_{0}=(\varrho_{0,1},\ldots,\varrho_{0,d})\) and an initial distribution \((\mu^{N})_{N}\) on \(X_{N}\). Before introducing the main result, let us define some notions that will play an important role in what follows.
**Definition 3** (Finite energy).: _We say that \((\rho,\varrho)\in L^{2}([0,T],(L^{2}(D^{d}))^{d+1})\) has finite energy if its components belong to \(L^{2}([0,T],\mathcal{H}^{1}(D^{d}))\), i.e., if \(\nabla\rho\) and \(\nabla\varrho_{k}\) are measurable functions such that for \(k\in\{1,\ldots,d\}\)_
\[\int_{0}^{T}ds\left(\int_{D^{d}}\|\nabla\rho(s,u)\|^{2}du\right)<\infty,\quad \int_{0}^{T}ds\left(\int_{D^{d}}\|\nabla\varrho_{k}(s,u)\|^{2}du\right)<\infty.\]
**Definition 4**.: _We say that a sequence of probability measures \((\mu^{N})_{N}\) on \(X_{N}\) is associated to the density profile \(\rho_{0}\) and to the momentum profile \(\varrho_{0}\), where we have \(Im((\rho_{0},\varrho_{0}))\subset\mathfrak{U}\), if, for every \(G\in C(D^{d})\) and for every \(\delta>0\),_
\[\lim_{N\to\infty}\mu^{N}\left[\eta:\left|\frac{1}{N^{d}}\sum_{x\in D^{d}_{N}} G\left(\tfrac{x}{N}\right)I_{0}(\eta(x))-\int_{D^{d}}G(u)\rho_{0}(u)du\right|> \delta\right]=0,\]
_and for every \(1\leq k\leq d\),_
\[\lim_{N\to\infty}\mu^{N}\left[\eta:\left|\frac{1}{N^{d}}\sum_{x\in D^{d}_{N}} G\left(\tfrac{x}{N}\right)I_{k}(\eta(x))-\int_{D^{d}}G(u)\varrho_{0,k}(u)du \right|>\delta\right]=0.\]
**Theorem 1**.: _Let \(\rho_{0}\) and \(\varrho_{0}\) be measurable functions, also let \((\mu^{N})_{N}\) be a sequence of probability measures on \(X_{N}\) associated to the profile \((\rho_{0},\varrho_{0})\). Then, for every \(t\in[0,T]\), for every function \(G\in C(D^{d})\), and for every \(\delta>0\),_
\[\lim_{N\to\infty}\mathbb{P}_{\mu^{N}}\left[\eta.\in\mathcal{D}([0,T],X_{N}): \left|\frac{1}{N^{d}}\sum_{x\in D^{d}_{N}}G\left(\tfrac{x}{N}\right)I_{0}( \eta_{tN^{2},\theta}(x))-\int_{D^{d}}G(u)\rho(t,u)\,du\right|>\delta\right]=0,\]
_and for \(1\leq k\leq d\),_
\[\lim_{N\to\infty}\mathbb{P}_{\mu^{N}}\left[\eta.\in\mathcal{D}([0,T],X_{N}): \left|\frac{1}{N^{d}}\sum_{x\in D^{d}_{N}}G\left(\tfrac{x}{N}\right)I_{k}( \eta_{tN^{2},\theta}(x))-\int_{D^{d}}G(u)\varrho_{k}(t,u)\,du\right|>\delta \right]=0,\]
_where \((\rho,\varrho)\) has finite energy (see Definition 3) and it is the unique weak solution of (16) as given in Definition 2 with \(\mathfrak{C}=+\infty\), if \(0\leq\theta<1\); \(\mathfrak{C}=1\), if \(\theta=1\); or \(\mathfrak{C}=0\), if \(\theta>1\) and \(d=1\)._
**Remark 3**.: _In the case in which \(\theta<0\), one can use the same arguments as the ones in the proof of Theorem 1, but taking test functions in \(C^{1}([0,T])\otimes C_{c}^{2}(D^{d})\), that is, the functions of class \(C^{1}\) in time, and of class \(C^{2}\) and compactly supported in space. But also requiring that the weak solutions also satisfy the Dirichlet conditions, almost surely in time. This can be done in a similar way as in [3]. The details are left to the reader._
To prove Theorem 1, we first prove a compactness result in Section 3, more precisely, we prove the tightness of the sequence \((\mathbb{Q}_{N,\theta})_{N}\). We then prove in Section 4 that all the limit points of \((\mathbb{Q}_{N,\theta})_{N}\) are concentrated on measures that are absolutely continuous with respect to the Lebesgue measure, and whose Radon-Nikodym derivatives are weak solutions to (16). To such an end we will require some auxiliary results that are given in Sections 5 and 6. Finally, uniqueness of the weak solutions are obtained in Appendix A, which in turn gives us the weak convergence and convergence in probability of \((\mathbb{Q}_{N,\theta})_{N}\) to \(\mathbb{Q}_{\theta}^{*}\) as \(N\to\infty\), where the measure \(\mathbb{Q}_{\theta}^{*}\) is a Dirac measure concentrated on \((\rho,\varrho)du\), and \((\rho,\varrho)\) is the unique weak solution to (16).
## 3 Tightness
In this section, we show that the sequence of probability measures \((\mathbb{Q}_{N,\theta})_{N}\) is tight in the Skorohod space \(\mathcal{D}([0,T],\,\mathcal{M}_{+}\times\mathcal{M}^{d})\). In order to do that, we invoke the Aldous' criterion and from [14, Chapter 4, Proposition 1.7] it is enough to show that for every function \(G\) in a dense subset of \(C(D^{d})\) with respect to the uniform topology, the sequence of measures that correspond to the real processes \(\langle\pi_{t,\theta}^{k,N}\,,G\rangle\), is tight. To such an end, we need to prove two things. First:
\[\lim_{A\to+\infty}\lim_{N\to+\infty}\mathbb{P}_{\mu^{N}}\Big{(}|\langle\pi_{t,\theta}^{k,N},G\rangle|>A\Big{)}=0,\text{ for each }k\in\{1,\ldots,d\}. \tag{20}\]
But this follows from Chebychev's inequality and the fact that for the exclusion type dynamics, the number of particles per site is at most one for each fixed velocity. Second, we need to prove that for all \(\varepsilon>0\) and any function \(G\) in a dense subset of \(C(D^{d})\), with respect to the uniform topology:
\[\lim_{\theta\to 0}\limsup_{N\to+\infty}\sup_{\begin{subarray}{c}\tau\in \mathcal{I}\\ t\leq\delta\end{subarray}}\mathbb{P}_{\mu^{N}}\Big{(}\eta:|\langle\pi_{\tau+ t,\theta}^{k,N},G\rangle-\langle\pi_{t,\theta}^{k,N},G\rangle|>\varepsilon\Big{)}=0, \tag{21}\]
where \(\mathfrak{T}_{T}\) denotes the set of stopping times with respect to the canonical filtration bounded by \(T\). Recall that it is enough to prove the assertion for functions \(G\) in a dense subset of \(C(D^{d})\) with respect to the uniform topology. We will split the proof into two cases: By Dynkin's formula, see, for example, [14, Appendix A1, Lemma 5.1], for each \(k\in\{1,\ldots,d\}\)
\[M_{t,\theta}^{N,k}(G)=\langle\pi_{t,\theta}^{k,N},G\rangle-\langle\pi_{0, \theta}^{k,N},G\rangle-\int_{0}^{t}(\mathcal{L}_{N}+\partial_{s})\langle\pi_{s,\theta}^{k,N}\,,G\rangle\,ds \tag{22}\]
is a martingale with respect to the natural filtration \(\mathcal{F}_{t,\theta}=\sigma(\eta_{s,\theta},s\leq t)\). Then, given \(\tau\in\mathfrak{T}_{T}\),
\[\mathbb{P}_{\mu^{N}}\Big{(}\eta:\Big{|}\langle\pi_{\tau+t,\theta }^{k,N},G\rangle-\langle\pi_{\tau,\theta}^{k,N},G\rangle|>\varepsilon\Big{)}\] \[= \,\mathbb{P}_{\mu^{N}}\Big{(}\eta:|M_{\tau,\theta}^{N,k}(G)-M_{ \tau+t,\theta}^{N,k}(G)+\int_{\tau}^{\tau+t}\mathcal{L}_{N}\langle\pi_{s, \theta}^{k,N},G\rangle\,ds\Big{|}>\varepsilon\Big{)}\] \[\leq \,\mathbb{P}_{\mu^{N}}\Big{(}\eta:|M_{\tau,\theta}^{N,k}(G)-M_{ \tau+t,\theta}^{N,k}(G)\Big{|}>\frac{\varepsilon}{2}\Big{)}+\mathbb{P}_{\mu^{ N}}\Big{(}\eta:\Big{|}\int_{\tau}^{\tau+t}\mathcal{L}_{N}\langle\pi_{s, \theta}^{k,N},G\rangle\,ds\Big{|}>\frac{\varepsilon}{2}\Big{)}.\]
Applying Chebychev's inequality (resp. Markov's inequality) in the first (resp. second) term on the right-hand side of the last inequality, we can bound the previous expression from above by
\[\frac{2}{\varepsilon^{2}}\mathbb{E}_{\mu^{N}}\left[\Big{(}M_{\tau,\theta}^{N,k }(G)-M_{\tau+t,\theta}^{N,k}(G)\Big{)}^{2}\right]+\frac{2}{\varepsilon}\mathbb{ E}_{\mu^{N}}\left[\,\Big{|}\int_{\tau}^{\tau+t}\mathcal{L}_{N}\langle\pi_{s, \theta}^{k,N},G\rangle\,ds\Big{|}\,\right].\]
Therefore, in order to prove (21) it is enough to show that
\[\lim_{\delta\to 0}\limsup_{N\to+\infty}\sup_{\begin{subarray}{c}\tau\in \mathcal{I}\\ t\leq\delta\end{subarray}}\mathbb{E}_{\mu^{N}}\Big{[}\,\Big{|}\int_{\tau}^{ \tau+t}\mathcal{L}_{N}\langle\pi_{s,\theta}^{k,N},G\rangle\,ds\Big{|}\,\Big{]}=0, \tag{23}\]
\[\lim_{\delta\to 0}\limsup_{N\to+\infty}\sup_{\begin{subarray}{c}\varepsilon\in \mathcal{I}\\ t\leq\delta\end{subarray}}\sup_{\begin{subarray}{c}\varepsilon\in\mathcal{I} \\ t\leq\delta\end{subarray}}\mathbb{E}_{\mu^{N}}\Big{[}\Big{(}M^{N,k}_{\tau, \theta}(G)-M^{N,k}_{\tau+\ell,\theta}(G)\Big{)}^{2}\Big{]}=0. \tag{24}\]
Let us start by proving (23) for functions \(G\in C^{2}(D^{d})\), since by a standard \(L^{1}\) procedure it is easy to extend it to functions \(G\in C(D^{d})\). Now, we will show that there exists a constant \(C\) such that \(\mathcal{L}_{N,\theta}\langle\pi^{k,N}_{s,\theta},G\rangle\leq C\) for any \(s\leq T\). For that purpose, note that
\[|\mathcal{L}_{N,\theta}\langle\pi^{k,N}_{s,\theta},G\rangle|\leq|N^{2} \mathcal{L}_{N}^{ex,1}\langle\pi^{k,N}_{s,\theta},G\rangle|+|N^{2}\mathcal{L }_{N}^{ex,2}\langle\pi^{k,N}_{s,\theta},G\rangle|+|N^{2}\mathcal{L}_{N}^{c} \langle\pi^{k,N}_{s,\theta},G\rangle|+|N^{2}\mathcal{L}_{N,\theta}^{b} \langle\pi^{k,N}_{s,\theta},G\rangle|.\]
Let us bound this separately. Note that,
\[|N^{2}\mathcal{L}_{N}^{ex,1}\langle\pi^{k,N}_{s,\theta},G\rangle| \leq\Big{|}\langle\pi^{k,N}_{s,\theta},\frac{1}{2}\Delta_{N}G \rangle\Big{|}+\Big{|}\frac{N}{2N^{d}}\sum_{\begin{subarray}{c}\varepsilon\in D ^{d}_{N}\\ x_{1}=1\end{subarray}}\sum_{\begin{subarray}{c}v\in\mathcal{V}\\ x_{1}=1\end{subarray}}v_{k}\eta_{s,\theta}(1,\tilde{x},v)\partial_{x_{1}}^{N, +}G(0,\tilde{x})\Big{|}\] \[\quad+\Big{|}\frac{N}{2N^{d}}\sum_{\begin{subarray}{c}\varepsilon \in D^{d}_{N}\\ x_{1}=N-1\end{subarray}}\sum_{\begin{subarray}{c}v\in\mathcal{V}\\ v\in\mathcal{V}\end{subarray}}v_{k}\eta_{s,\theta}(N-1,\tilde{x},v)\partial_{x _{1}}^{N,-}G(1,\tilde{x})\Big{|} \tag{25}\] \[\leq\frac{1}{2}\|G^{\prime\prime}\|_{\infty}+\frac{CNN^{d-1}}{2N ^{d}}\|G^{\prime}\|_{\infty}+\frac{CNN^{d-1}}{2N^{d}}\|G^{\prime}\|_{\infty}= \frac{1}{2}\|G^{\prime\prime}\|_{\infty}+C\|G^{\prime}\|_{\infty}.\]
Similarly, using the fact that \(p\) has finite range, we have that
\[|N^{2}\mathcal{L}_{N}^{ex,2}\langle\pi^{k,N}_{s,\theta},G\rangle| \leq\Big{|}\frac{1}{N^{d}}\sum_{j=1}^{d}\sum_{x\in D^{d}_{N}} \langle\partial_{x_{j}}^{N}G\rangle\left(\tfrac{\pi}{N}\right)\sum_{v\in \mathcal{V}}v_{k}\sum_{z\in\mathbb{Z}^{d}}p(z,v)z_{j}\eta_{s,\theta}(x,v)(1- \eta_{s,\theta}(x+z,v))\Big{|}\] \[\leq\frac{\tilde{C}N^{d}\|G^{\prime}\|_{\infty}}{N^{d}}=\tilde{C }\|G^{\prime}\|_{\infty}. \tag{26}\]
Also,
\[|N^{2}\mathcal{L}_{N,\theta}^{b}\langle\pi^{k,N}_{s,\theta},G\rangle| \leq\Big{|}\frac{N^{2}}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c} \varepsilon\in D^{d}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}G\left(\tfrac{1}{N},\tfrac{ \tilde{x}}{N}\right)\left[\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right)-\eta_ {s,\theta}(1,\tilde{x},v)\right]\Big{|} \tag{27}\] \[+\Big{|}\frac{N^{2}}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c} \varepsilon\in D^{d}_{N}\\ x_{1}=N-1\end{subarray}}\sum_{v\in V}v_{k}G\left(\tfrac{N-1}{N},\tfrac{ \tilde{x}}{N}\right)\left[\beta_{v}\left(\tfrac{\tilde{x}}{N}\right)-\eta_ {s,\theta}(N-1,\tilde{x},v)\right]\Big{|}\] \[\leq\frac{CN^{2}N^{d-1}}{N^{d}N^{\theta}}\|G\|_{\infty}+\frac{CN ^{2}N^{d-1}}{N^{d}N^{\theta}}\|G\|_{\infty}=2CN^{1-\theta}\|G\|_{\infty}.\]
Now observe that,
\[N^{2}\mathcal{L}_{N}^{c}\langle\pi^{k,N}_{s,\theta},G\rangle=\frac{N^{2}}{N^{ d}}\sum_{x\in D^{d}_{N}}\sum_{v\in\mathcal{V}}v_{k}G\left(\tfrac{\pi}{N}\right) \mathcal{L}_{N}^{c}(\eta_{s,\theta}(x,v)).\]
Since the operator is linear, we just need to compute \(\mathcal{L}_{N}^{c}(\eta_{s,\theta}(x,v))\). For \(f(\eta)=\eta(x,v)\), we have that
\[(\mathcal{L}_{N}^{c}f)(\eta) =\sum_{y\in D^{d}_{N}}\sum_{q\in Q}p_{c}(y,q,\eta)[f(\eta^{y,q}) -f(\eta)]=\sum_{q\in Q}p_{c}(x,q,\eta)[\eta(x,v_{j+2})-\eta(x,v)]=0.\]
Therefore,
\[N^{2}\mathcal{L}_{N}^{c}\langle\pi^{k,N}_{s,\theta},G\rangle=0. \tag{28}\]
Now, let \(\theta\geq 1\). From (25), (26), (27) and (28) we have that \(|\mathcal{L}_{N,\theta}\langle\pi^{k,N}_{s,\theta},G\rangle|\leq C\) which proves (23).
Let, now, \(0\leq\theta<1\). If we try to apply the same strategy used for \(\theta\geq 1\) we will run into trouble when trying to control the modulus of continuity of \(\int_{0}^{t}N^{2}\mathcal{L}_{N,\theta}^{b}\langle\pi^{k,N}_{s,\theta},G\rangle\,ds\), because the expression in (27) can explode when \(N\to\infty\). We will prove (21) first for functions \(G\in C_{c}^{2}(D^{d})\) and then we can
extend it, by a \(L^{1}\) approximation procedure which is explained below, to functions \(G\in C^{1}(D^{d})\). In this case it holds
\[\begin{array}{ll}|N^{2}\mathcal{L}_{N}^{ex,1}\langle\pi_{s,\theta}^{k,N},G \rangle|&\leq|\langle\pi_{s,\theta}^{k,N},\frac{1}{2}\Delta_{N}G\rangle|+ \Big{|}\frac{N}{2N^{d}}\sum_{\begin{subarray}{c}x\in D_{s}^{d}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}\eta_{s,\theta}(1,\tilde{x},v) \partial_{x_{1}}^{N,+}G(0,\tilde{x})\Big{|}\\ &+\Big{|}\frac{N}{2N^{d}}\sum_{\begin{subarray}{c}x\in D_{s}^{d}\\ x_{1}=N-1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}\eta_{s,\theta}(N-1,\tilde{ x},v)\partial_{x_{1}}^{N,-}G(1,\tilde{x})\Big{|}\leq\frac{1}{2}\|G^{\prime \prime}\|_{\infty}.\end{array} \tag{29}\]
Also,
\[\begin{array}{ll}|N^{2}\mathcal{L}_{N}^{b}\langle\pi_{s,\theta}^{k,N},G \rangle|&\leq\Big{|}\frac{N^{2}}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c}x \in D_{s}^{d}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}G\left(\frac{1}{N},\frac{ \tilde{x}}{N}\right)[\alpha_{v}\left(\frac{\tilde{x}}{N}\right)-\eta_{s,\theta }(1,\tilde{x},v)]\Big{|}\\ &+\Big{|}\frac{N^{2}}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c}x \in D_{s}^{d}\\ x_{1}=N-1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}G\left(\frac{N-1}{N},\frac{ \tilde{x}}{N}\right)[\beta_{v}\left(\frac{\tilde{x}}{N}\right)-\eta_{s,\theta }(N-1,\tilde{x},v)]\Big{|}.\end{array}\]
So,
\[|N^{2}\mathcal{L}_{N,\theta}^{b}\langle\pi_{s,\theta}^{k,N},G\rangle| \leq\Big{|}\frac{N}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c}x \in D_{s}^{d}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}\,N\left[G\left(\frac{1}{N}, \frac{\tilde{x}}{N}\right)-G\left(\frac{0}{N},\frac{\tilde{x}}{N}\right) \right][\alpha_{v}\left(\frac{\tilde{x}}{N}\right)-\eta_{s,\theta}(1,\tilde{ x},v)]\Big{|}\] \[+\Big{|}\frac{N}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c}x \in D_{s}^{d}\\ x_{1}=N-1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}\,N\left[G\left(\frac{N-1}{N },\frac{\tilde{x}}{N}\right)-G\left(\frac{N}{N},\frac{\tilde{x}}{N}\right) \right][\beta_{v}\left(\frac{\tilde{x}}{N}\right)-\eta_{s,\theta}(N-1,\tilde {x},v)]\Big{|}\] \[\leq\Big{|}\frac{N}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c}x \in D_{s}^{d}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}\,\partial_{x_{1}}^{N,+}G \left(\frac{0}{N},\frac{\tilde{x}}{N}\right)[\alpha_{v}\left(\frac{\tilde{x}}{ N}\right)-\eta_{s,\theta}(1,\tilde{x},v)]\Big{|}\] \[+\Big{|}\frac{N}{N^{d}N^{\theta}}\sum_{\begin{subarray}{c}x \in D_{s}^{d}\\ x_{1}=N-1\end{subarray}}\sum_{v\in\mathcal{V}}v_{k}\,\partial_{x_{1}}^{N,-}G \left(\frac{N}{N},\frac{\tilde{x}}{N}\right)[\beta_{v}\left(\frac{\tilde{x}}{ N}\right)-\eta_{s,\theta}(N-1,\tilde{x},v)]\Big{|}\] \[\leq CN^{1-d-\theta}\|G^{\prime}\|_{\infty}, \tag{30}\]
since \(G\in C_{c}^{2}(D^{d})\). Using (26), (28), (29) and (30) this finishes the proof of (23) for any \(\theta\geq 0\).
We will now prove (24). Since
\[\Big{(}M_{\tau,\theta}^{N,k}(G)-M_{\tau+t,\theta}^{N,k}(G)\Big{)}^{2}-\int_{ \tau}^{\tau+t}\mathcal{L}_{N}[\langle\pi_{s,\theta}^{k,N},G\rangle]^{2}-2 \langle\pi_{s,\theta}^{k,N},G\rangle\mathcal{L}_{N}\langle\pi_{s,\theta}^{k,N },G\rangle ds\]
is a mean zero martingale. We obtain that
\[\mathbb{E}_{\mu^{N}}\Big{[}\Big{(}M_{\tau,\theta}^{N,k}(G)-M_{\tau+t,\theta}^{ N,k}(G)\Big{)}^{2}\Big{]}=\mathbb{E}_{\mu^{N}}\Big{[}\int_{\tau}^{\tau+t} \mathcal{L}_{N}[\langle\pi_{s,\theta}^{k,N},G\rangle]^{2}-2\langle\pi_{s, \theta}^{k,N},G\rangle\mathcal{L}_{N}\langle\pi_{s,\theta}^{k,N},G\rangle ds \Big{]}.\]
Note that, (24) holds if we show that
\[\int_{\tau}^{\tau+t}\mathcal{L}_{N}[\langle\pi_{s,\theta}^{k,N},G\rangle]^{2}-2 \langle\pi_{s,\theta}^{k,N},G\rangle\mathcal{L}_{N}\langle\pi_{s,\theta}^{k,N},G \rangle ds,\]
converges to zero uniformly in \(t\in[0,T]\), when \(N\to\infty\). Simple, albeit long, computations show that
\[N^{2}\mathcal{L}_{N}^{ex,1}\langle\pi_{s,\theta}^{k,N},G\rangle^{ 2}-2\langle\pi_{s,\theta}^{k,N},G\rangle N^{2}\mathcal{L}_{N}^{ex,1}\langle \pi_{s,\theta}^{k,N},G\rangle\] \[=\frac{1}{2N^{2d}}\sum_{v\in\mathcal{V}}\sum_{x\in D_{s}^{d}} \sum_{j=1}^{d}v_{k}^{2}\left[\eta_{s,\theta}(x,v)-\eta_{s,\theta}(x+e_{j},v)) \right]^{2}[\partial_{x_{j}}^{N}G\left(\frac{x}{N}\right)]^{2}\leq\frac{C}{N^{d} }\|G^{\prime}\|_{\infty}^{2}.\]
We also have
\[N^{2}\mathcal{L}_{N}^{ex,2}\langle\pi_{s,\theta}^{k,N},G\rangle^{2} -2\langle\pi_{s,\theta}^{k,N},G\rangle N^{2}\mathcal{L}_{N}^{ex,2}\langle\pi_{s, \theta}^{k,N},G\rangle\] \[=\frac{1}{N^{2d+1}}\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d}} \sum_{w\in\mathcal{Z}^{d}}v_{h}^{2}\,\eta_{s,\theta}(x,v)(1-\eta_{s,\theta}(x +w,v))p(w,v)w_{j}^{2}[\partial_{x_{j}}^{N}G\left(\tfrac{x}{N}\right)]^{2}\leq \frac{\tilde{C}}{N^{d+1}}\|G^{\prime}\|_{\infty}^{2}.\]
Moreover,
\[N^{2}\mathcal{L}_{N,\theta}^{b}\langle\pi_{s,\theta}^{k,N},G \rangle^{2}-2\langle\pi_{s,\theta}^{k,N},G\rangle N^{2}\mathcal{L}_{N,\theta} ^{b}\langle\pi_{s,\theta}^{k,N},G\rangle\] \[=\frac{N^{2}}{N^{2d}}\sum_{\stackrel{{ x\in D_{N}^{d}}}{{x_{1}=1}}}\sum_{v\in \mathcal{V}}\left[\frac{\alpha_{v}(\tfrac{x}{N})(1-\eta_{s,\theta}(x,v))+(1- \alpha_{v}(\tfrac{x}{N}))\eta_{s,\theta}(x,v)}{N^{\theta}}\right]v_{k}^{2} \,[G\left(\tfrac{1}{N},\tfrac{\tilde{x}}{N}\right)]^{2}\] \[+\frac{N^{2}}{N^{2d}}\sum_{\stackrel{{ x\in D_{N}^{d}}}{{x_{1}=N-1}}}\sum_{v\in \mathcal{V}}\left[\frac{\beta_{v}(\tfrac{x}{N})(1-\eta_{s,\theta}(x,v))+(1- \beta_{v}(\tfrac{x}{N}))\eta_{s,\theta}(x,v)}{N^{\theta}}\right]v_{k}^{2}\,[ G\left(\tfrac{N-1}{N},\tfrac{\tilde{x}}{N}\right)]^{2}\leq\frac{1}{N^{d}}\|G\|_{ \infty}^{2}.\]
And doing a long but simple computation, we prove that
\[N^{2}\mathcal{L}_{N}^{c}\langle\pi_{s,\theta}^{k,N},G\rangle^{2}-2\langle\pi _{s,\theta}^{k,N},G\rangle N^{2}\mathcal{L}_{N}^{c}\langle\pi_{s,\theta}^{k,N },G\rangle=0.\]
This finishes the proof of tightness for any \(\theta\geq 0\).
## 4 Characterization of limit points
This section deals with the characterization of the limit of points of \((\mathbb{Q}_{N,\theta})_{N}\) for \(\theta\geq 0\). For simplicity we split the proof in three different regimes.
### Characterization of the limit points for \(\theta\in[0,1)\)
**Proposition 1**.: _If \(\mathbb{Q}_{\theta}^{*}\) is a limit point of \((\mathbb{Q}_{N,\theta})_{N\geq 1}\), then_
\[\mathbb{Q}^{*}\left[\pi_{\cdot}:\pi=(\rho,\varrho)du\text{ and }\mathfrak{F}_{+ \infty}((\rho,\varrho),t,G)=0\right]=1,\]
_for all \(t\in[0,T]\), \(\forall G\in C_{0}^{1,2}([0,T]\times D^{d})\) and \(\theta\in[0,1)\), where the functional \(\mathfrak{F}_{\cdot}\) was defined in (17)._
Proof.: It is enough to verify that, for \(\delta>0\) and \(G\in C_{0}^{1,2}([0,T]\times D^{d})\) fixed,
\[\mathbb{Q}_{\theta}^{*}\Bigg{[}\pi_{\cdot}:\pi=(\rho,\varrho)du\text{ and }\sup_{0\leq t\leq T}|\mathfrak{F}_{+\infty}((\rho,\varrho),t,G)|>\delta\Bigg{]}=0. \tag{31}\]
First, observe that \(\mathbb{Q}_{\theta}^{*}\Bigg{[}\pi_{\cdot}:\pi=(\rho,\varrho)du\Bigg{]}=1\). Now, we need to replace the term \(\chi(\theta_{v}(\lambda(\rho,\varrho)))\) in (17) by a functional of the empirical measure. To this end, for each \(\varepsilon>0\), let \(j_{\varepsilon}(\cdot)\), which we will use to obtain an approximation of the point evalution, be given by
\[j_{\varepsilon}(x)=(2\varepsilon)^{-d}\mathbb{1}_{[-\varepsilon,\varepsilon]^{ d}}(x),\quad x\in D^{d}.\]
Now, let \((\rho,\varrho)*j_{\varepsilon}(\cdot)=(\rho*j_{\varepsilon}(\cdot),\varrho_{1}*j _{\varepsilon}(\cdot),\ldots,\varrho_{d}*j_{\varepsilon}(\cdot))\). Now, we have that \((\rho,\varrho)*j_{\varepsilon}(\cdot)\to(\rho,\varrho)(\cdot)\) in \(L^{1}(D^{d})\) as \(\varepsilon\to 0\). Since \(\chi(\theta_{v}(\Lambda(\cdot)))\) is Lipschitz, we have that \(\chi(\theta_{v}(\Lambda((\rho,\varrho)*j_{\varepsilon}(\cdot))))\to\chi( \theta_{v}(\Lambda(((\rho,\varrho)(\cdot))))\) in \(L^{1}(D^{d})\). Thus, \(Q^{*}\) almost surely, for every \(v\in\mathcal{V}\),
\[\int_{0}^{t}\int_{D^{d}}\tilde{v}\cdot\chi(\theta_{v}(\Lambda((\rho,\varrho)*j_ {\varepsilon})))\sum_{i=1}^{d}v_{i}\frac{\partial G}{\partial u_{i}}(r,u)\,du \,dr\to\int_{0}^{t}\int_{D^{d}}\tilde{v}\cdot\chi(\theta_{v}(\Lambda(\rho, \varrho)))\sum_{i=1}^{d}v_{i}\frac{\partial G}{\partial u_{i}}(r,u)\,du\,dr,\]
as \(\varepsilon\to 0\). Let, now, \(\mathfrak{F}_{\mathfrak{C}}^{\varepsilon}((\rho,\varrho),t,G)\) be the expression obtained after we replace \(\chi(\theta_{v}(\Lambda((\rho,\varrho))))\) by \(\chi(\theta_{v}(\Lambda((\rho,\varrho)*j_{\varepsilon})))\) in (16). Therefore, (31) will follow if we show that
\[\limsup_{\varepsilon\to 0}\mathbb{Q}_{\theta}^{*}\Bigg{[}\pi_{\cdot}:\pi=(\rho, \varrho)du\text{ and }\sup_{0\leq t\leq T}|\mathfrak{F}_{+\infty}^{\varepsilon}((\rho,\varrho),t,G)|> \delta\Bigg{]}=0. \tag{32}\]
By the same arguments in [14, p.77], we have that
\[\mathbf{I}^{[Ne]}(x,\eta)=C_{N,\varepsilon}(\pi_{\theta}^{N}*j_{\varepsilon}) \left(\frac{x}{N}\right),\]
where \(\pi_{\theta}^{N}*j_{\varepsilon}(\cdot)=\left(\pi_{\theta}^{0,N}*j_{\varepsilon }(\cdot),\pi_{\theta}^{1,N}*j_{\varepsilon}(\cdot),\ldots,\pi_{\theta}^{d,N}*j _{\varepsilon}(\cdot)\right)\) and \(C_{N,\varepsilon}=1+O(N^{-1})\).
Since the set considered in (32) is an open set with respect to Skorohod topology \(\mathcal{D}([0,T],X_{N})\), we can use the Portmanteau's Theorem directly and bound the last probability by
\[\limsup_{\varepsilon\to 0}\liminf_{N\to\infty}\mathbb{Q}_{N} \left[\pi_{\cdot}:\sup_{0\leq t\leq T}\left|\langle\pi_{t}^{k},G_ {t}\rangle-\langle\pi_{0}^{k},G_{0}\rangle\right.\right.\] \[\left.\left.-\int_{0}^{t}\left\langle\pi_{\tau}^{k},\left( \partial_{r}G(r,u)+\frac{1}{2}\Delta G\right)\right\rangle\,dr\right.\right.\] \[\left.\left.+\,\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T }^{d-1}}d(1,\tilde{u})\frac{\partial G}{\partial u_{1}}(r,1,\tilde{u})\,dS\, dr-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}}d(0,\tilde{u})\frac{ \partial G}{\partial u_{1}}(r,0,\tilde{u})\,dS\,dr\right.\right.\] \[\left.\left.-\,\int_{0}^{t}\int_{D^{d}}\sum_{v\in\mathcal{V}} \tilde{v}\cdot\chi(\theta_{v}(\Lambda(\mathbf{I}^{[Ne]}(u,\eta))))\sum_{i=1}^ {d}v_{i}\frac{\partial G}{\partial u_{i}}(r,u)\,du\,dr\right|>\delta\right].\]
Summing and subtracting \(\int_{0}^{t}N^{2}\mathcal{L}_{N}\langle\pi_{r}^{k,N},G_{r}\rangle\,dr\) in the expression above, we can bound it by the sum of
\[\liminf_{N\to\infty}\mathbb{P}_{\mu^{N}}\left[\sup_{0\leq t\leq T}|M_{t}^{N,k} (G)|>\frac{\delta}{2}\right] \tag{33}\]
and
\[\limsup_{\varepsilon\to 0}\limsup_{N\to\infty}\mathbb{P}_{\mu^{N}} \left[\eta_{\cdot}:\sup_{0\leq t\leq T}\left|\int_{0}^{t}N^{2} \mathcal{L}_{N}\langle\pi_{r}^{k,N},G_{r}\rangle\,dr-\frac{1}{2}\int_{0}^{t} \langle\pi_{r}^{k,N},\Delta G\rangle\,dr\right. \tag{34}\] \[\left.-\int_{0}^{t}\int_{D^{d}}\sum_{v\in\mathcal{V}}\tilde{v} \cdot\chi(\theta_{v}(\Lambda(\mathbf{I}^{[Ne]}(u,\eta))))\sum_{i=1}^{d}v_{i} \frac{\partial G}{\partial u_{i}}(r,u)\,du\,dr\right.\] \[\left.+\,\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{d-1 }}d(1,\tilde{u})\frac{\partial G}{\partial u_{1}}(r,1,\tilde{u})\,dS\,dr\right.\] \[\left.-\,\left.\,\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times \mathbb{T}^{d-1}}d(0,\tilde{u})\frac{\partial G}{\partial u_{1}}(r,0,\tilde{u })\,dS\,dr\right|>\frac{\delta}{2}\right].\]
Since
\[M_{t}^{N,k}(G) =\langle\pi_{t}^{k,N},G\rangle-\langle\pi_{0}^{k,N},G\rangle- \int_{0}^{t}\langle\pi_{s}^{k,N},\partial_{s}G\rangle\,ds-\frac{1}{2}\int_{0} ^{t}\langle\pi_{s}^{k,N},\Delta_{N}G\rangle\,ds \tag{35}\] \[+\frac{1}{2}\int_{0}^{t}\frac{1}{N^{d-1}}\sum_{\stackrel{{ x\in D^{d}_{s}}}{{x_{1}=N-1}}}I_{k}(\eta_{x}(s))\left\{N\left[G\left( \frac{N}{N},\frac{\tilde{x}}{N}\right)-G\left(\frac{N-1}{N},\frac{\tilde{x}}{ N}\right)\right]\right\}\,ds\] \[-\frac{1}{2}\int_{0}^{t}\frac{1}{N^{d-1}}\sum_{\stackrel{{ x\in D^{d}_{s}}}{{x_{1}=N-1}}}I_{k}(\eta_{x}(s))\,\left\{-N\left[G\left( \frac{0}{N},\frac{\tilde{x}}{N}\right)-G\left(\frac{1}{N},\frac{\tilde{x}}{ N}\right)\right]\right\}\,ds\] \[-\int_{0}^{t}\frac{1}{N^{d}}\sum_{\stackrel{{ x\in D^{d}_{s}}}{{x_{1}=N-1}}}\sum_{v\in\mathcal{V}}v_{k}\,NG\left( \frac{N-1}{N},\frac{\tilde{x}}{N}\right)\left[\partial_{v}(\frac{\tilde{x}}{ N})-\eta_{sN^{2}}(N-1,\tilde{x},v)\right]ds\]
where \(\tau_{x}\) stands for the translation by \(x\) on the state space \(X_{N}\) so that \((\tau_{x}\eta)(y,v)=\eta(x+y,v)\) for all \(x,y\in\mathbb{Z}^{d},v\in\mathcal{V}\), and \(W^{N,n_{\prime}}_{j,k}\) is given by:
\[W^{N,n_{\mu}}_{j,k}=\sum_{v\in\mathcal{V}}v_{k}\sum_{z\in\mathbb{Z}^{d}}p(z,v) z_{j}\eta_{\ast}(0,v)[1-\eta_{\ast}(z,v)],\]
where \(v_{0}=1\) and \(\pi^{k,N}_{t}\) is the empirical measure defined in (19). Now, let us bound the expression inside the probability in equation (34), by the sum of the following terms
\[\sup_{0\leq t\leq T}\Bigl{|}\int_{0}^{t}\Big{[}\frac{1}{2N^{d}} \sum_{x\in D^{k}_{N}}I_{k}(\eta_{\ast}(r))\Delta_{N}G\left(\tfrac{x}{N}\right) -\frac{1}{2N^{d}}\sum_{x\in D^{k}_{N}}I_{k}(\eta_{\ast}(r))\Delta G\left( \tfrac{x}{N}\right)\Big{]}\,dr\Bigr{|}, \tag{36a}\] \[\sup_{0\leq t\leq T}\Bigl{|}\int_{0}^{t}\frac{1}{2N^{d-1}}\sum_{ \begin{subarray}{c}x\in D^{d}_{N}\\ x_{1}=N-1\end{subarray}}I_{k}(\eta_{(N-1,\tilde{x})}(r))\left[\partial^{N,+}_{ x_{1}}G\left(\tfrac{N-1}{N},\tfrac{\tilde{x}}{N}\right)-\partial_{x_{1}}G_{r}(1, \tilde{x})\right]\,dr\Bigr{|},\] (36b) \[\sup_{0\leq t\leq T}\Bigl{|}\frac{1}{2}\int_{0}^{t}\partial_{x_{ 1}}G_{r}(1,\tilde{x})\left[I_{k}(\eta_{(N-1,\tilde{x})}(r))-d(1,\tilde{x}) \right]\,dr\Bigr{|},\] (36c) \[\sup_{0\leq t\leq T}\Bigl{|}\int_{0}^{t}\frac{1}{2N^{d-1}}\sum_{ \begin{subarray}{c}x\in D^{d}_{N}\\ x_{1}=1\end{subarray}}I_{k}(\eta_{(1,\tilde{x})}(r))\left[\partial^{N,-}_{x_{1} }G\left(\tfrac{1}{N},\tfrac{\tilde{x}}{N}\right)-\partial_{x_{1}}G_{r}(0, \tilde{x})\right]\,dr\Bigr{|},\] (36d) \[\sup_{0\leq t\leq T}\Bigl{|}\frac{1}{2}\int_{0}^{t}\partial_{x_{ 1}}G_{r}(0,\tilde{x})\left[I_{k}(\eta_{(1,\tilde{x})}(r))-d(0,\tilde{x})\right] \,dr\Bigr{|},\] (36e) \[\sup_{0\leq t\leq T}\Bigl{|}\int_{0}^{t}\frac{N^{1-\theta}}{N^{d}} \sum_{\begin{subarray}{c}x\in D^{d}_{N}\\ x_{1}=1\end{subarray}}\partial^{N,-}_{x_{1}}G\left(\tfrac{1}{N},\tfrac{\tilde{x} }{N}\right)\Big{[}\sum_{v\in\mathcal{V}}\alpha_{v}(\tfrac{\tilde{x}}{N})-I_{k }(\eta_{(1,\tilde{x})})\Big{]}\,dr\Bigr{|},\] (36f) \[\sup_{0\leq t\leq T}\Bigl{|}\int_{0}^{t}\frac{N^{1-\theta}}{N^{d}} \sum_{\begin{subarray}{c}x\in D^{d}_{N}\\ x_{1}=N-1\end{subarray}}\partial^{N,+}_{x_{1}}G\left(\tfrac{N-1}{N},\tfrac{ \tilde{x}}{N}\right)\Big{[}\sum_{v\in\mathcal{V}}\beta_{v}(\tfrac{\tilde{x}}{N })-I_{k}(\eta_{(N-1,\tilde{x})})\Big{]}\,dr\Bigr{|}\] (36g) \[\sup_{0\leq t\leq T}\Bigl{|}\int_{0}^{t}\Big{[}\frac{1}{N^{d}} \sum_{j=1}^{d}\sum_{x\in D^{d}_{N}}(\partial^{N}_{x_{j}}G)(\tfrac{x}{N})\tau_{ x}W_{j,k}-\int_{D^{d}}\sum_{v\in\mathcal{V}}\tilde{v}\cdot\chi(\theta_{v}(\Lambda( \mathbf{I}^{[N\varepsilon]}(u,\eta))))\sum_{i=1}^{d}v_{i}\frac{\partial G}{ \partial x_{i}}\,dx\Big{]}dr\Bigr{|} \tag{36h}\]
Using Taylor expansion, we obtain that (36a), (36b) and (36d) converges to zero as \(N\to\infty\). Furthermore, since \(G\in C^{1,2}_{0}([0,T]\times D^{d})\), we have that (36c) and (36e) is equal to zero and using the Replacement Lemmas (resp. Proposition 4 and Lemma 5) in (36f, 36g) and (36h), it is easy to see that terms above converges to zero, as \(N\to\infty\), and then \(\varepsilon\to 0\). This concludes the proof.
### Characterization of limit points for \(\theta=1\)
We begin by fixing some notations used along this subsection. We denote by
\[\overrightarrow{\eta}^{\ell}(x_{1},\tilde{x},v):=\frac{1}{\ell}\sum_{y\in \overrightarrow{\Lambda}^{\ell}_{x_{1}}}I_{k}(\eta_{\ast}(y,\tilde{x},v))\quad \text{ and }\quad\overleftarrow{\eta}^{\ell}(x_{1},\tilde{x},v):=\frac{1}{\ell}\sum_{y \in\overrightarrow{\Lambda}^{\ell}_{x_{1}}}I_{k}(\eta_{\ast}(y,\tilde{x},v)), \tag{37}\]
where \(\overleftarrow{\Lambda}^{\ell}_{x_{1}}:=\{x_{1}-\ell+1,\ldots,x_{1}\}\) (resp. \(\overrightarrow{\Lambda}^{\ell}_{x_{1}}:=\{x_{1},\ldots,x_{1}+\ell-1\}\)).
**Proposition 2**.: _If \(\mathbb{Q}^{\ast}_{\theta}\) is a limit point of \((\mathbb{Q}_{N,\theta})_{N\geq 1}\), then_
\[\mathbb{Q}^{\ast}_{\theta}\left[\pi:\pi=(\rho,\varrho)du\text{ and }\mathfrak{F}_{1}((\rho,\varrho),t,G)=0\right]=1,\]
_for all \(t\in[0,T],\)\(\forall G\in C^{1,2}([0,T]\times D^{d})\) and \(\theta=1\)._
Proof.: It is enough to verify that, for \(\delta>0\) and \(G\in C^{1,2}([0,T]\times D^{d})\) fixed,
\[\mathbb{Q}^{\ast}_{\theta}\left[\pi_{:}:\pi=(\rho,\varrho)du\text{ and }\sup_{0\leq t\leq T}|\mathfrak{F}_{1}((\rho,\varrho),t,G)|>\delta\right]=0.\]
Since \(\mathbb{Q}_{\theta}^{*}\Big{[}\pi:\pi=(\rho,\varrho)du\Big{]}=1\), we can follow the same strategy as for the \(\theta\in[0,1)\) case, and obtain that it is enough to prove that
\[\limsup_{\varepsilon\to 0}\mathbb{Q}_{\theta}^{*}\Bigg{[}\pi:\, \sup_{0\leq t\leq T}\Bigg{|}\int_{D^{4}}(\rho,\varrho)(t,u)G(t,u)\,du-\int_{D^{ 4}}(\rho,\varrho)(0,u)G(0,u)\,du \tag{38}\] \[\qquad\qquad-\int_{0}^{t}\int_{D^{4}}\sum_{v\in\mathcal{V}}\tilde {v}\cdot\chi(\theta_{v}(\Lambda((\rho,\varrho)*j_{\varepsilon})))\Big{)}\sum_{ i=1}^{d}v_{i}\frac{\partial G}{\partial u_{i}}(r,u)\,du\,dr\] \[\qquad\qquad-\int_{0}^{t}\int_{D^{4}}(\rho,\varrho)(r,u)\left( \partial_{r}G(r,u)+\frac{1}{2}\Delta G\right)\,du\,dr\] \[\qquad\qquad-\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^ {d-1}}G(r,1,\tilde{u})\sum_{v\in\mathcal{V}}v_{k}\beta_{v}(\tilde{u})\,dS\,dr- \frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}}\!\!\!G(r,0,\tilde{ u})\sum_{v\in\mathcal{V}}v_{k}\alpha_{v}(\tilde{u})\,dS\,dr\] \[\qquad\qquad+\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^ {d-1}}(\rho,\varrho)(r,1,\tilde{u})\left[\frac{\partial G}{\partial u_{1}}(r,1,\tilde{u})+G(r,1,\tilde{u})\right]\,dS\,dr\] \[\qquad\qquad-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^ {d-1}}(\rho,\varrho)(r,0,\tilde{u})\left[\frac{\partial G}{\partial u_{1}}(r, 0,\tilde{u})-G(r,0,\tilde{u})\right]\,dS\,dr\Bigg{|}>\delta\Bigg{]}=0.\]
At this point, we would like to apply Portmanteau's Theorem in the probabilities \(\mathbb{Q}_{N,\theta}\), as we did in the previous case. But here we face a problem. Unfortunately, the set inside the above probability is not an open set with respect to the Skorohod topology. In order to avoid this problem, we fix \(\varepsilon>0\) (which is the same \(\varepsilon\) used in (38)) and we consider two approximations of the identity, for fixed \(u_{1}\in[0,1]\) which are given on \(w\in[0,1]\) by
\[\overleftarrow{\tau}_{\varepsilon}^{u_{1}}(w)=\frac{1}{\varepsilon}1_{(u_{1}- \varepsilon,u_{1}]}(w)\,\,\,\text{and}\,\,\,\overrightarrow{\tau}_{ \varepsilon}^{u_{1}}(w)=\frac{1}{\varepsilon}1_{[u_{1},u_{1}+\varepsilon)}(w).\]
We use the notation \(\langle\pi_{r},\overleftarrow{\tau}_{\varepsilon}^{u_{1}}\rangle=\langle( \rho,\varrho)_{r},\overleftarrow{\tau}_{\varepsilon}^{u_{1}}\rangle\) and \(\langle\pi_{r},\overrightarrow{\tau}_{\varepsilon}^{u_{1}}\rangle=\langle( \rho,\varrho)_{r},\overrightarrow{\tau}_{\varepsilon}^{u_{1}}\rangle\), so that
\[\langle\pi_{r},\overleftarrow{\tau}_{\varepsilon}^{u_{1}}\rangle=\frac{1}{ \varepsilon}\int_{u_{1}-\varepsilon}^{u_{1}}(\rho,\varrho)_{r}(w,\tilde{u})\,dw \quad\text{and}\quad\langle\pi_{r},\overrightarrow{\tau}_{\varepsilon}^{u_{1 }}\rangle=\frac{1}{\varepsilon}\int_{u_{1}}^{u_{1}+\varepsilon}(\rho,\varrho)_ {r}(w,\tilde{u})\,dw.\]
By summing and subtracting proper terms, we bound the probability in (38) from above by
\[\limsup_{\varepsilon\to 0}\mathbb{Q}_{\theta}^{*} \bigg{[}\pi:\,\sup_{0\leq t\leq T}\bigg{|}\int_{D^{4}}(\rho, \varrho)(t,u)G(t,u)\,du-\int_{D^{4}}(\rho,\varrho)(0,u)G(0,u)\,du \tag{39}\] \[\qquad\qquad-\int_{0}^{t}\int_{D^{4}}\sum_{v\in\mathcal{V}}\tilde {v}\cdot\chi(\theta_{v}(\Lambda((\rho,\varrho)*j_{\varepsilon})))\bigg{)} \sum_{i=1}^{d}v_{i}\frac{\partial G}{\partial u_{i}}(r,u)\,dx\,dr\] \[\qquad+\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{d-1}} \big{[}(\rho,\varrho)(r,1,\tilde{u})-\langle\pi_{r},\overleftarrow{\tau}_{ \varepsilon}^{1}\rangle\big{]}\left[\frac{\partial G}{\partial u_{1}}(r,1, \tilde{u})+G(r,1,\tilde{u})\right]\,dS\,dr\] \[\qquad+\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{d-1}} \langle\pi_{r},\overleftarrow{\tau}_{\varepsilon}^{1}\rangle\left[\frac{ \partial G}{\partial u_{1}}(r,1,\tilde{u})+G(r,1,\tilde{u})\right]\,dS\,dr\] \[\qquad-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}} \big{[}(\rho,\varrho)(r,0,\tilde{u})-\langle\pi_{r},\overrightarrow{\tau}_{ \varepsilon}^{0}\rangle\big{]}\left[\frac{\partial G}{\partial u_{1}}(r,0, \tilde{u})-G(r,0,\tilde{u})\right]\,dS\,dr\] \[\qquad-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}} \langle\pi_{r},\overrightarrow{\tau}_{\varepsilon}^{0}\rangle\left[\frac{ \partial G}{\partial u_{1}}(r,0,\tilde{u})-G(r,0,\tilde{u})\right]\,dS\,dr\] \[\qquad-\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{d-1}} G(r,1,\tilde{u})\sum_{v\in\mathcal{V}}v_{k}\beta_{v}(\tilde{u})\,dS\,dr-\frac{1}{2} \int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}}G(r,0,\tilde{u})\sum_{v\in \mathcal{V}}v_{k}\alpha_{v}(\tilde{u})\,dS\,dr\] \[\qquad-\int_{0}^{t}\int_{D^{4}}(\rho,\varrho)(r,u)\left(\partial_{ r}G(r,u)+\frac{1}{2}\Delta G\right)\,du\,dr\Bigg{|}>\frac{\delta}{5}\Bigg{]}=0.\]
From Lebesgue's Differentiation Theorem, observe that for almost every \(u_{1}\in[0,1]\),
\[\lim_{\varepsilon\to 0}|(\rho,\varrho)(r,u_{1},\tilde{x})-\langle\pi_{r}, \overleftarrow{\tau}_{\varepsilon}^{u_{1}}\rangle|=0\,\,\text{and}\,\,\lim_{ \varepsilon\to 0}|(\rho,\varrho)(r,u_{1},\tilde{x})-\langle\pi_{r},\overrightarrow{ \tau}_{\varepsilon}^{u_{1}}\rangle|=0.\]
Since the functions \(\overleftarrow{\alpha}_{e}^{u_{1}}\) and \(\overleftarrow{\alpha}_{e}^{u_{1}}\) are not continuous, we cannot use Portmanteau's Theorem. However, we can approximate each one of these functions by continuous functions, in such a way that the error vanishes as \(\varepsilon\to 0\). Then, since the set inside the probability in (39) is an open set with respect to the Skorohod topology, we can use Portmanteau's Theorem and bound (39) from above by
\[\limsup_{\varepsilon\to 0}\liminf_{N\to\infty}\mathbb{Q}_{N,\theta} \Bigg{[}\pi:\sup_{0\leq t\leq T}\left|\langle\pi_{t}^{k},G_{t} \rangle-\langle\pi_{0}^{k},G_{0}\rangle\right. \tag{40}\] \[-\int_{0}^{t}\int_{D^{d}}\sum_{v\in\mathcal{V}}\tilde{v}\cdot \chi(\theta_{v}(\Lambda(\mathbf{I}^{[N\varepsilon]}(u,\eta))))\sum_{i=1}^{d}v_ {i}\frac{\partial G}{\partial u_{i}}(r,u)\,du\,dr\] \[-\int_{0}^{t}\int_{D^{d}}(\rho,\varrho)(r,u)\left(\partial_{r}G( r,u)+\frac{1}{2}\Delta G\right)\,du\,dr\] \[-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}}G(r,1, \tilde{u})\sum_{v\in\mathcal{V}}v_{k}\beta_{v}(\tilde{u})\,dS\,dr\] \[-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}}G(r,0, \tilde{u})\sum_{v\in\mathcal{V}}v_{k}\alpha_{v}(\tilde{u})\,dS\,dr\] (41) \[+\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{d-1}} \overleftarrow{\eta}_{r}^{\varepsilon N}(N-1,\tilde{u},v)\left[\frac{\partial G }{\partial u_{1}}(r,1,\tilde{u})+G(r,1,\tilde{u})\right]\,dS\,dr\] \[-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}} \overrightarrow{\eta}_{r}^{\varepsilon N}(1,\tilde{u},v)\left[\frac{\partial G }{\partial u_{1}}(r,0,\tilde{u})-G(r,0,\tilde{u})\right]\,dS\,dr\Bigg{|}>\frac {\delta}{8}\Bigg{]}=0.\]
Summing and subtracting \(\int_{0}^{t}N^{2}\mathcal{L}_{N}\langle\pi_{s}^{k,N},G_{s}\rangle\,ds\) in equation inside the supremum in (40), we can bound it from above by the sum of
\[\mathbb{P}_{\mu^{N}}\left[\sup_{0\leq t\leq T}|M_{t}^{N,k}(G)|>\frac{\delta}{ 16}\right] \tag{42}\]
and
\[\mathbb{P}_{\mu^{N}} \Bigg{[}\sup_{0\leq t\leq T}\left|-\int_{0}^{t}\int_{D^{d}}\sum_ {v\in\mathcal{V}}\tilde{v}\cdot\chi(\theta_{v}(\Lambda(\mathbf{I}^{[N \varepsilon]}(u,\eta))))\sum_{i=1}^{d}v_{i}\frac{\partial G}{\partial u_{i}}(r,u)\,du\,dr+\int_{0}^{t}N^{2}\mathcal{L}_{N}\langle\pi_{s}^{k,N},G_{s}\rangle \,ds \tag{43}\] \[+\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{d-1}} \overleftarrow{\eta}_{r}^{\varepsilon N}(N-1,\tilde{u},v)\left[\frac{ \partial G}{\partial u_{1}}(r,1,\tilde{u})+G(r,1,\tilde{u})\right]\,dS\,dr\] \[-\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}} \overrightarrow{\eta}_{r}^{\varepsilon N}(1,\tilde{u},v)\left[\frac{\partial G }{\partial u_{1}}(r,0,\tilde{u})-G(r,0,\tilde{u})\right]\,dS\,dr\] \[-\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{d-1}}G(r,1, \tilde{u})\sum_{v\in\mathcal{V}}v_{k}\beta_{v}(\tilde{u})\,dS\,dr-\frac{1}{2} \int_{0}^{t}\int_{\{0\}\times\mathbb{T}^{d-1}}G(r,0,\tilde{u})\sum_{v\in \mathcal{V}}v_{k}\alpha_{v}(\tilde{u})\,dS\,dr\] \[-\left.\int_{0}^{t}\int_{D^{d}}(\rho,\varrho)(r,u)\left(\partial_ {r}G(r,u)+\frac{1}{2}\Delta G\right)\,du\,dr\right|>\frac{\delta}{16}\Bigg{]}=0.\]
From Doob's inequality, the equation (42) vanishes as \(N\to\infty\). Using the value of \(\int_{0}^{t}N^{2}\mathcal{L}_{N}\langle\pi_{s}^{k,N},G_{s}\rangle\,ds\), we can bound the equation (43) from above by a sum of terms using the same argument from the previous subsection, since \(G\in C^{1,2}([0,T]\times D^{d})\) and using the Replacement Lemmas it is easy to see that terms above converges to zero, as \(N\to\infty\) and \(\varepsilon\to 0\). This concludes the proof.
### Characterization of limit points for \(\theta>1\)
**Proposition 3**.: _If \(\mathbb{Q}_{\theta}^{*}\) is a limit point of \(\{\mathbb{Q}_{N,\theta}\}_{N\geq 1}\), then_
\[\mathbb{Q}_{\theta}^{*}\,[\pi:\pi=(\rho,\varrho)du\text{ and }\mathfrak{F}_{0}((\rho, \varrho),t,G)=0]=1,\]
_for all \(t\in[0,T],\,\forall G\in C^{1,2}([0,T]\times D^{d})\) and \(\theta>1\)._
Proof.: Following the same reasoning as in Proposition 1 and 2, it is enough to verify that, for \(\delta>0\) and \(G\in C^{1,2}([0,T]\times D^{d})\) fixed, we have
\[\mathbb{Q}_{\theta}^{\varepsilon}\Bigg{[}\pi_{\cdot}:\pi=(\rho,\varrho)du\text{ and }\sup_{0\leq t\leq T}\left|\tilde{\mathfrak{s}}_{0}((\rho,\varrho),t,G)\right|> \delta\Bigg{]}=0.\]
We need to change the boundary terms \((\rho,\varrho)_{r}(0,\tilde{u})\) (resp. \((\rho,\varrho)_{r}(1,\tilde{u})\) by \(\overrightarrow{\eta}_{r}^{\varepsilon N}(1,\tilde{u},v)\) (resp. \(\overleftarrow{\eta}_{r}^{\varepsilon N}(N-1,\tilde{u},v)\)) as well as to change \(\chi(\theta_{v}(\Lambda(\mathbf{I}^{[N\varepsilon]}(u,\eta)))\)) \(\chi(\theta_{v}(\Lambda(\mathbf{I}^{[N\varepsilon]}(u,\eta))))\). Then, we sum and subtract \(\int_{0}^{t}N^{2}\mathcal{L}_{N,\theta}\langle\pi_{r}^{k,N},G\rangle\,dr\), and it will be enough to analyze
\[\limsup_{\varepsilon\to 0}\limsup_{N\to\infty}\mathbb{P}_{\mu^{N}} \left[\eta_{\cdot}:\sup_{0\leq t\leq T}\left|\int_{0}^{t}N^{2} \mathcal{L}_{N}(\pi_{r}^{k,N},G)\,dr-\frac{1}{2}\int_{0}^{t}\langle\pi_{r}^{k,N},\Delta G\rangle\,dr\right.\right.\] \[\left.\left.-\int_{0}^{t}\int_{D^{d}}\sum_{v\in\mathcal{V}}\bar{v }\cdot\chi(\theta_{v}(\Lambda(\mathbf{I}^{[N\varepsilon]}(u,\eta))))\sum_{i= 1}^{d}v_{i}\frac{\partial G}{\partial u_{i}}(r,u)\,du\,dr\right.\right.\] \[\left.\left.+\frac{1}{2}\int_{0}^{t}\int_{\{1\}\times\mathbb{T}^{ d-1}}\overleftarrow{\eta}_{r}^{\varepsilon N}(N-1,\tilde{u},v)\frac{ \partial G}{\partial u_{1}}(r,1,\tilde{u})\,dS\,dr\right.\right.\] \[\left.\left.-\;\frac{1}{2}\int_{0}^{t}\int_{\{0\}\times\mathbb{T }^{d-1}}\overrightarrow{\eta}_{r}^{\varepsilon N}(1,\tilde{u},v)\frac{ \partial G}{\partial u_{1}}(r,0,\tilde{u})\,dS\,dr\right|>\delta\Bigg{]}\,.\]
Doing the same as in the previous cases and using the fact that \(G\in C^{1,2}([0,T]\times D^{d})\) and \(\theta>1\), we just have to analyze the following probability, for all \(\tilde{\delta}\) and \(x_{1}=\{1,N-1\}\):
\[\limsup_{\varepsilon\to 0}\limsup_{N\to\infty}\mathbb{P}_{\mu^{N}}\left[\eta_{ \cdot}:\sup_{0\leq t\leq T}\left|\int_{0}^{t}[\eta_{s}^{\varepsilon N}(u_{1}, \tilde{u})-\eta_{s}(u_{1},\tilde{u})]\frac{\partial G}{\partial u_{1}}\,dr \right|>\tilde{\delta}\right].\]
Applying the Replacement Lemma, Lemma 4, we conclude that, taking limit when \(\varepsilon\to 0\), the limit above goes to \(0\). This concludes the proof of this proposition.
## 5 Replacement Lemmas
We will need the following auxiliary function \(h\) to be able to obtain some entropy estimates that are essential to the proof of the hydrodynamic limit.
For each \(v\in\mathcal{V}\), consider the functions \(h_{k}^{v}:D^{d}\to(0,1)\), for \(k=0,\ldots,d\).
**Remark 4**.: _We will have two situations for the function \(h=\sum_{v\in\mathcal{V}}(h_{0}^{v},v_{1}h_{1}^{v},\ldots,v_{d}h_{d}^{v})\):_
* _when_ \(\theta\in[0,1)\) _we will assume that, for each_ \(k\in\{0,\ldots,d\}\)_,_ \(h_{k}^{v}\) _are smooth functions, such that the restriction of_ \(h\) _to_ \(\{0\}\times\mathbb{T}^{d-1}\) _equals to the vector valued function_ \(d(0,\tilde{u})\)_, and that the restriction of_ \(h\) _to_ \(\{1\}\times\mathbb{T}^{d-1}\) _equals to vector valued function_ \(d(1,\tilde{u})\)_, with_ \(d\) _given in (_18_);_
* _when_ \(\theta\geq 1\) _we will assume that_ \(h\) _is a constant function._
We then consider \(\nu_{h}^{N}\) as the product measure on \(X_{N}\) with marginals given by
\[\nu_{h}^{N}\{\eta:\eta(x,\cdot)=\xi\}=m_{\Lambda(h(x))}(\xi), \tag{44}\]
where \(m_{\lambda}(\cdot)\) was defined in (14).
### Estimates on Dirichlet Forms
Let \(f:X_{N}\to\mathbb{R}\) be a local function. Note that \(\langle\mathcal{L}_{N,\theta}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}\) does not always have a closed form. To estimate \(\langle\mathcal{L}_{N,\theta}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}\), let
\[D^{\theta}_{\nu_{h}^{N}}(\sqrt{f})=D^{\varepsilon x}_{\nu_{h}^{N}}(\sqrt{f})+D ^{\varepsilon_{N}}_{\nu_{h}^{N}}(\sqrt{f})+D^{b,\theta}_{\nu_{h}^{N}}(\sqrt{f}),\]
with
\[D_{\nu_{h}^{N}}^{c_{x}}(\sqrt{f})=\sum_{v\in\mathcal{V}}\sum_{x\in D_{h}^{d}}\sum_ {z\in D_{h}^{d}}c_{(x,z,v)}(\eta)\int[\sqrt{f(\eta^{x,z,v})}-\sqrt{f(\eta)}]^{2} d\nu_{h}^{N},\]
where \(c_{(x,z,v)}(\eta)=\eta(x,v)(1-\eta(z,v))P_{N}(z-x,v)\),
\[D_{\nu_{h}^{N}}^{c_{x}}(\sqrt{f})=\sum_{q\in\mathcal{Q}}\sum_{x\in D_{h}^{d}} \int p_{c}(x,q,\eta)[\sqrt{f(\eta^{x,q})}-\sqrt{f(\eta)}]^{2}d\nu_{h}^{N}\]
where \(\mathcal{Q}\) and \(p_{c}(x,q,\eta)\) were defined in (5) and (6), respectively, and
\[D_{\nu_{h}^{N}}^{b,\theta}(\sqrt{f}) =\sum_{v\in\mathcal{V}}\sum_{x\in\{1\}\times\mathbb{T}_{N}^{d-1}} \int\frac{r_{x}^{N}(\eta,\alpha)}{N^{\theta}}[\sqrt{f(\sigma^{x,v}\eta)}- \sqrt{f(\eta)}]^{2}d\nu_{h}^{N}\] \[+\sum_{v\in\mathcal{V}}\sum_{x\in\{1-\}\times\mathbb{T}_{N}^{d-1} }\int\frac{r_{x}^{N}(\eta,\beta)}{N^{\theta}}[\sqrt{f(\sigma^{x,v}\eta)}- \sqrt{f(\eta)}]^{2}d\nu_{h}^{N}:=D_{\nu_{h}^{N}}^{b,\theta,\alpha}(\sqrt{f})+D _{\nu_{h}^{N}}^{b,\theta,\beta}(\sqrt{f}),\]
where \(r_{x}^{N}(\eta,\alpha)\) and \(r_{x}^{N}(\eta,\beta)\) were defined in (11). In order to prove the main result of this section, we need some intermediate results. For that purpose, we adapt from [3] the following lemmas:
**Lemma 1**.: _Let \(T:\eta\in X_{N}\to T(\eta)\in X_{N}\) be a transformation in the configuration space and \(c:\eta\in X_{N}\to c(\eta)\) be a positive local function. Let \(f\) be a density with respect to a probability measure \(\nu_{h}^{N}\) on \(X_{N}\). Then,_
\[\langle c(\eta)[\sqrt{f(T(\eta))}-\sqrt{f(\eta)}],\sqrt{f(\eta)} \rangle_{\nu_{h}^{N}}\leq-\frac{1}{4}\int c(\eta)\left(\sqrt{f(T(\eta))}- \sqrt{f(\eta)}\right)^{2}\,d\nu_{h}^{N}\] \[+\frac{1}{16}\int\frac{1}{c(\eta)}\left[c(\eta)-c(T(\eta))\frac{ \nu_{h}^{N}(T(\eta))}{\nu_{h}^{N}(\eta)}\right]^{2}\left(\sqrt{f(T(\eta))}+ \sqrt{f(\eta)}\right)^{2}\,d\nu_{h}^{N}.\]
**Lemma 2**.: _Let \(f\) be a density with respect to a probability measure \(\nu_{h}^{N}\) on \(X_{N}\). Then,_
\[\sup_{x\neq y}\int f(\eta^{x,y,v})d\nu_{h}^{N}\leq C,\quad\sup_{x}\int f(\eta^ {x,q})d\nu_{h}^{N}\leq C\quad\text{and}\quad\sup_{x}\int f(\sigma^{x,v}\eta)d \nu_{h}^{N}\leq C,\]
_where \(\eta^{x,y,v}\), \(\eta^{x,q}\) and \(\sigma^{x,v}\eta\), were defined in (3), (7) and (10), respectively._
**Remark 5**.: _Note that in [3] the proof is done for the cases \(T(\eta)=\eta^{x,y,v}\) and \(T(\eta)=\sigma^{x,v}\eta\) in Lemma 2. For \(T(\eta)=\eta^{x,q}\), the proof follows from the arguments of the other cases._
As a consequence of Lemmas 1 and 2, we conclude that
**Corollary 1**.: _Let \(h\) be one of the functions satisfying the assumption of Remark 4. Let \(f:X_{N}\to\mathbb{R}\) be a density with respect to the measure \(\nu_{h}^{N}\). Then,_
1. _if_ \(h\) _is a constant function,_ \(N^{2}\langle\mathcal{L}_{N}^{c_{x}}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}=- \frac{N^{2}}{2}D_{\nu_{h}^{N}}^{c_{x}}(\sqrt{f});\)__
2. _if_ \(h\) _is a smooth function,_ \(N^{2}\langle\mathcal{L}_{N}^{c_{x}}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}=- \frac{N^{2}}{4}D_{\nu_{h}^{N}}^{c_{x}}(\sqrt{f})+\mathcal{R}_{N}(h)\) _with_ \(|\mathcal{R}_{N}(h)|\leq CN^{d}\)_._
Proof.: Begin by observing that
\[N^{2}\langle\mathcal{L}_{N}^{c_{x}}\sqrt{f},\sqrt{f}\rangle_{\nu_ {h}^{N}} =N^{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{h}^{d}}\sum_{z\in D_ {h}^{d}}c_{(x,z,v)}(\eta)\left(\sqrt{f(\eta^{x,z,v})}-\sqrt{f(\eta)}\right) \sqrt{f(\eta)}\,d\nu_{h}^{N}\] \[+\frac{N^{2}}{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{h}^{d}} \sum_{z\in D_{h}^{d}}c_{(x,z,v)}(\eta)\left(\sqrt{f(\eta^{x,z,v})}-\sqrt{f(\eta )}\right)\sqrt{f(\eta^{x,z,v})}\,d\nu_{h}^{N}\] \[-\frac{N^{2}}{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{h}^{d}} \sum_{z\in D_{h}^{d}}c_{(x,z,v)}(\eta)\left(\sqrt{f(\eta^{x,z,v})}-\sqrt{f(\eta )}\right)\sqrt{f(\eta^{x,z,v})}\,d\nu_{h}^{N},\]
where \(c_{(x,z,v)}(\eta)=\eta(x,v)(1-\eta(z,v))P_{N}(z-x,v)\). Combining some terms and doing a change of variables, we obtain that the last display equals to
\[-\frac{N^{2}}{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{k}^{d}}\sum _{z\in D_{N}^{d}}c_{(x,z,v)}(\eta)\left(\sqrt{f(\eta^{x,z,v})}-\sqrt{f(\eta)} \right)^{2}\,d\nu_{h}^{N}\] \[+\frac{N^{2}}{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{k}^{d}} \sum_{z\in D_{N}^{d}}c_{(x,z,v)}(\eta)\left(\sqrt{f(\eta^{x,z,v})}-\sqrt{f( \eta)}\right)\sqrt{f(\eta^{x,z,v})}\,d\nu_{h}^{N}\] \[-\frac{N^{2}}{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{k}^{d}} \sum_{z\in D_{N}^{d}}c_{(z,x,v)}(\eta)\frac{\nu_{h}^{N}(\eta^{x,z,v})}{\nu_{h }^{N}(\eta)}\left(\sqrt{f(\eta^{x,z,v})}-\sqrt{f(\eta)}\right)\sqrt{f(\eta^{x,z,v})}dr_{h}^{N}.\]
Last display equals to
\[-\frac{N^{2}}{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d}} \sum_{z\in D_{N}^{d}}c_{(x,z,v)}(\eta)\left(\sqrt{f(\eta^{x,z,v})}-\sqrt{f( \eta)}\right)^{2}\,d\nu_{h}^{N}\] \[+\frac{N^{2}}{2}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d}} \sum_{z\in D_{N}^{d}}c_{(x,z,v)}(\eta)\left(1-\frac{\nu_{h}^{N}(\eta^{x,z,v}) }{\nu_{h}^{N}(\eta)}\right)\sqrt{f(\eta^{x,z,v})}\left(\sqrt{f(\eta^{x,z,v})} -\sqrt{f(\eta)}\right)\,d\nu_{h}^{N}.\]
Hence, \(N^{2}\langle\mathcal{L}_{N}^{ex}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}=- \frac{N^{2}}{2}D_{\nu_{h}^{N}}^{ex}(\sqrt{f})+g_{N}(h)\), where
\[g_{N}(h)=\frac{N^{2}}{2}\!\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d}}\!\! \sum_{x\in D_{N}^{d}}\!\!c_{(x,z,v)}(\eta)\!\left(1-\frac{\nu_{h}^{N}(\eta^{x,z,v})}{\nu_{h}^{N}(\eta)}\right)\!\sqrt{f(\eta^{x,z,v})}\left(\sqrt{f(\eta^{ x,z,v})}-\sqrt{f(\eta)}\right)d\nu_{h}^{N}.\]
To handle \(g_{N}(h)\), we start by observing that \(\left|1-\frac{\nu_{h}^{N}(\eta^{x,z,v})}{\nu_{h}^{N}(\eta)}\right|=\left|1- \frac{\gamma_{z,v}}{\gamma_{x,v}}\right|\) where
\[\gamma_{x,v}=\frac{\theta_{v}(\Lambda(h(x)))}{1-\theta_{v}(\Lambda(h(x)))}. \tag{45}\]
Thus, if \(h\) is constant, then \(g_{N}(h)=0\).
On the other hand, let us assume that \(h\) is not constant, we need to redo the analysis of \(g_{N}(h)\). By applying the elementary inequality, \(ab\leq\frac{1}{2}a^{2}+\frac{1}{2}b^{2}\) in \(g_{N}(h)\), with
\[a=\frac{N}{\sqrt{2}}\sqrt{\eta(x,v)(1-\eta(z,v)P_{N}(z-x,v))}\left(\sqrt{f( \eta^{x,z,v})}-\sqrt{f(\eta)}\right)\]
and
\[b=\frac{N}{\sqrt{2}}\sqrt{\eta(x,v)(1-\eta(z,v)P_{N}(z-x,v))}\sqrt{f(\eta^{x, z,v})}\left(1-\frac{\nu_{h}^{N}(\eta^{x,z,v})}{\nu_{h}^{N}(\eta)}\right),\]
we can bound \(g_{N}(h)\) from above by
\[\frac{N^{2}}{4}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d}} \sum_{z\in D_{N}^{d}}\eta(x,v)(1-\eta(z,v))P_{N}(z-x,v)\left(\sqrt{f(\eta^{x, z,v})}-\sqrt{f(\eta)}\right)^{2}\,d\nu_{h}^{N}\] \[+\frac{N^{2}}{4}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d}} \sum_{z\in D_{N}^{d}}\eta(x,v)(1-\eta(z,v))P_{N}(z-x,v)\left(1-\frac{\nu_{h}^ {N}(\eta^{x,z,v})}{\nu_{h}^{N}(\eta)}\right)^{2}f(\eta^{x,z,v})\,d\nu_{h}^{N}.\]
Thus \(|g_{N}(h)|\leq\frac{N^{2}}{4}D_{\nu_{h}^{N}}^{ex}(\sqrt{f})+\mathcal{R}_{N}(h)\), where
\[\mathcal{R}_{N}(h):=\frac{N^{2}}{4}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d }}\sum_{z\in D_{N}^{d}}c_{(x,z,v)}(\eta)\left(1-\frac{\nu_{h}^{N}(\eta^{x,z,v}) }{\nu_{h}^{N}(\eta)}\right)^{2}f(\eta^{x,z,v})\,d\nu_{h}^{N}.\]
Doing again the change of variables \(\eta^{x,z,v}=\xi\), we obtain
\[\mathcal{R}_{N}(h)=\frac{N^{2}}{4}\int\sum_{v\in\mathcal{V}}\sum_{x\in D_{N}^{d }}\sum_{z\in D_{N}^{d}}c_{(x,x,v)}(\eta)\left|1-\frac{\nu_{h}^{N}(\eta)}{\nu_{h }^{N}(\eta^{x,z,v})}\right|^{2}\frac{\nu_{h}^{N}(\eta^{x,z,v})}{\nu_{h}^{N}( \eta)}f(\eta)\,d\nu_{h}^{N}.\]
Now, observe that
\[\left|1-\frac{\nu_{h}^{N}(\eta)}{\nu_{h}^{N}(\eta^{x,x,v})}\right|=\left(1-\frac{ \gamma_{x,v}}{\gamma_{z,v}}\right)\leq\bar{c}\|\gamma^{\prime}\|_{\infty}\frac {1}{N}, \tag{46}\]
since \(\gamma\) is bounded away from zero, see (45), and
\[\left|\frac{\nu_{h}^{N}(\eta^{x,x,v})}{\nu_{h}^{N}(\eta)}\right|\leq C.\]
Also note that \(f\) is a density with respect to \(\nu_{h}^{N}\), therefore,
\[|\mathcal{R}_{N}(h)|\leq CN^{d}.\]
This finishes the proof of Corollary 1.
**Corollary 2**.: _For \(h\) being one of the functions satisfying the assumption of Remark 4 and for a density \(f\), with respect to the measure \(\nu_{h}^{N}\), it holds_
\[N^{2}\langle\mathcal{L}_{N}^{c}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}=-\frac{ N^{2}}{2}D_{\nu_{h}^{N}}^{c}(\sqrt{f}). \tag{47}\]
Proof.: Let \(q=(v,w,v^{\prime},w^{\prime})\) and \(\tilde{q}=(v^{\prime},w^{\prime},v,w)\), recall (6). We have that
\[N^{2}\langle\mathcal{L}_{N}^{c}\sqrt{f},\sqrt{f}\rangle_{\nu_{h} ^{N}} =N^{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q,\eta)\left( \sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)\sqrt{f(\eta)}\,d\nu_{h}^{N}\] \[+\frac{N^{2}}{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q, \eta)\left(\sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)\sqrt{f(\eta^{y,q})}\,d \nu_{h}^{N}\] \[-\frac{N^{2}}{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q, \eta)\left(\sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)\sqrt{f(\eta^{y,q})}\,d \nu_{h}^{N}.\]
From a change of variables
\[N^{2}\langle\mathcal{L}_{N}^{c}\sqrt{f},\sqrt{f}\rangle_{\nu_{h} ^{N}}= -\frac{N^{2}}{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q, \eta)\left(\sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)^{2}\,d\nu_{h}^{N}\] \[+\frac{N^{2}}{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q, \eta)\left(\sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)\sqrt{f(\eta^{y,q})}\,d \nu_{h}^{N}\] \[-\frac{N^{2}}{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q, \eta)\left(\sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)\sqrt{f(\eta^{y,q})}\, \frac{\nu_{h}^{N}(\eta^{y,q})}{\nu_{h}^{N}(\eta)}\,d\nu_{h}^{N}.\]
Combining some terms, we obtain
\[N^{2}\langle\mathcal{L}_{N}^{c}\sqrt{f},\sqrt{f}\rangle_{\nu_{h} ^{N}} =-\frac{N^{2}}{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q, \eta)\left(\sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)^{2}\,d\nu_{h}^{N}\] \[+\frac{N^{2}}{2}\int\sum_{y\in D_{N}^{d}}\sum_{q\in Q}p_{c}(y,q, \eta)\left(\sqrt{f(\eta^{y,q})}-\sqrt{f(\eta)}\right)\sqrt{f(\eta^{y,q})} \left[1-\frac{\nu_{h}^{N}(\eta^{y,q})}{\nu_{h}^{N}(\eta)}\right]\,d\nu_{h}^{N}.\]
Since \(v+w=v^{\prime}+w^{\prime}\), we observe that \(\frac{\nu_{h}^{N}(\eta^{y,q})}{\nu_{h}^{N}(\eta)}=\frac{\gamma_{y,v^{\prime}} \gamma_{y,w^{\prime}}}{\gamma_{y,v^{\prime}}\gamma_{y,w}}=1\), therefore, \(N^{2}\langle\mathcal{L}_{N}^{c}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}=-\frac{ N^{2}}{2}D_{\nu_{h}^{N}}^{c}(\sqrt{f})\), and this identity holds for functions \(h\) satisfying the assumption of Remark 4.
**Corollary 3**.: _For \(h\) being one of the functions satisfying the assumption of Remark 4 and for a density \(f\) with respect to the measure \(\nu_{h}^{N}\) it holds_
\[N^{2}\langle\mathcal{L}_{N,\theta}^{b}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}=- \frac{N^{2}}{2}D_{\nu_{h}^{N}}^{b,\theta}(\sqrt{f})+\mathcal{R}_{N,\theta}^{ \alpha}(h)+\mathcal{R}_{N,\theta}^{\beta}(h), \tag{48}\]
_with \(|\mathcal{R}_{N,\theta}^{\alpha}(h)|\leq\frac{CN^{d+1}}{N^{\theta}}\left|m_{ \Lambda(h(x/N))}-\alpha_{v}\left(\frac{\tilde{x}}{N}\right)\right|\) and \(|\mathcal{R}_{N,\theta}^{\beta}(h)|\leq\frac{CN^{d+1}}{N^{\theta}}\left|m_{ \Lambda(h(x/N))}-\beta_{v}\left(\frac{\tilde{x}}{N}\right)\right|.\)_
Proof.: We present the proof for the left boundary since the other case is analogous. By splitting the integral on the left-hand side of (48) into the integral over the sets \(A_{0}=\{\eta:\eta((1,\tilde{x}),v)=0\}\) and \(A_{1}=\{\eta:\eta((1,\tilde{x}),v)=1\}\), we obtain
\[N^{2}\langle\mathcal{L}^{b}_{N,\theta}\sqrt{f},\sqrt{f}\rangle_{ \nu^{N}_{h}} =\frac{N^{2}}{N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x\in D^{b}_{N }\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\alpha_{v}\left(\tfrac{\tilde{x}}{N }\right)[\sqrt{f(\sigma^{x,v}\eta)}-\sqrt{f(\eta)}]\sqrt{f(\eta)}\,d\nu^{N}_{h}\] \[+\frac{N^{2}}{N^{\theta}}\int_{A_{1}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\left(1-\alpha_{v}\left(\tfrac{ \tilde{x}}{N}\right)\right)[\sqrt{f(\sigma^{x,v}\eta)}-\sqrt{f(\eta)}]\sqrt{f (\eta)}\,d\nu^{N}_{h}.\]
Last display can be rewritten as
\[\frac{N^{2}}{N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x\in D^{b}_{N }\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\alpha_{v}\left(\tfrac{\tilde{x}}{ N}\right)\left(\sqrt{f(\sigma^{x,v}\eta)}\sqrt{f(\eta)}-\frac{1}{2}\left(\sqrt{f( \eta)}\right)^{2}\right)\,d\nu^{N}_{h}\] \[+\frac{N^{2}}{N^{\theta}}\int_{A_{1}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\left(1-\alpha_{v}\left(\tfrac{ \tilde{x}}{N}\right)\right)\left(\sqrt{f(\sigma^{x,v}\eta)}\sqrt{f(\eta)}- \frac{1}{2}\left(\sqrt{f(\eta)}\right)^{2}\right)\,d\nu^{N}_{h}\] \[-\frac{N^{2}}{2N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\alpha_{v}\left(\tfrac{\tilde{x}}{ N}\right)\left(\sqrt{f(\eta)}\right)^{2}\,d\nu^{N}_{h}\] \[-\frac{N^{2}}{2N^{\theta}}\int_{A_{1}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\left(1-\alpha_{v}\left(\tfrac{ \tilde{x}}{N}\right)\right)\left(\sqrt{f(\eta)}\right)^{2}\,d\nu^{N}_{h}.\]
Summing and subtracting the term needed to complete the square in last display, we obtain that \(N^{2}\langle\mathcal{L}^{b}_{N,\theta}\sqrt{f},\sqrt{f}\rangle_{\nu^{N}_{h}}\) is equal to
\[-\frac{N^{2}}{2N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{\begin{subarray}{c}v\in\mathcal{V}\\ x_{1}=1\end{subarray}}\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right)\left(\sqrt{f( \sigma^{x,v}\eta)}-\sqrt{f(\eta)}\right)^{2}\,d\nu^{N}_{h}\] \[-\frac{N^{2}}{2N^{\theta}}\int_{A_{1}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{\begin{subarray}{c}v\in\mathcal{V}\\ x_{1}=1\end{subarray}}\left(1-\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right) \right)\left(\left[\sqrt{f(\sigma^{x,v}\eta)}-\sqrt{f(\eta)}\right]^{2}-[ \sqrt{f(\sigma^{x,v}\eta)}]^{2}+[\sqrt{f(\eta)}]^{2}\right)\,d\nu^{N}_{h}\] \[+\frac{N^{2}}{2N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\alpha_{v}\left(\tfrac{\tilde{x}}{ N}\right)\left([\sqrt{f(\sigma^{x,v}\eta)}]^{2}-[\sqrt{f(\eta)}]^{2}\right)\,d\nu^{N}_{h}.\]
Using a change of variables on the last two terms above, we obtain that \(N^{2}\langle\mathcal{L}^{b}_{N,\theta}\sqrt{f},\sqrt{f}\rangle_{\nu^{N}_{h}}\) is equal to
\[-\frac{N^{2}}{2N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\alpha_{v}\left(\tfrac{\tilde{x}}{ N}\right)\left(\sqrt{f(\sigma^{x,v}\eta)}-\sqrt{f(\eta)}\right)^{2}\,d\nu^{N}_{h} \tag{49}\] \[-\frac{N^{2}}{2N^{\theta}}\int_{A_{1}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\left(1-\alpha_{v}\left(\tfrac{ \tilde{x}}{N}\right)\right)\left(\left[\sqrt{f(\sigma^{x,v}\eta)}-\sqrt{f( \eta)}\right]^{2}-[\sqrt{f(\sigma^{x,v}\eta)}]^{2}\right)\,d\nu^{N}_{h}\] \[+\frac{N^{2}}{2N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\alpha_{v}\left(\tfrac{\tilde{x}}{N} \right)[\sqrt{f(\sigma^{x,v}\eta)}]^{2}-\left(1-\alpha_{v}\left(\tfrac{\tilde{x} }{N}\right)\right)\frac{m_{\Lambda(h(x/N))}}{1-m_{\Lambda(h(x/N))}}[\sqrt{f( \sigma^{x,v}\eta)}]^{2}\,d\nu^{N}_{h}\] \[-\frac{N^{2}}{2N^{\theta}}\int_{A_{1}}\sum_{\begin{subarray}{c}x \in D^{b}_{N}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\alpha_{v}\left(\tfrac{\tilde{x}}{N} \right)\frac{1-m_{\Lambda(h(x/N))}}{m_{\Lambda(h(x/N))}}[\sqrt{f(\sigma^{x,v} \eta)}]^{2}\,d\nu^{N}_{h}.\]
For a general function \(h(\cdot)\), we can rewrite (49) and conclude that \(N^{2}\langle\mathcal{L}^{b}_{N,\theta}\sqrt{f},\sqrt{f}\rangle_{\nu^{N}_{h}}\) is equal to
\[-\frac{N^{2}}{2}D^{b,\theta,\alpha}_{\nu^{N}_{h}}(\sqrt{f})-\frac{ N^{2}}{2N^{\theta}}\int_{A_{1}}\sum_{\begin{subarray}{c}x\in D\delta\\ x_{1}=1\end{subarray}}\sum_{\begin{subarray}{c}x\in\mathcal{V}\\ x_{1}=1\end{subarray}}\omega_{v}\left(\tfrac{\tilde{x}}{N}\right)\left[\frac{1- m_{\Lambda(h(x/N))}}{m_{\Lambda(h(x/N))}}-\frac{1-\alpha_{v}(\tfrac{\tilde{x}}{N})}{ \alpha_{v}(\tfrac{\tilde{x}}{N})}\right]\left[\sqrt{f(\sigma^{x,v}\eta)} \right]^{2}d\nu^{N}_{h}\] \[-\frac{N^{2}}{2N^{\theta}}\int_{A_{0}}\sum_{\begin{subarray}{c}x \in D\delta\\ x_{1}=1\end{subarray}}\sum_{\begin{subarray}{c}x\in\mathcal{T}^{d}\\ x_{1}=1\end{subarray}}\sum_{v\in\mathcal{V}}\left(1-\alpha_{v}\left(\tfrac{ \tilde{x}}{N}\right)\right)\left[\frac{m_{\Lambda(h(x/N))}}{1-m_{\Lambda(h(x/N ))}}\frac{\alpha_{v}(\tfrac{\tilde{x}}{N})}{1-\alpha_{v}(\tfrac{\tilde{x}}{N}) }\right]\left[\sqrt{f(\sigma^{x,v}\eta)}\right]^{2}d\nu^{N}_{h}.\]
The second and third terms in last display are bounded by \(\frac{CN^{d+1}}{N^{\theta}}\left|m_{\Lambda(h(x/N))}-\alpha_{v}\left(\tfrac{ \tilde{x}}{N}\right)\right|.\)
**Remark 6**.: _If \(H(\mu^{N}|\nu^{N}_{h})\) is the relative entropy of the measure \(\mu^{N}\) with respect to \(\nu^{N}_{h}\), see (44), then there exists a constant \(C_{h}\) such that \(H(\mu^{N}|\nu^{N}_{h})\leq C_{h}N^{d}\). To prove it, note that by the definition of the entropy,_
\[H(\mu^{N}|\nu^{N}_{h})=\int\log\left(\frac{\mu^{N}(\eta)}{\nu^{N}_{h}(\eta)} \right)\mu^{N}(\eta)\leq\int\log\left(\frac{1}{\nu^{N}_{h}(\eta)}\right)\mu^{ N}(\eta).\]
_Since the measure \(\nu^{N}_{h}\) is a product measure with marginal given by \(\nu^{N}_{h}\{\eta:\eta(x,\cdot)=\xi\}=m_{\Lambda(h(x))}(\xi)\), where \(m_{\lambda}(\cdot)\) was defined in (14), we obtain that the last display is bounded from above by_
\[\int\log\left(\frac{1}{\inf_{x\in D^{d}}(m_{\Lambda(h(x))})\wedge(1-m_{\Lambda (h(x))})}\right)^{N^{d}}\mu^{N}(\eta)=N^{d}\log\left(\frac{1}{\inf_{x\in D^{d} }(m_{\Lambda(h(x))})\wedge(1-m_{\Lambda(h(x))})}\right).\]
_Since the functions \(h^{v}_{k}\), defined in Remark 4, are continuous, the image of each \(h^{v}_{k}\) is a compact set bounded away from \(0\) and \(1\). Hence, from the definition of the measure \(m\), we have \(m_{\Lambda(h(x/N))}>0\) and \(m_{\Lambda(h(x/N))}<1\). The constant can be taken as \(C_{h}:=\log\left(\frac{1}{\inf_{x\in D^{d}}m_{\Lambda(h(x/N))}\wedge(1-m_{ \Lambda(h(x/N))})}\right)\)._
### Replacement Lemma at the Boundary
Fix \(k\in\{0,\ldots,d\}\), a continuous function \(G:[0,T]\times\mathbb{T}^{d-1}\to\mathbb{R}^{d+1}\), and for each \(k\) consider the quantities
\[V^{1,\ell}_{k}(\eta_{s,\theta},\alpha,G) =\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}^{d-1}}G_{k}(s, \tilde{x}/N)\left(I_{k}(\eta_{s,\theta}(1,\tilde{x}))-\sum_{v\in\mathcal{V}}v _{k}\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right)\right),\] \[V^{1,\ell}_{k}(\eta_{s,\theta},\beta,G) =\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}^{d-1}}G_{k}(s, \tilde{x}/N)\left(I_{k}(\eta_{s,\theta}(N-1,\tilde{x}))-\sum_{v\in\mathcal{V} }v_{k}\beta_{v}\left(\tfrac{\tilde{x}}{N}\right)\right),\] \[V^{2,\ell}_{k}(\eta_{s,\theta},\alpha,G,\varepsilon) =\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}^{d-1}}G_{k}(s, \tilde{x})\Bigg{(}I_{k}(\eta_{s,\theta}(1,\tilde{x}))-\frac{1}{\lfloor \varepsilon N\rfloor}\sum_{x_{1}=1}^{\lfloor\varepsilon N\rfloor+1}I_{k}(\eta_ {s,\theta}(x_{1},\tilde{x}))\Bigg{)},\] \[V^{2,r}_{k}(\eta_{s,\theta},\beta,G,\varepsilon) =\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}^{d-1}}G_{k}(s, \tilde{x})\Bigg{(}I_{k}(\eta_{s,\theta}(N-1,\tilde{x}))-\frac{1}{\lfloor \varepsilon N\rfloor}\sum_{x_{1}=N-1-\lfloor\varepsilon N\rfloor}^{N-1}\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
_where \(j=\{1,2\}\), and_
\[\vartheta=\left\{\begin{array}{ll}\ell,&\mbox{if}\quad\zeta=\alpha,\\ r,&\mbox{if}\quad\zeta=\beta.\end{array}\right. \tag{50}\]
Proof.: By the entropy inequality and Jensen's inequality for any \(A>0\) the expectation in the statement of the proposition is bounded from above by
\[\frac{H(\mu^{N}|\nu_{h}^{N})}{AN^{d}}+\frac{1}{AN^{d}}\log\mathbb{E}_{\nu_{h}^ {N}}\left[\exp\left\{\left|\int_{0}^{t}ds\,AN^{d}V_{k}^{j,\vartheta}(\eta_{s, \theta},\zeta,G)\right|\right\}\right]. \tag{51}\]
By Remark 6, the leftmost term in last display is bounded by \(\frac{C_{h}}{A}\), so we only need to show that the rightmost term vanishes as \(N\to\infty\). Since \(e^{|x|}\leq e^{x}+e^{-x}\) and
\[\limsup_{N\to\infty}N^{-d}\log\{a_{N}+b_{N}\}\leq\max\{\limsup_{N\to\infty}N^ {-d}\log(a_{N}),\limsup_{N\to\infty}N^{-d}\log(b_{N})\},\]
we may remove the absolute value from the expression (51). By the Feynman-Kac formula, see for instance [14], (51) is bounded from above by
\[\frac{C_{h}}{A}+t\sup_{f}\left\{\int V_{k}^{j,\vartheta}(\eta,\zeta,G)f(\eta )\,d\nu_{h}^{N}+\frac{\langle\mathcal{L}_{N,\vartheta}\sqrt{f},\sqrt{f} \rangle_{\nu_{h}^{N}}}{AN^{d-2}}\right\},\]
where the supremum is taken over all densities \(f\) with respect to \(\nu_{h}^{N}\). The proof follows from an application of the auxiliary lemmas given below.
**Lemma 3**.: _For every \(k\in\{0,\ldots,d\}\), and every \(G\in C([0,T],\mathbb{T}^{d-1})\), there exist constants \(B,C,C^{\prime},K\) such that_
\[\langle V_{k}^{1,\vartheta}(\eta,\zeta,G),f(\eta)\rangle_{\nu_{h}^{N}}\leq CBN ^{\theta}+\frac{C^{\prime}}{B}D_{\nu_{h}^{N}}^{b,\vartheta}(\sqrt{f})+K,\]
_where \(\vartheta\) solves (50)._
Proof.: We prove for \(\vartheta=\ell\), since for \(\vartheta=r\) the proof is entirely analogous. First of all, note that since \(G\) is continuous and its domain \([0,T]\times D^{d}\) is compact, it is enough to prove the result with \(G=\mathbf{1}\).
\[\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\int f(\eta)\left(I_{k}(\eta_{s}(1, \tilde{x}))-\sum_{v\in\mathcal{V}}v_{k}\alpha_{v}\left(\tfrac{\tilde{x}}{N} \right)\right)\,d\nu_{h}^{N}=\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\sum_{v \in\mathcal{V}}v_{k}\int f(\eta)\left(\eta(1,\tilde{x},v)-\alpha_{v}\left( \tfrac{\tilde{x}}{N}\right)\right)\,d\nu_{h}^{N}.\]
By summing and subtracting an appropriate term, and also multiplying by \(N^{-d+1}\), last term is equal to
\[\frac{1}{2}\Big{|}\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_ {N}^{d-1}}\sum_{v\in\mathcal{V}}v_{k}\int\left(f(\eta)-f(\sigma^{x,v}\eta) \right)\left(\eta(1,\tilde{x},v)-\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right) \right)\,d\nu_{h}^{N}\Big{|} \tag{52}\] \[+\frac{1}{2}\Big{|}\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_ {N}^{d-1}}\sum_{v\in\mathcal{V}}v_{k}\int\left(f(\eta)+f(\sigma^{x,v}\eta) \right)\left(\eta(1,\tilde{x},v)-\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right) \right)\,d\nu_{h}^{N}\Big{|}.\]
Applying Young's inequality, \(ab\leq\frac{\varepsilon a^{2}}{2}+\frac{b^{2}}{2\varepsilon}\) with \(\varepsilon=BN^{\theta}\), on the first term of last display, we can bound it from above by
\[\frac{B}{4}\Big{|}\frac{N^{\theta}}{N^{d-1}}\sum_{\tilde{x}\in \mathbb{T}_{N}^{d-1}}\sum_{v\in\mathcal{V}}v_{k}\int\left(\sqrt{f(\eta)}+ \sqrt{f(\sigma^{x,v}\eta)}\right)^{2}\frac{\left(\eta(1,\tilde{x},v)-\alpha_{ v}\left(\tfrac{\tilde{x}}{N}\right)\right)^{2}}{r_{x}^{N}(\eta,\alpha)}\,d\nu_{h}^{N} \Big{|} \tag{53}\] \[+\frac{1}{4B}\Big{|}\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T} _{N}^{d-1}}\sum_{v\in\mathcal{V}}v_{k}\int\left(\sqrt{f(\eta)}-\sqrt{f(\sigma ^{x,v}\eta)}\right)^{2}\frac{r_{x}^{N}(\eta,\alpha)}{N^{\theta}}\,d\nu_{h}^{N} \Big{|}\]
where \(r_{x}(\eta,\alpha)\) was defined in (11) and this holds for any \(B>0\). Since \(\left(\eta(1,\tilde{x},v)-\alpha_{v}\left(\tfrac{\tilde{x}}{N}\right)\right)^{2}\leq 1\) and \(r_{x}^{N}(\eta,\alpha)\leq 1\), we obtain that (53) is bounded from above by
\[CBN^{\theta}+\frac{C^{\prime}}{B}D_{\nu_{h}^{N}}^{b,\vartheta}(\sqrt{f}),\]
with \(C\) and \(C^{\prime}\) constants. Now, we analyze the second term of (52). Note that
\[\frac{1}{2}\Big{|}\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{N}^ {d-1}}\sum_{v\in\mathcal{V}}v_{k}\int\left(f(\eta)+f(\sigma^{x,v}\eta)\right) \left(\eta(1,\tilde{x},v)-\alpha_{v}\left(\frac{\tilde{x}}{N}\right)\right)\,d \nu_{h}^{N}\Big{|}\] \[= \frac{1}{2}\Big{|}\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{ N}^{d-1}}\sum_{v\in\mathcal{V}}v_{k}\int f(\eta)\left(\eta(1,\tilde{x},v)- \alpha_{v}\left(\frac{\tilde{x}}{N}\right)\right)\,d\nu_{h}^{N}\Big{|}\] \[+ \frac{1}{2}\Big{|}\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{ N}^{d-1}}\sum_{v\in\mathcal{V}}v_{k}\int f(\sigma^{x,v}\eta)\left(\eta(1,\tilde{x},v)- \alpha_{v}\left(\frac{\tilde{x}}{N}\right)\right)\,d\nu_{h}^{N}\Big{|}.\]
Using that \([\eta(1,\tilde{x},v)-\alpha_{v}\left(\frac{\tilde{x}}{N}\right)]\leq 1\) and Lemma 2, we obtain that the first term above is bounded by a constant \(K_{1}\) and the second term above is bounded by a constant \(K_{2}\). Therefore,
\[\langle V_{k}^{1,\ell}(\eta,\alpha,G),f(\eta)\rangle_{\nu_{h}^{N}}\leq CBN^{ \vartheta}+\frac{C^{\prime}}{B}D_{\nu_{h}^{N}}^{b}(\sqrt{f})+K_{1}+K_{2}.\]
**Lemma 4**.: _For every \(k\in\{0,\ldots,d\}\), and every \(G\in C([0,T],\mathbb{T}^{d-1})\), there exist constants \(B,C,C^{\prime},K\) such that_
\[\limsup_{\varepsilon\to 0}\limsup_{N\to\infty}\langle V_{k}^{2,\vartheta}( \eta,\zeta,G),f(\eta)\rangle_{\nu_{h}^{N}}=0,\]
_where \(\vartheta\) solves (50) and \(h\) is constant._
Proof.: First of all, note that since \(G\) is continuous and its domain \([0,T]\times D^{d}\) is compact, it is enough to prove the result with \(G=\mathbf{1}\). We will only prove for \(\vartheta=\ell\), since for \(\vartheta=r\) the proof is entirely analogous. Observe that
\[\langle V_{k}^{2,\ell}(\eta,\zeta,\mathbf{1}),f(\eta)\rangle_{ \nu_{h}^{N}} =\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\int f( \eta)\left(I_{k}(\eta(1,\tilde{x}))-\frac{1}{\varepsilon N}\sum_{x_{1}=1+1}^{ \varepsilon N+1}I_{k}(\eta(x_{1},\tilde{x}))\right)\,d\nu_{h}^{N}\] \[=\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\sum_{v \in\mathcal{V}}v_{k}\int f(\eta)\left(\eta(1,\tilde{x},v)-\frac{1}{\varepsilon N }\sum_{x_{1}=1+1}^{\varepsilon N+1}\eta(x_{1},\tilde{x},v)\right)\,d\nu_{h}^{N}\] \[=\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\sum_{v \in\mathcal{V}}v_{k}\int f(\eta)\left(\frac{1}{\varepsilon N}\sum_{x_{1}=1+1}^ {\varepsilon N+1}[\eta(1,\tilde{x},v)-\eta(x_{1},\tilde{x},v)]\right)\,d\nu_{ h}^{N}.\]
By writing the term \(\frac{1}{\varepsilon N}\sum_{x_{1}=1+1}^{\varepsilon N+1}[\eta(1,\tilde{x},v)- \eta(x_{1},\tilde{x},v)]\) as a telescopic sum, we obtain that the last term is equal to
\[\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\sum_{v \in\mathcal{V}}v_{k}\int f(\eta)\left(\frac{1}{\varepsilon N}\sum_{x_{1}=1+1} ^{\varepsilon N+1}\sum_{y=1}^{x_{1}-1}\left\{\eta(y,\tilde{x},v)-\eta(y+1, \tilde{x},v)\right\}\right)\,d\nu_{h}^{N}.\]
Writing this sum as twice its half, performing change of variables and noting that \(h\) is constant, we obtain that the last display is equal to
\[\frac{1}{N^{d-1}}\!\!\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\sum_{v\in \mathcal{V}}v_{k}\frac{1}{2\varepsilon N}\sum_{x_{1}=1+1}^{\varepsilon N+1}\sum _{y=1}^{x_{1}-1}\int\left(f(\eta)-f(\eta^{y,y+1,v})\right)\left(\eta(y,\tilde{x },v)-\eta(y+1,\tilde{x},v)\right)\,d\nu_{h}^{N}. \tag{54}\]
Rewriting \(f(\eta)-f(\eta^{y,y+1,v})\) as \(\left(\sqrt{f(\eta)}-\sqrt{f(\eta^{y,y+1,v})}\right)\left(\sqrt{f(\eta)}+\sqrt{ f(\eta^{y,y+1,v})}\right)\) and using Young's inequality, for all \(B>0\), we obtain that (54) is bounded from above by
\[\frac{1}{N^{d-1}}\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\sum_{v\in \mathcal{V}}v_{k}\frac{B}{2\varepsilon N}\sum_{x_{1}=1+1}^{\varepsilon N+1}\sum _{y=1}^{x_{1}-1}\int\left(\sqrt{f(\eta)}-\sqrt{f(\eta^{y,y+1,v})}\right)^{2}\,d \nu_{h}^{N}\] \[+\frac{1}{N^{d-1}}\!\!\sum_{\tilde{x}\in\mathbb{T}_{N}^{d-1}}\!\! \sum_{v\in\mathcal{V}}v_{k}\frac{1}{2B\varepsilon N}\!\!\sum_{x_{1}=1+1}^{ \varepsilon N+1}\!\!\sum_{y=1}^{x_{1}-1}\!\!\int\!\!\left(\sqrt{f(\eta)}+\sqrt{ f(\eta^{y,y+1,v})}\right)^{2}\!\!\left(\eta(y,\tilde{x},v)-\eta(y+1, \tilde{x},v)\right)^{2}d\nu_{h}^{N}.\]
Using that \(f\) is a density for \(\nu_{h}^{N}\), the second term in last display is bounded by \(\frac{C_{\ell}N}{B}\). Letting the sum in \(y\) run from \(1\) to \(N-1\), the first term in last display is bounded by \(BD_{\nu_{h}^{x}}^{ex}(\sqrt{f})\). By Corollary 1, since \(h\) is a constant function, we obtain
\[D_{\nu_{h}^{N}}^{ex}(\sqrt{f})=-\langle\mathcal{L}_{N}^{ex}\sqrt{f},\sqrt{f} \rangle_{\nu_{h}^{N}}.\]
Since \(0\leq D_{\nu_{h}^{N}}^{c}(\sqrt{f})=-\langle\mathcal{L}_{N}^{c}\sqrt{f},\sqrt {f}\rangle_{\nu_{h}^{N}}\) and \(0\leq D_{\nu_{h}^{N}}^{b}(\sqrt{f})\), using Corollary 3, we have that \(D_{\nu_{h}^{N}}^{b,\theta}(\sqrt{f})=-\langle\mathcal{L}_{N,\theta}^{b}\sqrt {f},\sqrt{f}\rangle_{\nu_{h}^{N}}\). Therefore,
\[D_{\nu_{h}^{X}}^{ex}(\sqrt{f})\leq-B\langle\mathcal{L}_{N,\theta}\sqrt{f}, \sqrt{f}\rangle_{\nu_{h}^{N}}.\]
### Replacement Lemma at the bulk
Before we state the replacement lemma at the bulk that will allow us to prove that the limit points \(\mathbb{Q}_{\theta}^{s}\) are concentrated on weak solutions of the system of partial differential equations (1), we introduce some notations. Recall the definition of \(\mathbf{I}^{L}\) in (13). Let \(\mathfrak{B}_{L}\) be the set of all possible values of \(\mathbf{I}^{L}(0,\eta)\) for \(\eta\in(\{0,1\}^{\mathcal{V}})^{\lambda_{L}}\), that is,
\[\mathfrak{B}_{L}=\{\mathbf{I}^{L}(0,\eta);\,\eta\in(\{0,1\}^{\mathcal{V}})^{ \lambda_{L}}\}.\]
Note that \(\mathfrak{B}_{L}\) is a finite subset of the convex envelope of \(\{\mathbf{I}(\xi):\,\xi\in\{0,1\}^{\mathcal{V}}\}\). The set of configurations \((\{0,1\}^{\mathcal{V}})^{\lambda_{L}}\) splits into invariant subsets: for \(\mathbf{i}\) in \(\mathfrak{B}_{L}\), let
\[\mathcal{H}_{L}(\mathbf{i}):=\{\eta\in(\{0,1\}^{\mathcal{V}})^{\lambda_{L}}: \,\mathbf{I}^{L}(0)=\mathbf{i}\}.\]
For each \(\mathbf{i}\) in \(\mathfrak{B}_{L}\), define the canonical measure \(\nu_{\lambda_{L},\mathbf{i}}\) as the uniform probability measure on \(\mathcal{H}_{L}(\mathbf{i})\). Note that for every \(\lambda\) in \(\mathbb{R}^{d+1}\)
\[\nu_{\Lambda_{L},\mathbf{i}}(\cdot)=\mu_{\lambda}^{\Lambda_{L}}(\cdot\,|\, \mathbf{I}^{L}(0)=\mathbf{i}).\]
Let \(\langle g\,;\,f\rangle_{\mu}\) stand for the covariance of \(g\) and \(f\) with respect to \(\mu\), i.e.,
\[\langle g\,;\,f\rangle_{\mu}=E_{\mu}[fg]-E_{\mu}[f]E_{\mu}[g].\]
**Proposition 5** (Equivalence of ensembles).: _Fix \(\ell,L\), with \(\ell<L\) the cubes \(\Lambda_{\ell}\subset\Lambda_{L}\), for each \(\mathbf{i}\in\mathfrak{B}_{L}\), denote by \(\nu^{\ell}\) the projection of the canonical measure \(\nu_{\Lambda_{L},\mathbf{i}}\) on \(\Lambda_{\ell}\) and by \(\mu^{\ell}\) the projection of the grand canonical measure \(\mu^{L}_{\Lambda(\mathbf{i})}\) on \(\Lambda_{\ell}\). There exists a finite constant \(C(\ell,\mathcal{V})\), depending only on \(\ell\) and \(\mathcal{V}\), such that_
\[|E_{\mu^{\ell}}[f]-E_{\nu^{\ell}}[f]|\leq\frac{C(\ell,\mathcal{V})}{|\Lambda_ {L}|}\langle f;f\rangle_{\mu^{\ell}}^{1/2}\]
_for every function \(f:(\{0,1\}^{\mathcal{V}})^{\Lambda_{\ell}}\mapsto\mathbb{R}\)._
The proof of Proposition 5 can be found in [2].
**Lemma 5** (Replacement lemma).: _For all \(\delta>0\), \(j\in\{1,\ldots,d\}\) and \(k\in\{0,\ldots,d\}\):_
\[\limsup_{\varepsilon\to 0}\limsup_{N\to\infty}\mathbb{P}_{\mu^{N}}\left[\int_{0 }^{T}\frac{1}{N^{d}}\sum_{x\in D_{N}^{L}}\tau_{x}V_{\varepsilon N}^{j,k}(\eta _{s,\theta})\,ds\geq\delta\right]=0,\]
_where_
\[V_{\ell}^{j,k}(\eta)=\Big{|}\frac{1}{(2\ell+1)^{d}}\sum_{y\in\Lambda_{\ell}} \sum_{v\in\mathcal{V}}v_{k}\sum_{z\in\mathbb{Z}^{d}}p(z,v)z_{j}\tau_{y}[\eta(0,v)(1-\eta(z,v))]-\sum_{v\in\mathcal{V}}v_{j}v_{k}\chi(\theta_{v}(\Lambda( \mathbf{I}^{\prime}(0,\eta))))\Big{|}.\]
Note that for all \(j\in\{1,\ldots,d\}\) and \(k\in\{0,1,\ldots,d\}\), \(V_{\varepsilon N}^{j,k}\) is well-defined for large \(N\) since \(p(\cdot,v)\) is of finite range. We now observe that Corollaries 1 and 2 permit us to prove the previous replacement lemma for the boundary driven exclusion process by using the process without the boundary part of the generator (see [15] for further details). For the proof of Lemma 5, see [17, Lemma 3.7].
Energy Estimates
We will now define some quantities in order to prove that each component of the vector solution belongs, in fact, to \(L^{2}([0,T],\mathcal{H})\).
Let the energy \(\mathcal{E}:\mathcal{D}([0,T],\mathcal{M})\to[0,\infty]\) be given by
\[\mathcal{E}(\pi)=\sum_{i=1}^{d}\mathcal{E}_{i}(\pi),\]
with
\[\mathcal{E}_{i}(\pi)=\sup_{G\in C^{\infty}_{c}(\Omega_{T})}\left\{2\int_{0}^{T }\,dt\,\langle\pi_{t},\partial_{u_{i}}G_{t}\rangle-\int_{0}^{T}\,dt\int_{D^{d} }\,du\,G(t,u)^{2}\right\},\]
where \(\Omega_{T}=(0,T)\times D^{d}\) and \(C^{\infty}_{c}(\Omega_{T})\) stands for the set of infinitely differentiable functions (with respect to time and space) with compact support contained in \(\Omega_{T}\). For any \(G\in C^{\infty}_{c}(\Omega_{T})\), \(1\leq i\leq d\) and \(C\geq 0\), let the functional \(\mathcal{E}^{G}_{i,C}:\mathcal{D}([0,T],\mathcal{M})\to\mathbb{R}\) be given by
\[\mathcal{E}^{G}_{i,C}(\pi)=\int_{0}^{T}ds\,\langle\pi_{s},\partial_{u_{i}}G_{ s}\rangle-C\int_{0}^{T}ds\int_{D^{d}}du\,G(s,u)^{2}.\]
Note that
\[\sup_{G\in C^{\infty}_{c}(\Omega_{T})}\{\mathcal{E}^{G}_{i,C}\}=\frac{ \mathcal{E}_{i}(\pi)}{4C}. \tag{55}\]
It is well-known that \(\mathcal{E}(\pi)\) is finite if, and only if, \(\pi\) has a generalized gradient, \(\nabla\pi=(\partial_{u_{1}}\pi,\ldots,\partial_{u_{d}}\pi)\), which is a measurable function and
\[\tilde{\mathcal{E}}(\pi)=\int_{0}^{T}ds\int_{D^{d}}du\,\|\nabla\pi_{t}(u)\|^{ 2}<\infty,\]
in which case, \(\mathcal{E}(\pi)=\tilde{\mathcal{E}}(\pi)\). Recall from Section 3 that the sequence \((\mathbb{Q}_{N,\theta})_{N}\) is tight. We have that:
**Proposition 6**.: _Let \(\mathbb{Q}^{*}_{\theta}\) be any limit point of the sequence of measures \((\mathbb{Q}_{N,\theta})_{N}\). Then,_
\[E_{\mathbb{Q}^{*}_{\theta}}\left[\int_{0}^{T}ds\,\left(\int_{D^{d}}\|\nabla \rho(s,u)\|^{2}\,du\right)\right]<\infty\ \ \text{and}\ \ E_{\mathbb{Q}^{*}_{\theta}}\left[\int_{0}^{T}ds\,\left(\int_{D^{d}}\| \nabla\varrho_{k}(s,u)\|^{2}\,du\right)\right]<\infty,\]
_for \(k\in\{1,\ldots,d\}\)._
The proof of Proposition 6 is given after the following lemmas.
**Lemma 6**.: _For all \(\theta\geq 0\) and for each \(i\in\{1,\ldots,d\}\) there is a positive constant \(C>0\) such that_
\[E_{\mathbb{Q}^{*}_{\theta}}\left[\sup_{G}\left\{\int_{0}^{T}\int_{D^{d}} \partial_{u_{i}}G(s,u)\varrho_{k}(s,u)duds-C\int_{0}^{T}ds\int_{D^{d}}du\,G( s,u)^{2}\right\}\right]<\infty,\]
_for \(k\in\{0,1,\ldots,d\}\), where the supremum is carried over all the functions \(G\in C^{\infty}(\Omega_{T})\) and \(\varrho_{0}=\rho\)._
Proof.: Let \(\{G^{m}:m\geq 1\}\) be a sequence of functions in \(C^{\infty}_{c}(\Omega_{T})\). Thus, it is sufficient to prove that, for every \(r\geq 1\),
\[E_{\mathbb{Q}^{*}_{\theta}}\left[\max_{1\leq m\leq r}\left\{\mathcal{E}^{G^{m }}_{i,C}(\pi^{k})\right\}\right]\leq\tilde{C}, \tag{56}\]
for some constant \(\tilde{C}>0\), independent of \(r\). The expression on the left-hand side of (56) is equal to
\[\lim_{N\to\infty}E_{\mu^{N}}\left[\max_{1\leq m\leq r}\left\{\int_{0}^{T} \langle\partial_{u_{i}}G^{m}(s,u),\pi^{k,N}_{s,\theta}\rangle ds-C\int_{0}^{T} ds\int_{D^{d}}du\,G^{m}(s,u)^{2}\right\}\right]. \tag{57}\]
By the relative entropy bound (see Remark 6), Jensen's inequality and the fact that \(\exp\{\max_{1\leq j\leq k}a_{j}\}\leq\sum_{1\leq j\leq k}\exp a_{j}\), the expectation in (57) is bounded from above by
\[\frac{H(\mu^{N}|\nu^{N}_{h})}{N^{d}}+\frac{1}{N^{d}}\log\sum_{1\leq m\leq r}E_ {\nu^{N}_{h}}\left[\exp\left\{\int_{0}^{T}N\langle\partial_{u_{i}}G^{m}(s,u), \pi^{k,N}_{s,\theta}\rangle ds-C\int_{0}^{T}ds\int_{D^{d}}du\,G^{m}(s,u)^{2} \right\}\right],\]
where the functions \(h\) are the same as in Remark 4 in Section 4.
We can bound the first term in the sum above by \(C_{h}\). It is enough to show, for a fixed function \(G\), that
\[\limsup_{N\to\infty}\frac{1}{N^{d}}\log E_{\nu_{h}^{N}}\left[\exp\left\{\int_{0} ^{T}N\langle\partial_{u_{i}}G(s,u),\pi_{s,\theta}^{k,N}\rangle ds-C\int_{0}^{T} ds\int_{D^{d}}du\,G(s,u)^{2}\right\}\right]\leq\tilde{c}\]
for some constant \(\tilde{c}\) independent of \(G\). Then the result follows from the next lemma and the definition of the empirical measure.
**Lemma 7**.: _There exists a constant \(C_{0}=C_{0}(h)>0\), such that for every \(i=1,\ldots,d\) every \(k\in\{0,\ldots,d\}\) and every function \(G\in C_{c}^{\infty}(\Omega_{T})\)_
\[\limsup_{N\to\infty}\frac{1}{N^{d}}\log E_{\nu_{h}^{N}}\left[\exp\{N^{d} \mathcal{E}_{i,C_{0}}^{G}(\pi_{\theta}^{k,N})\}\right]\leq C_{0}.\]
Proof.: Writing \(\partial_{u_{i}}G_{s}\left(\frac{x}{N}\right)=N\left[G_{s}\left(\frac{x+e_{i} }{N}\right)-G_{s}\left(\frac{x}{N}\right)\right]+O(N^{-1})\) and summing by parts (the compact support of \(G\) takes care of the boundary term), by applying the Feynman-Kac formula and using the same arguments as in the proof of Lemma 4, we have that
\[\frac{1}{N^{d}}\log E_{\nu_{h}^{N}}\left[\exp\left\{N\int_{0}^{T} ds\sum_{x\in D_{N}^{d}}\left(I_{k}(\eta_{s,\theta}(x))-I_{k}(\eta_{s,\theta}(x-e_{i}) )\right)G\left(s,\frac{x}{N}\right)\right\}\right]\leq\frac{1}{N^{d}}\int_{0} ^{T}\lambda_{s}^{N}\,ds,\]
where \(\lambda_{s}^{N}\) is equal to
\[\sup_{f}\left\{\left\langle N\sum_{x\in D_{N}^{d}}\left((I_{k}( \eta_{s,\theta}(x))-I_{k}(\eta_{s,\theta}(x-e_{i}))\right)G\left(s,\frac{x}{N} \right),f\right\rangle\right\}_{\nu_{h}^{N}}+N^{2}\langle\mathcal{L}_{N, \theta}\sqrt{f},\sqrt{f}\rangle_{\nu_{h}^{N}}\right\}, \tag{58}\]
where the supremum is taken over all densities \(f\) with respect to \(\nu_{h}^{N}\). Now we will consider two cases: if \(h\) is a constant function by Corollaries 1, 2 and 3, the expression inside brackets is bounded from above by
\[-\frac{N^{2}}{2}D_{\nu_{h}^{N}}(\sqrt{f})+\sum_{x\in D_{N}^{d}} \left\{NG\left(s,\frac{x}{N}\right)\int[I_{k}(\eta_{s,\theta}(x))-I_{k}(\eta_{ s,\theta}(x-e_{i}))]f(\eta)d\nu_{h}^{N}\right\}.\]
We now rewrite the term inside the brackets as
\[\sum_{v\in\mathcal{V}}v_{k}\sum_{x\in D_{N}^{d}}\left\{\int NG \left(s,\frac{x}{N}\right)[\eta(x,v)-\eta(x-e_{i},v)]f(\eta)d\nu_{h}^{N} \right\}. \tag{59}\]
After a simple computation, we may rewrite the terms inside the brackets of the above expression as
\[NG\left(s,\frac{x}{N}\right)\int[\eta(x,v)-\eta(x-e_{i},v)]f( \eta)d\nu_{h}^{N} \tag{60a}\] \[=NG\left(s,\frac{x}{N}\right)\int\eta(x,v)f(\eta)\,d\nu_{h}^{N}- NG\left(s,\frac{x}{N}\right)\int\eta(x,v)f(\eta^{x-e_{i},x,v})\frac{\nu_{h}^{N}( \eta^{x,x-e_{i},v})}{\nu_{h}^{N}(\eta)}d\nu_{h}^{N}\] (60b) \[=NG\left(s,\frac{x}{N}\right)\int\eta(x,v)[f(\eta)-f(\eta^{x-e_{i},x,v})]d\nu_{h}^{N}. \tag{60c}\]
By using \(f(\eta)-f(\eta^{x-e_{i},x,v})=[\sqrt{f(\eta)}-\sqrt{f(\eta^{x-e_{i},x,v})}][ \sqrt{f(\eta)}+\sqrt{f(\eta^{x-e_{i},x,v})}]\) and applying Young's inequality, the equation (60c) is bounded from above by
\[\frac{N^{2}}{2}\int[\sqrt{f(\eta^{x-e_{i},x,v})}-\sqrt{f(\eta)}]^{2}d\nu_{h}^{ N}+2G\left(s,\frac{x}{N}\right)^{2}\int\eta(x,v)(\sqrt{f(\eta)}+\sqrt{f( \eta^{x-e_{i},x,v})})^{2}d\nu_{h}^{N}.\]
Using the above estimate (59) is clearly bounded by \(\frac{N^{2}}{2}D_{\nu_{h}^{N}}(\sqrt{f})+CG\left(s,\frac{x}{N}\right)^{2}\), where \(C\) is a positive constant. Thus, letting \(C_{0}=C\), the statement of the lemma holds.
Now we will analyze the other case, \(h\) is a smooth function \(h\). By Corollaries 1, 2 and 3, the expression inside brackets in (58) is bounded from above by
\[CN^{d}-\frac{N^{2}}{4}D_{\nu_{h}^{N}}(\sqrt{f})+\sum_{x\in D_{N}^{d}}\left\{NG \left(s,\tfrac{x}{N}\right)\int[I_{k}(\eta_{x}(s))-I_{k}(\eta_{x-e_{i}}(s))]f( \eta)d\nu_{h}^{N}\right\}.\]
Rewriting the term above, we will analyze the expression
\[\sum_{x\in D_{N}^{d}}\sum_{v\in\mathcal{V}}v_{k}\left\{NG\left(s,\tfrac{x}{N} \right)\int[\eta(x,v)-\eta(x-e_{i},v)]f(\eta)d\nu_{h}^{N}\right\}. \tag{61}\]
Now rewrite the term inside the brackets as
\[NG\left(s,\tfrac{x}{N}\right)\int[\eta(x,v)-\eta(x-e_{i},v)]f( \eta)d\nu_{h}^{N}\] \[= NG\left(s,\tfrac{x}{N}\right)\int\eta(x,v)f(\eta)d\nu_{h}^{N}-NG \left(s,\tfrac{x}{N}\right)\int\eta(x,v)f(\eta^{x-e_{i},x,v})\tfrac{\nu_{h}^{ N}(\eta^{x,x-e_{i},v})}{\nu_{h}^{N}\left(\eta\right)}d\nu_{h}^{N}\] \[= NG\left(s,\tfrac{x}{N}\right)\int\eta(x,v)[f(\eta)-f(\eta^{x-e_ {i},x,v})]d\nu_{h}^{N}\] \[+ G\left(s,\tfrac{x}{N}\right)\int\eta(x,v)f(\eta^{x-e_{i},x,v})N \left[1-\tfrac{\nu_{h}^{N}(\eta^{x,x-e_{i},v})}{\nu_{h}^{N}\left(\eta\right)} \right]d\nu_{h}^{N}.\]
Since \(f(n)-f(\eta^{x-e_{i},x,v})=[\sqrt{f(\eta)}-\sqrt{f(\eta^{x-e_{i},x,v})}][\sqrt {f(\eta)}+\sqrt{f(\eta^{x-e_{i},x,v})}]\) and applying Young's inequality, the expression is bounded from above by
\[N^{2}\int\frac{1}{2}[\sqrt{f(\eta^{x-e_{i},x,v})}-\sqrt{f(\eta) }]^{2}d\nu_{h}^{N}+2G\left(s,\tfrac{x}{N}\right)^{2}\int\eta(x,v)(\sqrt{f( \eta)}+\sqrt{f(\eta^{x-e_{i},x,v})})^{2}d\nu_{h}^{N}\] \[+ G\left(s,\tfrac{x}{N}\right)^{2}\int f(\eta^{x-e_{i},x,v})d\nu_ {h}^{N}+\frac{1}{4}\int\eta(x,v)f(\eta^{x-e_{i},x,v})\left[N\left(1-\frac{\nu_ {h}^{N}(\eta^{x,x-e_{i},v})}{\nu_{h}^{N}\left(\eta\right)}\right)\right]^{2}d \nu_{h}^{N}.\]
Using the above estimate, (61) is clearly bounded by \(C_{1}+C_{1}G\left(s,\tfrac{x}{N}\right)^{2}\), by some positive constant \(C_{1}=C_{1}(h)\), using the estimate (46) and the fact that \(f\) is a density with respect to \(\nu_{h}^{N}\). Thus, letting \(C_{0}=C+C_{1}\), the statement of the lemma follows.
Proof of Proposition 6.: Let \(\{G_{m}:1\leq m\leq r\}\) be a sequence of functions in \(C_{c}^{\infty}(\Omega_{T})\) (the space of infinitely differentiable functions with compact support) and \(i\in\{1,\dots,d\}\), and \(k\in\{0,\dots,d\}\). By the entropy inequality, see Remark 6, there exists a constant \(C_{h}>0\) such that
\[E_{\mu^{N}}\left[\max_{1\leq m\leq r}\left\{\mathcal{E}_{i,C_{0}}^{G_{m}}( \pi^{k,N})\right\}\right]\leq C_{h}+\frac{1}{N^{d}}\log E_{\nu_{h}^{N}}\left[ \exp\left\{N^{d}\max_{1\leq m\leq r}\{\mathcal{E}_{i,C_{0}}^{G_{m}}(\pi^{k,N} )\}\right\}\right].\]
Therefore, using Lemma 7 together with the elementary inequalities
\[\limsup_{N\to\infty}N^{-d}\log(a_{N}+b_{N})\leq\limsup_{N\to\infty}\max\Big{\{} \limsup_{N\to\infty}N^{-d}\log(a_{N}),\limsup_{N\to\infty}N^{-d}\log(b_{N}) \Big{\}}\]
and
\[\exp\{\max\{x_{1},\dots,x_{n}\}\}\leq\exp(x_{1})+\dots+\exp(x_{n})\]
we set that
\[E_{\mathbb{Q}^{*}}\left[\max_{1\leq m\leq r}\left\{\mathcal{E}_{i,\widetilde{C }_{0}}^{G_{m}}(\pi^{k,N})\right\}\right]=\lim_{N\to\infty}E_{\mu^{N}}\left[ \max_{1\leq m\leq r}\left\{\mathcal{E}_{i,\widetilde{C}_{0}}^{G_{m}}(\pi^{k,N}) \right\}\right]\leq C_{h}+C_{0}.\]
Using this, the equation (55) and the monotone convergence Theorem, we obtain the desired result.
## Appendix A Uniqueness of Weak Solutions
To conclude the proof of the hydrodynamic limit, it remains to prove the uniqueness of weak solutions to (16). In the following, consider \(\mathfrak{C}=+\infty\) and \((\rho^{1},\varrho^{1})\), \((\rho^{2},\varrho^{2})\) two weak solutions of (16)
with the same initial condition. Denote their difference by \((\overline{\rho},\overline{\varrho})=(\rho^{1}-\rho^{2},\varrho^{1}-\varrho^{2})\). Let us define the set \(\{\psi_{z}\}_{z}\) given by \(\psi_{z}(u)=\sqrt{2}\sin(z\pi u)\) for \(z\geq 1\) and \(\psi_{0}(u)=1\) which is an orthonormal basis of \(L^{2}([0,1])\). Note that \((\overline{\rho},\overline{\varrho})=(\overline{p}^{0},\overline{p}^{1}, \ldots,\overline{p}^{d})=0\) if, and only if, each component is equal to zero, which means that \(\overline{p}^{k}=0\) for \(k=0,\ldots,d\). Let
\[V_{k}(t)=\sum_{z\geq 0}\frac{1}{2a_{z}}\langle\overline{p}_{t}^{k},\psi_{z} \rangle^{2}\]
where \(a_{z}=(z\pi)^{2}+1\). We claim that \(V_{k}^{\prime}(t)\leq CV_{k}(t)\), where \(C\) is a positive constant. Since \(V_{k}(0)=0,\,\forall\,k=0,\ldots,d\), from Gronwall's inequality we will conclude that \(V_{k}(t)\leq 0\), but since we know by definition that \(V_{k}(t)\geq 0\), we are done. Now we need to show that the claim is true. Note that
\[V_{k}^{\prime}(t)=\sum_{z\geq 0}\frac{1}{a_{z}}\langle\overline{p}_{t}^{k}, \psi_{z}\rangle\frac{d}{dt}\langle\overline{p}_{t}^{k},\psi_{z}\rangle,\]
and from the integral formulation (16) we have that
\[\frac{d}{dt}\langle\overline{p}_{t}^{k},\psi_{z}\rangle= \langle\frac{d}{dt}\overline{p}_{t}^{k},\psi_{z}\rangle+\langle \overline{p}_{t}^{k},\frac{d}{dt}\psi_{z}\rangle\] \[= \frac{1}{2}\langle\overline{p}_{t}^{k},\Delta\psi_{z}\rangle+ \langle\chi(\theta_{v}(\Lambda(\rho_{t}^{1},\varrho_{t}^{1})))-\chi(\theta_{v }(\Lambda(\rho_{t}^{2},\varrho_{t}^{2}))),\partial_{u}\psi_{z}\rangle.\]
Since \(\psi_{z}(u)=\sqrt{2}\sin(z\pi u)\) we have that \(\partial_{u}\psi_{z}(u)=\sqrt{2}z\pi\cos(z\pi u)\) and \(\Delta\psi_{z}(u)=-(z\pi)^{2}\sqrt{2}\sin(z\pi u)=-(z\pi)^{2}\psi_{z}\), then
\[V_{k}^{\prime}(t)=\sum_{z\geq 0}\frac{-(z\pi)^{2}}{2a_{z}}\langle\overline{p}_{ t}^{k},\psi_{z}\rangle^{2}+\sum_{z\geq 0}\frac{1}{a_{z}}\langle\overline{p}_{t}^{k}, \psi_{z}\rangle\langle\chi(\theta_{v}(\Lambda(\rho_{t}^{1},\varrho_{t}^{1}))) -\chi(\theta_{v}(\Lambda(\rho_{t}^{2},\varrho_{t}^{2}))),\partial_{u}\psi_{z}\rangle.\]
Using Young's inequality on the second term on the right-hand side of last identity, we bound that term from above by
\[\frac{1}{2A}\sum_{z\geq 0}\frac{1}{a_{z}}\langle\overline{p}_{t}^{k},\psi_{z} \rangle^{2}+\frac{A}{2}\sum_{z\geq 0}\frac{1}{a_{z}}\langle\chi(\theta_{v}( \Lambda(\rho_{t}^{1},\varrho_{t}^{1})))-\chi(\theta_{v}(\Lambda(\rho_{t}^{2}, \varrho_{t}^{2}))),\partial_{u}\psi_{z}\rangle^{2},\forall A>0.\]
Observe that \(\partial_{u}\psi_{z}=z\pi\phi_{z}(u)\), with \(\phi_{z}(u)=\sqrt{2}\cos(z\pi u)\) for \(z\geq 1\) and \(\phi_{0}(u)=1\). Therefore, the second term at right-hand side in last display can be rewritten as
\[\frac{A}{2}\sum_{z\geq 0}\frac{(z\pi)^{2}}{a_{z}}\langle\chi( \theta_{v}(\Lambda(\rho_{t}^{1},\varrho_{t}^{1})))-\chi(\theta_{v}(\Lambda( \rho_{t}^{2},\varrho_{t}^{2}))),\phi_{z}\rangle^{2}\] \[\leq \frac{A}{2}\sum_{z\geq 0}\langle\chi(\theta_{v}(\Lambda(\rho_{t}^{1},\varrho_{t}^{1})))-\chi(\theta_{v}(\Lambda(\rho_{t}^{2},\varrho_{t}^{2}))), \phi_{z}\rangle^{2}\]
because of the choice for \(a_{z}\). Observe that, since \(\{\phi_{z}\}_{z}\) is an orthonormal basis of \(L^{2}[0,1]\), we can rewritten the last display as
\[\frac{A}{2}\int_{0}^{1}\left(\chi(\theta_{v}(\Lambda(\rho_{t}^{1},\varrho_{t}^{ 1})))-\chi(\theta_{v}(\Lambda(\rho_{t}^{2},\varrho_{t}^{2})))\right)^{2}du.\]
Since \(\chi(\theta_{v}(\Lambda(\cdot)))\) is Lipschitz, the last display is bounded from above by \(\frac{A}{2}\|\overline{p}_{t}\|_{2}^{2}\). Putting all this together, we conclude that
\[V_{k}^{\prime}(t)\leq\sum_{z\geq 0}\left(\frac{-(z\pi)^{2}}{2a_{z}}+\frac{1}{2 Aa_{z}}+\frac{A}{2}\right)\langle\overline{\rho}_{t},\psi_{z}\rangle^{2}.\]
Taking \(A=1\), we get
\[V_{k}^{\prime}(t)\leq\sum_{z\geq 0}\left(\frac{1}{2a_{z}}+\frac{1}{2}\right) \langle\overline{\rho}_{t},\psi_{z}\rangle^{2}=\frac{1+a_{z}}{2a_{z}}\langle \overline{\rho}_{t},\psi_{z}\rangle^{2}=C\,V_{k}(t).\]
The proof of uniqueness of weak solutions of (16) with \(\mathfrak{C}=0\) is similar to the previous one, considering the orthonormal basis \(\phi_{z}(u)=\sqrt{2}\cos(z\pi u)\) of \(L^{2}([0,1])\). Details are omitted here.
We note that adapting the proof above to the case \(\mathfrak{C}=1\) does no longer work since we could use the orthonormal basis of \(L^{2}([0,1])\) given by a linear combination of \(\sin\)'s and \(\cos\)'s, but the problem is that, when we derive this basis the resulting set is no longer a basis of \(L^{2}([0,1])\) and so the argument does not follow. Fortunately, we have an answer about that uniqueness for the 1-dimensional case, see [8]. For higher dimensions the proof is left open.
### Acknowledgements
O.A. thanks Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior, Brasil (CAPES) - Finance Code 001. P.G. thanks Fundacao para a Ciencia e Tecnologia FCT/Portugal for financial support through the projects UIDB/04459/2020 and UIDP/04459/2020. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovative programme (grant agreement n. 715734).
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.